Performance


Improve your app's performance.

Posts under Performance tag

52 Posts
Post not yet marked as solved
0 Replies
49 Views
A user reports an issue where my application appears not to be responding. When he right-clicks my application's icon in the Mac Dock, it shows "Application Not Responding", but when he clicks buttons, input boxes, or switches windows in my application's UI, everything works fine, which means my application is not really stuck. Has anyone seen such a "fake" not-responding issue before, and what could be the reason? He is on macOS 12.4.0.
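The Dock's "Application Not Responding" label depends on the main thread servicing its event queue. One generic way to check whether a thread's queue is genuinely blocked is a watchdog that pings the queue and times out; a minimal, hypothetical sketch (the helper name and timeout are assumptions; in an app you would pass DispatchQueue.main):

```swift
import Foundation
import Dispatch

/// Hypothetical helper: returns true if `queue` runs a no-op block within
/// `timeout` seconds. Passing DispatchQueue.main from a background thread
/// approximates the responsiveness check the Dock performs.
func isResponsive(_ queue: DispatchQueue, timeout: TimeInterval = 0.5) -> Bool {
    let sema = DispatchSemaphore(value: 0)
    queue.async { sema.signal() }
    return sema.wait(timeout: .now() + timeout) == .success
}

let worker = DispatchQueue(label: "worker")
print(isResponsive(worker))                          // true: a free queue responds promptly

worker.async { Thread.sleep(forTimeInterval: 2) }    // simulate a long stall on the queue
print(isResponsive(worker))                          // false: the ping times out
```

If a ping against the main queue succeeds while the Dock still reports "not responding", that would point toward the event queue (not arbitrary main-queue work) being stalled at some earlier moment.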
Posted
by
Post not yet marked as solved
2 Replies
201 Views
On an iMac (Retina 5K, 27-inch), the render time is ridiculously slow and unresponsive. It's not the graph, even though data arrives 3 times per second with 920 data points; I've tested that in standalone code, and it's fast. The text and buttons need to be updated at a similar speed. Although this may appear to be working, it's far from responsive. In fact, it's only possible to quit the program with Command-Q, and displaying a basic About box takes forever. I think my views are well structured, with most modifiers factored out, so should I conclude that SwiftUI is not suitable for anything beyond a simple REST app running on iOS? I hope not. If Apple were writing something similar, how would they go about it? I've spent over a year developing this app, wishing and expecting Apple to come up with something better that doesn't run the entire UI on one single thread, and is perhaps able to execute sub-views concurrently, as one would expect. I don't wish to sound critical. I just want to know how to get this UI rendering faster, so I'm crying out for help and advice.
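Independent of SwiftUI specifics, a common mitigation for high-frequency chart data is downsampling the series before the view body ever sees it, so each update touches far fewer points. A minimal sketch (the function name and target point count are illustrative assumptions, not from the post):

```swift
// Hypothetical helper: reduce a dense series to at most `maxPoints` samples
// before the view body sees it, so each 3-per-second update stays cheap.
func downsample(_ values: [Double], to maxPoints: Int) -> [Double] {
    guard maxPoints > 0, values.count > maxPoints else { return values }
    let step = Double(values.count) / Double(maxPoints)
    return (0..<maxPoints).map { values[Int(Double($0) * step)] }
}

let batch = (0..<920).map(Double.init)   // one incoming batch, as in the post
print(downsample(batch, to: 200).count)  // 200
```

On the SwiftUI side, the usual companions to this are throttling the publisher that feeds the view and isolating the graph in its own small subview, so text and button updates don't re-evaluate the expensive chart body.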
Posted
by
Post not yet marked as solved
0 Replies
120 Views
Hi, I'm developing an app that uses Metal to compute some calculations. To improve the efficiency of the render process, I started looking at indirect command buffers, but there isn't an example that explains the best way to proceed. Can anyone provide some tips?
Posted
by
Post not yet marked as solved
1 Reply
186 Views
I have a maxed-out 2018 MBP with an i9 and 32GB of RAM. On Catalina, I was constantly running 2 Linux VMs, had 3-4 separate Chrome user sessions with 10+ tabs each, giant MS Excel documents open, Xcode with 2 apps running simultaneously, the iOS Simulator, 1-2 instances of Node, MS Teams, Slack, WhatsApp, Postman, and barely ever experienced any kind of lag or performance issues. Activity Monitor would max out all the time, but then it would use swap and my user experience was always great. If I noticed a slowdown, all I had to do was close 1 browser window and I'd be back to full speed. After upgrading to Monterey, I keep all my VMs shut down, every MS application closed, never run more than 1 Node instance, and still I can barely type into this text box without experiencing lag... Application switching takes FOREVER, both via alt-tab and selecting the application from the Dock. Looking at my Activity Monitor, I'm barely using any resources, and somehow, EVERYTHING IS STILL LAGGING! I am already well aware that Apple purposely curbs OS performance on "older" hardware, and that this OS is meant to make the M1 chip look better than the Intel chips, but this is a $4,000 laptop, and you've basically turned it into a giant paperweight! As a comparison, I have a 10-year-old MBP with an i7, where the OS has not been upgraded, and it runs 1000x better than my i9 on Monterey. Does anyone know of any terminal commands I can run to speed things up, or did Apple basically hardcode this OS to lag on non-ARM chipsets?
Posted
by
Post not yet marked as solved
0 Replies
157 Views
Hi, this topic is about Workgroups. I create child processes and I'd like to communicate an os_workgroup_t to my child processes so they can join the work group as well. As far as I understand, the os_workgroup_t value is local to the process. I've found that one can use os_workgroup_copy_port() and os_workgroup_create_with_port(), but I'm not familiar with ports at all, and I wonder what the minimal effort to achieve that would be. Thank you very much! Alex
Posted
by
Post not yet marked as solved
0 Replies
144 Views
Hello! I noticed that scrolling through the sidebar (thumbnail view) while reading a PDF document in the Preview app is laggy and spikes the CPU. The document is about 20MB and has roughly 2,000 pages. I'm using macOS 12.3.1 on Apple Silicon and Preview v11.0. I assume that rendering a PDF document and scrolling aren't hard tasks, so this bad performance is unexpected. Is it possible to get a fix for this issue? Thanks in advance.
Posted
by
Post not yet marked as solved
0 Replies
222 Views
As someone who just switched from Android to iPhone (iPhone 11, iOS 14.4.2), I find the animation behavior of iOS in e-sports (MOBA) games very strange. iOS animation is generally very smooth, but I've found that when the phone is slightly hot, touch input often fails to keep up during gameplay: under high-speed, frequent finger operations, some inputs fail or get lost, and if you watch the game animation at that moment, it plays strangely slowly. [Note: the game King of Glory has a character (Luna) that places very high demands on finger speed; the performance when using this character is demonstrated in this video (bilibili.com/video/BV1Ct411A7X9). The fingers need to slide very frequently and at high speed.] My confusion comes from comparing with Android phones. When playing Luna, the situation above always happens during high-speed combos, and in a MOBA game, even a momentary failed input can lose you the whole match. This left me very frustrated after buying the iPhone 11, because given the performance of the A13 processor this shouldn't happen; it should completely surpass Android's Snapdragon processors. Yet on Android a similar situation never occurs: as long as the player presses, the input is triggered. Android's animation may drop frames or freeze, but input is never lost. On iOS the animation never stutters, but it can play slowly, and in extreme cases finger input fails outright, with no response when you press. As a result, iPhones keep missing opportunities on the battlefield in MOBA games. I suspect this is related to some optimization in iOS, and I hope the iOS developers can help clear up the confusion.
Could a setting be added to turn off this optimization, so that in games with demanding finger input, iOS can respond to touches as quickly as Android does?
Posted
by
Post not yet marked as solved
0 Replies
209 Views
When scrolling a collection view, it looks like all supplementary views (even those that are not currently visible) are invalidated. With a large number of supplementary views (section headers, in my case), this significantly affects scrolling performance. Can I instruct the collection view to invalidate only the visible supplementary header views (rather than all of them) when the user scrolls? Here is the code I use to create the collection view (with a table-view-like layout) in my custom UICollectionViewController subclass:

// Create list layout configuration
UICollectionLayoutListConfiguration *config = [[UICollectionLayoutListConfiguration alloc] initWithAppearance:UICollectionLayoutListAppearancePlain];
config.headerMode = UICollectionLayoutListHeaderModeSupplementary;

// Create compositional layout based on configuration and assign to collection view
UICollectionViewCompositionalLayout *layout = [UICollectionViewCompositionalLayout layoutWithListConfiguration:config];
self.collectionView.collectionViewLayout = layout;
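For illustration only: a custom layout that builds a targeted invalidation context needs to know which sections intersect the visible rect. Assuming fixed-height sections (a simplification; real list layouts measure headers and cells dynamically), that range can be computed directly:

```swift
import Foundation

// Illustrative sketch: compute which fixed-height sections intersect the
// visible rect. The fixed `sectionHeight` is an assumption for the sketch.
func visibleSections(offsetY: Double, viewportHeight: Double,
                     sectionHeight: Double, sectionCount: Int) -> Range<Int> {
    let first = max(0, Int(offsetY / sectionHeight))
    let last = min(sectionCount, Int(ceil((offsetY + viewportHeight) / sectionHeight)))
    return first..<max(first, last)
}

let visible = visibleSections(offsetY: 1000, viewportHeight: 800,
                              sectionHeight: 120, sectionCount: 5000)
print(visible.lowerBound, visible.upperBound)   // 8 15
```

In a UICollectionViewLayout subclass, a range like this could feed invalidateSupplementaryElements(ofKind:at:) on the invalidation context, so only on-screen headers are rebuilt during a scroll.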
Posted
by
Post marked as solved
4 Replies
535 Views
I'm trying to hint to the task scheduler that some threads should be scheduled together, using the thread_policy_set API with THREAD_AFFINITY_POLICY (since there is no "real" thread-to-core affinity API). All the examples mention setting the policy after creation but before execution of the task(s). Unfortunately, I'm not creating these tasks (OpenMP is), and when I try to use the API on an already running thread, I get a return value of KERN_INVALID_ARGUMENT (= 4):

thread_affinity_policy_data_t policy = { 1 };
auto r = thread_policy_set(mach_task_self(), THREAD_AFFINITY_POLICY,
                           (thread_policy_t)&policy, THREAD_AFFINITY_POLICY_COUNT);

When I replace mach_task_self() with pthread_mach_thread_np(pthread_self()), I get a KERN_NOT_SUPPORTED error instead (= 46, "Empty thread activation (No thread linked to it)"). Has anyone used these APIs successfully on an already running thread? Background: the code I'm working on divides a problem set into a small number of roughly equal-sized pieces (e.g. 8 or 16; this is an input parameter derived from the number of cores to be utilized). These pieces are not entirely independent but need to be processed in lock-step (as occasionally data from neighboring pieces is accessed). Sometimes, when a neighboring piece isn't ready for a fairly long time, we call std::this_thread::yield(), which unfortunately seems to signal to the scheduler that this thread should move to the efficiency cores (which then wreaks havoc with the assumption that each computation over a piece takes roughly the same amount of time, so that all threads can remain in lock-step). :( A similar (?) problem seems to happen with OpenMP barriers, which have terrible performance on the M1 Ultra, at least unless KMP_USE_YIELD=0 is set (for the OpenMP runtime from LLVM). Can this automatic migration (note: not the relinquishing of the remaining time-slice) be prevented?
Posted
by
Post not yet marked as solved
0 Replies
223 Views
Scrolling a collection view with a large number of supplementary header views is extremely slow. The more sections, the worse the scrolling performance (tested with 5,000 sections). Best to give it a try and see for yourself. Code is available at https://github.com/yoasha/CollectionViewTest. To reproduce, run the demo app with Xcode on either an iPhone or iPad simulator (or a real device), select "Collection View" on the main page, and try to scroll. Any thoughts?
Posted
by
Post not yet marked as solved
0 Replies
251 Views
When there is a large number of supplementary views, scrolling is extremely slow. To reproduce:
Download Apple's "Implementing Modern Collection Views" demo from here.
Open the project in Xcode and show PinnedSectionHeaderFooterViewController.swift.
Go to line 106 and replace it with: let sections = Array(0..<3000)
Run the project, navigate to the "Pinned Section Headers" page, and try scrolling.
Scrolling is barely possible and extremely slow! While profiling with the Time Profiler instrument, the issue seems to lie in invalidating supplementary views. Screenshot attached below. How can I fix this? (I have many section headers in my view.)
Posted
by
Post not yet marked as solved
0 Replies
261 Views
How well does SolidWorks perform using Windows on a MacBook Pro with the M1 Max chip and 64GB unified memory? I’ll likely be running Windows through Parallels.
Posted
by
Post not yet marked as solved
0 Replies
289 Views
For a Create ML activity classifier, I'm classifying "playing" tennis (the points or rallies), with a second class, "not playing", as the negative class. I'm not sure what to specify for the action duration parameter given how variable a tennis point or rally can be, but I went with 10 seconds since it seems like the average duration for both the "playing" and "not playing" labels. I'm wondering whether this parameter affects performance, both the speed of video processing and accuracy. Would the Vision framework return more results with smaller action durations?
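The relationship between action duration and result count can be sketched with simple sliding-window arithmetic. Assuming, hypothetically, that windows of length actionDuration slide over the clip with a fixed stride (the stride value below is an assumption, not a documented Create ML or Vision parameter), a shorter duration does yield more windows and hence more results:

```swift
// Hypothetical sliding-window arithmetic; `strideSeconds` is an assumed
// overlap step, not a documented Create ML / Vision parameter.
func predictionWindowCount(clipSeconds: Double, actionDuration: Double,
                           strideSeconds: Double) -> Int {
    guard clipSeconds >= actionDuration, strideSeconds > 0 else { return 0 }
    return Int((clipSeconds - actionDuration) / strideSeconds) + 1
}

// A 60-second clip: a 10 s duration gives 11 windows, a 5 s duration gives 12.
print(predictionWindowCount(clipSeconds: 60, actionDuration: 10, strideSeconds: 5))  // 11
print(predictionWindowCount(clipSeconds: 60, actionDuration: 5, strideSeconds: 5))   // 12
```

The accuracy trade-off is separate: a window much shorter than a typical rally sees only a fragment of the action, while a much longer one mixes "playing" and "not playing" frames in one window.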
Posted
by
Post not yet marked as solved
2 Replies
347 Views
Debugging a gputrace captured on an M1 Max on older hardware in Xcode warns "No compatible devices connected" and says to "Connect a device that supports the screen resolution and Metal feature profile that this gputrace file was generated on." Seriously? I was boasting about Xcode/Metal's ability to capture a gputrace and play it back, which is super helpful, but this was quite a letdown. Is there any way, other than buying a new Mac with M1 Max, to get a look at the gputrace?
Posted
by
Post not yet marked as solved
2 Replies
484 Views
In our AR app and App Clip made with SceneKit, we experience very significant framerate drops when we make our 3D content appear at different steps of the experience. For now, all of our 3D objects are in our main scene. Those that are supposed to appear at some point in the experience have their opacity set to 0.01 at the beginning and then fade in with an SCNAction (we tried setting their opacity to 0.01 at the start to make sure these objects are rendered from the start of the experience). However, if the objects all have their opacity set to 1 from the start of the experience, we do not experience any fps drop. It is worth noting that the fps drops only happen the first time the app is opened; if I close it and re-open it, the experience unfolds without any freeze. What would be the best way to load (or pre-load) these 3D elements to avoid these freezes? We have conducted our tests on an iPhone X (iOS 15.2.1), an iPhone 12 Pro (iOS 14), and an iPad Pro 2020 (iPadOS 14.8.1).
Posted
by
Post not yet marked as solved
1 Reply
475 Views
Below, the sampleBufferProcessor closure is where the Vision body pose detection occurs.

/// Transfers the sample data from the AVAssetReaderOutput to the AVAssetWriterInput,
/// processing via a CMSampleBufferProcessor.
///
/// - Parameters:
///   - readerOutput: The source sample data.
///   - writerInput: The destination for the sample data.
///   - queue: The DispatchQueue.
///   - completionHandler: The completion handler to run when the transfer finishes.
/// - Tag: transferSamplesAsynchronously
private func transferSamplesAsynchronously(from readerOutput: AVAssetReaderOutput,
                                           to writerInput: AVAssetWriterInput,
                                           onQueue queue: DispatchQueue,
                                           sampleBufferProcessor: SampleBufferProcessor?,
                                           completionHandler: @escaping () -> Void) {
    /* The writerInput continuously invokes this closure until finished or cancelled.
       It throws an NSInternalInconsistencyException if called more than once for
       the same writer. */
    writerInput.requestMediaDataWhenReady(on: queue) {
        var isDone = false
        /* While the writerInput accepts more data, process the sampleBuffer and
           then transfer the processed sample to the writerInput. */
        while writerInput.isReadyForMoreMediaData {
            if self.isCancelled {
                isDone = true
                break
            }
            // Get the next sample from the asset reader output.
            guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
                // The asset reader output has no more samples to vend.
                isDone = true
                break
            }
            // Process the sample, if requested.
            do {
                try sampleBufferProcessor?(sampleBuffer)
            } catch {
                // The `readingAndWritingDidFinish()` function picks up this error.
                self.sampleTransferError = error
                isDone = true
            }
            // Append the sample to the asset writer input.
            guard writerInput.append(sampleBuffer) else {
                /* The writer could not append the sample buffer. The
                   `readingAndWritingDidFinish()` function handles any error
                   information from the asset writer. */
                isDone = true
                break
            }
        }
        if isDone {
            /* Calling `markAsFinished()` on the asset writer input does the following:
               1. Unblocks any other inputs needing more samples.
               2. Cancels further invocations of this "request media data" callback block. */
            writerInput.markAsFinished()
            // Tell the caller the reader output and writer input finished transferring samples.
            completionHandler()
        }
    }
}

The processor closure runs body pose detection on every sample buffer, so that later, in the VNDetectHumanBodyPoseRequest completion handler, the VNHumanBodyPoseObservation results are fed into a custom Core ML action classifier.

private func videoProcessorForActivityClassification() -> SampleBufferProcessor {
    let videoProcessor: SampleBufferProcessor = { sampleBuffer in
        do {
            let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
            try requestHandler.perform([self.detectHumanBodyPoseRequest])
        } catch {
            print("Unable to perform the request: \(error.localizedDescription).")
        }
    }
    return videoProcessor
}

How could I improve the performance of this pipeline? After testing with an hour-long 4K video at 60 FPS, it took several hours to process when running as a Mac Catalyst app on an M1 Max.
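One generic way to cut the cost of a per-frame Vision pass is to run detection only on a subset of frames and reuse (or interpolate between) observations for the frames in between. A minimal sketch of the frame-selection arithmetic (the function name and the every-4th-frame choice are illustrative assumptions, not from the post):

```swift
// Hypothetical frame-skipping: pick every Nth frame index for pose detection,
// instead of running the request on all 60 fps.
func framesToProcess(totalFrames: Int, everyNth n: Int) -> [Int] {
    precondition(n > 0)
    return Array(stride(from: 0, to: totalFrames, by: n))
}

let oneHourAt60fps = 60 * 60 * 60   // 216_000 frames in an hour-long 60 fps video
print(framesToProcess(totalFrames: oneHourAt60fps, everyNth: 4).count)  // 54000
```

Pose tracking at 15 fps is often sufficient for action classification over multi-second windows, and a 4x reduction in Vision requests translates roughly into a 4x reduction in that stage's wall-clock time.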
Posted
by