Performance


Improve your app's performance.

Posts under the Performance tag

44 Posts
Post not yet marked as solved
0 Replies
46 Views
Hi all, I am trying to measure the performance of my video game on the iOS platform. After launching the debugger and capturing a GPU workload (with "frame" selected as the scope), I cannot do performance profiling. The error message is: "Failed to enable shader profiler. (516)". And if I export the GPU trace and reopen it, Xcode cannot find any compatible devices connected, even though the same device used for capturing is connected. Device information:
Mac device: MacBook Air M2, 2022
macOS version: Sonoma 14.4
Xcode version: 15.2 (15C500b)
Mobile device: iPhone 13 Pro Max
iOS version: 15.0
GPU performance counters can be captured using the same mobile device on my colleague's Mac, so I think there might be something wrong with my Xcode.
Posted by
Post not yet marked as solved
3 Replies
150 Views
Recently we have started refactoring some code to use Swift concurrency in Xcode 15.3. As we have added some new async methods and Tasks, new runtime warnings have emerged titled "Known Hang", and several are listed. None of the stack traces listed with these warnings are in areas we directly modified, but some of the same types/methods are also called from the modified areas, so I can sort of understand why they are coming up... but they had to have been there before we added the Swift concurrency. (Example tooltip with the warnings attached as an image.) My first query is: are these warnings only issued when Swift concurrency is added/applied (as they were not there when using closures, mostly just off the main thread for network calls)? The documentation indicates these can all be suppressed by turning off the Thread Performance Checker, BUT I would rather suppress just the few places as we refactor our codebase (as it is quite large). That way, any new ones can be documented and we can decide to fix them now or later. I have tried to follow the instructions and added an environment variable PERFC_SUPPRESSION_FILE (in the scheme) with a full path to a file formatted like the example in the documentation:
```
class:NSManagedObjectContext
method:-[NSManagedObjectContext save:]
```
My second query is: I have verified that the variable is set by reading it from ProcessInfo. However, regardless of my settings, the runtime warnings are still presented. I could not find any examples, or even any mention, of others using this environment variable. I am reaching out for any advice or ideas to try. Has anyone successfully tried this, or found an issue or alternative? Help me Mr. Wizard!
Posted by
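A minimal sanity check for the setup described in the post above, assuming the same PERFC_SUPPRESSION_FILE variable it names (the printout is illustrative only):
```swift
import Foundation

// Confirm the variable is visible to the process and that the file it
// points at is actually readable from inside the app's sandbox.
if let path = ProcessInfo.processInfo.environment["PERFC_SUPPRESSION_FILE"] {
    print("PERFC_SUPPRESSION_FILE = \(path)")
    if let contents = try? String(contentsOfFile: path, encoding: .utf8) {
        print("Suppression entries:\n\(contents)")
    } else {
        print("File not readable; on device, a path outside the app container is inaccessible.")
    }
} else {
    print("PERFC_SUPPRESSION_FILE is not set for this process.")
}
```
Note that scheme environment variables only apply when launching from Xcode; if the warnings appear in a launch outside Xcode, the variable will not be set at all.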
Post not yet marked as solved
0 Replies
131 Views
Our current status is that after three minutes or more in the background, re-opening the app is a restart. Most of our users report that they were automatically redirected to the home page of the app after a certain period of inactivity. I recently upgraded my Xcode version from 12.4 to 15.3; I did not experience the problem with Xcode 12.4. It is an enterprise application, and the majority of users report restart issues. It occurs at random, and the affected devices contain only our application, with no entertainment or gaming apps. However, I notice that many other apps can stay in the background for an extended period of time (such as 20 or 30 minutes); when I open such an app (Medium, for example), the same page sometimes reappears, or the app refreshes its content. I am not sure how they do it; I follow Apple's rules and do nothing special after entering the background. Is there anything Apple could do? How can I resolve this issue, or is this default iOS behaviour? Please provide any related documentation. Please help me resolve this issue. Note: devices run iOS versions 15 to 17.
Posted by
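Apps that appear to resume where the user left off after the system has terminated them in the background typically restore their own UI state at launch, rather than relying on the process surviving. A minimal SwiftUI sketch of that idea, with a hypothetical "selectedTab" key:
```swift
import SwiftUI

struct RootView: View {
    // @SceneStorage persists lightweight UI state across scene disconnects
    // and relaunches, so the user lands on the same tab after a restart.
    @SceneStorage("selectedTab") private var selectedTab = 0

    var body: some View {
        TabView(selection: $selectedTab) {
            Text("Home").tabItem { Label("Home", systemImage: "house") }.tag(0)
            Text("Reports").tabItem { Label("Reports", systemImage: "chart.bar") }.tag(1)
        }
    }
}
```
iOS is free to terminate a suspended app at any time to reclaim memory, so restoring state on relaunch is the supported path rather than trying to keep the process alive.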
Post not yet marked as solved
0 Replies
156 Views
Seeing the following, whether initializing Map() in SwiftUI or using Apple's example Overlay project, since updating to Xcode 15.3:
```
Thread Performance Checker: Thread running at User-interactive quality-of-service class waiting on a thread without a QoS class specified (base priority 0). Investigate ways to avoid priority inversions
PID: 2148, TID: 42369
Backtrace
=================================================================
3   VectorKit               0x00007ff81658b145 ___ZN3geo9TaskQueue5applyEmNSt3__18functionIFvmEEE_block_invoke + 38
4   libdispatch.dylib       0x00000001036465c2 _dispatch_client_callout2 + 8
5   libdispatch.dylib       0x000000010365d79b _dispatch_apply_invoke3 + 527
6   libdispatch.dylib       0x000000010364658f _dispatch_client_callout + 8
7   libdispatch.dylib       0x0000000103647c6d _dispatch_once_callout + 66
8   libdispatch.dylib       0x000000010365c89b _dispatch_apply_redirect_invoke + 214
9   libdispatch.dylib       0x000000010364658f _dispatch_client_callout + 8
10  libdispatch.dylib       0x000000010365a67f _dispatch_root_queue_drain + 1047
11  libdispatch.dylib       0x000000010365af9d _dispatch_worker_thread2 + 277
12  libsystem_pthread.dylib 0x00000001036e2b43 _pthread_wqthread + 262
13  libsystem_pthread.dylib 0x00000001036e1acf start_wqthread + 15
```
Posted by
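For background, the checker fires when a high-QoS thread waits on work running at an unspecified QoS. A schematic sketch of the pattern (queue labels hypothetical); note the backtrace above is entirely inside VectorKit and libdispatch, so here the inversion appears to originate in framework code rather than anything directly fixable in the app:
```swift
import Dispatch

// Created without a qos: argument, this queue's work runs at unspecified
// QoS, the "base priority 0" in the checker's message.
let worker = DispatchQueue(label: "com.example.worker")

// Supplying an explicit QoS at creation is the usual remedy when the
// low-priority work is your own code.
let tuned = DispatchQueue(label: "com.example.worker.tuned", qos: .userInitiated)
```
When the whole backtrace lives in a system framework, as here, a feedback report to Apple is generally the only recourse.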
Post not yet marked as solved
1 Reply
135 Views
Dear Sirs, I'm writing an audio application that should show up to 128 horizontal peak meters (each about 150 wide and 8 high) stacked inside a ScrollViewReader. For the actual value of each peak meter I have a binding to a CGFloat value. The peak meters work as expected and refresh correctly. For testing I added a timer to my Swift application that fires every 0.05 s, meaning I want to show 20 values per second. Inside the timer func I just create random CGFloat values in the range 0...1 for the bound values. The peak meters refresh and flicker as expected, but I can see a CPU load of 40-50% in Activity Monitor on my MacBook Air with Apple M2, even when compiled in release mode. I think this is quite high and I'd like to reduce this CPU load. Should this be possible? I.e. I thought about blocking the refresh until I've set all values? How could this be done, and would it help? What else could I do? Thanks and best regards, JFreyberger
Posted by
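One way to try the "set all values, then refresh once" idea from the post above: publish all 128 values as a single array change on an ObservableObject, so each timer tick triggers one view update instead of 128 separate binding updates. A minimal sketch under that assumption (type and property names hypothetical):
```swift
import SwiftUI
import Combine

final class MeterModel: ObservableObject {
    // One @Published array means one objectWillChange per tick.
    @Published var levels = [CGFloat](repeating: 0, count: 128)
}

struct MetersView: View {
    @StateObject private var model = MeterModel()
    private let timer = Timer.publish(every: 0.05, on: .main, in: .common).autoconnect()

    var body: some View {
        ScrollView {
            VStack(spacing: 2) {
                ForEach(model.levels.indices, id: \.self) { i in
                    // Stand-in for the poster's peak-meter view.
                    Rectangle()
                        .frame(width: 150 * model.levels[i], height: 8)
                }
            }
        }
        .onReceive(timer) { _ in
            // Build the whole frame of values first, then assign once.
            model.levels = (0..<128).map { _ in CGFloat.random(in: 0...1) }
        }
    }
}
```
Whether this reduces the 40-50% load depends on where the time actually goes, so it is worth profiling before and after.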
Post not yet marked as solved
0 Replies
152 Views
Looking over the App Review Guidelines, where it reads, "They may not auto-launch or have other code run automatically at startup or login without consent": we currently present a popup asking the user whether they want to connect to our server. My question is: are we able to provide the user with a way to turn off this startup prompt (thus improving the speed of app startup)? The setting would be located within our app's own settings page, where the user would have the choice to turn the startup prompt off or leave it on. Or would this go against Apple's policy in any way?
Posted by
Post not yet marked as solved
2 Replies
227 Views
As an exercise in learning Swift, I rewrote a toy C++ command line tool in Swift. After switching to an UnsafeRawBufferPointer in a critical part of the code, the Release build of the Swift version was a little faster than the Release build of the C++ version. But the Debug build took around 700 times as long. I expect a Debug build to be somewhat slower, but by that much? Here's the critical part of the code, a function that gets called many thousands of times. The two string parameters are always 5-letter words in plain ASCII (it's related to Wordle). By the way, if I change the loop ranges from 0..<5 to [0,1,2,3,4], then it runs about twice as fast in Debug, but twice as slow in Release.
```swift
func Score(trial: String, target: String) -> Int {
    var score = 0
    withUnsafeBytes(of: trial.utf8) { rawTrial in
        withUnsafeBytes(of: target.utf8) { rawTarget in
            for i in 0..<5 {
                let trial_i = rawTrial[i]
                if trial_i == rawTarget[i] {   // strong hit
                    score += kStrongScore
                } else {                       // check for weak hit
                    for j in 0..<5 {
                        if j != i {
                            let target_j = rawTarget[j]
                            if (trial_i == target_j) && (rawTrial[j] != target_j) {
                                score += kWeakScore
                                break
                            }
                        }
                    }
                }
            }
        }
    }
    return score
}
```
Posted by
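Separate from the Debug-build question: withUnsafeBytes(of: trial.utf8) exposes the raw bytes of the String.UTF8View struct value itself, not a guaranteed view of the characters. It evidently lines up for short ASCII words stored inline, but that layout is not a documented contract. A hedged alternative that copies the bytes out explicitly (the score constants are placeholders, since the post doesn't show them):
```swift
// Placeholder values; the real constants aren't shown in the post.
let kStrongScore = 10
let kWeakScore = 1

func score(trial: String, target: String) -> Int {
    // Materialize the UTF-8 bytes; for 5-letter ASCII words these are
    // tiny arrays, and indexing them avoids relying on String internals.
    let t = Array(trial.utf8)
    let g = Array(target.utf8)
    var score = 0
    for i in 0..<5 {
        if t[i] == g[i] {
            score += kStrongScore                 // strong hit
        } else {
            for j in 0..<5 where j != i {
                if t[i] == g[j] && t[j] != g[j] { // weak hit
                    score += kWeakScore
                    break
                }
            }
        }
    }
    return score
}
```
In Debug builds, plain Array indexing still goes through bounds checks and unoptimized generics, so a large Debug/Release gap may remain; this mainly removes the layout assumption.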
Post marked as solved
1 Reply
150 Views
Hi, I need to remove this performance widget. I have not added any code for it, and it shows up only after deploying to my phone. Could you please assist? Thanks!
Posted by
Post not yet marked as solved
2 Replies
234 Views
Hi! I watched the WWDC 2019 Optimizing App Launch video and can't see the Lifecycle phases when I profile my app. I'm using Xcode 15.2 with Instruments 15.2, and SwiftUI as the UI framework. Here is a screenshot of what I get. Is there another tool or another way to get this information? Thanks! Alfonso.
Posted by
Post not yet marked as solved
3 Replies
246 Views
I'm implementing a bitonic sort in Metal with a Swift app. This requires hundreds of kernel dispatch calls, one for each of the swap stages, each touching the whole array; the work required of the GPU is small. I haven't been able to get this to run fast enough from Swift, and it seems to be due to a high overhead for each dispatch command. I rewrote the test program in Objective-C with a super-simple kernel function and it runs 25x faster from Objective-C!
Kernel function:
```metal
kernel void fill(device uint8_t *array [[buffer(0)]],
                 const device uint32_t &N [[buffer(1)]],
                 const device uint8_t &value [[buffer(2)]],
                 uint i [[thread_position_in_grid]])
{
    if (i < N) {
        array[i] = value;
    }
}
```
The Swift code is:
```swift
func fill(pso: MTLComputePipelineState, buffer: MTLBuffer, N: Int, passes: Int) {
    guard let commandBuffer = commandQueue.makeCommandBuffer() else { return }
    let gridSize = MTLSizeMake(N, 1, 1)
    var threadGroupSize = pso.maxTotalThreadsPerThreadgroup
    if threadGroupSize > N {
        threadGroupSize = N
    }
    let threadgroupSize = MTLSizeMake(threadGroupSize, 1, 1)
    for pass in 0..<passes {
        guard let computeEncoder = commandBuffer.makeComputeCommandEncoder() else { return }
        var value: UInt8 = UInt8(pass)
        var NN: UInt32 = UInt32(N)
        computeEncoder.setComputePipelineState(pso)
        computeEncoder.setBuffer(buffer, offset: 0, index: 0)
        computeEncoder.setBytes(&NN, length: MemoryLayout<UInt32>.size, index: 1)
        computeEncoder.setBytes(&value, length: MemoryLayout<UInt8>.size, index: 2)
        computeEncoder.dispatchThreadgroups(gridSize, threadsPerThreadgroup: threadgroupSize)
        computeEncoder.endEncoding()
    }
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
}

let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let commandQueue = device.makeCommandQueue()!
let funcFill = library.makeFunction(name: "fill")!
let pso = try? device.makeComputePipelineState(function: funcFill)
var N = 16384
let passes = 100
let buffer = device.makeBuffer(length: N, options: [.storageModePrivate])!
for _ in 1...10 {
    let startTime = DispatchTime.now()
    fill(pso: pso!, buffer: buffer, N: N, passes: passes)
    let endTime = DispatchTime.now()
    let elapsedTime = endTime.uptimeNanoseconds - startTime.uptimeNanoseconds
    print("Elapsed time:", Float(elapsedTime)/1_000_000, "ms")
}
```
and the Objective-C code (which should be almost identical) is:
```objc
void fill(id<MTLCommandQueue> commandQueue, id<MTLComputePipelineState> funcPSO,
          id<MTLBuffer> A, uint32_t N, int passes)
{
    id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
    MTLSize gridSize = MTLSizeMake(N, 1, 1);
    NSUInteger threadGroupSize = funcPSO.maxTotalThreadsPerThreadgroup;
    if (threadGroupSize > N) {
        threadGroupSize = N;
    }
    MTLSize threadgroupSize = MTLSizeMake(threadGroupSize, 1, 1);
    for (uint8_t pass = 0; pass < passes; pass++) {
        id<MTLComputeCommandEncoder> computeEncoder = [commandBuffer computeCommandEncoder];
        [computeEncoder setComputePipelineState:funcPSO];
        [computeEncoder setBuffer:A offset:0 atIndex:0];
        [computeEncoder setBytes:&N length:sizeof(uint32_t) atIndex:1];
        [computeEncoder setBytes:&pass length:sizeof(uint8_t) atIndex:2];
        [computeEncoder dispatchThreads:gridSize threadsPerThreadgroup:threadgroupSize];
        [computeEncoder endEncoding];
    }
    [commandBuffer commit];
    [commandBuffer waitUntilCompleted];
}

int main()
{
    NSError *error;
    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    id<MTLLibrary> library = [device newDefaultLibrary];
    id<MTLCommandQueue> commandQueue = [device newCommandQueue];
    id<MTLFunction> funcFill = [library newFunctionWithName:@"fill"];
    id<MTLComputePipelineState> pso = [device newComputePipelineStateWithFunction:funcFill error:&error];
    // Prepare data
    int N = 16384;
    int passes = 100;
    id<MTLBuffer> bufferA = [device newBufferWithLength:N options:MTLResourceStorageModePrivate];
    for (int it = 1; it <= 10; it++) {
        CFTimeInterval startTime = CFAbsoluteTimeGetCurrent();
        fill(commandQueue, pso, bufferA, N, passes);
        CFTimeInterval duration = CFAbsoluteTimeGetCurrent() - startTime;
        NSLog(@"Elapsed time: %.1f ms", 1000*duration);
    }
}
```
The Swift output is:
```
Elapsed time: 89.35556 ms
Elapsed time: 63.243744 ms
Elapsed time: 62.39568 ms
Elapsed time: 62.183224 ms
Elapsed time: 63.741913 ms
Elapsed time: 63.59463 ms
Elapsed time: 62.378654 ms
Elapsed time: 61.746098 ms
Elapsed time: 61.530384 ms
Elapsed time: 60.88774 ms
```
The Objective-C output is:
```
2024-04-18 19:27:45.704 compute_test[3489:92754] Elapsed time: 3.6 ms
2024-04-18 19:27:45.706 compute_test[3489:92754] Elapsed time: 2.6 ms
2024-04-18 19:27:45.709 compute_test[3489:92754] Elapsed time: 2.6 ms
2024-04-18 19:27:45.712 compute_test[3489:92754] Elapsed time: 2.6 ms
2024-04-18 19:27:45.714 compute_test[3489:92754] Elapsed time: 2.7 ms
2024-04-18 19:27:45.717 compute_test[3489:92754] Elapsed time: 2.8 ms
2024-04-18 19:27:45.720 compute_test[3489:92754] Elapsed time: 2.8 ms
2024-04-18 19:27:45.723 compute_test[3489:92754] Elapsed time: 2.7 ms
2024-04-18 19:27:45.726 compute_test[3489:92754] Elapsed time: 2.5 ms
2024-04-18 19:27:45.728 compute_test[3489:92754] Elapsed time: 2.5 ms
```
I compile the Swift code for Release, optimised for speed. I can't believe there should be a difference here, so what could be different, and what might I be doing wrong? Thanks, Adrian
Posted by
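One concrete difference between the two listings worth ruling out: the Swift version calls dispatchThreadgroups(_:threadsPerThreadgroup:) with gridSize, which Metal interprets as a count of threadgroups, so each pass launches roughly N x threadGroupSize threads; the Objective-C version calls dispatchThreads:, which interprets the same MTLSize as the total thread count. A sketch of the Swift encode step brought in line with the Objective-C one (same variables as the posted fill function):
```swift
// Inside the posted Swift fill(pso:buffer:N:passes:) loop:
// dispatchThreads takes the TOTAL number of threads, matching the
// Objective-C dispatchThreads: call above. dispatchThreadgroups takes
// the number of threadGROUPS, so passing gridSize there launches far
// more threads per pass.
computeEncoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadgroupSize)
```
dispatchThreads(_:threadsPerThreadgroup:) requires a GPU with non-uniform threadgroup support, which A11-and-later and Apple silicon devices provide.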
Post not yet marked as solved
0 Replies
227 Views
I asked this on StackOverflow too, but did not get a response. Copying verbatim (images might not work as expected). Short question: which instructions, other than floating point arithmetic instructions like fmul, fadd, fdiv, etc., are counted under the hardware event INST_SIMD_ALU in Xcode Instruments? Alternatively, how can I count the number of floating point operations in a program using CPU Counters? I want to measure/estimate the FLOP count of my program and thought that CPU Counters might be a good tool for this. The closest hardware event mnemonic that I could find is INST_SIMD_ALU, whose description reads:
Retired non-load/store Advanced SIMD and FP unit instructions
So, as a sanity check I wrote a tiny Swift program with an ostensibly predictable FLOP count.
```swift
let iterCount = 1_000_000_000
var x = 3.1415926
let a = 2.3e1
let ainv = 1 / a // avoid inf
for _ in 1...iterCount {
    x *= a
    x += 1.0
    x -= 6.1
    x *= ainv
}
```
So, I expect there to be around 4 * iterCount = 4e9 FLOPs. But, on running this under CPU Counters with the event INST_SIMD_ALU, I get a count of 5e9, one extra count per loop iteration than expected. (See the screenshot; dumbLoop is the name of the function I wrapped the code in.) Here is the assembly for the loop:
```
+0x3c fmul d0, d0, d1
+0x40 fadd d0, d0, d2
+0x44 fmov d4, x10
+0x48 fadd d0, d0, d4
+0x4c fmul d0, d0, d3
+0x50 subs x9, x9, #0x1
+0x54 b.ne "specialized dumbLoop(_:initialValue:)+0x3c"
```
Since it counts non-load/store instructions, it shouldn't be counting fmov and b.ne. That leaves subs, an integer subtraction instruction used for decrementing the loop counter. So, I ran two more "tests" to see if the one extra count comes from subs. On running again with the hardware event INST_INT_ALU, I found a count of one billion, which matches the number of loop decrements. Just to be sure, I unrolled the loop by a factor of 4, so that the number of loop decrements drops from one billion to 250 million.
```swift
let iterCount = 1_000_000_000
var x = 3.1415926
let a = 2.3e1
let ainv = 1 / a // avoid inf
let n = Int(iterCount / 4)
for _ in 1...n {
    x *= a; x += 1.0; x -= 6.1; x *= ainv
    x *= a; x += 1.0; x -= 6.1; x *= ainv
    x *= a; x += 1.0; x -= 6.1; x *= ainv
    x *= a; x += 1.0; x -= 6.1; x *= ainv
}
print(x)
```
And it adds up: around 250 million integer ALU instructions, and a total ALU instruction count of 4.23 billion, somewhat short of the expected 4.25 billion. So, at the moment, if I want to count the FLOPs in my program, one estimate I can use is INST_SIMD_ALU - INST_INT_ALU. But is this description complete, or are there other instructions that I might spuriously count as floating point operations? Is there a better way to count the number of FLOPs?
Posted by cku
Post not yet marked as solved
1 Reply
260 Views
I'm developing an iOS app that displays store locations on a map using Apple Maps (MapKit). I've limited the number of icons that can be displayed on the map to 100, but there are still huge performance issues and the app is very laggy, even on modern iPhone models. What's the best practice when displaying a large number of icons on a map? Should the icons be in PNG format with a small resolution (~10 KB), or should the icons be vector (SVG) for best performance? Should I use the SwiftUI MapKit framework for iOS 17 or the UIKit approach?
Posted by
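With the UIKit/MKMapView approach, annotation view reuse and clustering are the usual first levers for this kind of lag, independent of the icon file format: views get recycled instead of rebuilt, and overlapping pins collapse into one. A minimal sketch (reuse identifiers hypothetical):
```swift
import MapKit

final class StoreMapDelegate: NSObject, MKMapViewDelegate {
    func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
        guard !(annotation is MKUserLocation) else { return nil }
        let identifier = "storePin" // hypothetical reuse identifier
        let view = mapView.dequeueReusableAnnotationView(withIdentifier: identifier)
            ?? MKMarkerAnnotationView(annotation: annotation, reuseIdentifier: identifier)
        view.annotation = annotation
        // Clustering collapses overlapping pins into a single view,
        // sharply reducing the number of live annotation views.
        view.clusteringIdentifier = "storeCluster"
        return view
    }
}
```
On the image question itself: annotation images end up rasterized for display either way, so a small bitmap pre-sized to the displayed dimensions generally avoids per-frame scaling work that an oversized asset would incur.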
Post not yet marked as solved
1 Reply
262 Views
Hi everyone, our app runs on iOS deployment target 15.0. As per the WWDC 2022 session, page-in linking should be enabled for apps where chained fixups are enabled. Since chained fixups are enabled from iOS deployment target 13.4+, which is true for our app, will page-in linking be applied to it? Is there some flag we can use to confirm whether or not it is applied to our app? We have tried using the dyld_info tool but could not find a command that provides this information.
Posted by
Post not yet marked as solved
0 Replies
358 Views
Hello everyone, I've been trying to improve the performance of a grid view that I've made for an app. Basically, it's like one of those sensory boards: there are circles that, when dragged over, change color and play a little haptic feedback. However, I want the grid to span the entire screen, and when I increase the dimensions of the grid to, say, 30x30, I notice significant performance decreases. CPU usage increases to 99%, and the haptic feedback and animation slow down. I've narrowed the problem down to the drag gesture (not the haptics); the drag gesture alone pushes CPU usage to about 40%, and the part where I verify whether the drag location is within the bounds of any of the circles increases it further. This is O(n), but with ~900 grid points it doesn't sound like it should be that bad? Is there any way I can improve the code's performance? I've tried putting each row of the grid into a Group, and also tried switching to UIKit and using CAReplicatorLayers to construct the grid, but ran into a wall when I found out you can't do hit testing on those layers.
```swift
struct SimpleGrid<H: HapticPlaying>: View {
    @EnvironmentObject private var hapticEngine: H
    @State private var touchedGridPoints: Set<GridPoint> = Set<GridPoint>()
    @State private var hapticDotData: Set<HapticDotPreferenceData> = Set<HapticDotPreferenceData>()
    @State private var gridScale: CGSize = .zero
    private var gridDim: (Int, Int) = (30, 30) // (row, column)

    var body: some View {
        GeometryReader { viewGeo in
            VStack(spacing: 0) {
                ForEach(0..<gridDim.0, id: \.self) { row in
                    Group {
                        HStack(spacing: 0) {
                            ForEach(0..<gridDim.1, id: \.self) { column in
                                HapticDot(size: determineIdealDotSize(viewGeo: viewGeo, defaultDotSize: 10, gridDim: gridDim))
                                    .padding(.all, 5)
                                    .foregroundStyle(
                                        touchedGridPoints.contains(GridPoint(x: row, y: column))
                                            ? Color.random
                                            : Color.primary
                                    )
                                    .opacity(touchedGridPoints.contains(GridPoint(x: row, y: column)) ? 0.5 : 1.0)
                                    .background(
                                        // Use a geometry reader to get this dot's
                                        // location in the view.
                                        GeometryReader { dotGeo in
                                            Rectangle()
                                                .fill(.clear)
                                                .updateHapticDotPreferenceData(Set([
                                                    HapticDotPreferenceData(
                                                        gridPoint: GridPoint(x: row, y: column),
                                                        bounds: dotGeo.frame(in: .global)
                                                    )
                                                ]))
                                        }
                                    )
                            }
                        }
                    }
                }
            }
            .scaleEffect(gridScale, anchor: .center)
            .onAppear {
                withAnimation(.spring(duration: 0.6, bounce: 0.4)) {
                    gridScale = CGSize(width: 1.0, height: 1.0)
                }
            }
            .frame(width: viewGeo.size.width, height: viewGeo.size.height, alignment: .center)
        }
        // This PreferenceKey allows us to monitor the location and index
        // of each HapticDot and do stuff with that information.
        .onPreferenceChange(HapticDotPreferenceKey.self) { value in
            hapticDotData = value
        }
        .gesture(
            DragGesture(minimumDistance: 0, coordinateSpace: .global)
                .onChanged { dragValue in
                    hapticEngine.asyncPlayHaptic(intensity: 1.0, sharpness: 1.0)
                    if let touchedDotData = hapticDotData.first(where: { $0.bounds.contains(dragValue.location) }) {
                        // Don't perform the animation if this haptic dot
                        // is still in touchedGridPoints.
                        if !touchedGridPoints.contains(touchedDotData.gridPoint) {
                            withAnimation(.linear(duration: 0.5)) {
                                let _ = touchedGridPoints.insert(touchedDotData.gridPoint)
                            }
                            DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
                                // There is a bug here, more visible when the animation
                                // duration is long: when the point is removed, the colors
                                // for the dots still in touchedGridPoints get recalculated,
                                // so they change colors every time one gets removed.
                                withAnimation {
                                    _ = touchedGridPoints.remove(touchedDotData.gridPoint)
                                }
                            }
                        }
                    }
                }
        )
    }

    private func determineIdealDotSize(viewGeo: GeometryProxy, defaultDotSize: CGFloat, gridDim: (Int, Int)) -> CGFloat {
        let idealWidth = min(defaultDotSize, (viewGeo.size.width - (CGFloat(gridDim.1) * 5 * 2)) / CGFloat(gridDim.1))
        let idealHeight = min(defaultDotSize, (viewGeo.size.height - (CGFloat(gridDim.0) * 5 * 2)) / CGFloat(gridDim.0))
        let idealSize = max(0, min(idealWidth, idealHeight))
        return idealSize
    }
}
```
Posted by
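Since the dots form a regular grid, the O(n) scan over hapticDotData can be replaced with O(1) arithmetic: divide the drag location by the cell size to get the row and column directly, with no per-dot GeometryReader or preference data at all. A sketch under the assumption that the grid's frame and uniform cell size are known (GridPoint as in the post; other names hypothetical):
```swift
import CoreGraphics

struct GridPoint: Hashable {
    let x: Int
    let y: Int
}

/// Maps a drag location to the containing cell in constant time.
/// `gridFrame` is the whole grid's frame in the gesture's coordinate
/// space; rows/columns match the post's 30x30 grid.
func gridPoint(at location: CGPoint, in gridFrame: CGRect, rows: Int, columns: Int) -> GridPoint? {
    guard gridFrame.contains(location) else { return nil }
    let column = Int((location.x - gridFrame.minX) / (gridFrame.width / CGFloat(columns)))
    let row = Int((location.y - gridFrame.minY) / (gridFrame.height / CGFloat(rows)))
    // Clamp in case the touch lands exactly on the trailing edge.
    return GridPoint(x: min(row, rows - 1), y: min(column, columns - 1))
}
```
This would also eliminate the 900 GeometryReader backgrounds, which are themselves a significant per-frame cost at 30x30.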
Post not yet marked as solved
1 Reply
505 Views
What's the best way in Instruments to measure the amount of time spent on large memory copies, for a very simple example, when directly calling memcpy? Memory copying does not show up in the Time Profiler; it's not a VM cache miss or zeroing event, so it doesn't show there; it doesn't (as far as I can tell) show up in the System Trace; and there aren't any other obvious choices.
Posted by
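One workaround for the gap described above is to bracket the copies with os_signpost intervals, which show up as measurable regions in the os_signpost / Points of Interest instrument. A sketch with hypothetical subsystem and interval names:
```swift
import Foundation
import os.signpost

let copyLog = OSLog(subsystem: "com.example.app", category: "MemoryCopies") // hypothetical
let copyID = OSSignpostID(log: copyLog)

func timedCopy(dst: UnsafeMutableRawPointer, src: UnsafeRawPointer, count: Int) {
    // Each begin/end pair becomes an interval whose duration Instruments
    // can aggregate, giving total time spent in these copies.
    os_signpost(.begin, log: copyLog, name: "memcpy", signpostID: copyID)
    memcpy(dst, src, count)
    os_signpost(.end, log: copyLog, name: "memcpy", signpostID: copyID)
}
```
This only measures copies you can instrument yourself; implicit copies inside frameworks remain invisible to it.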
Post not yet marked as solved
1 Reply
503 Views
Hello everyone, I need your help with a performance issue I'm encountering in my app. I'm learning app development in SwiftUI, and I built a simple budgeting app based on the 50/30/20 rule, which consists of dividing your expenses into "Needs", "Wants", and "Savings and debts". The main objects of my app are months and transactions, each month containing an array of associated transactions. My app shows graphs for the current month; for example, it can show a pie chart representing the expenses of different categories. Now, in order to create these graphs, I have to compute some numbers from the month, for example the percentage of the budget that has been spent. For now, I did this by adding various computed variables to my MonthBudget model, which is the model containing the transactions. It looks something like this:
```swift
@Model
final class MonthBudget {
    @Relationship(deleteRule: .cascade) var transactions: [Transaction]? = []
    let identifier: UUID = UUID()
    var monthlyBudget: Double = 0
    var needsBudgetRepartition: Double = 50
    var wantsBudgetRepartition: Double = 30
    var savingsDebtsBudgetRepartition: Double = 20
    // Definition of other variables...
}

extension MonthBudget {
    @Transient var totalSpent: Double {
        spentNeedsBudget + spentWantsBudget + spentSavingsDebtsBudget
    }
    @Transient var remaining: Double {
        totalAvailableFunds + totalSpent
    }
    // Negative amount spent for a specific category (ex: -250)
    @Transient var spentNeedsBudget: Double {
        transactions!.filter { $0.category == .needs && $0.amount < 0 }.reduce(0, { $0 + $1.amount })
    }
    @Transient var spentWantsBudget: Double {
        transactions!.filter { $0.category == .wants && $0.amount < 0 }.reduce(0, { $0 + $1.amount })
    }
    @Transient var spentSavingsDebtsBudget: Double {
        transactions!.filter { $0.category == .savingsDebts && $0.amount < 0 }.reduce(0, { $0 + $1.amount })
    }
    // Definition of multiple other computed properties...
}
```
Now, this approach worked fine when I only had one or two months with a few transactions in memory. But now that I'm actually using the app, I see serious performance issues (most notably hangs/freezes) whenever I try to display a graph. I used Instruments to inspect what was going wrong, and I saw that the hangs happened mostly when getting the value of these variables, meaning that the actual computing was taking too long. I'm therefore trying to find a more efficient way of getting this information (totalSpent, spentNeedsBudget, etc.). Is there a common practice that would help with these performance issues? I thought about caching the last computed values (persisting them with SwiftData), and using a function that re-computes and persists all of these properties whenever a transaction is added or removed. But this has multiple cons:
- I'd have to call the function that re-computes the properties and stores them each time I delete or add a transaction, losing the very benefit of using computed properties.
- It could be a bad idea: if there's some sort of bug with SwiftData and one or more transactions are added or deleted without the user actually doing anything, there could be a mismatch between the persisted value and the actual value.
Did any of you face the same issue, and if so, how did you solve it? Any idea is appreciated! Thanks for reading so far, Louis.
Posted by
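Before caching anything, it may be worth collapsing the repeated filter-plus-reduce passes from the post above into a single loop, so each graph refresh walks the transactions array once instead of once per category. A sketch reusing the post's names (Transaction's category and amount are assumed to be as shown; Totals is a hypothetical helper):
```swift
struct Totals {
    var needs: Double = 0
    var wants: Double = 0
    var savingsDebts: Double = 0
    var total: Double { needs + wants + savingsDebts }
}

extension MonthBudget {
    /// One pass over the transactions instead of three filter+reduce passes.
    func spentTotals() -> Totals {
        var totals = Totals()
        for t in transactions ?? [] where t.amount < 0 {
            switch t.category {
            case .needs:        totals.needs += t.amount
            case .wants:        totals.wants += t.amount
            case .savingsDebts: totals.savingsDebts += t.amount
            default:            break // any other categories are ignored here
            }
        }
        return totals
    }
}
```
Calling this once per graph refresh and passing the result down also avoids re-evaluating each computed property several times per render, which is often where SwiftUI hangs of this kind originate.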
Post not yet marked as solved
1 Reply
611 Views
Hello. When I try to profile a frame capture of an app running on iPad, I get the following error:
Unexpected Replayer Termination
Abort trap: 6
This is my configuration:
macOS 14.0
Xcode 15.0
iPad Pro (10.5-inch) - A10X
iPadOS 17.2
Is this expected? Is there a way to fix the issue? Thanks
Posted by
Post not yet marked as solved
0 Replies
327 Views
Hi, team. Our team is planning to address our app's performance issues. So far I've found the peer group benchmark for crashes, but couldn't find one for hang rates. Can I get this data from App Store Connect, or is the team planning to provide it in the near future? Thank you for reading.
Posted by