Instruments

Instruments is a performance-analysis and testing tool for iOS, iPadOS, watchOS, tvOS, and macOS apps.

Instruments Documentation

Posts under the Instruments subtopic


RealityKit Metrics - clarification needed on 3D render vertex and triangle measurement attributes
What exactly is included/calculated in the following metrics within RealityKit Trace - RealityKit Metrics - 3D render attributes?
- 3D Mesh Triangles
- Total Triangles Submitted
- 3D Mesh Vertices
- Total Vertices Submitted
And, as the second part of the question, what differentiates:
- 3D Mesh Triangles vs. Total Triangles Submitted
- 3D Mesh Vertices vs. Total Vertices Submitted
Replies: 0 · Boosts: 0 · Views: 547 · Oct ’24

Hi. This “Nearby Devices” control is very limited. I’ll tell you why:
First of all, you can't unlock the device, even if you know the password. That's one thing. Then, you can't use your main device to navigate through the device you are controlling. You basically can't do shit, to be honest. And Siri is in a bad mood these days. That's what I have to say to you, my friends. I love you all. Kisses. Fix this crap. And "Eye Tracking" is buggy too. It doesn't work properly.
Replies: 0 · Boosts: 0 · Views: 522 · Oct ’24

Problem with pods in a React Native app
Hello, I develop an app in pure React Native, which under the hood uses pods to install dependencies. With some dependencies I get the following error, but only on a real device (on the simulator everything is fine): "dyld[53510]: Symbol not found: __ZN5swift39swift51override_conformsToSwiftProtocolEPKNS_14TargetMetadataINS_9InProcessEEEPKNS_24TargetProtocolDescriptorIS1_EEN7__swift9__runtime4llvm9StringRefEPFPKNS_35TargetProtocolConformanceDescriptorIS1_EES4_S8_SC_E Referenced from: <4A3492BF-0479-3124-BE58-05BAED71BB20> /private/var/containers/Bundle/Application/0D9FDF5C-BBC9-4060-972B-B2D6FD91E321/appName/Frameworks/pathToPod Expected in: <0549B906-CB15-3735-AA15-FAEB5F687C8B> /private/var/containers/Bundle/Application/0D9FDF5C-BBC9-4060-972B-B2D6FD91E321/appName/Frameworks/pathToPod" Is anyone else having the same problem, or does anyone have ideas on how to fix it?
Replies: 0 · Boosts: 0 · Views: 513 · Oct ’24

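A possible first diagnostic step for the dyld error above, assuming access to the framework from the device build (the MyPod path is a placeholder): list the pod binary's undefined and defined exported symbols and demangle them, to see on which side the runtime-compatibility symbol actually lives.

    # Symbols the binary expects to find elsewhere (undefined):
    nm -gu MyPod.framework/MyPod | xcrun swift-demangle | grep -i conformsToSwiftProtocol
    # Symbols the binary itself exports (defined):
    nm -gU MyPod.framework/MyPod | xcrun swift-demangle | grep -i conformsToSwiftProtocol
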
How to make the Instruments App Launch template wait for a process to launch, rather than launching it actively
Hi there, how can I make the Instruments App Launch template wait for a process to launch, rather than actively launching it? I need to profile app-launch performance when the app is started by tapping a push notification or cold-launched via a URL link. How can I make Instruments wait for the process and collect the data? Currently I tried the command xctrace record --template "App Launch" --attach MyApp --device-name 'Phone-Dev' --output mytrace.trace, but it soon failed with 'Cannot find process matching name: MyApp'. How can I make it work?
Replies: 0 · Boosts: 0 · Views: 461 · Jan ’25

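One possible workaround for the attach question above — a sketch, not an official xctrace feature: keep retrying the attach until the process exists, and launch the app manually (tap the notification or URL) once the loop is running. Note that --attach only connects after the process is up, so the earliest launch samples may still be missed.

    # Retry attaching until MyApp is running; launch the app manually meanwhile.
    until xcrun xctrace record --template "App Launch" \
        --attach MyApp --device-name 'Phone-Dev' --output mytrace.trace; do
      sleep 0.2  # brief pause before the next attach attempt
    done
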
Instruments showing incorrect values
Hello, I’m encountering an issue with the Instruments app while running a benchmark on an M2 Ultra Mac Studio. Despite being certain that GPU activities involving memory read and write operations are occurring, all related performance counters consistently return 0. Interestingly, this problem does not occur when using the same code on an M1 MacBook Air, where the counters behave as expected. What could be causing this discrepancy? Any insights or suggestions would be greatly appreciated. Thank you!
Replies: 0 · Boosts: 0 · Views: 411 · Jan ’25

Configuring "high frequency sampling"
Hey folks, we are looking for a way to increase the sampling frequency beyond what's currently called "high frequency sampling" for the CPU Profiler (or Time Profiler -- it doesn't really matter for us). We are aware that this is not offered through the UI, but we're wondering if we can somehow experimentally enable it via the .tracetemplate (plist). Basically, we see that samplingRate exists in the plist but don't see it having an effect on the actual runs. The resulting trace file always lists sample-rate-micro-seconds="1000" for the data table, e.g. <table trigger="time" pmc-events="Cycles" target-pid="SINGLE" schema="counters-profile" needs-kernel-callstack="0" sample-rate-micro-seconds="1000"/> Cheers
Replies: 0 · Boosts: 0 · Views: 270 · Feb ’25

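For the sampling-rate question above, one way to at least verify what an edited template contains (the template is a property list, per the post; whether Instruments honors values below 1000 µs appears undocumented, and the MyTemplate name is a placeholder):

    # Dump the template as a plist and look for the sampling-rate setting.
    plutil -p MyTemplate.tracetemplate | grep -i samplingRate
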
Understanding power usage on a macOS app
It looks like, for some reason, our apps are sometimes using a lot of power. sysdiagnose has this in the power log: Never mind. Including the output of sysdiagnose has "sensitive language," and it won't tell me what is sensitive, making this a waste of my time. ETA: OK, I can attach the file: power.log. I've gone through the energy documentation, but it seems geared toward embedded platforms, not macOS, so I'm not sure how to investigate this further. The extra problem, of course, is that we have a network extension, two daemons, and a GUI app. 😄
Replies: 0 · Boosts: 0 · Views: 282 · Mar ’25

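One complementary data source for the multi-process power question above: on macOS, powermetrics can report per-process CPU usage and energy impact, which may help split the blame between the extension, the daemons, and the GUI app. A minimal sketch:

    # Sample per-process energy impact five times at 1-second intervals.
    sudo powermetrics --samplers tasks -i 1000 -n 5
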
Recording WKWebView
I am trying to record the requests and responses in a WKWebView, but Instruments does not seem to record them. Is this to be expected? The web view is set to inspectable, and I am using the HTTP Traffic instrument. All the requests the app itself makes are recorded, but neither the original request for the web view nor any subsequent traffic is recorded. When I use Safari to inspect the web view, all I see is the last page (even when I start the inspector before the first request is made). How can I see these requests?
Replies: 0 · Boosts: 0 · Views: 89 · May ’25

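For reference on the web-view question above, the inspectable setup the poster describes looks roughly like this (a sketch of the configuration, not a fix). One plausible explanation for the missing traffic is that WKWebView performs its loads in a separate WebKit networking process, so an instrument scoped to the app's own process would not see them.

    import WebKit

    let webView = WKWebView(frame: .zero)
    if #available(iOS 16.4, macOS 13.3, *) {
        // Required for Safari Web Inspector (and related tooling) to attach.
        webView.isInspectable = true
    }
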
trouble with MDLMesh.newBox()
I made a box with MDLMesh.newBox() and added normals:

    let mdlMesh = MDLMesh.newBox(withDimensions: SIMD3<Float>(1, 1, 1),
                                 segments: SIMD3<UInt32>(2, 2, 2),
                                 geometryType: MDLGeometryType.triangles,
                                 inwardNormals: false,
                                 allocator: allocator)
    mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.25)

After I convert to MTKMesh, the normals are (0, 0, 0) for a group of vertices. I can only inspect the geometry after converting to MTKMesh. Is there a way to use the Geometry Viewer on an MDLMesh?
Replies: 0 · Boosts: 0 · Views: 57 · May ’25

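For the inspection problem above, the normal data can be read off the MDLMesh itself, before any MTKMesh conversion, via vertexAttributeData(forAttributeNamed:). A sketch, assuming the mdlMesh from the post and packed float normals:

    import ModelIO

    // Walk the normal attribute in place; stride accounts for interleaving.
    if let attr = mdlMesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributeNormal) {
        for i in 0..<mdlMesh.vertexCount {
            let n = attr.dataStart.advanced(by: i * attr.stride)
                        .assumingMemoryBound(to: Float.self)
            print("normal[\(i)] = (\(n[0]), \(n[1]), \(n[2]))")
        }
    }
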
CoreML memory allocation logic
Hello, I have a question about Core ML. I load a Core ML model in my project and set the compute units to CPU+GPU. When I used Instruments to analyze performance, I found that there is a "prepare GPU request" overhead before each inference. I also checked the freezing point graph and found that memory was frequently being allocated. Is this expected? Is there any way to avoid the frequent prepares? I have tried some approaches, such as sharing memory for the predict interface's input parameters, but they seem to be ineffective.
Replies: 0 · Boosts: 0 · Views: 72 · May ’25

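Regarding the allocation pattern above, one technique that sometimes reduces per-inference allocations is creating the input backing once and reusing it across predictions; whether it also avoids the GPU prepare step is a separate question. A sketch — the model URL, the input name "input", and the shape are all placeholders:

    import CoreML

    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU
    let modelURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc")  // placeholder
    let model = try MLModel(contentsOf: modelURL, configuration: config)

    // Allocate the input once; every prediction reuses the same backing store.
    let input = try MLMultiArray(shape: [1, 3, 224, 224], dataType: .float32)
    let provider = try MLDictionaryFeatureProvider(
        dictionary: ["input": MLFeatureValue(multiArray: input)])
    for _ in 0..<10 {
        _ = try model.prediction(from: provider)
    }
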
Request for clarification / Documentation Feedback
Dear Apple Developer Support team, I would like to request official confirmation regarding the handling of transaction status in the App Store Server API, specifically for the GET /inApps/v1/transactions/{transactionId} endpoint. As per our current understanding of the official documentation (Get Transaction Info), the API's behavior appears to be:
- If a transaction is finalized and successfully processed by the App Store, querying this API returns HTTP 200 OK along with the transaction details.
- If a transaction is still in a pending or deferred state (such as awaiting Ask to Buy approval or pending authorization), the API will not return a 200 and instead responds with HTTP 404 Not Found or an appropriate error.
Could you please confirm whether this behavior is accurate and officially supported? Specifically:
- Does a 200 OK response guarantee that a transaction is finalized and successfully recorded on App Store servers?
- In cases where a transaction is pending approval (e.g. Ask to Buy), is it correct that GET /transactions/{transactionId} returns 404 Not Found until the transaction is finalized?
We would greatly appreciate your confirmation so we can align our server-side transaction-validation logic accordingly. Thank you very much for your support! Kind regards, cuongnx
Replies: 0 · Boosts: 0 · Views: 107 · Jun ’25

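The behavior asked about above can also be probed empirically — a sketch of the production Get Transaction Info call, where the signed JWT and the transaction id are placeholders, and the exact error body returned for pending transactions should be confirmed against the official documentation:

    # Expect 200 with a signedTransactionInfo payload, or 404 for an unknown id.
    curl -s -w "HTTP %{http_code}\n" \
      -H "Authorization: Bearer ${JWT}" \
      "https://api.storekit.itunes.apple.com/inApps/v1/transactions/${TRANSACTION_ID}"
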
Datadog Mobile Vitals equivalent in Instruments
Hello. We use Datadog Mobile Vitals in our app, and I'm trying to run some tools in Instruments for comparison. I'm not sure which tool to use for some of these metrics:
- Slow Renders: views taking longer than 16 ms (one 60 Hz frame) to render. Instruments equivalent: Hangs, including microhangs (?)
- CPU ticks per second: RUM tracks CPU ticks per second for each view and the CPU utilization over the course of a session; the recommended range is <40 for good and <80 for moderate. Instruments equivalent: CPU Profiler (?)
- Frozen Frames: frames that take longer than 700 ms to render appear stuck and unresponsive in the application. Instruments equivalent: Hangs > 500 ms (?)
- Memory Utilization: the amount of physical memory used by the application in bytes for each view, over the course of a session; the recommended range is <200 MB for good and <400 MB for moderate. Instruments equivalent: Allocations (?)
Replies: 0 · Boosts: 0 · Views: 125 · Jul ’25

Xcode 26 - Create ML doesn't work
I tried using Create ML in Xcode 26.0 beta 7 to generate a model using the "Word Tagging" template, and I received the error: Training progress unavailable - Unexpected error. Using Create ML in Xcode 16.4 with the same documentation, I was able to build the model and use it in a test app. I'd like to understand why Create ML in Xcode 26 no longer works.
Replies: 0 · Boosts: 0 · Views: 80 · 2w

How to export Allocations report in XML (Call Tree format) with xctrace?
Hello Apple team, I am using xctrace to record an Allocations trace on iOS. For example: xctrace record --template "Allocations" --launch com.example.myapp --time-limit 30s --output alloc.trace After recording, I can export the results in Allocations List format (flat list of allocations) using: xcrun xctrace export --input ./alloc.trace --xpath '/trace-toc/run/tracks/track[@name="Allocations"]/details/detail[@name="Allocations List"]' --output ./alloc.xml This works fine and produces an XML output. However, what I really need is to export the data in Call Tree format (as shown in Instruments GUI). I checked xctrace export --help, but it seems that the Allocations template only supports the List view for export, not the Call Tree breakdown. My question is: 👉 Is there a way to export an Allocations trace in XML with Call Tree details using xctrace? 👉 If not, is there an API or recommended workflow to automate this instead of exporting manually from Instruments GUI? Thanks in advance for your help!
Replies: 0 · Boosts: 0 · Views: 130 · 1w

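Relevant to the export question above: the table of contents dump shows exactly which detail views a given trace exposes to xctrace export — if no Call Tree detail is listed under the Allocations track, it cannot be exported this way at all.

    # List every exportable run/track/detail node in the trace.
    xcrun xctrace export --input ./alloc.trace --toc
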
On-device hang detection with a threshold lower than 250 ms
There is a WWDC session about this cool instrument for detecting hangs on-device: https://developer.apple.com/videos/play/wwdc2022/10082/ I really like this tool; it's handy and gives useful stack traces, but the 250 ms threshold is practically too high for a testing environment. It would be better if there were an option like 100 ms. I think most people can see a scrolling hang lasting 100-250 ms, and it would be great to detect such hangs with this instrument. Is there a way to set a lower threshold, or could Apple consider adding one? Thank you!
Replies: 1 · Boosts: 0 · Views: 719 · Sep ’24

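As a complement to the on-device instrument discussed above, MetricKit delivers system-collected hang diagnostics from the field — a sketch; note that its hang threshold is also system-defined, so it does not answer the 100 ms request either:

    import MetricKit

    final class HangObserver: NSObject, MXMetricManagerSubscriber {
        func didReceive(_ payloads: [MXDiagnosticPayload]) {
            for payload in payloads {
                for hang in payload.hangDiagnostics ?? [] {
                    // hangDuration is a Measurement<UnitDuration>.
                    print("Hang lasted \(hang.hangDuration)")
                }
            }
        }
    }

    // Keep a strong reference to the subscriber for the app's lifetime.
    let observer = HangObserver()
    MXMetricManager.shared.add(observer)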