Post not yet marked as solved
I'm in the process of refactoring a humongous monolithic app with multiple extensions into a setup where we have a service library which is linked to each of the targets.
However, this causes a significant issue when the service library is referenced from the notification extension. That extension has a hard memory limit of 12MB on an iPhone 5s (which we still support), and our memory usage is at about 11MB, which is far too close for comfort. When memory usage surpasses the limit, the system automatically kills the notification extension.
Profiling with Instruments/Allocations has shown that a significant chunk of that memory is allocated (wasted, really) by the internal AVSecondScreenController.load() class initializer from the AVKit library.
My approach now is to figure out what causes AVKit to be loaded in the first place. I have no explicit imports of AVKit in the notification extension, and my attempts to find what triggers it from the service library have failed as well.
I did add a symbolic breakpoint for AVSecondScreenController.load() which is hit, but the backtrace is useless.
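One static check worth trying (a sketch; the binary paths below are placeholders for your own built products) is to ask otool which frameworks each binary links, since AVKit may be pulled in transitively by the service library rather than by the extension itself:

```shell
# List the load commands of each built binary and grep for AVKit.
# Paths are placeholders; point them at your built .appex and library.
EXTENSION_BINARY="Build/Products/Debug-iphoneos/My.appex/MyExtension"
SERVICE_LIBRARY="Build/Products/Debug-iphoneos/ServiceLibrary.framework/ServiceLibrary"

otool -L "$EXTENSION_BINARY" | grep -i 'AVKit'
otool -L "$SERVICE_LIBRARY" | grep -i 'AVKit'
```

If neither binary links AVKit directly, setting the DYLD_PRINT_LIBRARIES environment variable on the extension's scheme logs every image as dyld loads it, which can narrow down which dependency drags AVKit in.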
Does anyone have any idea how I can pinpoint what causes AVKit to be loaded in the first place?
Thanks!
Post not yet marked as solved
Hi all,
I'm facing a strange timeout problem in my application. I've opened another thread for that: [Receiving "The request timed out." when using NSURLSession](https://developer.apple.com/forums/thread/696762).
Based on the Xcode 13 Network instrument, when a network request times out, all the subsequent ones time out too.
Those requests all hit the same host/domain, which works over HTTP/1 (head-of-line blocking?). Other requests that do not involve this domain work correctly.
I've collected a trace but I cannot understand the meaning of "No Connection" and the red color of the box under inspection.
Are you able to clarify their meaning and when this scenario could happen?
Thanks,
Lorenzo
Hi all,
I've collected a trace in order to better understand a problem that occurs in my application.
After completing the trace I noticed a few flags above the timeline (see attachment).
Are you able to tell me the meaning of those flags? What events are they related to?
Thank you very much,
Lorenzo
Post not yet marked as solved
In the app we see a loading time of about 4 seconds at launch. Sometimes it goes up to 8 seconds, but the average is 4 seconds. What could be the reason?
There are no server calls in the AppDelegate.
A breakpoint inside the AppDelegate's didFinishLaunchingWithOptions takes 4 seconds to be hit.
I have some API calls, but they are launched after the breakpoint in the AppDelegate, and their responses come back quickly.
In Instruments I see some blocked time; is that normal?
Is something wrong in the log?
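When the delay happens before didFinishLaunchingWithOptions, much of it is often pre-main work (dylib loading, Objective-C class setup, static initializers). One quick check, as a sketch: add a dyld environment variable to the Run scheme. Note that newer OS versions may ignore this variable, in which case the App Launch template in Instruments is the alternative.

```
# Xcode: Product > Scheme > Edit Scheme… > Run > Arguments
#        > Environment Variables
DYLD_PRINT_STATISTICS = 1
# At launch, dyld prints a breakdown of pre-main time
# (dylib loading, rebase/binding, ObjC setup, initializers)
# to the console.
```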
Post not yet marked as solved
I can use Xcode 13.1 to install/run my tvOS app on an Apple TV device (tvOS 15.1.1); i.e., Xcode has paired with my Apple TV wirelessly. But the Instruments tool (13.0) can't detect my Apple TV and only shows my Mac and my iPhone (which is connected via cable). Does anyone know whether wireless instrumentation is supported? If not, is there an alternative hack? Thanks ahead.
Post not yet marked as solved
Hello,
I'm trying to analyse a hang in our app and asked the customer to record a ktrace ("sudo ktrace artrace") on his system. The problem is that the macOS symbols are missing when I open the trace with Instruments on my system. The customer is running macOS 11.6.1; I'm running macOS 12.0.1.
I asked the customer to do a
sudo ktrace symbolicate
but that does not help.
This is a very important workflow for me to analyse sluggishness and hangs on systems, any help is welcome! :-)
Post not yet marked as solved
The Xcode 13 release notes say I can export table data from traces containing the Allocations, Leaks, and VM Tracker instruments using xctrace export on the command line. But I still cannot see the table data when I use xctrace export --input ***.trace --toc. How can I export a given .trace, using a supplied query, to an XML file that can later be read and post-processed?
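As a sketch of the intended two-step flow (the trace filename and schema name here are illustrative; use the ones printed for your own trace): first dump the table of contents, then export a specific table by XPath.

```shell
# 1) Print the trace's table of contents, including each table's schema:
xcrun xctrace export --input MyRecording.trace --toc

# 2) Export one table, addressed by an XPath taken from the TOC, as XML:
xcrun xctrace export --input MyRecording.trace \
    --xpath '/trace-toc/run[1]/data/table[@schema="allocations-list"]' \
    --output allocations.xml
```

The resulting XML can then be parsed and post-processed with any XML tooling.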
Post not yet marked as solved
Is there a way to launch deferred recording mode using the CLI xctrace record?
Thanks in advance,
Post not yet marked as solved
The "xcrun xctrace record --device" command starts recording through Instruments and stores the result in a file, but sometimes it runs properly and sometimes says the following:
"Waiting for device to boot"
I have also tried the Xcode 13.1 CLT, checked on multiple devices, and changed cables as well.
I have an app that uses the Metal API. It worked well with Xcode 12, but after upgrading to Xcode 13.0 it always crashes at newFunctionWithName.
How I get my Metal library:
My app supports multiple render backends, so my shaders are .glsl; I use glslangValidator to convert them to .metal.
I then build a metallib (reference: Building a Library with Metal's Command-Line Tools).
I use dispatch_data_create to load the metallib, call newLibraryWithData to get an MTLLibrary, then call newFunctionWithName to get an MTLFunction; it crashes there.
Because it's my company's code, I'm sorry, but I can't share the code or a screenshot.
The same code worked on Xcode 11 and Xcode 12 but crashes with Xcode 13.0 when debugging from Xcode.
Xcode 13.0: it crashes under the debugger, but runs successfully standalone or under Instruments.
I have no clue what to do to solve this problem on Xcode 13.0; or is it a bug in Xcode 13.0? @Apple
PS: the problem above occurred on x86_64 Mac, macOS 11.5.
Post not yet marked as solved
Filter by ratings, like 1 star, 2 stars, or 3 stars.
Post not yet marked as solved
I'm using xcrun xctrace export --output results.xml --input test_trace.trace --xpath '//trace-toc[1]/run[1]/data[1]/table' to export data from a test run with Instruments as part of my app's CI. With Xcode 12 this resulted in an XML file that I could parse relatively quickly, but with Xcode 13 the export process itself takes 90+ seconds and generates a 160MB XML file for a 10-second recording.
I noticed the table that has grown is the time-sample schema. Just exporting this table with --xpath '//trace-toc[1]/run[1]/data[1]/table[4]' takes quite a while; it has about 790 thousand rows. I'm using a custom instrument based on the Time Profiler and still have about the same number of stack-trace samples in my output. Did anything change in Xcode 13 that causes Instruments to include many more time samples that don't correspond to a stack trace? Is it possible to disable this and keep fewer time samples in the trace (while preserving the stack-trace frequency) so the XML can be parsed more quickly?
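Until the export size is under control, one workaround (a sketch; it assumes the exported table's per-row element is named row, so adjust the tag to match your file) is to stream-parse the XML instead of loading it whole:

```python
# Stream-parse a large xctrace XML export without building the full DOM.
import xml.etree.ElementTree as ET

def count_rows(xml_path, row_tag="row"):
    """Count row elements, discarding each subtree as soon as it is parsed."""
    count = 0
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag == row_tag:
            count += 1
        elem.clear()  # keep memory roughly flat even for very large files
    return count
```

The same iterparse loop can extract fields from each row before clearing it, which keeps the CI step's memory and time closer to the Xcode 12 behavior.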
Post not yet marked as solved
When measuring "Game Performance" from Instruments in "Deferred" mode with a connected device, I'm seeing huge regions of data go missing. I'm trying to figure out why this might be the case, and what I can do to capture more reliable data.
Ideally I would like to capture 60 seconds of gameplay and analyze the performance, but very often only a fraction of the graph has data. The amount varies wildly, even on a very basic tutorial-level Metal application. Sometimes I'll get the full 60 seconds, other times only 5 seconds (with 55 seconds of empty space), and I'm always running with the same settings.
Any ideas? Is anyone else able to reliably get the full set of data?
For reference, I'm using an iPhone 13 Pro and an iPad Pro 11, so I don't think the devices are underpowered. I even tested on two different computers and got similar results.
Post not yet marked as solved
Instruments Core Animation FPS doesn't display 120. Why?
Post not yet marked as solved
Hi, we're profiling our iOS app with Xcode 13. We found there are cache miss counters like L1D_CACHE_MISS_LD and L1D_CACHE_MISS_ST. However, we couldn't find a counter for L1 data cache accesses, so we can't calculate the cache miss rate.
But we found there is an L1D_TLB_ACCESS counter. Can we assume that every TLB access corresponds to a data cache access, and use L1D_TLB_ACCESS as the L1 data cache access count?
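The arithmetic behind that approximation is simple; the caveat, and it is only an assumption, is the denominator, since TLB lookups do not have to map one-to-one onto L1D accesses:

```python
# Approximate L1D miss rate, under the assumption that each
# L1D_TLB_ACCESS event corresponds to exactly one L1 data cache access.
def estimated_l1d_miss_rate(miss_ld, miss_st, tlb_access):
    if tlb_access == 0:
        return 0.0
    return (miss_ld + miss_st) / tlb_access

# Hypothetical counter readings:
print(estimated_l1d_miss_rate(2_000, 1_000, 100_000))  # → 0.03
```

Comparing the estimate across two builds of the same workload is safer than treating the absolute number as a true miss rate.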
Post not yet marked as solved
Hello,
I'm attempting to automate some performance tests we currently do manually using signposts and Instruments.
It looks like XCTOSSignpostMetric is the perfect tool for the job, but I can't get it to play nicely. If I use the pre-defined signpost metric constants (customNavigationTransitionMetric, scrollDecelerationMetric, etc), it works fine. If I use a custom signpost using the
XCTOSSignpostMetric.init(subsystem: category: name:)
initializer, nothing happens.
The documentation is very sparse on this topic and Googling, Binging, Githubing and Twittering have come up empty.
I reduced the issue to the smallest example I could ( https://github.com/tspike/SignpostTest ).
What am I doing wrong?
Thanks,
Tres
Environment details:
macOS 11.6
Xcode 12.5.1
iOS 14.6
iPhone SE 1st Gen
In the app:
...
let signpostLog = OSLog(subsystem: "com.tspike.signpost", category: "signpost")
let signpostName: StaticString = "SignpostTest"
@main
struct SignpostTestApp: App {
    init() {
        os_signpost(.begin, log: signpostLog, name: signpostName)
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(500), execute: {
            os_signpost(.end, log: signpostLog, name: signpostName)
        })
    }
    ...
}
In the test
func testSignposts() throws {
    let app = XCUIApplication()

    // No performance data
    let metric = XCTOSSignpostMetric.init(subsystem: "com.tspike.signpost",
                                          category: "signpost",
                                          name: "SignpostTest")
    // Works as expected
    // let metric = XCTOSSignpostMetric.applicationLaunch

    let options = XCTMeasureOptions()
    options.iterationCount = 1
    measure(metrics: [metric], options: options, block: {
        app.launch()
    })
}
Post not yet marked as solved
Using Instruments to test the frame rate of an app on an iPhone 13 Pro, Core Animation FPS displays at most 60, not 120. Please tell me why. VSync shows 8ms, but the frame rate never shows 120. Why?
Post not yet marked as solved
Hi,
I'm having trouble updating the software on my iPad 6th gen. The reason is there is no Software Update button under Settings > General on my iPad, and when I try to update it using our computer and iTunes it also doesn't work. Can you please provide a solution? Thank you 😊🥺
Post not yet marked as solved
I wrote a simple NSMutableData test project and profiled it with the Allocations instrument. It shows that alloc1() has 55MB of total bytes.
But alloc1() is only called once, and the allocated size should be 1MB. I cannot find the reason for the 55MB of allocations in alloc1().
To reproduce, replace the ViewController code in a fresh macOS app project on Xcode 13 with this:
#import "ViewController.h"

@implementation ViewController {
    NSTimer *mTimer;
    NSMutableData *mData1;
    NSMutableData *mData2;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    mData1 = nil;
    mData2 = nil;
    mTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 target:self
                selector:@selector(timer_cb) userInfo:nil repeats:YES];
}

- (void)timer_cb {
    if (mData1 == nil) {
        [self alloc1];
    }
    if (mData2 == nil) {
        [self alloc2];
    }
    [self copy1];
}

- (void)alloc1 {
    NSLog(@"alloc1");
    mData1 = [NSMutableData dataWithCapacity:1024*1024];
}

- (void)alloc2 {
    NSLog(@"alloc2");
    mData2 = [NSMutableData dataWithCapacity:1024*1024];
    [mData2 resetBytesInRange:NSMakeRange(0, 1024*1024)];
}

- (void)copy1 {
    [mData1 replaceBytesInRange:NSMakeRange(0, 1024*1024) withBytes:mData2.bytes];
}

@end
Post not yet marked as solved
As per the Xcode 13 release notes:
To support the new JSON-format crash logs generated in macOS Monterey and iOS 15, Instruments includes a new CrashSymbolicator.py script. This Python 3 script replaces the symbolicatecrash utility for JSON-format logs and supports inlined frames with its default options. For more information, see: CrashSymbolicator.py --help. CrashSymbolicator.py is located in the Contents/SharedFrameworks/CoreSymbolicationDT.framework/Resources/ subdirectory within Xcode 13. (78891800)
usage: CrashSymbolicator.py [-h] [-d dSYM] [-s SEARCH_PATH] [-o OUTPUT_FILE] [-p] [-w N] [--no-inlines] [--no-source-info] [--only-missing] [--no-system-frameworks]
[--no-demangle] [-v]
LOGFILE
Symbolicate a crash log
python3 CrashSymbolicator.py -d PATH_TO_DSYMS -o PATH_TO_OUTPUT CRASH_LOG_FILE
When we run this command, we get the following error:
Traceback (most recent call last):
File "CrashSymbolicator.py", line 502, in
symbolicate(args)
File "CrashSymbolicator.py", line 482, in symbolicate
crash_log.write_to(args.output, args.pretty)
File "/Applications/Xcode.app/Contents/SharedFrameworks/CoreSymbolicationDT.framework/Versions/A/Resources/JSONCrashLog/JSONCrashLog.py", line 167, in write_to
ips_header_dictionary = vars(self.ips_header)
TypeError: vars() argument must have __dict__ attribute
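That final TypeError is a plain Python error: vars() only works on objects that have a __dict__, so self.ips_header is evidently something else at that point (for example None, a string, or a __slots__ object), presumably because the .ips file's header did not parse into the structure the script expects. A minimal reproduction of the error itself (the class names here are made up):

```python
# vars() requires an object with a __dict__; anything else raises
# the same TypeError that CrashSymbolicator.py is surfacing.
class HeaderWithDict:
    def __init__(self):
        self.os_version = "15.0"

class HeaderWithoutDict:
    __slots__ = ("os_version",)  # instances have no __dict__
    def __init__(self):
        self.os_version = "15.0"

print(vars(HeaderWithDict()))   # → {'os_version': '15.0'}

try:
    vars(HeaderWithoutDict())
except TypeError as exc:
    print(exc)  # the same "must have __dict__ attribute" message
```

Printing type(self.ips_header) just before the failing line in JSONCrashLog.py would confirm what the parser actually produced for this particular crash log.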