Posts

Post not yet marked as solved · 5 Replies · 0 Views
It seems that if I compile the DeeplabV3FP16 model as an Objective-C class, the memory leak disappears, while as a Swift class, the memory leaks. My comparison code is below.

Objective-C version:

```objc
@implementation ViewController

- (void)dealloc {
    [DeepLabV3Generator destroy];
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    NSString *name = [NSString stringWithFormat:@"DeeplabV3-%tu", nameIndex];
    nameIndex++;
    CIImage *image = [[CIImage alloc] initWithImage:[UIImage imageNamed:name]];
    CVPixelBufferRef pb;
    CVPixelBufferCreate(kCFAllocatorDefault, image.extent.size.width, image.extent.size.height,
                        kCVPixelFormatType_32BGRA, NULL, &pb);
    [CIContext.context render:image toCVPixelBuffer:pb];

    [DeepLabV3Generator.si loadDeeplabV3From:pb];

    CVPixelBufferRelease(pb);
}

@end
```

```objc
#import "DeepLabV3Generator.h"

@implementation DeepLabV3Generator

static DeepLabV3Generator *sharedInstance = nil;

+ (DeepLabV3Generator *)si {
    if (sharedInstance != nil) {
        return sharedInstance;
    }
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedInstance = [[DeepLabV3Generator alloc] init];
    });
    return sharedInstance;
}

+ (void)destroy {
    sharedInstance = nil;
}

- (instancetype)init {
    self = [super init];
    if (self) {
        NSError *error = nil;
        MLModelConfiguration *config = [MLModelConfiguration new];
        self.v3 = [[DeepLabV3FP16 alloc] initWithConfiguration:config error:&error];
    }
    return self;
}

- (MLMultiArray *)loadDeeplabV3From:(CVPixelBufferRef)pixelBuffer {
    NSError *error = nil;
    DeepLabV3FP16Output *output = [_v3 predictionFromImage:pixelBuffer error:&error];
    if (error) {
        NSLog(@"error %@", error.localizedDescription);
    }
    return output.semanticPredictions;
}

@end
```

Swift version:

```swift
@objc class DeepLabV3Generator: NSObject {
    private static var sharedInstance: DeepLabV3Generator?

    @objc class func si() -> DeepLabV3Generator { // change class to final to prevent override
        guard let uwShared = sharedInstance else {
            sharedInstance = DeepLabV3Generator()
            return sharedInstance!
        }
        return uwShared
    }

    @objc class func destroy() {
        sharedInstance = nil
    }

    let v3: DeepLabV3FP16

    private override init() {
        let config = MLModelConfiguration()
        config.computeUnits = .all
        v3 = try! DeepLabV3FP16(configuration: config)
    }

    @objc func loadDeeplabV3(from pixelBuffer: CVPixelBuffer) -> MLMultiArray? {
        let output = try! v3.prediction(image: pixelBuffer)
        return output.semanticPredictions
    }
}
```
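Not part of the original post, but one common mitigation for per-prediction memory growth in Swift is to wrap the prediction call in an explicit `autoreleasepool`, since Core ML can return autoreleased buffers that accumulate until the enclosing pool drains. A minimal sketch, assuming the same generated `DeepLabV3FP16` wrapper as above:

```swift
import CoreML

// Hypothetical variant of loadDeeplabV3(from:) that drains autoreleased
// objects after every prediction instead of waiting for the run loop.
@objc func loadDeeplabV3Pooled(from pixelBuffer: CVPixelBuffer) -> MLMultiArray? {
    return autoreleasepool {
        // Propagate errors as nil rather than crashing with try!.
        guard let output = try? v3.prediction(image: pixelBuffer) else {
            return nil
        }
        return output.semanticPredictions
    }
}
```

If the growth disappears with the pool in place, the "leak" was deferred autorelease rather than a true retain cycle.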
Post not yet marked as solved · 3 Replies · 0 Views
The image is a 60 px square JPEG, a very common file type. This problem only occurs on my iPhone 7 Plus, not on my iPhone XS, so I think it only happens on older devices. Updated: under iOS 13, I found that only on the iPhone 7 Plus (not on the XS) it calls -[CUIStructuredThemeStore _canGetRenditionWithKey:isFPO:lookForSubstitutions:], which is a high-cost API. Any information about that?
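As an experiment (my suggestion, not from the original post), one way to take the asset-catalog rendition lookup out of the picture is to load the JPEG straight from the bundle with `UIImage(contentsOfFile:)`, which skips the `imageNamed:` / CUICatalog path entirely; the resource name here is a placeholder:

```swift
import UIKit

// Load the 60 px JPEG directly from the app bundle, bypassing the
// asset-catalog (CUICatalog) lookup that imageNamed: performs.
// "thumb" is a hypothetical resource name.
func loadThumbnailDirectly() -> UIImage? {
    guard let path = Bundle.main.path(forResource: "thumb", ofType: "jpg") else {
        return nil
    }
    // Note: unlike imageNamed:, contentsOfFile: does not cache the image.
    return UIImage(contentsOfFile: path)
}
```

If this path is fast on the iPhone 7 Plus too, the cost is in the catalog lookup rather than in JPEG decoding.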
Post not yet marked as solved · 3 Replies · 0 Views
Under iOS 12, the call stack is as below:

```
+[UIImage imageNamed:inBundle:withConfiguration:]
-[_UIAssetManager imageNamed:configuration:]
-[_UIAssetManager imageNamed:configuration:cachingOptions:attachCatalogImage:]
-[_UIAssetManager imageNamed:configuration:cachingOptions:attachCatalogImage:]_block_invoke
-[_UIAssetManager _lookUpObjectForTraitCollection:withAccessorWithAppearanceName:]
-[UITraitCollection _enumerateThemeAppearanceNamesForLookup:]
-[_UIAssetManager _lookUpObjectForTraitCollection:withAccessorWithAppearanceName:]_block_invoke
-[_UIAssetManager imageNamed:configuration:cachingOptions:attachCatalogImage:]_block_invoke_2
-[CUICatalog namedVectorGlyphWithName:scaleFactor:deviceIdiom:layoutDirection:glyphSize:glyphWeight:glyphPointSize:appearanceName:]
-[CUICatalog _resolvedRenditionKeyFromThemeRef:withBaseKey:scaleFactor:deviceIdiom:deviceSubtype:displayGamut:layoutDirection:sizeClassHorizontal:sizeClassVertical:memoryClass:graphicsClass:graphicsFallBackOrder:deviceSubtypeFallBackOrder:adjustRenditionKeyWithBlock:]
-[CUICatalog _private_resolvedRenditionKeyFromThemeRef:withBaseKey:scaleFactor:deviceIdiom:deviceSubtype:displayGamut:layoutDirection:sizeClassHorizontal:sizeClassVertical:memoryClass:graphicsClass:graphicsFallBackOrder:deviceSubtypeFallBackOrder:localizationIdentifier:adjustRenditionKeyWithBlock:]
-[CUIStructuredThemeStore copyLookupKeySignatureForKey:]
```

It is much faster than on iOS 13.
Post not yet marked as solved · 5 Replies · 0 Views
Finally I found what is actually leaking: it's an MTLTexture. I profiled the app via Game Performance and checked what was actually happening while the leaking code was executing. In the GPU section, a 48 MB MTLTexture was created; obviously it's the leaked object. But how can I release an MTLTexture created internally by the Core ML model? I found a method called CVMetalTextureCacheFlush, which sounds from its name like it might work, but how do I retrieve the texture cache? (There is no related method on the model to get a texture or a texture cache.)
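For reference (my own note, not from the post): `CVMetalTextureCacheFlush` only operates on a `CVMetalTextureCache` that your code created, so it cannot reach a cache Core ML holds internally. Its normal usage looks roughly like this sketch:

```swift
import CoreVideo
import Metal

// Sketch: flushing a CVMetalTextureCache that *we* own. This cannot
// release textures created inside Core ML, whose cache is private.
func makeAndFlushTextureCache(device: MTLDevice) {
    var cache: CVMetalTextureCache?
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache)
    guard let cache = cache else { return }
    // ... create CVMetalTexture objects from pixel buffers here ...
    // Release unused cached textures (0 is the only defined options value).
    CVMetalTextureCacheFlush(cache, 0)
}
```

So the flush API fits pipelines where you convert pixel buffers to Metal textures yourself, not the model's internal allocations.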
Post not yet marked as solved · 5 Replies · 0 Views
Sorry, what I mean is that the MLModel object is released, but the MTLIOAccelResource objects it created during initialization leak. I have also tried the Xcode Memory Debugger and ran the potentially leaking code several times. The memory graph shows that the count of VM: IOAccelerator regions is not increasing, but the number of MTLIOAccelResource objects is. Each MTLIOAccelResource is held by objects of classes like AGXA10FamilyHeap, AGXA10FamilyBuffer, MTLIOMemoryInfo, and MTLIOAccelPooledResource. I am not familiar with Metal; maybe I have to call some Metal-related method to release the pooled resources, or is it a Core ML issue?
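Not in the original post, but one way to test whether the growing MTLIOAccelResource count is tied to the GPU path is to force the model onto the CPU via `MLModelConfiguration.computeUnits`. If the objects stop accumulating, the pooled Metal resources are the culprit. A sketch, assuming the same generated `DeepLabV3FP16` class from the earlier post:

```swift
import CoreML

// Diagnostic sketch: build the model CPU-only so no Metal (IOAccel)
// resources are allocated. If the leak disappears under this config,
// it lives in the GPU path rather than in the model object itself.
func makeCPUOnlyModel() throws -> DeepLabV3FP16 {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly   // instead of .all
    return try DeepLabV3FP16(configuration: config)
}
```

This is only a diagnostic, not a fix: `.cpuOnly` will typically make inference slower.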
Post not yet marked as solved · 2 Replies · 0 Views
Where is the submit button for the IAP itself now? I remember there was one before.