Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

130 Posts
Post not yet marked as solved
2 Replies
424 Views
I converted a PyTorch model to an mlmodel with a custom layer and created a test app in Swift to test my model. When I implement the custom layer in Swift, it works well. However, when I implement the custom layer in Objective-C, the app fails with:

2022-01-14 17:58:49.964377+0800 CustomLayers[2547:968723] [coreml] Error in adding network -1.
2022-01-14 17:58:49.965023+0800 CustomLayers[2547:968723] [coreml] MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
2022-01-14 17:58:49.965085+0800 CustomLayers[2547:968723] [coreml] MLModelAsset: modelWithError: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}: file CustomLayers/model_2.swift, line 114
(lldb)

It seems the model load fails with the Objective-C custom layer. Does an Objective-C custom layer implementation not work in a Swift project? I have tried setting up CustomLayers-Bridging-Header.h, but it still doesn't work.
System Information
macOS: 11.6.1 Big Sur
Xcode: 12.5.1
coremltools: 5.1.0
Test device: iPhone 11
Posted
by stx-000.
Last updated
.
Post not yet marked as solved
0 Replies
339 Views
I'm having trouble reasoning about and modifying the Detecting Human Actions in a Live Video Feed sample code, since I'm new to Combine.

// ---- [MLMultiArray?] -- [MLMultiArray?] ----
// Make an activity prediction from the window.
.map(predictActionWithWindow)
// ---- ActionPrediction -- ActionPrediction ----
// Send the action prediction to the delegate.
.sink(receiveValue: sendPrediction)

These are the final two operators of the video processing pipeline, where the action prediction occurs. In either the implementation of private func predictActionWithWindow(_ currentWindow: [MLMultiArray?]) -> ActionPrediction or of private func sendPrediction(_ actionPrediction: ActionPrediction), how might I access the results of a VNHumanBodyPoseRequest that's retrieved and scoped in a function called earlier in the daisy chain? When I did this imperatively, I accessed the results in the VNDetectHumanBodyPoseRequest completion handler, but I'm not sure how data flow would work with Combine's programming model. I want to associate predictions with the observation results they're based on so that I can store the time range of a given prediction label.
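One common Combine pattern (a sketch of my own, not from Apple's sample; the types and names below are hypothetical stand-ins) is to map each stage's output to a tuple that carries the source observations downstream alongside the derived value, so the final sink sees both:

```swift
import Combine
import CoreML
import Vision

// Hypothetical stand-in for the sample's prediction type.
struct ActionPrediction { let label: String }

// Rather than mapping to [MLMultiArray?] alone, carry the source pose
// observations alongside in a tuple, so the final sink can associate a
// prediction with the observations (and time range) it came from.
func observePredictions(
    frames: AnyPublisher<(window: [MLMultiArray?], poses: [VNHumanBodyPoseObservation]), Never>,
    predict: @escaping ([MLMultiArray?]) -> ActionPrediction
) -> AnyCancellable {
    frames
        // The tuple keeps the observations flowing with the prediction.
        .map { (prediction: predict($0.window), poses: $0.poses) }
        .sink { value in
            // Both the prediction and its source observations are in scope here.
            _ = value.prediction.label
            _ = value.poses
        }
}
```

The idea is simply that Combine values are ordinary Swift values: anything a later operator needs must be threaded through the pipeline's element type instead of being captured in a completion handler.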
Posted
by Curiosity.
Last updated
.
Post marked as solved
3 Replies
763 Views
MLCustomLayer implementation always dispatches to CPU instead of GPU

Background: I am trying to run my Core ML model with a custom layer on the iPhone 13 Pro. My custom layer runs successfully on the CPU, but it always dispatches to the CPU instead of the phone's GPU, despite the encodeToCommandBuffer member function being defined in the application's binding class for the custom layer. I have been following the coremltools documentation's suggested Swift example to get this working, but note that my implementation is purely in Objective-C++. Despite reading the documentation in depth, I have not found a resolution to the problem. Any help looking into this issue (or perhaps even a bug in Core ML) would be much appreciated! Below, I provide a minimal example based on the Swift example mentioned above.

Implementation
My toy Objective-C++ implementation is based on the Swift example here. It implements the Swish activation function for both the CPU and GPU.

PyTorch model to Core ML MLModel conversion
For brevity, I will not define my toy PyTorch model, nor the Python bindings that allow the custom Swish layer to be scripted/traced and then converted to a Core ML MLModel, but I can provide these if necessary. Just note that the Python layer's name and bindings should match the name in the class defined below, i.e. ToySwish.

To convert the scripted/traced PyTorch model (called torchscript_model in the listing below) to a Core ML MLModel, I use coremltools (from Python) and then save the model as follows:

input_shapes = [[1, 64, 256, 256]]
mlmodel = coremltools.converters.convert(
    torchscript_model,
    source='pytorch',
    inputs=[coremltools.TensorType(name=f'input_{i}', shape=input_shape)
            for i, input_shape in enumerate(input_shapes)],
    add_custom_layers=True,
    minimum_deployment_target=coremltools.target.iOS14,
    compute_units=coremltools.ComputeUnit.CPU_AND_GPU,
)
mlmodel.save('toy_swish_model.mlmodel')

Metal shader
I use the same Metal shader function swish from Swish.metal here.

MLCustomLayer binding class for the Swish MLModel layer
I define an Objective-C++ class analogous to the Swift example. It inherits from NSObject and adopts the MLCustomLayer protocol, following the guidelines in the Apple documentation for integrating a Core ML MLModel with a custom layer.

Class definition and resource setup:

#import <Foundation/Foundation.h>
#import <CoreML/CoreML.h>
#import <Metal/Metal.h>

@interface ToySwish : NSObject<MLCustomLayer>
@end

@implementation ToySwish {
    id<MTLComputePipelineState> swishPipeline;
}

- (instancetype)initWithParameterDictionary:(NSDictionary<NSString *, id> *)parameters error:(NSError *__autoreleasing _Nullable *)error {
    NSError *errorPSO = nil;
    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    id<MTLLibrary> defaultLibrary = [device newDefaultLibrary];
    id<MTLFunction> swishFunction = [defaultLibrary newFunctionWithName:@"swish"];
    swishPipeline = [device newComputePipelineStateWithFunction:swishFunction error:&errorPSO];
    assert(errorPSO == nil);
    return self;
}

- (BOOL)setWeightData:(NSArray<NSData *> *)weights error:(NSError *__autoreleasing _Nullable *)error {
    return YES;
}

- (NSArray<NSArray<NSNumber *> *> *)outputShapesForInputShapes:(NSArray<NSArray<NSNumber *> *> *)inputShapes error:(NSError *__autoreleasing _Nullable *)error {
    return inputShapes;
}

CPU compute method (shown only for completeness):

- (BOOL)evaluateOnCPUWithInputs:(NSArray<MLMultiArray *> *)inputs outputs:(NSArray<MLMultiArray *> *)outputs error:(NSError *__autoreleasing _Nullable *)error {
    NSLog(@"Dispatching to CPU");
    for (NSInteger i = 0; i < inputs.count; i++) {
        NSInteger num_elems = inputs[i].count;
        float *input_ptr = (float *)inputs[i].dataPointer;
        float *output_ptr = (float *)outputs[i].dataPointer;
        for (int j = 0; j < num_elems; j++) {
            // Swish: x * sigmoid(x).
            output_ptr[j] = input_ptr[j] / (1.0f + exp(-input_ptr[j]));
        }
    }
    return YES;
}

Encode GPU commands to the command buffer. Note: according to the documentation, this command buffer should not be committed, as it is executed by Core ML after this method returns.

- (BOOL)encodeToCommandBuffer:(id<MTLCommandBuffer>)commandBuffer inputs:(NSArray<id<MTLTexture>> *)inputs outputs:(NSArray<id<MTLTexture>> *)outputs error:(NSError *__autoreleasing _Nullable *)error {
    NSLog(@"Dispatching to GPU");
    id<MTLComputeCommandEncoder> computeEncoder = [commandBuffer computeCommandEncoderWithDispatchType:MTLDispatchTypeSerial];
    assert(computeEncoder != nil);
    for (int i = 0; i < inputs.count; i++) {
        [computeEncoder setComputePipelineState:swishPipeline];
        [computeEncoder setTexture:inputs[i] atIndex:0];
        [computeEncoder setTexture:outputs[i] atIndex:1];
        NSInteger w = swishPipeline.threadExecutionWidth;
        NSInteger h = swishPipeline.maxTotalThreadsPerThreadgroup / w;
        MTLSize threadGroupSize = MTLSizeMake(w, h, 1);
        NSInteger groupWidth  = (inputs[0].width  + threadGroupSize.width  - 1) / threadGroupSize.width;
        NSInteger groupHeight = (inputs[0].height + threadGroupSize.height - 1) / threadGroupSize.height;
        NSInteger groupDepth  = (inputs[0].arrayLength + threadGroupSize.depth - 1) / threadGroupSize.depth;
        MTLSize threadGroups = MTLSizeMake(groupWidth, groupHeight, groupDepth);
        // threadGroups is a threadgroup count, so dispatchThreadgroups is the
        // matching call (dispatchThreads expects a total thread count).
        [computeEncoder dispatchThreadgroups:threadGroups threadsPerThreadgroup:threadGroupSize];
    }
    [computeEncoder endEncoding];
    return YES;
}

Run inference for a given input
The MLModel is loaded and compiled in the application. I check that the model configuration's computeUnits is set to MLComputeUnitsAll, which should allow dispatching any of the MLModel's layers to the CPU, GPU, and ANE. I define an MLDictionaryFeatureProvider object called feature_provider from an NSMutableDictionary of input features (input tensors in this case), and then pass it to the predictionFromFeatures method of my loaded model as follows:

@autoreleasepool {
    [model predictionFromFeatures:feature_provider error:error];
}

This computes a single forward pass of my model. When it executes, 'Dispatching to CPU' is printed instead of 'Dispatching to GPU'. This (along with the slow execution time) indicates the Swish layer is being run through the evaluateOnCPUWithInputs method, and thus on the CPU, instead of on the GPU as expected.

I am quite new to developing for iOS and to Objective-C++, so I might have missed something simple, but from reading the documentation and examples it is not at all clear to me what the issue is. Any help or advice would be really appreciated :)

Environment
Xcode 13.1
iPhone 13
iOS 15.1.1
iOS deployment target 15.0
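For reference, the Swish activation discussed above is defined as x · sigmoid(x). A quick pure-Python sketch of the math (function names are mine):

```python
import math

def sigmoid(x: float) -> float:
    """Numerically stable logistic function 1 / (1 + e^-x)."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def swish(x: float) -> float:
    """Swish activation: x * sigmoid(x)."""
    return x * sigmoid(x)
```

Swish behaves like the identity for large positive inputs and approaches zero for large negative inputs, which is what the Metal kernel and the CPU fallback should both compute.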
Posted Last updated
.
Post not yet marked as solved
1 Reply
598 Views
It seems that a DataFrame (TabularData framework) can be used in Create ML instead of an MLDataTable, which makes sense given the description of the TabularData API. However, there are differences. One is that when using a DataFrame, the randomSplit method creates a tuple of DataFrame slices, which cannot then be used in MLLinearRegressor without first converting back to DataFrame (i.e. initialising a new DataFrame with the required slice). Using an MLDataTable as the source data, the output from randomSplit can be used directly in MLLinearRegressor. I'm interested to hear of any other differences and whether the behaviour described above is a feature or a bug. TabularData seems to have more features for data manipulation, although I haven't done any systematic comparison. I'm a bit puzzled as to why there are two similar but separate frameworks.
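A sketch of the round-trip the post describes (the file name and target column are hypothetical; the slice-to-DataFrame conversion is the workaround being discussed, not an officially documented requirement):

```swift
import TabularData
import CreateML

// Load a table and split it; randomSplit on a DataFrame yields slices.
let dataFrame = try DataFrame(contentsOfCSVFile: URL(fileURLWithPath: "housing.csv"))
let (trainSlice, testSlice) = dataFrame.randomSplit(by: 0.8)

// Each slice must be re-materialized as a DataFrame before training,
// unlike MLDataTable, whose split halves feed MLLinearRegressor directly.
let trainingData = DataFrame(trainSlice)
let regressor = try MLLinearRegressor(trainingData: trainingData,
                                      targetColumn: "price")
let metrics = regressor.evaluation(on: DataFrame(testSlice))
```

The extra DataFrame(_:) initializations are cheap relative to training, but they are an ergonomic difference between the two APIs.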
Posted Last updated
.
Post not yet marked as solved
0 Replies
253 Views
I implemented a custom PyTorch layer on both CPU and GPU following Hollemans' excellent blog (https://machinethink.net/blog/coreml-custom-layers). The CPU version works well, but when I implemented this op on the GPU, the "encode" function is never activated; it always runs on the CPU. I have checked the coremltools.convert() option compute_units=coremltools.ComputeUnit.CPU_AND_GPU, but it still doesn't work. This problem is also mentioned in https://stackoverflow.com/questions/51019600/why-i-enabled-metal-api-but-my-coreml-custom-layer-still-run-on-cpu and https://developer.apple.com/forums/thread/695640. Any help with this would be appreciated. System Information mac OS: 11.6.1 Big Sur xcode: 12.5.1 coremltools: 5.1.0 test device: iphone 11
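One thing worth ruling out (my suggestion, not a confirmed fix): the compute_units argument at conversion time is separate from the runtime configuration the app passes when loading the model, and a restrictive runtime setting would keep everything on the CPU regardless of how the model was converted. A sketch, where "CustomLayerModel" stands in for the generated model class name:

```swift
import CoreML

// The runtime configuration also governs dispatch; .cpuOnly here would
// force the CPU path for every layer no matter what coremltools was told.
let configuration = MLModelConfiguration()
configuration.computeUnits = .all  // allow CPU, GPU, and Neural Engine

// "CustomLayerModel" is a hypothetical stand-in for the generated class.
let model = try CustomLayerModel(configuration: configuration)
```

Checking the configuration actually in effect at load time is cheap and eliminates one variable before digging into the Metal side.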
Posted
by stx-000.
Last updated
.
Post not yet marked as solved
2 Replies
463 Views
I just got an app feature working where the user imports a video file, each frame is fed to a custom action classifier, and then only frames with a certain action classified are exported. However, I'm finding that testing a one hour 4K video at 60 FPS is taking an unreasonably long time - it's been processing for 7 hours now on a MacBook Pro with M1 Max running the Mac Catalyst app. Are there any techniques or general guidance that would help with improving performance? As much as possible I'd like to preserve the input video quality, especially frame rate. One hour length for the video is expected, as it's of a tennis session (could be anywhere from 10 minutes to a couple hours). I made the body pose action classifier with Create ML.
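One general mitigation (my suggestion, not from the post) is to run the classifier on only every Nth frame and reuse the previous label in between, while the writer still receives every frame, so output quality and frame rate are preserved. A sketch of the sampling logic, with hypothetical names:

```swift
import CoreMedia

// Classify only every Nth frame and reuse the previous label in between.
// classifyEvery = 6 at 60 FPS still yields 10 predictions per second,
// while the exporter continues to see every frame.
struct SampledClassifier {
    let classifyEvery: Int
    private(set) var lastLabel: String = "none"
    private var frameIndex = 0

    mutating func label(for sampleBuffer: CMSampleBuffer,
                        classify: (CMSampleBuffer) -> String) -> String {
        defer { frameIndex += 1 }
        if frameIndex % classifyEvery == 0 {
            lastLabel = classify(sampleBuffer)
        }
        return lastLabel
    }
}
```

Since an action label rarely changes many times per second, this trades little accuracy for a large reduction in pose-extraction and prediction work.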
Posted
by Curiosity.
Last updated
.
Post not yet marked as solved
2 Replies
529 Views
Good day people! I'm currently working on my master's thesis in media informatics. I'd really appreciate discussing my topic with you, so I may get some interesting ideas or new information. The goal is to implement an app specifically designed for places like museums, where the environment isn't perfect for AR tracking (darkness, no network connection, maybe exhibits made out of glass...). Therefore, I'd like to develop a neural network for the new iPad Pro that takes RGB-D data to predict a pose estimate for an object in a scene, so that it matches the real-world object perfectly. This placed object will be a perfect 3D model replica of the real object (hand-modeled, or scanned and revised). This should allow me to place AR content precisely over the real-world object, even in difficult lighting and so on. Maybe it will improve occlusion, too. I can imagine that the neural network may also detect structures, edges, and semantic coherence better than the usual approach. My first thought was to work with Core ML, Metal, maybe Vision and ARKit. I will also try out Xcode for the first time. Maybe you have interesting ideas for improvement, or can guide me a little bit, since I feel a bit lost at the moment. Would you rather use point clouds or the raw depth buffer to train the model? Would you also train with edge-filtered images and the like? Why or why not? Thanks in advance, it would mean the world to me! Kind regards, Miri :-)
Posted
by MiriamJo.
Last updated
.
Post not yet marked as solved
1 Reply
368 Views
Is it possible to do any of the following:
1. Export a model created using MetalPerformanceShadersGraph to a Core ML file;
2. Failing 1, save a trained MetalPerformanceShadersGraph model in any other way for deployment;
3. Import a Core ML model and use it as part of a MetalPerformanceShadersGraph model.
Thanks!
Posted
by Alan_Z.
Last updated
.
Post not yet marked as solved
1 Reply
202 Views
I am excited about Create ML and tried to train a detector for feet. I gave it training data with two sets of objects: left feet and right feet. However, I was surprised that a model trained on just one type (left or right) detected both types of feet: left and right. I really have no deep understanding of ML, but I was wondering whether this means the resulting model cannot be trained to distinguish whether an object is mirrored. Do you see any way to train a model that could be used to find an object, but not its mirrored counterpart?
Posted
by cschultz.
Last updated
.
Post not yet marked as solved
0 Replies
321 Views
After creating a custom action classifier in Create ML, previewing it (see the bottom of the page) with an input video shows the label associated with a segment of the video. What would be a good way to store the duration for a given label, say, each CMTimeRange of a segment of video frames that are classified as containing "Jumping Jacks"? I previously found that storing time ranges of trajectory results was convenient, since each VNTrajectoryObservation vended by Apple had an associated CMTimeRange. However, using my custom action classifier instead, each VNObservation result's CMTimeRange has a duration value that's always 0.

func completionHandler(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNHumanBodyPoseObservation] else { return }
    if let result = results.first {
        storeObservation(result)
    }
    do {
        for result in results where try self.getLastTennisActionType(from: [result]) == .playing {
            var fileRelativeTimeRange = result.timeRange
            fileRelativeTimeRange.start = fileRelativeTimeRange.start - self.assetWriterStartTime
            self.timeRangesOfInterest[Int(fileRelativeTimeRange.start.seconds)] = fileRelativeTimeRange
        }
    } catch {
        print("Unable to perform the request: \(error.localizedDescription).")
    }
}

In this case I'm interested in frames with the label "Playing" and successfully classify them, but I'm not sure where to go from here to track the duration of video segments with consecutive frames that have that label.
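Since each per-frame observation here carries a zero-duration time range, one approach (my sketch, not Apple's) is to coalesce the timestamps of consecutive matching frames into a single CMTimeRange with CMTimeRangeFromTimeToTime. The type and method names below are mine:

```swift
import CoreMedia

// Coalesces consecutive frame timestamps that share the target label
// into merged CMTimeRanges.
struct LabelRangeAccumulator {
    private(set) var ranges: [CMTimeRange] = []
    private var currentStart: CMTime?
    private var lastTime: CMTime?

    mutating func add(frameTime: CMTime, matchesLabel: Bool) {
        if matchesLabel {
            if currentStart == nil { currentStart = frameTime }
            lastTime = frameTime
        } else {
            finish()
        }
    }

    // Call once after the last frame to flush any open range.
    mutating func finish() {
        if let start = currentStart, let end = lastTime {
            ranges.append(CMTimeRangeFromTimeToTime(start: start, end: end))
        }
        currentStart = nil
        lastTime = nil
    }
}
```

Feeding each frame's presentation timestamp plus a Bool for "label == Playing" into add(frameTime:matchesLabel:) yields one range per run of consecutive "Playing" frames.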
Posted
by Curiosity.
Last updated
.
Post not yet marked as solved
0 Replies
362 Views
Modifying guidance given in an answer on AVFoundation + Vision trajectory detection, I'm instead saving time ranges of frames that have a specific ML label from my custom action classifier:

private lazy var detectHumanBodyPoseRequest: VNDetectHumanBodyPoseRequest = {
    let detectHumanBodyPoseRequest = VNDetectHumanBodyPoseRequest(completionHandler: completionHandler)
    return detectHumanBodyPoseRequest
}()

var timeRangesOfInterest: [Int: CMTimeRange] = [:]

private func readingAndWritingDidFinish(assetReaderWriter: AVAssetReaderWriter, asset completionHandler: @escaping FinishHandler) {
    if isCancelled {
        completionHandler(.success(.cancelled))
        return
    }
    // Handle any error during processing of the video.
    guard sampleTransferError == nil else {
        assetReaderWriter.cancel()
        completionHandler(.failure(sampleTransferError!))
        return
    }
    // Evaluate the result of reading the samples.
    let result = assetReaderWriter.readingCompleted()
    if case .failure = result {
        completionHandler(result)
        return
    }
    /* Finish writing, and asynchronously evaluate the results from
       writing the samples. */
    assetReaderWriter.writingCompleted { result in
        self.exportVideoTimeRanges(timeRanges: self.timeRangesOfInterest.map { $0.value }) { result in
            completionHandler(result)
        }
    }
}

func exportVideoTimeRanges(timeRanges: [CMTimeRange], completion: @escaping (Result<OperationStatus, Error>) -> Void) {
    let inputVideoTrack = self.asset.tracks(withMediaType: .video).first!
    let composition = AVMutableComposition()
    let compositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
    var insertionPoint: CMTime = .zero
    for timeRange in timeRanges {
        try! compositionTrack.insertTimeRange(timeRange, of: inputVideoTrack, at: insertionPoint)
        insertionPoint = insertionPoint + timeRange.duration
    }
    let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    try? FileManager.default.removeItem(at: self.outputURL)
    exportSession.outputURL = self.outputURL
    exportSession.outputFileType = .mov
    exportSession.exportAsynchronously {
        var result: Result<OperationStatus, Error>
        switch exportSession.status {
        case .completed:
            result = .success(.completed)
        case .cancelled:
            result = .success(.cancelled)
        case .failed:
            // The `error` property is non-nil in the `.failed` status.
            result = .failure(exportSession.error!)
        default:
            fatalError("Unexpected terminal export session status: \(exportSession.status).")
        }
        print("export finished: \(exportSession.status.rawValue) - \(String(describing: exportSession.error))")
        completion(result)
    }
}

This worked fine with results vended from Apple's trajectory detection, but using my custom action classifier TennisActionClassifier (a Core ML model exported from Create ML), I get the console error:

getSubtractiveDecodeDuration signalled err=-16364 (kMediaSampleTimingGeneratorError_InvalidTimeStamp) (Decode timestamp is earlier than previous sample's decode timestamp.) at MediaSampleTimingGenerator.c:180

Why might this be?
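One thing worth checking (my guess, not a confirmed diagnosis): timeRangesOfInterest is a dictionary, and mapping its values yields them in no particular order, so segments may be inserted into the composition out of chronological order, which would produce exactly this decode-timestamp complaint. Sorting, and merging overlaps, before insertion is cheap to try:

```swift
import CoreMedia

// Sort ranges chronologically and merge overlaps before inserting them
// into a composition, since dictionary iteration order is unspecified.
func normalizedRanges(_ ranges: [CMTimeRange]) -> [CMTimeRange] {
    let sorted = ranges.sorted { $0.start < $1.start }
    var merged: [CMTimeRange] = []
    for range in sorted {
        if let last = merged.last, last.end >= range.start {
            // Overlapping or touching ranges collapse into one.
            merged[merged.count - 1] = CMTimeRangeFromTimeToTime(start: last.start,
                                                                 end: max(last.end, range.end))
        } else {
            merged.append(range)
        }
    }
    return merged
}
```

Passing the dictionary's values through this helper before the insertTimeRange loop guarantees each inserted segment starts no earlier than the previous one ended.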
Posted
by Curiosity.
Last updated
.
Post not yet marked as solved
0 Replies
289 Views
I followed Apple's guidance in the articles Creating an Action Classifier Model, Gathering Training Videos for an Action Classifier, and Building an Action Classifier Data Source. With this Core ML model file now imported in Xcode, how do I use it to classify video frames? For each video frame I call

do {
    let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
    try requestHandler.perform([self.detectHumanBodyPoseRequest])
} catch {
    print("Unable to perform the request: \(error.localizedDescription).")
}

But it's unclear to me how to use the results of the VNDetectHumanBodyPoseRequest, which come back as the type [VNHumanBodyPoseObservation]?. How would I feed the results into my custom classifier, which has an automatically generated model class TennisActionClassifier.swift? The classifier is for making predictions on the frame's body poses, labeling the actions as either playing a rally/point or not playing.
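The usual bridge (a sketch based on my understanding of the generated interface; the input name poses and output property label are assumptions that depend on the specific model) is to convert each observation with keypointsMultiArray(), collect a window of frames, and pass the concatenated window to the generated class's prediction method:

```swift
import Vision
import CoreML

// Convert body-pose observations into the MLMultiArray window an
// action classifier expects, then run a prediction on it.
func predictAction(from observations: [VNHumanBodyPoseObservation],
                   classifier: TennisActionClassifier) throws -> String {
    // One multiarray of keypoints per frame.
    let frames = try observations.map { try $0.keypointsMultiArray() }
    // Concatenate frames into the single input the model was trained on;
    // the window length must match the Create ML training parameters.
    let window = MLMultiArray(concatenating: frames, axis: 0, dataType: .float32)
    let output = try classifier.prediction(poses: window)
    return output.label
}
```

Xcode's generated interface for the .mlmodel shows the exact input and output names to use in place of the assumed ones here.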
Posted
by Curiosity.
Last updated
.
Post not yet marked as solved
0 Replies
257 Views
My goal is to mark any tennis video's timestamps of both the start of each rally/point and the end of each rally/point. I tried trajectory detection, but the "end time" is when the ball bounces rather than when the rally/point ends. I'm not quite sure what direction to go from here to improve on this. Would action classification of body poses in each frame (two classes, "playing" and "not playing") be the best way to split the video into segments? A different technique?
Posted
by Curiosity.
Last updated
.
Post marked as solved
5 Replies
1.5k Views
Hello everybody, I am trying to run inference on a Core ML model I created using Create ML. I am following the sample code provided by Apple on the Core ML documentation page, and every time I try to classify an image I get this error: "Could not create Espresso context". Has this ever happened to anyone? How did you solve it? Here is my code:

import Foundation
import Vision
import UIKit
import ImageIO

final class ButterflyClassification {

    var classificationResult: Result?

    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: ButterfliesModel_1(configuration: MLModelConfiguration()).model)
            return VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processClassification(for: request, error: error)
            })
        } catch {
            fatalError("Failed to load model.")
        }
    }()

    func processClassification(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                print("Unable to classify image.")
                return
            }
            let classifications = results as! [VNClassificationObservation]
            if classifications.isEmpty {
                print("No classification was provided.")
                return
            } else {
                let firstClassification = classifications[0]
                self.classificationResult = Result(speciesName: firstClassification.identifier,
                                                   confidence: Double(firstClassification.confidence))
            }
        }
    }

    func classifyButterfly(image: UIImage) -> Result? {
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create ciImage")
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
        return classificationResult
    }
}

Thank you for your help!
Posted
by tmsm1999.
Last updated
.
Post not yet marked as solved
0 Replies
162 Views
A Core ML model with conv3d layers runs very slowly on my Mac. If I set usesCPUOnly to true, it takes about the same running time, so it seems that the conv3d layer only supports CPU execution in Core ML. When I replace the conv3d layers with conv2d layers, the model is many times faster than before. The macOS version is 11.2.3. The Core ML model was converted from a PyTorch model.
Posted
by xwxw.
Last updated
.
Post not yet marked as solved
0 Replies
281 Views
I'm building a feature to automatically edit out all the downtime of a tennis video. I have a partial implementation that stores the start and end times of Vision trajectory detections and writes only those segments to an AVFoundation export session. I've encountered a major issue: the trajectories returned end whenever the ball bounces, so each segment is just one tennis shot and nowhere close to an entire rally with multiple bounces. I'm unsure whether I should continue down the trajectory route, maybe stitching together the trajectories and somehow only splitting at the start and end of a rally. Any general guidance would be appreciated. Is there a different Vision or ML approach that would more accurately model the start and end time of a rally? I considered creating a custom action classifier to classify frames as either "playing tennis" or "inactivity," but I started with Apple's trajectory detection since it was already built and trained. Maybe a custom classifier is needed, but I'm not sure.
Posted
by Curiosity.
Last updated
.
Post not yet marked as solved
0 Replies
139 Views
With Core ML 5, besides choosing FP16 or FP32 precision, do I have to specify the compute unit selection (.all, .cpuOnly, .cpuAndGPU)? Is it mandatory?
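For context, compute unit selection is an optional property on MLModelConfiguration; to the best of my knowledge it defaults to .all when left unset. A sketch, where "MyModel" is a hypothetical stand-in for the generated model class:

```swift
import CoreML

// Setting computeUnits is optional; an untouched configuration lets
// Core ML choose from all available units (CPU, GPU, Neural Engine).
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU  // set only when you want to restrict execution

let model = try MyModel(configuration: config)
```

In other words, the setting is there to constrain where the model may run, not a mandatory parameter.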
Posted Last updated
.
Post not yet marked as solved
2 Replies
326 Views
Beginner at using Create ML, so please forgive me if this question isn't asked correctly. As I understand it, image classification projects are meant to detect certain objects in an image (giraffe vs. elephant). My question is: is there a way to use image classification to "score" or bin images that share qualities with my training dataset? As an example, let's say I want to find a perfect square inside another square (like a white border around an image). What are the things that could make a "non-perfect" image? Maybe one of the corners of the square is rounded, maybe a corner is not 90 degrees, maybe the inner square is not perfectly centered within the white frame/border. Now let's say I want to take a picture of this object and have my app tell me how close this image is to a perfect square inside a square, rating it 1-5. My thought was to set up my training data with a set of images showing perfect squares in a "rated 5" folder, a set of slightly imperfect squares in a "rated 4" folder, a set of even less perfect squares in a "rated 3" folder, and so on. Long-winded question, I apologize; will the Create ML image classifier be able to look at my image for those qualities that make them 3, 4, or 5, or will it only look at the content of the square itself and detect: giraffe, race car, boat, person? I'm looking, again, for a metric of "perfectness" regardless of what the content is within the inner square. Am I on the right train of thought, or is there a better approach to take?
Posted
by dbg925.
Last updated
.
Post not yet marked as solved
1 Reply
291 Views
I'm working on an iOS framework that will be integrated into customer applications, and I've added some ML functionality for the host apps to use. My framework is open source and licensed to my customers, so I would like to bundle the Core ML model as an encrypted, compiled asset within the framework. According to this WWDC video, it seems like it should be possible: https://developer.apple.com/videos/play/wwdc2020/10152/ I've made a quick test app, compiled and encrypted the model, and added it to the "Copy Files" build phase of my framework. However, upon calling the load method of my model class, I receive the following error:

Error Domain=com.apple.CoreML Code=3 "failed to invoke mremap_encrypted with result = -1

I've checked the package contents of my test app, and within it my Framework.bundle (with _CodeSignature) is there, and within that is my Model.mlmodelc file. Everything seems to be in place and signed correctly. The video states that model decryption keys are associated with a developer team and are automatically downloaded when needed. How is this done exactly? Do the keys need to belong to the team of the running application, or can they belong to the team of a framework developer? Any help would be much appreciated. Thanks.
Posted
by nick_fio.
Last updated
.