FB:FB16079804
Hello,
I've adapted fastai's Cat vs Dog model into one that distinguishes lemons from limes, and it all works fine in a notebook.
I am now looking to convert this model to Core ML for my iOS app using TorchScript and Apple's official guidelines for coremltools.
The model converts, but I cannot see the Preview tab in Xcode. Has anyone here tried converting to Core ML? I guess my input types don't match coremltools' expectations for the preview, but I am stuck. Here is my code.
import torch
import coremltools as ct
from fastai.vision.all import *
import json
from torchvision import transforms
# Load your Fastai model (replace with your actual path)
learn = load_learner('lemonmodel.pkl')
# Example input image (you can use any image from your dataset)
input_image = PILImage.create('example.jpg')
# Preprocess the image (assuming you used these transforms during training)
to_tensor = transforms.ToTensor()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
input_tensor = to_tensor(input_image)
input_tensor = normalize(input_tensor) # Apply normalization
# Add a batch dimension
input_tensor = input_tensor.unsqueeze(0)
# Ensure float32 type
input_tensor = input_tensor.float()
# Put the model in eval mode before tracing (BatchNorm/Dropout must not run in training mode)
learn.model.eval()
# Trace the model
trace = torch.jit.trace(learn.model, input_tensor)
# Define the Core ML input type (considering your model's input shape)
_input = ct.ImageType(
    name="input_1",
    shape=input_tensor.shape,
    bias=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    scale=1./(255*0.226)
)
# Convert the model to Core ML format
mlmodel = ct.convert(
    trace,
    inputs=[_input],
    minimum_deployment_target=ct.target.iOS14  # Optional, set deployment target
)
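# Note (my assumption from the coremltools docs, not something I have verified here):
# for the Preview tab the model may need to be converted as a real classifier, e.g. by
# passing classifier_config=ct.ClassifierConfig(['lemon', 'lime']) to ct.convert above,
# rather than by setting attributes or metadata afterwards.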
# Set model type as 'imageClassifier' for the Preview tab
mlmodel.type = 'imageClassifier'
# Correct structure for preview parameters (assuming two classes: 'lemon' and 'lime')
labels_json = {
    "imageClassifier": {
        "labels": ["lemon", "lime"],
        "input": {
            "shape": list(input_tensor.shape),  # Provide the actual input shape
            "mean": [0.485, 0.456, 0.406],  # Match normalization mean
            "std": [0.229, 0.224, 0.225]  # Match normalization std
        },
        "output": {
            "shape": [1, 2]  # Output shape for your model (2 classes)
        }
    }
}
# Setting up the metadata with correct 'preview' params
mlmodel.user_defined_metadata['com.apple.coreml.model.preview.params'] = json.dumps(labels_json)
# Save the model as .mlmodel
mlmodel.save("LemonClassifierGemini.mlmodel")
My model is:
Input batch shape: torch.Size([32, 3, 192, 192])
Labels batch shape: torch.Size([32])
Validation Loss: None, Validation Metric: None
Predictions shape: torch.Size([63, 2])
Targets shape: torch.Size([63])
Code for the model:
searches = 'lemon','lime'
path = Path('lemon_or_not')
for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    time.sleep(5)
    resize_images(path/o, max_size=400, dest=path/o)
dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path, bs=32)
dls.show_batch(max_n=6)
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
is_lemon,_,probs = learn.predict(PILImage.create('lemon.jpg'))
print(f"This is a: {is_lemon}.")
print(f"Probability it's a lemon: {probs[0]:.4f}")
Output:
This is a: lemon.
Probability it's a lemon: 1.0000
learn.export('lemonmodel.pkl')
I am stuck on why it doesn't show the Preview tab.
Posts under Xcode tag
We need the pinned messages to be size customizable.
Hi!
Trying to move some code into a local package - with Xcode 16 I am unable to move my code to the new package, as drag and drop always results in a copy. This contradicts the demo in the original intro session at WWDC19, and also the current documentation. Of course I can delete the original files, but this feels somewhat wrong...
Am I missing something? Did the behaviour of moving files in the Project navigator change in the recent Xcode releases?
In Xcode 16, when I use File > New > Project to create a new iOS app project, Xcode automatically creates an asset catalog file called Assets.xcassets in the project.
However, if I later use File > New > File from Template, for example to add an asset catalog to an embedded Swift package within a project, Xcode suggests the default filename Media.xcassets for the file, instead of Assets.xcassets.
Why is the default name for this file different in each context?
Thank you for explaining this troubling consistency issue which causes me to lie awake at night.
Hi, so a little context: by environment variables I mean things like $HOME on Linux. I know that there are some standard and some app-specific environment variables on the above-mentioned platforms.
Is it possible for the user to set environment variables? If so, in what ways can users set them on these platforms, either system-wide or per-app, as they would on macOS/Linux/Windows?
And is it possible for developers of the app to do the same? If so, how?
What I have found so far is that it cannot be done by users, and there is one place in Xcode where I, as a developer, can set environment variables for my app from the scheme. This isn't available when I ship just the installation binary to the end user; it only applies when the app is run from Xcode. I just need validation that my understanding is correct and that there aren't other ways for the user/developer to do this without jailbreaking.
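For concreteness, here is a minimal sketch of how I read the environment from inside the app (nothing app-specific assumed; scheme-set variables only show up when the app is launched from Xcode):

import Foundation

// Print every environment variable visible to this process. When launched
// from Xcode, this includes variables set under Product > Scheme >
// Edit Scheme... > Run > Arguments > Environment Variables.
for (key, value) in ProcessInfo.processInfo.environment.sorted(by: { $0.key < $1.key }) {
    print("\(key)=\(value)")
}

// Read a single variable, with a fallback when it is not set.
let home = ProcessInfo.processInfo.environment["HOME"] ?? "<unset>"
print("HOME is \(home)")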
I am currently investigating an issue with my app's slow launch time in Xcode 16. However, when I open the Launches pane in the Xcode Organizer, I can only see the report list, but I encounter an error message like the one below in the stack trace.
This issue also occurs in the Disk Writes, Energy, and Hangs panes, but not in the Crashes pane.
Can someone advise me on what steps I should take? Any help would be greatly appreciated.
It looks like Xcode 16 has changed this behavior so I'm not sure if this is a bug or not.
When a SwiftUI Button wraps a UIButton, the button doesn't work on iOS 18.0+
import SwiftUI
struct ContentView: View {
    var body: some View {
        VStack {
            Button(action: {
                print("Not called on iOS 18")
            }) {
                WrapperButton()
                    .frame(width: 200, height: 50)
            }
        }
    }
}

struct WrapperButton: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let button = UIButton(type: .system)
        button.setTitle("OK", for: .normal)
        return button
    }
    func updateUIView(_ uiView: UIViewType, context: Context) {}
}
This occurs when the app is built with Xcode 16 and run on iOS 18, but it worked with Xcode 15 builds running on iOS 18.
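One workaround that seems to restore the old behavior for me (my own guess, not a documented fix) is to stop the wrapped UIButton from intercepting touches, so the enclosing SwiftUI Button handles the tap itself:

import SwiftUI
import UIKit

struct PassthroughWrapperButton: UIViewRepresentable {
    func makeUIView(context: Context) -> UIButton {
        let button = UIButton(type: .system)
        button.setTitle("OK", for: .normal)
        // Let touches fall through to the enclosing SwiftUI Button.
        button.isUserInteractionEnabled = false
        return button
    }
    func updateUIView(_ uiView: UIButton, context: Context) {}
}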
My preferred way of setting up an app project no longer works in Xcode 16.
I like to have my apps massively modularized - a separate SPM module for every significant feature. I use a project grouped together with a package that contains all the feature modules. The project's app target imports a consolidated AppFeature from the package that is all the logic of the app.
In Xcode 15 and before, I had a process for creating this kind of setup (described below -- unfortunately the forums don't support collapsible sections). Where I used to be able to drag a directory from Finder into the Xcode files navigator (step 6 in my process), it now rejects the drag.
What I'm looking for is a way to have a single window open that has my Swift package and below that a separate folder for each executable target. Creating new modules in the package automatically creates new schemes in Xcode. Executable targets in the project can reference any module in the package. Source control treats the entire thing as one repository.
I've tried all the approaches I can think of to accomplish the same goal, but no luck. The projects I've already built work fine in Xcode 16 -- I just can't make a new one. Unfortunately I can't revert to Xcode 15 for this purpose since it apparently doesn't run on my work machine with macOS 15.
Here's my process that worked great till now:
In Terminal, create the project folder Foo:
$ mkdir Foo
$ cd Foo
Create the package:
$ swift package init
Creating library package: Foo
Creating Package.swift
...
Create a directory for the project:
$ mkdir App
In Xcode, create a new app project called Foo and put it in the App folder
Open the Foo project
Drag the top level Foo directory from Finder into the Xcode project and drop it immediately under the project name
Close the Xcode project
Create a file called Package.swift and place it in the App folder. Edit it to have an empty package content. This ensures Xcode won’t display that folder in the source navigator under the package header.
// swift-tools-version: 5.9
// (the swift-tools-version header is required at the top of any Package.swift; match your toolchain)
import PackageDescription

let package = Package(
    name: "",
    products: [],
    dependencies: [],
    targets: []
)
Open the Xcode project. You should have the top-level project name Foo. Under that will be the package, also named Foo. Then there will be the app target, also named Foo. You can rename the app target folder Foo to something like iOS if you want to have other targets for other platforms like macOS.
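For concreteness, the top-level package manifest then ends up looking roughly like this (a sketch; AppFeature and the target names are just illustrative):

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "Foo",
    products: [
        // The app target in the project imports this single consolidated library.
        .library(name: "AppFeature", targets: ["AppFeature"])
    ],
    targets: [
        .target(name: "AppFeature"),
        .testTarget(name: "AppFeatureTests", dependencies: ["AppFeature"])
    ]
)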
I created a new iOS project (storyboard, if it matters) and added a bunch of C files to it. Some portion of the C files depends on libcurl. I would like to be able to build for both simulator and device if possible. Google claims that Xcode can provide the dependency as part of the built-in libraries; however, I do not see libcurl.4.tbd (or any version) as an option to choose. Is this feature no longer available, or is there something I am missing here?
For context, here is a screenshot of my build error situation.
Hi, after a software update of our previously successfully notarized installer .pkg, we now receive a cryptic notarization issue rejecting the entire .pkg:
{
    "logFormatVersion": 1,
    "jobId": "5cff2d71-7228-4fb4-a39d-329084cd2713",
    "status": "Invalid",
    "statusSummary": "Archive contains critical validation errors",
    "statusCode": 4000,
    "archiveFilename": "my_installer.pkg.zip",
    "uploadDate": "2024-12-04T23:17:14.016Z",
    "sha256": "2f26d0376506abe130ac904d7cb0d0cd5428666624428da9f44da7756352844f",
    "ticketContents": null,
    "issues": [
        {
            "severity": "error",
            "code": null,
            "path": "my_installer.pkg.zip",
            "message": "Package my_installer.pkg.zip has no signed executables or bundles. No tickets can be generated.",
            "docUrl": null,
            "architecture": null
        },
        {
            "severity": "warning",
            "code": null,
            "path": "my_installer.pkg.zip/my_installer.pkg",
            "message": "The contents of the package at my_installer.pkg.zip/my_installer.pkg could not be extracted.",
            "docUrl": null,
            "architecture": null
        }
    ]
}
What could be the reason for that? We've also submitted the .pkg (not zipped) with the same result. We built it on different macOS versions, including Sonoma 14.7 with the latest developer tools installed, without any change in outcome.
But when extracting it via the undocumented:
pkgutil --expand-full
and zipping the raw contents (without re-packaging them as a .pkg) and submitting again, notarization succeeds for all components.
However, installation of the .pkg still fails with the notarization dialog. I was under the assumption that it is sufficient to notarize the .pkg contents, but this does not seem to be true. Or is it?
Any hints or help are highly appreciated. Thanks!
I am a complete newbie when it comes to Swift and macOS development, so apologies; I don't even know what the right thing to search for is.
I have an app which uses ScreenCaptureKit. I had a preview working which showed the different windows available; it initially required me to give my app permission for screen and system audio recording, which I did.
However, now whenever I rebuild the app, it asks for permission again and fails, despite the permission already being given.
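For anyone else debugging this, here is the minimal check I added (a sketch; CGPreflightScreenCaptureAccess is the CoreGraphics-level permission check, not ScreenCaptureKit-specific):

import CoreGraphics

// True when the app currently holds Screen Recording permission.
if CGPreflightScreenCaptureAccess() {
    print("Screen recording permission is granted")
} else {
    // Shows the system prompt (at most once) and returns immediately;
    // the app must be relaunched after access is granted in System Settings.
    let granted = CGRequestScreenCaptureAccess()
    print("Requested access, currently granted: \(granted)")
}

My suspicion (unconfirmed) is that the changing code signature of debug rebuilds makes the system treat each build as a new app, which would explain the repeated prompts.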
I'm building an iOS app using Xcode, and I frequently modify code within a package. To do this, I drag and drop a locally cloned version of the package into my project as a local package. However, when working across multiple branches of my app, I need different versions of the package as well. To achieve this, I use Git worktrees to create branch-specific copies of the package. Unfortunately, when I drag and drop these worktree copies into Xcode, the IDE doesn't seem to recognize them. Could you kindly guide me on resolving this?
We are implementing Apple Pay and Wallet features in our app and using mocked data for testing purposes. Specifically, in the status(completion:) method of PKIssuerProvisioningExtensionHandler, we return:
passEntriesAvailable: true,
remotePassEntriesAvailable: true,
requiresAuthentication: true,
In the passEntries(completion:) method, we provide mocked data for our card.
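Concretely, the status we return looks roughly like this (a sketch of our handler; the class name is illustrative and the real values come from mocked data):

import PassKit

final class ProvisioningHandler: PKIssuerProvisioningExtensionHandler {
    override func status(completion: @escaping (PKIssuerProvisioningExtensionStatus) -> Void) {
        let status = PKIssuerProvisioningExtensionStatus()
        status.passEntriesAvailable = true        // card can be added on this iPhone
        status.remotePassEntriesAvailable = true  // card can be added on a paired Apple Watch
        status.requiresAuthentication = true      // run the authentication UI extension first
        completion(status)
    }
}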
The issue is that the app icon inconsistently appears under the "From Apps on Your iPhone" section in the Wallet app. Sometimes it shows up as expected, but other times it does not.
On recent occasions, when the app is selected and mocked authorization passes, the system shows a 'Cannot Add Card' error, even though the status we returned indicates that the app has cards available for Wallet.
For reference:
The app uses two bundle IDs supporting in-app provisioning, and PNO Pass Metadata has not been configured yet.
Could you help clarify the potential reasons for this inconsistent behavior?
Hello, when I open the "Cloud" tab in the Report navigator, it shows "Failed to analyze workspace" and I cannot do anything. I can't start a build, nor can I manage or create workflows. All options are greyed out.
When loading the project, in Console I see the following errors from Xcode:
fault 11:07:19.805349+0100 Xcode ❌❌❌ Assertion failure #0 in <private>
error 11:07:19.816641+0100 Xcode ❌❌❌ Assertion backtrace: >><private>0 XcodeCloudKit 0x00000001195309d8 $s13XcodeCloudKit17AssertionHandlingPAAE16logFullBacktraceyyFAA0D0O7HandlerV_Tg5Tf4d_n + 48
1 XcodeCloudKit 0x00000001192583c4 $s13XcodeCloudKit26ShellBasedProjectDescriberPAAE17describePublishery7Combine03AnyI0Vy6OutputQzs5Error_pG5InputQzFAKs5Int32V_10Foundation4DataVARtcfU_AA021AllArchivableProductsdG0V_Tg5 + 2012
2 XcodeCloudKit 0x000000011925ec90 $ss5Int32V10Foundation4DataVAE7Combine12AnyPublisherVy6Output13XcodeCloudKit26ShellBasedProjectDescriberPQzs5Error_pGIegyggo_AB8exitCode_AE6stdoutAE6stderrtAOIegnr_AjKRzlTRAJ021AllArchivableProductskN0V_TG5TA + 40
3 Combine 0x00000001a072351c $s7Combine10PublishersO7FlatMapV5Outer33_E91C3F00A6DFAAFEA2009FAF507AE039LLC7receiveyAA11SubscribersO6DemandV6OutputQy_F + 300
4 Combine 0x00000001a0725128 $s7Combine10PublishersO7FlatMapV5Outer33_E91C3F00A6DFAAFEA2<…>
<<
fault 11:07:19.822299+0100 Xcode ❌❌❌ Assertion failure #0 in <private>
fault 11:07:19.822405+0100 Xcode ❌❌❌ Assertion failure #0 in <private>
error 11:07:19.824577+0100 Xcode ❌❌❌ Assertion backtrace: >><private>0 XcodeCloudKit 0x00000001195309d8 $s13XcodeCloudKit17AssertionHandlingPAAE16logFullBacktraceyyFAA0D0O7HandlerV_Tg5Tf4d_n + 48
1 XcodeCloudKit 0x00000001195e088c $s13XcodeCloudKit17DiscoveredProductO16DescriptorsCacheC13updatePlanner11environmentyAA21DeploymentEnvironmentO_tFys6ResultOySayAC0E10DescriptorVGs5Error_pGcfU_ + 740
2 XcodeCloudKit 0x0000000119289fa8 $s7Combine9PublisherP13XcodeCloudKitE18sinkAtMostOneValue4file4line6resultAA14AnyCancellableCs12StaticStringV_Suys6ResultOy6OutputQzs5Error_pGctFyAA11SubscribersO10CompletionOy_sAQ_pGcfU0_AA0nB0VySay0cD3API10ComponentsO7SchemasO13BuildArtifactVGsAQ_pG_Tg5Tm + 68
3 XcodeCloudKit 0x00000001192a1ef4 $s7Combine9PublisherP13XcodeCloudKitE18sinkAtMostOneValue4file4line6resultAA14AnyCancellableCs12StaticStringV_Suys6ResultOy6OutputQzs5Error_pGctFyAA11SubscribersO10CompletionOy_sAQ_pGcfU0_AA0nB0VySay0cD3API10ComponentsO7Sc<…>
<<
Obtained by filtering by xcode-cloud-assertion category.
Is there anything I can do to fix this? Thanks!
Greetings all,
Would anyone be able to assist and tell me why, on my desktop, Button( autocompletion shows one choice:
(action: @escaping () -> Void, label: @escaping () -> Label)
while on my laptop, Button( autocompletion shows more choices, starting with:
(_ configuration:)
(action:label:)
etc....
How do I get my desktop to act like my laptop?
I’ve noticed a good number of strange problems in the iOS development process, so I’m going to track them here. If they’re resolved at some point, I’ll note it.
At least in my first attempt, letting Appleconnect build my code resulted in a build with no errors that could not be submitted for review. The solution for me was to build from Xcode. It took a while to stumble on that, so if you have problems with one way, try the other.
Xcode doesn’t show Info.plist by default. This is a particularly nasty bug that caused a great deal of trouble, being the biggest reason I had so much trouble getting my first app submitted for review. The only way I found to get around it was to make a small change to one of the items under the Info tab under the project name/icon in Xcode. Then, the Info.plist showed up!
SwiftUI and SwiftData are siloed off from each other, so if you have a list, the view is controlled by SwiftUI. This means that if a user moves an item in a list, SwiftData is not informed. If the user quits the app, the order will revert to the last saved version! So you must resort to tracking the order in your code and adjusting your query accordingly.
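A sketch of that ordering workaround (this assumes a SwiftData model with an explicit order field; the names are illustrative):

import SwiftData
import SwiftUI

@Model
final class TodoItem {
    var title: String
    var orderIndex: Int  // persisted explicitly, since SwiftData is not told about list moves
    init(title: String, orderIndex: Int) {
        self.title = title
        self.orderIndex = orderIndex
    }
}

// Query by the persisted order and rewrite the indices on every move.
struct TodoList: View {
    @Query(sort: \TodoItem.orderIndex) private var items: [TodoItem]
    var body: some View {
        List {
            ForEach(items) { Text($0.title) }
                .onMove { from, to in
                    var reordered = items
                    reordered.move(fromOffsets: from, toOffset: to)
                    for (i, item) in reordered.enumerated() { item.orderIndex = i }
                }
        }
    }
}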
I am trying to unit test an ES system extension's code using a Unit Testing Bundle.
The code tries to read NSEndpointSecurityMachServiceName from Info.plist via [bundle objectForInfoDictionaryKey:], but the lookup fails.
I have added NSEndpointSecurityMachServiceName to the Info.plist of the Test Bundle, and I did check the Info.plist file in the bundle; NSEndpointSecurityMachServiceName was listed there.
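In case it helps to reproduce, here is a minimal sketch of the lookup from a test. My assumption is that which bundle you ask matters: inside a unit test, Bundle.main is the xctest runner, not the test bundle itself.

import XCTest

final class MachServiceNameTests: XCTestCase {
    func testReadMachServiceName() {
        // Bundle.main here would be the xctest runner, whose Info.plist does not
        // contain our key; Bundle(for:) resolves to the test bundle instead.
        let testBundle = Bundle(for: MachServiceNameTests.self)
        let name = testBundle.object(forInfoDictionaryKey: "NSEndpointSecurityMachServiceName")
        XCTAssertNotNil(name, "key missing from the test bundle's Info.plist")
    }
}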
I have an old app that I just got a notice will be pulled from the App Store if I don't upgrade. I tried to open it in Xcode, but it says I need to use Xcode 10.1 to convert it to Swift 4.
Exact message - "Use Xcode 10.1 to migrate the code to Swift 4."
I downloaded Xcode 10.1, but now the OS (Sequoia) says it can't run it and that I have to use the latest version of Xcode.
Exact message - "The version of Xcode installed on this Mac is not compatible with macOS Sequoia. Download the latest version for free from the App Store."
Any experience with this and suggestions would be greatly appreciated.
I am building a video conferencing app using LiveKit in Flutter and want to implement Picture-in-Picture (PiP) mode on iOS. My goal is to display a view showing the speaker's initials or avatar during PiP mode. I successfully implemented this functionality on Android but am struggling to achieve it on iOS.
I am using a MethodChannel to communicate with the native iOS code. Here's the Flutter-side code:
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';

class PipController {
  static const _channel = MethodChannel('pip_channel');

  static Future<void> startPiP() async {
    try {
      await _channel.invokeMethod('enterPiP');
    } catch (e) {
      if (kDebugMode) {
        print("Error starting PiP: $e");
      }
    }
  }

  static Future<void> stopPiP() async {
    try {
      await _channel.invokeMethod('exitPiP');
    } catch (e) {
      if (kDebugMode) {
        print("Error stopping PiP: $e");
      }
    }
  }
}
On the iOS side, I am using AVPictureInPictureController. Since it requires an AVPlayerLayer, I had to include a dummy video URL to initialize the AVPlayer. However, this results in the dummy video’s audio playing in the background, but no view is displayed in PiP mode.
Here’s my iOS code:
import Flutter
import UIKit
import AVKit
@main
@objc class AppDelegate: FlutterAppDelegate {
    var pipController: AVPictureInPictureController?
    var playerLayer: AVPlayerLayer?

    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        let controller: FlutterViewController = window?.rootViewController as! FlutterViewController
        let pipChannel = FlutterMethodChannel(name: "pip_channel", binaryMessenger: controller.binaryMessenger)
        pipChannel.setMethodCallHandler { [weak self] (call: FlutterMethodCall, result: @escaping FlutterResult) in
            if call.method == "enterPiP" {
                self?.startPictureInPicture(result: result)
            } else if call.method == "exitPiP" {
                self?.stopPictureInPicture(result: result)
            } else {
                result(FlutterMethodNotImplemented)
            }
        }
        GeneratedPluginRegistrant.register(with: self)
        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }

    private func startPictureInPicture(result: @escaping FlutterResult) {
        guard AVPictureInPictureController.isPictureInPictureSupported() else {
            result(FlutterError(code: "UNSUPPORTED", message: "PiP is not supported on this device.", details: nil))
            return
        }
        // Set up the AVPlayer
        let player = AVPlayer(url: URL(string: "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4")!)
        let playerLayer = AVPlayerLayer(player: player)
        self.playerLayer = playerLayer
        // Create a dummy view
        let dummyView = UIView(frame: CGRect(x: 0, y: 0, width: 1, height: 1))
        dummyView.isHidden = true
        window?.rootViewController?.view.addSubview(dummyView)
        dummyView.layer.addSublayer(playerLayer)
        playerLayer.frame = dummyView.bounds
        // Initialize PiP Controller
        pipController = AVPictureInPictureController(playerLayer: playerLayer)
        pipController?.delegate = self
        // Start playback and PiP
        player.play()
        pipController?.startPictureInPicture()
        print("Picture-in-Picture started")
        result(nil)
    }

    private func stopPictureInPicture(result: @escaping FlutterResult) {
        guard let pipController = pipController, pipController.isPictureInPictureActive else {
            result(FlutterError(code: "NOT_ACTIVE", message: "PiP is not currently active.", details: nil))
            return
        }
        pipController.stopPictureInPicture()
        playerLayer = nil
        self.pipController = nil
        result(nil)
    }
}

extension AppDelegate: AVPictureInPictureControllerDelegate {
    func pictureInPictureControllerDidStartPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) {
        print("PiP started")
    }

    func pictureInPictureControllerDidStopPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) {
        print("PiP stopped")
    }
}
Questions:
How can I implement PiP mode on iOS without using a video URL (or AVPlayerLayer)?
Is there a way to display a custom UIView (like a speaker’s initials or an avatar) in PiP mode instead of requiring a video?
Why does PiP not display any view, even though the dummy video URL is playing in the background?
I am new to iOS development and would greatly appreciate any guidance or alternative approaches to achieve this functionality. Thank you!
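For question 2, one direction I am exploring is the iOS 15+ video-call PiP API, which hosts a custom view controller instead of an AVPlayerLayer. This is a sketch only (not yet verified in my app; my understanding is that this path is intended for conferencing apps and may require the voip background mode):

import AVKit
import UIKit

// A PiP content controller that hosts an arbitrary view (e.g. speaker initials).
final class AvatarPiPViewController: AVPictureInPictureVideoCallViewController {
    let initialsLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        preferredContentSize = CGSize(width: 120, height: 90)
        initialsLabel.text = "AB"  // illustrative placeholder for the speaker's initials
        initialsLabel.textAlignment = .center
        initialsLabel.frame = view.bounds
        initialsLabel.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(initialsLabel)
    }
}

// Called from somewhere with access to a visible source view in the app's hierarchy.
func makePiPController(sourceView: UIView) -> AVPictureInPictureController? {
    guard AVPictureInPictureController.isPictureInPictureSupported() else { return nil }
    let contentSource = AVPictureInPictureController.ContentSource(
        activeVideoCallSourceView: sourceView,
        contentViewController: AvatarPiPViewController()
    )
    return AVPictureInPictureController(contentSource: contentSource)
}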
I have a problem uploading to TestFlight. When I archived my app, it reported success, but when I went to the App Store Connect website I didn't see any builds, even after waiting a long time for the build to be processed. And one more problem: some time after the upload notification, I get a notification that my app for iOS is not a valid binary, and I don't understand this problem. Can you please tell me how to solve it?