Issue type: Bug
TensorFlow metal version: 1.1.1
TensorFlow version: 2.18
OS platform and distribution: MacOS 15.2
Python version: 3.11.11
GPU model and memory: Apple M2 Max GPU 38-cores
Standalone code to reproduce the issue:
import tensorflow as tf
if __name__ == '__main__':
    gpus = tf.config.experimental.list_physical_devices('GPU')
    print(gpus)
Current behavior
The Apple silicon GPU with tensorflow-metal==1.1.0 and Python 3.11 works fine with tensorflow==2.17.0.
This is normal output:
/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Process finished with exit code 0
But if I upgrade TensorFlow to 2.18, I get this error:
/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
Traceback (most recent call last):
File "/Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py", line 1, in <module>
import tensorflow as tf
File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/__init__.py", line 437, in <module>
_ll.load_library(_plugin_dir)
File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN3tsl8internal10LogMessageC1EPKcii
Referenced from: <D2EF42E3-3A7F-39DD-9982-FB6BCDC2853C> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Expected in: <2814A58E-D752-317B-8040-131217E2F9AA> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
Process finished with exit code 1
I'm working on my Swift Student Challenge submission and developing a Vision framework-based image classifier. I want to ensure I'm following best practices for training data and the guidelines for what images I can use to train my image classifier.
What types of images can I use for training my model?
Are there specific image databases or resources recommended by Apple that are safe to use for Swift Student Challenge submissions?
I'm currently considering images from Wikipedia and my own images.
I have exported a PyTorch model to a CoreML mlpackage file and imported the model file into my iOS project. The model is a music source separation model: it runs prediction on audio-spectrogram blocks and returns the separated audio source spectrograms.
The model produces correct results compared to desktop+GPU+Python, but inference on an iPhone 15 Pro Max is really, really slow. Using the Xcode model Performance tool I can see that the inference isn't automatically managed between compute units; all of it runs on the CPU. The Performance tool indicates that all ops should be supported by both the GPU and the Neural Engine.
One thing to note: when initializing the model with the MLModelConfiguration option .cpuAndGPU or .cpuAndNeuralEngine, there is an error in the Xcode console:
`Error(s) occurred compiling MIL to BNNS graph:
[CreateBnnsGraphProgramFromMIL]: Failed to determine convolution kernel at location at /private/var/containers/Bundle/Application/2E3C4AFF-1FA4-4C95-AAE4-ECEBC0FB0BF9/mymss.app/mymss.mlmodelc/model.mil:2453:12
@ CreateBnnsGraphProgramFromMIL`
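For reference, a minimal sketch of the initialization that triggers this; the class name mymss is my assumption, taken from the mymss.mlmodelc path in the message:
import CoreML
// Sketch of the initialization described above; `mymss` is assumed to be the
// generated model class (based on the mymss.mlmodelc path in the error).
func loadSeparationModel() throws -> mymss {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine  // .cpuAndGPU logs the same BNNS error
    return try mymss(configuration: config)
}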
Before going back to hammering on the model in Python, are there any tips/strategies I could try in the coremltools export phase or in configuring the model for prediction on iOS?
My export toolchain is currently Linux with coremltools v8.1, with an export target of iOS 16.
Hello,
I am currently developing an application that requires barcode scanning using Apple’s Vision framework (VNBarcodeSymbology). I noticed that the framework supports several GS1 DataBar symbologies, such as:
VNBarcodeSymbology.gs1DataBar
VNBarcodeSymbology.gs1DataBarExpanded
VNBarcodeSymbology.gs1DataBarLimited
However, I could not find any explicit reference to support for GS1 DataBar Stacked (both regular and expanded variants).
Could you confirm whether GS1 DataBar Stacked is currently supported in VisionKit's DataScannerViewController or VNBarcodeObservation? If not, are there any plans to include support for this symbology in a future iOS update?
This functionality is critical for my use case, as GS1 DataBar Stacked barcodes are widely used in retail, pharmaceuticals, and logistics, where space constraints prevent the use of standard GS1 DataBar formats.
I appreciate any clarification on this matter and would be happy to provide additional details if needed.
Hi, the most recent version of tensorflow-metal is only available for macOS 12.0 and Python up to version 3.11. Is there any chance it could be updated with wheels for macOS 15 and Python 3.12 (which is the default version supported for TensorFlow 2.17+)? I'd note that even downgrading to Python 3.11 would not be sufficient, as the wheels only work for macOS 12.
Thanks.
Hi everyone 😊, I want to implement facial recognition into my app. I was planning to use Create ML's image classification, but there seems to be a lot of hassle involved (the JSON file, etc.). Are there other easy-to-implement options that don't involve advanced coding? Thanks, Oliver
Using the iPhone 14 Max camera, implement model-based recognition and draw a rectangular box around the recognized object. Compute the width and height with LiDAR and display them in centimeters on the live-updating image.
Swift code
I’m building an app that generates images based on text input from a specific text field. However, I’m encountering a problem:
For short prompts like "a cat and a dog", the entire string is sent to the Image Playground, even when I use the extracted method. For longer inputs, the behavior is inconsistent. Sometimes it extracts keywords correctly, but other times it doesn’t extract anything at all.
Since my app relies on generating images based on the extracted keywords, this inconsistency negatively impacts the user experience in my app. How can I make sure that keywords are always extracted from the input string?
Button("Generate", systemImage: "apple.intelligence") {
isPresented = true
}
.imagePlaygroundSheet(isPresented: $isPresented, concepts: [ImagePlaygroundConcept.extracted(from: text, title: textTitle)]) { url in
imageURL = url
}
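One workaround I'm considering is to pre-extract keywords myself and pass each one as its own concept, so the sheet doesn't depend on its own extraction step. This is only a sketch; it assumes ImagePlaygroundConcept.text(_:) is available and that simple noun extraction via NLTagger is good enough for my prompts:
import NaturalLanguage
import ImagePlayground
// Sketch: pull nouns out of the prompt with NLTagger and hand each one to the
// playground as a separate concept. Assumes ImagePlaygroundConcept.text(_:).
func keywordConcepts(from text: String) -> [ImagePlaygroundConcept] {
    let tagger = NLTagger(tagSchemes: [.lexicalClass])
    tagger.string = text
    var keywords: [String] = []
    tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                         unit: .word,
                         scheme: .lexicalClass,
                         options: [.omitWhitespace, .omitPunctuation]) { tag, range in
        if tag == .noun { keywords.append(String(text[range])) }
        return true
    }
    return keywords.map { ImagePlaygroundConcept.text($0) }
}
That would at least make the extraction deterministic on my side, but I'd still prefer a way to make .extracted(from:title:) behave consistently.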
Hello!
I have a Swift program that tracks the location of a ball (through the back camera). It seems to be working fine, but the only issue is the run time, particularly my concatenate, normalize, and argmax functions, which are meant to be a 1-to-1 copy of the PyTorch argmax function and the following Python lines:
imgs = np.concatenate((img, img_prev, img_preprev), axis=2)
imgs = imgs.astype(np.float32)/255.0
imgs = np.rollaxis(imgs, 2, 0)
inp = np.expand_dims(imgs, axis=0) # used to pass into model
However, I need my program to run in real time, and in an ideal world I want it to run well under real time. Below is a rundown of the run times that result from my code:
Starting model inference
Setup took: 0.0 seconds
Resize took: 0.03741896152496338 seconds
Concatenation took: 0.3359949588775635 seconds
Normalization took: 0.9906361103057861 seconds
Model prediction took: 0.3425499200820923 seconds
Argmax took: 28.17007803916931 seconds
Postprocess took: 0.054128050804138184 seconds
Model inference took 29.934185028076172 seconds
Here are the concatenateBuffers, normalizeBuffers, and argmax functions that I use:
func concatenateBuffers(_ buffers: [CVPixelBuffer?]) -> CVPixelBuffer? {
    guard buffers.count == 3, let first = buffers[0] else { return nil }
    let width = CVPixelBufferGetWidth(first)
    let height = CVPixelBufferGetHeight(first)
    let targetChannels = 9
    var concatenated: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attrs, &concatenated)
    guard let output = concatenated else { return nil }
    CVPixelBufferLockBaseAddress(output, [])
    defer { CVPixelBufferUnlockBaseAddress(output, []) }
    guard let outputData = CVPixelBufferGetBaseAddress(output) else { return nil }
    let outputPtr = UnsafeMutablePointer<UInt8>(OpaquePointer(outputData))
    // Lock all input buffers at once
    buffers.forEach { buffer in
        guard let buffer = buffer else { return }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
    }
    defer {
        buffers.forEach { CVPixelBufferUnlockBaseAddress($0!, .readOnly) }
    }
    // Process each input buffer
    for (frameIdx, buffer) in buffers.enumerated() {
        guard let buffer = buffer,
              let inputData = CVPixelBufferGetBaseAddress(buffer) else { continue }
        let inputPtr = UnsafePointer<UInt8>(OpaquePointer(inputData))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
        let totalPixels = width * height
        // Process all pixels in one go for this frame
        for i in 0..<totalPixels {
            let y = i / width
            let x = i % width
            let inputOffset = y * bytesPerRow + x * 4
            let outputOffset = i * targetChannels + frameIdx * 3
            // BGR order to match numpy
            outputPtr[outputOffset] = inputPtr[inputOffset + 2]     // B
            outputPtr[outputOffset + 1] = inputPtr[inputOffset + 1] // G
            outputPtr[outputOffset + 2] = inputPtr[inputOffset]     // R
        }
    }
    return output
}
func normalizeBuffer(_ buffer: CVPixelBuffer?) -> MLMultiArray? {
    guard let input = buffer else { return nil }
    let width = CVPixelBufferGetWidth(input)
    let height = CVPixelBufferGetHeight(input)
    let channels = 9
    CVPixelBufferLockBaseAddress(input, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(input, .readOnly) }
    guard let inputData = CVPixelBufferGetBaseAddress(input) else { return nil }
    let shape = [1, NSNumber(value: channels), NSNumber(value: height), NSNumber(value: width)]
    guard let output = try? MLMultiArray(shape: shape, dataType: .float32) else { return nil }
    let inputPtr = inputData.assumingMemoryBound(to: UInt8.self)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(input)
    let ptr = UnsafeMutablePointer<Float>(OpaquePointer(output.dataPointer))
    let totalSize = width * height
    for c in 0..<channels {
        for idx in 0..<totalSize {
            let h = idx / width
            let w = idx % width
            let inputIdx = h * bytesPerRow + w * channels + c
            ptr[c * totalSize + idx] = Float(inputPtr[inputIdx]) / 255.0
        }
    }
    return output
}
func argmax(_ array: MLMultiArray) -> MLMultiArray? {
    let shape = array.shape.map { $0.intValue }
    guard shape.count == 3,
          shape[0] == 1,
          shape[1] == 256,
          shape[2] == 230400 else {
        return nil
    }
    guard let output = try? MLMultiArray(shape: [1, NSNumber(value: 230400)], dataType: .int32) else { return nil }
    let ptr = UnsafePointer<Float>(OpaquePointer(array.dataPointer))
    let outputPtr = UnsafeMutablePointer<Int32>(OpaquePointer(output.dataPointer))
    let channelSize = 230400
    for pos in 0..<230400 {
        var maxValue = -Float.infinity
        var maxIndex: Int32 = 0
        for channel in 0..<256 {
            let value = ptr[channel * channelSize + pos]
            if value > maxValue {
                maxValue = value
                maxIndex = Int32(channel)
            }
        }
        outputPtr[pos] = maxIndex
    }
    return output
}
Are there any glaring areas of inefficiencies that can be reduced to allow for under real time processing whilst following the same logic as found in the python code exactly? Would using Obj-C speed things up for some reason? Are there any tools I can use so I don't have to write these functions myself?
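One direction I've been sketching for the argmax specifically (the same logic as above, just split across cores with DispatchQueue.concurrentPerform, since each position's channel scan is independent; not benchmarked):
import CoreML
import Dispatch
// Sketch: parallelize the existing channel-argmax across positions.
// Chunking keeps dispatch overhead small relative to the work per chunk.
func argmaxParallel(_ array: MLMultiArray) -> MLMultiArray? {
    let channels = 256
    let positions = 230_400
    guard let output = try? MLMultiArray(shape: [1, NSNumber(value: positions)], dataType: .int32) else { return nil }
    let inPtr = array.dataPointer.assumingMemoryBound(to: Float.self)
    let outPtr = output.dataPointer.assumingMemoryBound(to: Int32.self)
    let chunk = 4_096
    let chunkCount = (positions + chunk - 1) / chunk
    DispatchQueue.concurrentPerform(iterations: chunkCount) { c in
        let start = c * chunk
        let end = min(start + chunk, positions)
        for pos in start..<end {
            var maxValue = -Float.infinity
            var maxIndex: Int32 = 0
            for channel in 0..<channels {
                let value = inPtr[channel * positions + pos]
                if value > maxValue {
                    maxValue = value
                    maxIndex = Int32(channel)
                }
            }
            outPtr[pos] = maxIndex
        }
    }
    return output
}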
Additionally, in the class's init function, I tried to check the compute units being used, since I feel 0.34 seconds for a single model prediction is also far too long, but no print statements are showing for some reason:
init() {
    guard let loadedModel = try? BallTrackerModel() else {
        fatalError("Could not load model")
    }
    let config = MLModelConfiguration()
    config.computeUnits = .all
    guard let configuredModel = try? BallTrackerModel(configuration: config) else {
        fatalError("Could not configure model")
    }
    self.model = configuredModel
    print("model loaded with compute units \(config.computeUnits.rawValue)")
}
Thanks!
I am working on a CoreML image classification model in Xcode, which takes a 299x299 image and attempts to classify hand-drawn sketches. The model was trained using Create ML and works perfectly when tested in the Create ML preview. However, when used in the Xcode app, the classification results are incorrect.
I have already verified that the image is correctly resized to 299x299 pixels, matching the input size of the model. The classification always returns incorrect results, even when using images that were correctly classified during training. I originally used kCVPixelFormatType_32ARGB, but I read that CoreML typically expects BGRA format. I updated my conversion function to use kCVPixelFormatType_32BGRA and CGImageAlphaInfo.premultipliedLast, but the issue persists. This makes me suspect that either the pixel format is still incorrect or that something went wrong during the .mlmodelc compilation.
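For reference, one alternative path I'm considering is letting Vision feed the model, so scaling and pixel-format conversion are handled for me. This is just a sketch; SketchClassifier is a placeholder for whatever class name Create ML generated:
import UIKit
import Vision
import CoreML
// Sketch: classify through Vision instead of building a CVPixelBuffer by hand.
// `SketchClassifier` is a placeholder for the generated Create ML model class.
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: SketchClassifier(configuration: MLModelConfiguration()).model) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }
    request.imageCropAndScaleOption = .centerCrop // a crop different from training can change results
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}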
We are building an app that reads text. It can read English and normal (horizontal) Japanese text successfully, but in some cases we need to read Japanese tategaki (vertically aligned text). In those cases, the same code gives no output. Do we need to change any configuration to read Japanese tategaki? And is it actually possible to read Japanese tategaki using the Vision framework?
lazy var detectTextRequest = VNRecognizeTextRequest { request, error in
    self.resStr = "\n"
    self.words = [:]
    // Get OCR result
    guard let res = request.results as? [VNRecognizedTextObservation] else { return }
    // separate the words by space
    let text = res.compactMap({ $0.topCandidates(1).first?.string }).joined(separator: " ")
    var n = 0
    self.wordArr = [[]]
    self.xs = 1
    self.ys = 1
    var hs = 0.0 // To compare the heights of the words
    // To get the original axis (top most word's axis), only once
    for r in res {
        var word = r.topCandidates(1).first?.string
        self.words[word ?? ""] = [r.topLeft.x, r.topLeft.y]
        if self.cartLabelType == 1 {
            if (word?.components(separatedBy: CharacterSet(charactersIn: "//")).count ?? 0) > 2 {
                self.xs = r.topLeft.x
                self.ys = r.topLeft.y
            }
        }
    }
}
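For reference, the configuration we would try first (a sketch only; we are not sure Vision supports vertical layout at all) pins the recognition language to Japanese and uses the accurate recognition level:
import Vision
// Sketch: explicitly request Japanese with the accurate recognizer. This rules
// out language auto-detection as the cause of the empty results; it does not
// guarantee tategaki support.
let japaneseTextRequest: VNRecognizeTextRequest = {
    let request = VNRecognizeTextRequest { request, error in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        print(lines.joined(separator: "\n"))
    }
    request.recognitionLevel = .accurate
    request.recognitionLanguages = ["ja-JP"]
    request.usesLanguageCorrection = true
    return request
}()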
I have an image based app with albums, except in my app, albums are known as galleries.
When I tried to conform my existing OpenGalleryIntent with @AssistantIntent(schema: .photos.openAlbum), I had to change my existing gallery parameter to be called target in order to fit the predefined shape of this domain.
Previously, my intent was configured to display as “Open Gallery” with the description “Opens the selected Gallery” in the Shortcuts app. After conforming to the photos domain, it displays as “Open Album” with a description “Opens the Provided Album”.
Shortcuts is ignoring my configured title and description now. My code builds, but with the following build warnings:
Parameter argument title of a required Assistant schema intent parameter target should not be overridden
Implementation of the property title of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Implementation of the property description of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Is my only option to change the concept of a Gallery inside of my app into an Album? I don't want to do this... Conceptually, my app aligns well with what this domain does, but I didn't consider that conforming to the shape of an AI schema intent would also dictate exactly how it's presented to the user.
FB16283840
I'm currently trying to add support for Image Playground to our apps. It seems that it's not working in an app that is "Designed for iPad" and runs on a Mac. The modal just shows a spinner and the following is logged to console:
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
GP extension could not be loaded: Extension (platform: 2) could not be found (in update)
dealloc Query controller [C32BA176-6A3E-465D-B3C5-0F8D91068B89]
ImagePlaygroundViewController.isAvailable returns true, however.
In a "real" Mac Catalyst app, it's working. Just not when the app is actually an iPad app.
Is this a bug?
I've checked on pypi.org, and it appears to only have arm64 packages. Has x86 with AMD been deprecated?
In an App Playground Xcode project there is no Targets menu in the UI. When I try to use the model, it says the model is not in scope. When I did this in a regular project, it automatically generated a Swift class and had no errors because it had a target, but I see no place to add a target in an App Playground.
Hello!
I have a TrackNet model that I have converted to CoreML (.mlpackage) using coremltools, and the conversion process appears to go smoothly as I get the .mlpackage file I am looking for with the weights and model.mlmodel file in the folder. However, when I drag it into Xcode, it just shows up as 4 script tags (as pictured) instead of the model "interface" that is typically expected. I initially was concerned that my model was not compatible with CoreML, but upon logging the conversions, everything seems to be converted properly.
I have some code that may be relevant in debugging this issue: How I use the model:
model = BallTrackerNet() # this is the model architecture which will be referenced later
device = self.device # cpu
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location=device)) # balltrackerbest is the weights
model = model.to(device)
model.eval()
Here is the BallTrackerNet() model itself:
import torch.nn as nn
import torch
class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pad=1, stride=1, bias=True):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=bias),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return self.block(x)


class BallTrackerNet(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.out_channels = out_channels
        self.conv1 = ConvBlock(in_channels=9, out_channels=64)
        self.conv2 = ConvBlock(in_channels=64, out_channels=64)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = ConvBlock(in_channels=64, out_channels=128)
        self.conv4 = ConvBlock(in_channels=128, out_channels=128)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5 = ConvBlock(in_channels=128, out_channels=256)
        self.conv6 = ConvBlock(in_channels=256, out_channels=256)
        self.conv7 = ConvBlock(in_channels=256, out_channels=256)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv8 = ConvBlock(in_channels=256, out_channels=512)
        self.conv9 = ConvBlock(in_channels=512, out_channels=512)
        self.conv10 = ConvBlock(in_channels=512, out_channels=512)
        self.ups1 = nn.Upsample(scale_factor=2)
        self.conv11 = ConvBlock(in_channels=512, out_channels=256)
        self.conv12 = ConvBlock(in_channels=256, out_channels=256)
        self.conv13 = ConvBlock(in_channels=256, out_channels=256)
        self.ups2 = nn.Upsample(scale_factor=2)
        self.conv14 = ConvBlock(in_channels=256, out_channels=128)
        self.conv15 = ConvBlock(in_channels=128, out_channels=128)
        self.ups3 = nn.Upsample(scale_factor=2)
        self.conv16 = ConvBlock(in_channels=128, out_channels=64)
        self.conv17 = ConvBlock(in_channels=64, out_channels=64)
        self.conv18 = ConvBlock(in_channels=64, out_channels=self.out_channels)
        self.softmax = nn.Softmax(dim=1)
        self._init_weights()

    def forward(self, x, testing=False):
        batch_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ups1(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.ups2(x)
        x = self.conv14(x)
        x = self.conv15(x)
        x = self.ups3(x)
        x = self.conv16(x)
        x = self.conv17(x)
        x = self.conv18(x)
        # x = self.softmax(x)
        out = x.reshape(batch_size, self.out_channels, -1)
        if testing:
            out = self.softmax(out)
        return out

    def _init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.uniform_(module.weight, -0.05, 0.05)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)
            elif isinstance(module, nn.BatchNorm2d):
                nn.init.constant_(module.weight, 1)
                nn.init.constant_(module.bias, 0)
Here is also the meta data of my model:
[
  {
    "metadataOutputVersion" : "3.0",
    "storagePrecision" : "Float16",
    "outputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float32",
        "formattedType" : "MultiArray (Float32 1 × 256 × 230400)",
        "shortDescription" : "",
        "shape" : "[1, 256, 230400]",
        "name" : "var_462",
        "type" : "MultiArray"
      }
    ],
    "modelParameters" : [
    ],
    "specificationVersion" : 6,
    "mlProgramOperationTypeHistogram" : {
      "Cast" : 2,
      "Conv" : 18,
      "Relu" : 18,
      "BatchNorm" : 18,
      "Reshape" : 1,
      "UpsampleNearestNeighbor" : 3,
      "MaxPool" : 3
    },
    "computePrecision" : "Mixed (Float16, Float32, Int32)",
    "isUpdatable" : "0",
    "availability" : {
      "macOS" : "12.0",
      "tvOS" : "15.0",
      "visionOS" : "1.0",
      "watchOS" : "8.0",
      "iOS" : "15.0",
      "macCatalyst" : "15.0"
    },
    "modelType" : {
      "name" : "MLModelType_mlProgram"
    },
    "userDefinedMetadata" : {
      "com.github.apple.coremltools.source_dialect" : "TorchScript",
      "com.github.apple.coremltools.source" : "torch==2.5.1",
      "com.github.apple.coremltools.version" : "8.1"
    },
    "inputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float32",
        "formattedType" : "MultiArray (Float32 1 × 9 × 360 × 640)",
        "shortDescription" : "",
        "shape" : "[1, 9, 360, 640]",
        "name" : "input_frames",
        "type" : "MultiArray"
      }
    ],
    "generatedClassName" : "BallTracker",
    "method" : "predict"
  }
]
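As a separate sanity check on the package itself, here is a sketch of compiling and loading it manually at runtime; the resource name BallTracker comes from generatedClassName above and is an assumption, as is the package being bundled as a resource:
import CoreML
// Sketch: compile and load the .mlpackage at runtime, bypassing the missing
// generated interface. The resource name "BallTracker" is assumed from the metadata.
func loadBallTracker() throws -> MLModel {
    guard let packageURL = Bundle.main.url(forResource: "BallTracker", withExtension: "mlpackage") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let compiledURL = try MLModel.compileModel(at: packageURL)
    let config = MLModelConfiguration()
    config.computeUnits = .all
    return try MLModel(contentsOf: compiledURL, configuration: config)
}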
I have been struggling with this conversion for almost 2 weeks now so any help, ideas or pointers would be greatly appreciated! Let me know if any other information would be helpful to see as well.
Thanks!
Michael
I am trying to run TinyLlama directly using Swift Playgrounds for iOS. I have tried multiple solutions, such as libraries (LLM.swift, swift-transformers, ...), which never worked due to import issues, and I also tried importing an exported mlmodel.
For the latter, I followed the article about Llama 3.1 on CoreML. It was hard to understand how to do the inference with it, but I was able to export an mlpackage, which I then placed in an Xcode project to generate the mlmodelc (compiled model) and the model class. I had to go with the first version described in the article, without optimizations, as I got errors during model loading with the flexible input shapes. I was able to run the model for one-token generation.
But my biggest problem is that, though the mlmodelc is only 550 MiB, the model uses 24+ GiB of memory, largely exceeding what I can have on an iOS device.
Is there a way to do LLM inference in Swift Playgrounds at a reasonable speed (even 1 token/s would be sufficient)?
Not finding a lot on the Swift Assist technology announced at WWDC 2024. Does anyone know the latest status? Also, currently I use OpenAI's macOS app and its 'Work With...' functionality to assist with Xcode development, and this is okay, certainly saves copying code back and forth, but it seems like AI should be able to do a lot more to help with Xcode app development.
I guess I'm looking at what people are doing with AI in Visual Studio, Cline, Cursor and other IDEs and tools like those and feel a bit left out working in Xcode. Please let me know if there are AI tools or techniques out there you use to help with your Xcode projects.
Thanks in advance!
I see the suggested solution is simply "just change the language in the build settings," but build settings are not a thing in an App Playground project. It also says there are duplicated tasks.