Hi People :)
I'm experimenting with Swift/C++ interoperability these days.
I'd like to understand how I could conform a Swift class to a C++ header, like this:
import Application

class App: Application {
    public func run() {
        let app = NSApplication.shared
        let delegate = AppDelegate()
        app.delegate = delegate
        app.run()
    }
}
But I got this error:
/Users/tonygo/oss/native-research/App.swift:27:7: error: inheritance from non-protocol, non-class type 'Application'
class App: Application {
^
ninja: build stopped: subcommand failed.
That seems normal indeed.
Reproducible example: https://github.com/tony-go/native-research/tree/conform-swift-to-cxx-header (just run make)
I also have another branch on that repo where I use an intermediate C++ bridge file that inherits from the Application class and uses the Swift API: https://github.com/tony-go/native-research/tree/main/app
But I think that's a lot of boilerplate.
So I wonder: which approach should I take here?
Cheers :)
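For what it's worth, since Swift classes can't subclass C++ types today, the bridge usually has to run the other way: a small C++ subclass of Application overrides run() and calls into Swift through a C-compatible symbol. A minimal sketch of the Swift side, under that assumption (the symbol name swift_app_run and this App class are placeholders of mine, not from the repo):

```swift
// Sketch of the Swift side of a C++ bridge. A tiny C++ subclass of
// Application would override run() and call this exported symbol:
//
//   extern "C" bool swift_app_run(void);
//   struct AppBridge : Application {
//       void run() override { swift_app_run(); }
//   };

final class App {
    func run() -> Bool {
        // On macOS this is where NSApplication.shared and the delegate
        // would be set up, as in the snippet above.
        return true
    }
}

// C-compatible entry point the C++ bridge can declare and call.
@_cdecl("swift_app_run")
public func swiftAppRun() -> Bool {
    App().run()
}
```

This keeps the C++ side down to a few lines per overridden method, though it is still a bridge file rather than direct conformance.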
On Android there is a way for my app to know when the device has been restarted or powered up after a restart or power-off. I wonder if there is a way to listen for the restart/power-up event on the iPhone and the Apple Watch?
Hi,
Using iOS 17.2, I'm trying to build an iOS app with a Content Filter network extension.
My problem is that when I build it on a device and go to Settings --> VPN & Device Management, the Content Filter with my identifier is showing, BUT it shows as invalid.
The app name is Privacy Monitor and the extension name is Social Filter Control.
Here is my code:
// PrivacyMonitorApp.swift
import SwiftUI

@main
struct PrivacyMonitorApp: App {
    @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

import NetworkExtension

class AppDelegate: UIResponder, UIApplicationDelegate {
    static private(set) var instance: AppDelegate! = nil

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        configureNetworkFilter()
        return true
    }

    func configureNetworkFilter() {
        let manager = NEFilterManager.shared()
        manager.loadFromPreferences { error in
            if let error = error {
                print("Error loading preferences: \(error.localizedDescription)")
                return
            }
            // Assume configuration is absent or needs update
            if manager.providerConfiguration == nil {
                let newConfiguration = NEFilterProviderConfiguration()
                newConfiguration.filterBrowsers = true
                newConfiguration.filterSockets = true
                // newConfiguration.vendorConfiguration = ["someKey": "someValue"]
                manager.providerConfiguration = newConfiguration
            }
            manager.saveToPreferences { error in
                if let error = error {
                    print("Error saving preferences: \(error.localizedDescription)")
                } else {
                    print("Filter is configured, prompt user to enable it in Settings.")
                }
            }
        }
    }
}
Next the FilterManager.swift
import NetworkExtension

class FilterManager {
    static let shared = FilterManager()

    init() {
        NEFilterManager.shared().loadFromPreferences { error in
            if let error = error {
                print("Failed to load filter preferences: \(error.localizedDescription)")
                return
            }
            print("Filter preferences loaded successfully.")
            self.setupAndSaveFilterConfiguration()
        }
    }

    private func setupAndSaveFilterConfiguration() {
        let filterManager = NEFilterManager.shared()
        let configuration = NEFilterProviderConfiguration()
        configuration.username = "MyConfiguration"
        configuration.organization = "SealdApps"
        configuration.filterBrowsers = true
        configuration.filterSockets = true
        filterManager.providerConfiguration = configuration
        filterManager.saveToPreferences { error in
            if let error = error {
                print("Failed to save filter preferences: \(error.localizedDescription)")
            } else {
                print("Filter configuration saved successfully. Please enable the filter in Settings.")
            }
        }
    }
}
Next, the PrivacyMonitor.entitlements:
The Network Extension capabilties are on and this is the SocialFilterControl
`import NetworkExtension
class FilterControlProvider: NEFilterControlProvider {
override func startFilter(completionHandler: @escaping (Error?) -> Void) {
// Initialize the filter, setup any necessary resources
print("Filter started.")
completionHandler(nil)
}
override func stopFilter(with reason: NEProviderStopReason, completionHandler: @escaping () -> Void) {
// Clean up filter resources
print("Filter stopped.")
completionHandler()
}
override func handleNewFlow(_ flow: NEFilterFlow, completionHandler: @escaping (NEFilterControlVerdict) -> Void) {
// Determine if the flow should be dropped or allowed, potentially downloading new rules if required
if let browserFlow = flow as? NEFilterBrowserFlow,
let url = browserFlow.url,
let hostname = browserFlow.url?.host {
print("Handling new browser flow for URL: \(url.absoluteString)")
if shouldBlockDomain(hostname) {
print("Blocking access to \(hostname)")
completionHandler(.drop(withUpdateRules: false)) // No rule update needed immediately
} else {
completionHandler(.allow(withUpdateRules: false))
}
} else {
// Allow other types of flows, or add additional handling for other protocols
completionHandler(.allow(withUpdateRules: false))
}
}
// Example function to determine if a domain should be blocked
private func shouldBlockDomain(_ domain: String) -> Bool {
// Add logic here to check the domain against a list of blocked domains
let blockedDomains = ["google.com", "nu.nl"]
return blockedDomains.contains(where: domain.lowercased().contains)
}
}
And its Info.plist:
and the entitlements file
In Xcode I'm using automatically managed signing, and both targets have the same Team.
Can someone explain what's missing?
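For comparison, a content filter setup normally needs the Network Extension entitlement with the content-filter-provider value in both the app's and the extension's entitlements files, plus a shared App Group if they exchange data. A generic sketch, with placeholder identifiers rather than your actual values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.developer.networking.networkextension</key>
    <array>
        <string>content-filter-provider</string>
    </array>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.example.privacymonitor</string>
    </array>
</dict>
</plist>
```

An "invalid" filter in Settings is often a mismatch somewhere between these entitlements, the extension's NSExtensionPointIdentifier in its Info.plist, and the provisioning profiles, so those are worth diffing first.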
Hello everyone!
I'm currently working on an iOS app, developed in Swift, that involves connecting to a specific Bluetooth device and exchanging data even when the app is terminated or running in the background.
I just want to understand why the CBCentralManager should be implicitly unwrapped. I have checked a couple of Apple developer sample projects, and in each it was declared implicitly unwrapped. Can someone help me understand the reason behind this, and also what issues or scenarios could arise if we declare the central manager as an optional (CBCentralManager?) instead?
Thanks in Advance!
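Not authoritative, but the usual reason is initialization order: the central manager takes self as its delegate, so it can only be created after self exists, yet you don't want to unwrap an optional at every later call site once it's guaranteed to be set. A framework-free stand-in for the pattern (Manager here plays the role of CBCentralManager; all names are illustrative):

```swift
protocol ManagerDelegate: AnyObject {
    func managerDidUpdateState()
}

// Stand-in for CBCentralManager: it wants a delegate at creation time.
final class Manager {
    weak var delegate: ManagerDelegate?
    init(delegate: ManagerDelegate) {
        self.delegate = delegate
    }
    func scan() -> Bool { true } // placeholder for scanForPeripherals etc.
}

final class BluetoothController: ManagerDelegate {
    // Implicitly unwrapped: it cannot be assigned in phase one of init
    // (it needs `self` as its delegate), but it is always set by the end
    // of init, so every later use can skip optional unwrapping.
    private var manager: Manager!

    init() {
        manager = Manager(delegate: self)
    }

    func managerDidUpdateState() {}

    func startScanning() -> Bool {
        manager.scan() // no `?` or `guard let` needed
    }
}
```

Declaring it as Manager? instead is perfectly safe; the cost is just manager?.scan() or a guard at each use, and an accidental use before creation becomes a silent no-op rather than a crash that points at the bug.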
Hi everyone!
We are wondering whether it's possible to have two macOS apps use the Voice Processing from Audio Engine at the same time, since we have had issues trying to do so. Specifically, our app seems to cut off the input stream from the other, only if it has Voice Processing enabled. We are developing a macOS app that records microphone input simultaneously with videoconference apps like Zoom.
We are utilizing the Voice Processing from Audio Engine like in this sample: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing
We have also noticed this behaviour in Safari when recording audio with the JavaScript Web Audio API, which also seems to use Voice Processing under the hood for its echo cancellation.
Any leads on this would be greatly appreciated!
Thanks
Hi, I'm trying to implement a type conforming to WKScriptMessageHandlerWithReply while having Swift's strict concurrency checking enabled. It's not been fun.
The protocol contains the following method (there's also one with a callback, but we're in 2024):
func userContentController(
    _ controller: WKUserContentController,
    didReceive message: WKScriptMessage
) async -> (Any?, String?)
WKScriptMessage's properties like body must be accessed on the main thread. But since WKScriptMessageHandlerWithReply is not @MainActor, neither can this method be so marked (same for the conforming type).
At the same time WKScriptMessage is not Sendable, so I can't wrap the handling in Task { @MainActor in } inside this method, because that leads to
Capture of 'message' with non-sendable type 'WKScriptMessage' in a `@Sendable` closure
That leaves me with @preconcurrency import - is that the way to go? Should I file a feedback for this or is it somehow working as intended?
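One pattern that may help alongside @preconcurrency import: since WebKit does deliver these calls on the main thread, the callback-based variant can stay nonisolated and hop onto the main actor synchronously with MainActor.assumeIsolated, which avoids capturing the non-Sendable message in a @Sendable closure. (For the async requirement this doesn't directly apply, because nonisolated async functions hop off the caller's executor, so @preconcurrency import plus marking the method @MainActor is probably the pragmatic route there.) A framework-free sketch of the shape, with Message standing in for WKScriptMessage:

```swift
// Stand-in for WKScriptMessage: main-thread-bound and not Sendable.
final class Message {
    var body: Any = "hello"
}

final class Handler {
    // Synchronous callback shape: WebKit calls on the main thread, so we
    // can assert that isolation instead of hopping. Traps at runtime if
    // we are *not* on the main actor, which documents the threading
    // contract rather than hiding it.
    nonisolated func handle(_ message: Message) -> (Any?, String?) {
        MainActor.assumeIsolated {
            (message.body, nil)
        }
    }
}
```

Filing a feedback still seems reasonable; the protocol arguably should be @MainActor-annotated.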
I already had my app live before the privacy manifest was introduced. Now I want to migrate from CocoaPods to Swift Package Manager. Will the third-party SDKs then be treated as newly added, or as existing ones? So far I have not received any emails from Apple regarding the privacy manifest, and I don't want any issues with it.
Hi, in my extension's FilterDataProvider class (inherited from NEFilterDataProvider) I am trying to insert logs into a Core Data entity, but the insert gives me this error:
NSCocoaErrorDomain: -513
"reason": Unable to write to file opened Readonly
Any suggestions on how to get read/write permission?
I have already tried this, but no luck:
let description = NSPersistentStoreDescription(url: storeURL)
description.shouldInferMappingModelAutomatically = true
description.shouldMigrateStoreAutomatically = true
description.setOption(false as NSNumber, forKey: NSReadOnlyPersistentStoreOption)
Hello, I am trying to record logs in my network extension class and then read them in my application, i.e. in a view model. However, I am unable to read the data. I have tried different approaches: UserDefaults, Keychain, FileManager, NotificationCenter, and Core Data. I have also used App Groups, but there is still a blocker when reading the data outside the scope of the extension class.
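For what it's worth, with an App Group configured in both targets, a shared UserDefaults suite is usually the lightest-weight channel between an extension and its app, as long as both sides use the suite (not .standard) and the group identifier matches the entitlements exactly. A minimal sketch; the group identifier here is a placeholder:

```swift
import Foundation

// Assumed group ID; it must appear in the App Group entitlement of BOTH
// the app target and the extension target, spelled identically.
let suiteName = "group.com.example.myapp"

// Extension side: append a log line to the shared suite.
func writeLog(_ line: String) {
    guard let shared = UserDefaults(suiteName: suiteName) else { return }
    var logs = shared.stringArray(forKey: "logs") ?? []
    logs.append(line)
    shared.set(logs, forKey: "logs")
}

// App side (e.g. in a view model): read what the extension wrote.
func readLogs() -> [String] {
    UserDefaults(suiteName: suiteName)?.stringArray(forKey: "logs") ?? []
}
```

If this still comes back empty, the usual culprits are a typo in the group ID, the entitlement missing from one of the two targets, or reading UserDefaults.standard instead of the suite.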
How can we cast video, images, or screen mirroring from iOS to a smart TV device (Apple TV, Android TV, etc.)? I am using the Google Cast SDK for iOS, but it only casts video from a server URL.
Is there any SDK or sample code for casting local videos and images?
Hello,
I have a class file which should save data to Core Data, but I'm only able to save data via the UI.
Do you have an example of how I can save data to Core Data from class files?
Greetings, Fabian
To summarize: the rotation/drag gesture works only on the default xyz point.
I tried to add a custom anchor and apply the same code, but the entity cannot be dragged.
My guess is that the red line is causing the issue (see screenshot).
I have a Swift project with some C code in it. The C code creates a byte array with about 600K elements. Under Xcode, compilation takes a really long time, and when I try to run the code, it fails immediately on startup. When I cut this large array out of the build, everything else works fine. Does anyone know what's going on here, and what I might do about it?
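A huge brace-initialized array forces the compiler to build and embed the whole constant, which can explain the slow build, and depending on how the array is declared (e.g. as a non-static local), 600 KB can also overflow a thread's stack at startup. A common workaround, sketched here in Swift under the assumption that the bytes are precomputed rather than generated at runtime: ship them as a resource file and load them once at launch.

```swift
import Foundation

// Load a precomputed byte table from a file instead of compiling a
// 600K-element initialized array into the binary. The URL is a
// placeholder; in an app you'd typically use
// Bundle.main.url(forResource:withExtension:).
func loadTable(from url: URL) throws -> [UInt8] {
    try [UInt8](Data(contentsOf: url))
}
```

On the C side, the equivalent check is to make sure the array is `static const` (so it lives in read-only data, not on the stack); if the build time is still unacceptable, the resource-file route avoids the giant initializer entirely.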
I'm encountering an issue with the barcode reader on my iPad 6th generation running iOS 17.4.1. Specifically, when I attempt to use the barcode reader in landscape mode, I do not receive any output or response. However, when I rotate my iPad to portrait mode, the barcode is successfully scanned.
I've tried restarting my iPad, checking for software updates, and adjusting the settings within the barcode scanning app, but the issue persists. I've also tested with different barcode scanning apps, and the problem remains consistent across apps.
This issue seems to be specific to my iPad model and iOS version, as I haven't encountered it on other devices or with previous iOS versions.
Has anyone else experienced a similar issue with barcode scanning in landscape mode on the iPad 6th generation running iOS 17.4.1? Are there any known solutions or workarounds for this problem?
In this code, I aim to let users select an image from their photo library and display it at reduced opacity on top of the camera feed. The selected image should sit above the live camera view, so users can see both the canvas they are drawing on and the low-opacity image. The app's purpose is to let users trace an image on the canvas while simultaneously seeing the camera feed.
CameraView.swift
import SwiftUI
import AVFoundation

struct CameraView: View {
    let selectedImage: UIImage

    var body: some View {
        ZStack {
            CameraPreview()
            Image(uiImage: selectedImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .opacity(0.5) // Adjust the opacity as needed
                .edgesIgnoringSafeArea(.all)
        }
    }
}

struct CameraPreview: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        CameraPreviewView()
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer?

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // Keep the preview layer sized to the view; at init time `bounds` is still .zero.
    override func layoutSubviews() {
        super.layoutSubviews()
        previewLayer?.frame = bounds
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access camera")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = .resizeAspectFill
                layer.addSublayer(previewLayer)
                self.previewLayer = previewLayer
                // startRunning() blocks, so keep it off the main thread.
                DispatchQueue.global(qos: .userInitiated).async {
                    self.captureSession.startRunning()
                }
            }
        } catch {
            print("Error setting up camera input:", error.localizedDescription)
        }
    }
}
Thanks for helping and your time.
As an exercise in learning Swift, I rewrote a toy C++ command line tool in Swift. After switching to an UnsafeRawBufferPointer in a critical part of the code, the Release build of the Swift version was a little faster than the Release build of the C++ version. But the Debug build took around 700 times as long. I expect a Debug build to be somewhat slower, but by that much?
Here's the critical part of the code, a function that gets called many thousands of times. The two string parameters are always 5-letter words in plain ASCII (it's related to Wordle). By the way, if I change the loop ranges from 0..<5 to [0,1,2,3,4], then it runs about twice as fast in Debug, but twice as slow in Release.
func Score( trial: String, target: String ) -> Int
{
    var score = 0
    withUnsafeBytes(of: trial.utf8) { rawTrial in
        withUnsafeBytes(of: target.utf8) { rawTarget in
            for i in 0..<5
            {
                let trial_i = rawTrial[i]
                if trial_i == rawTarget[i] // strong hit
                {
                    score += kStrongScore
                }
                else // check for weak hit
                {
                    for j in 0..<5
                    {
                        if j != i
                        {
                            let target_j = rawTarget[j]
                            if (trial_i == target_j) &&
                               (rawTrial[j] != target_j)
                            {
                                score += kWeakScore
                                break
                            }
                        }
                    }
                }
            }
        }
    }
    return score
}
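One thing worth flagging: withUnsafeBytes(of: trial.utf8) exposes the raw bytes of the String.UTF8View struct itself, not necessarily the string's characters; it only happens to line up for short ASCII words thanks to Swift's small-string representation. A variant that is well-defined for any string, and that goes through a contiguous buffer (which tends to help Debug builds by avoiding per-index Collection machinery), is String.withUTF8. A sketch, with kStrongScore/kWeakScore given assumed values since the post doesn't show them:

```swift
let kStrongScore = 2   // assumed value; not shown in the post
let kWeakScore = 1     // assumed value; not shown in the post

func score(trial: String, target: String) -> Int {
    var trial = trial, target = target      // withUTF8 is mutating
    return trial.withUTF8 { t in
        target.withUTF8 { g in
            var score = 0
            for i in 0..<5 {
                if t[i] == g[i] {                    // strong hit
                    score += kStrongScore
                } else {
                    for j in 0..<5 where j != i {    // weak hit
                        if t[i] == g[j] && t[j] != g[j] {
                            score += kWeakScore
                            break
                        }
                    }
                }
            }
            return score
        }
    }
}
```

On the Debug-speed question itself: a 100-700x gap is not unusual for tight generic/pointer code at -Onone, since unspecialized generics, bounds checks, and retain/release traffic all stay in place; profiling only Release builds is the usual practice.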
Hello,
I have created a Neural Network → K Nearest Neighbors Classifier with python.
# followed by k-Nearest Neighbors for classification.
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
import copy
# Take the SqueezeNet feature extractor from the Turi Create model.
base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
base_spec = base_model._spec
layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)
# Delete the softmax and innerProduct layers. The new last layer is
# a "flatten" layer that outputs a 1000-element vector.
del layers[-1]
del layers[-1]
preprocessing = base_spec.neuralNetworkClassifier.preprocessing
# The Turi Create model is a classifier, which is treated as a special
# model type in Core ML. But we need a general-purpose neural network.
del base_spec.neuralNetworkClassifier.layers[:]
base_spec.neuralNetwork.layers.extend(layers)
# Also copy over the image preprocessing options.
base_spec.neuralNetwork.preprocessing.extend(preprocessing)
# Remove other classifier stuff.
base_spec.description.ClearField("metadata")
base_spec.description.ClearField("predictedFeatureName")
base_spec.description.ClearField("predictedProbabilitiesName")
# Remove the old classifier outputs.
del base_spec.description.output[:]
# Add a new output for the feature vector.
output = base_spec.description.output.add()
output.name = "features"
output.type.multiArrayType.shape.append(1000)
output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32
# Connect the last layer to this new output.
base_spec.neuralNetwork.layers[-1].output[0] = "features"
# Create the k-NN model.
knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                 output_name="label",
                                                 number_of_dimensions=1000,
                                                 default_class_label="???",
                                                 number_of_neighbors=3,
                                                 weighting_scheme="inverse_distance",
                                                 index_type="linear")
knn_spec = knn_builder.spec
knn_spec.description.input[0].shortDescription = "Input vector"
knn_spec.description.output[0].shortDescription = "Predicted label"
knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"
knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))
# Use the same name as in the neural network models, so that we
# can use the same code for evaluating both types of model.
knn_spec.description.predictedProbabilitiesName = "labelProbability"
knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName
# Put it all together into a pipeline.
pipeline_spec = coremltools.proto.Model_pb2.Model()
pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
pipeline_spec.isUpdatable = True
pipeline_spec.description.input.extend(base_spec.description.input[:])
pipeline_spec.description.output.extend(knn_spec.description.output[:])
pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName
# Add inputs for training.
pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
pipeline_spec.description.trainingInput[1].shortDescription = "True label"
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])
coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")
it is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/
It works, and I was able to include it in my project.
I want to train the model via MLUpdateTask:
var batchInputs: [MLFeatureProvider] = []
let imageConstraint = (model.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint)
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue]

// URLs where images are stored
let trainingData = ImageManager.getImagesAndLabel()
for data in trainingData {
    let label = data.key
    for imgURL in data.value {
        let featureValue = try MLFeatureValue(imageAt: imgURL, constraint: imageConstraint!, options: imageOptions)
        if let pixelBuffer = featureValue.imageBufferValue {
            let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
            batchInputs.append(featureProvider)
        }
    }
}
let trainingBatch = MLArrayBatchProvider(array: batchInputs)
When calling the MLUpdateTask as follows, the context.model in the completionHandler is nil.
Unfortunately there is no other information available from the compiler.
do {
    debugPrint(context)
    try context.model.write(to: ModelManager.targetURL)
} catch {
    debugPrint("Error saving the model \(error)")
}
}) // closes the update task's completion handler (creation code not shown)
updateTask.resume()
I get the following error when I access context.model: Thread 5: EXC_BAD_ACCESS (code=1, address=0x0).
Can someone more experienced tell me how to fix this?
It seems like I am missing some parameters?
I am currently not splitting the data into train and test sets; the only preprocessing I'm doing is scaling the images down to 227x227 pixels.
Thanks!
Hi, wondering if iOS supports WebTransport (HTTP/3) yet?
If so, where can I find information on implementing it in my app?
Hi. I plan to use a WebView in an iOS app (Swift), and it should run a web app with WASM, using IndexedDB for permanent credentials.
I found rumors and information about Apple deleting IndexedDB and localStorage data after 7 days (see links below). But I found no official information that tells me whether this applies to a WebView in an ordinary mobile app (not a PWA).
A test cycle over a week to find out is hard to do...
Is there any reliable and clear information on this, and am I affected?
Thank you!
Links about this topic:
https://news.ycombinator.com/item?id=28158407
https://www.reddit.com/r/javascript/comments/foqxp9/webkit_will_delete_all_local_storage_including/
https://searchengineland.com/what-safaris-7-day-cap-on-script-writeable-storage-means-for-pwa-developers-332519
Hello everyone,
So I will start off by saying I am a very amateur developer, with some experience mostly in C++. Over the summer I want to build an app similar to a board game and launch it on the App Store for me and my friends to play when we don't have the game's physical board. Basically, one person would host a "game" while everyone else joins through a code or something like that (maybe there's an easier way if you know everyone is playing in person together). Once a game begins I just want cards to show up on people's screens, and that's it, no fancy graphics or anything like that.
So, to the root of my issue. I am brand new to Swift and Xcode. I began googling and tinkering with it and made a little app where a user can add names and then pick letters from the names to display (very very basic stuff). I also figured out how to import and manipulate images a little bit. My question is about the process of making a game, connecting it to GameKit/Game Center, and then how to actually launch it on the App Store so my friends can also download it.
If anyone has any resources they particularly found useful when starting out using Swift, please let me know. I really really don't like reading straight from the documentation (although who does honestly). Anything helps!! Thank you!