Dive into the world of programming languages used for app development.

All subtopics

Post · Replies · Boosts · Views · Activity

How to conform Swift class to C++ header
Hi People :) I'm experimenting with Swift/C++ interoperability these days. I'd like to understand how I could conform a Swift class to a C++ header, like this:

```swift
import Application

class App: Application {
    public func run() {
        let app = NSApplication.shared
        let delegate = AppDelegate()
        app.delegate = delegate
        app.run()
    }
}
```

But I got this error:

```
/Users/tonygo/oss/native-research/App.swift:27:7: error: inheritance from non-protocol, non-class type 'Application'
class App: Application {
      ^
ninja: build stopped: subcommand failed.
```

That seems normal indeed. Reproducible example: https://github.com/tony-go/native-research/tree/conform-swift-to-cxx-header (just run make). I also have another branch on that repo where I use an intermediate C++ bridge file that conforms to the Application class and uses the Swift API: https://github.com/tony-go/native-research/tree/main/app But I think that's a lot of boilerplate. So I wonder which approach I could take for this? Cheers :)
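One possible direction, sketched with assumed module and type names (not taken from the linked repo): instead of inheriting from the C++ Application type, which Swift imports as a non-class type, expose a plain Swift class to C++ through the compiler-generated header (built with -cxx-interoperability-mode=default) and let the C++ side drive it. Whether that fits the project better than the bridge-file branch is a judgment call.

```swift
// App.swift, compiled as a module named "NativeResearch" (assumed name).
import AppKit

public final class App {
    public init() {}

    public func run() {
        let app = NSApplication.shared
        app.run()
    }
}

// C++ side, kept as a comment so this block stays in one language:
//
//   #include "NativeResearch-Swift.h"     // header generated by the Swift compiler
//
//   int main() {
//       auto app = NativeResearch::App::init();   // Swift initializers surface as init()
//       app.run();
//       return 0;
//   }
```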
3
0
318
May ’24
Content Filter extension always invalid
Hi, Using iOS 17.2, I'm trying to build an iOS app with a Content Filter network extension. My problem is that when I build it on a device and go to Settings --> VPN & Device Management, the Content Filter with my identifier is showing BUT it shows invalid. App name is Privacy Monitor and extension name is Social Filter Control. Here is my code:

```swift
//
// PrivacyMonitorApp.swift
import SwiftUI

@main
struct PrivacyMonitorApp: App {
    @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

import NetworkExtension

class AppDelegate: UIResponder, UIApplicationDelegate {
    static private(set) var instance: AppDelegate! = nil

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        configureNetworkFilter()
        return true
    }

    func configureNetworkFilter() {
        let manager = NEFilterManager.shared()
        manager.loadFromPreferences { error in
            if let error = error {
                print("Error loading preferences: \(error.localizedDescription)")
                return
            }
            // Assume configuration is absent or needs update
            if manager.providerConfiguration == nil {
                let newConfiguration = NEFilterProviderConfiguration()
                newConfiguration.filterBrowsers = true
                newConfiguration.filterSockets = true
                // newConfiguration.vendorConfiguration = ["someKey": "someValue"]
                manager.providerConfiguration = newConfiguration
            }
            manager.saveToPreferences { error in
                if let error = error {
                    print("Error saving preferences: \(error.localizedDescription)")
                } else {
                    print("Filter is configured, prompt user to enable it in Settings.")
                }
            }
        }
    }
}
```

Next, the FilterManager.swift:

```swift
import NetworkExtension

class FilterManager {
    static let shared = FilterManager()

    init() {
        NEFilterManager.shared().loadFromPreferences { error in
            if let error = error {
                print("Failed to load filter preferences: \(error.localizedDescription)")
                return
            }
            print("Filter preferences loaded successfully.")
            self.setupAndSaveFilterConfiguration()
        }
    }

    private func setupAndSaveFilterConfiguration() {
        let filterManager = NEFilterManager.shared()
        let configuration = NEFilterProviderConfiguration()
        configuration.username = "MyConfiguration"
        configuration.organization = "SealdApps"
        configuration.filterBrowsers = true
        configuration.filterSockets = true
        filterManager.providerConfiguration = configuration
        filterManager.saveToPreferences { error in
            if let error = error {
                print("Failed to save filter preferences: \(error.localizedDescription)")
            } else {
                print("Filter configuration saved successfully. Please enable the filter in Settings.")
            }
        }
    }
}
```

Next, the PrivacyMonitor.entitlements. The Network Extension capabilities are on, and this is the SocialFilterControl:

```swift
import NetworkExtension

class FilterControlProvider: NEFilterControlProvider {

    override func startFilter(completionHandler: @escaping (Error?) -> Void) {
        // Initialize the filter, setup any necessary resources
        print("Filter started.")
        completionHandler(nil)
    }

    override func stopFilter(with reason: NEProviderStopReason, completionHandler: @escaping () -> Void) {
        // Clean up filter resources
        print("Filter stopped.")
        completionHandler()
    }

    override func handleNewFlow(_ flow: NEFilterFlow, completionHandler: @escaping (NEFilterControlVerdict) -> Void) {
        // Determine if the flow should be dropped or allowed, potentially downloading new rules if required
        if let browserFlow = flow as? NEFilterBrowserFlow, let url = browserFlow.url, let hostname = browserFlow.url?.host {
            print("Handling new browser flow for URL: \(url.absoluteString)")
            if shouldBlockDomain(hostname) {
                print("Blocking access to \(hostname)")
                completionHandler(.drop(withUpdateRules: false)) // No rule update needed immediately
            } else {
                completionHandler(.allow(withUpdateRules: false))
            }
        } else {
            // Allow other types of flows, or add additional handling for other protocols
            completionHandler(.allow(withUpdateRules: false))
        }
    }

    // Example function to determine if a domain should be blocked
    private func shouldBlockDomain(_ domain: String) -> Bool {
        // Add logic here to check the domain against a list of blocked domains
        let blockedDomains = ["google.com", "nu.nl"]
        return blockedDomains.contains(where: domain.lowercased().contains)
    }
}
```

And its Info.plist and the entitlements file. In Xcode I'm using automatically manage signing, and both targets have the same Team. Please explain the missing part.
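Not a fix, just a small diagnostic sketch: after saving the configuration, ask the system what it thinks the filter state is. Whether the "invalid" label stems from entitlements, provisioning, or the device can't be told from the code alone; as I read the NetworkExtension documentation, iOS content filters only run on supervised (MDM-managed) devices, which is worth ruling out on a personal test device.

```swift
import NetworkExtension

// Reports the filter state the system currently holds for this app.
func checkFilterStatus() {
    let manager = NEFilterManager.shared()
    manager.loadFromPreferences { error in
        if let error = error {
            print("Load failed: \(error.localizedDescription)")
            return
        }
        print("Filter enabled: \(manager.isEnabled)")
        print("Configuration present: \(manager.providerConfiguration != nil)")
    }
}
```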
0
1
159
May ’24
Why is CBCentralManager! implicitly unwrapped?
Hello everyone! I'm currently working on an iOS app developed in Swift that involves connecting to a specific Bluetooth device and exchanging data even when the app is terminated or running in the background. I just want to understand why the CBCentralManager should be implicitly unwrapped in order to use it. I have checked out a couple of Apple developer sample projects, and it was set to implicitly unwrapped in each. Can someone help me understand the reason behind this, and also what issues/scenarios could be triggered if we make the central manager a plain optional ("CBCentralManager?")? Thanks in advance!
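A minimal sketch of the pattern those samples use, with assumed names: the manager can't be created in a property initializer when it needs `self` as its delegate, so it is declared implicitly unwrapped and assigned once `self` exists. A plain optional would work too, but then every later use needs `centralManager?` or an unwrap even though the value is always set after init.

```swift
import CoreBluetooth

final class BluetoothController: NSObject, CBCentralManagerDelegate {
    // Set exactly once in init, then always non-nil.
    private var centralManager: CBCentralManager!

    override init() {
        super.init()
        // A restore identifier is what enables state restoration when the
        // system relaunches the app for Bluetooth events in the background.
        centralManager = CBCentralManager(
            delegate: self,
            queue: nil,
            options: [CBCentralManagerOptionRestoreIdentifierKey: "com.example.central"]
        )
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        if central.state == .poweredOn {
            central.scanForPeripherals(withServices: nil)
        }
    }
}
```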
1
0
316
May ’24
Voice Processing in multiple apps simultaneously
Hi everyone! We are wondering whether it's possible to have two macOS apps use Voice Processing from Audio Engine at the same time, since we have had issues trying to do so. Specifically, our app seems to cut off the other app's input stream, but only when Voice Processing is enabled. We are developing a macOS app that records microphone input simultaneously with videoconference apps like Zoom. We are utilizing Voice Processing from Audio Engine as in this sample: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing We have also noticed this behaviour in Safari when recording audio with the JavaScript Web Audio API, which also seems to use Voice Processing under the hood, given the echo cancellation. Any leads on this would be greatly appreciated! Thanks
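For context, a minimal sketch of the setup in question, so the interaction being described is concrete; this mirrors the linked sample rather than solving the multi-app case, and the buffer size and tap handling are placeholders.

```swift
import AVFAudio

let engine = AVAudioEngine()

// Enable voice processing (echo cancellation etc.) on the input node
// before the engine starts.
do {
    try engine.inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Could not enable voice processing: \(error)")
}

// Receive microphone buffers.
let format = engine.inputNode.outputFormat(forBus: 0)
engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    // Handle captured audio here.
}

do {
    try engine.start()
} catch {
    print("Could not start the engine: \(error)")
}
```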
0
0
327
May ’24
WKScriptMessageHandlerWithReply and strict concurrency checking
Hi, I'm trying to implement a type conforming to WKScriptMessageHandlerWithReply while having Swift's strict concurrency checking enabled. It's not been fun. The protocol contains the following method (there's also one with a callback, but we're in 2024):

```swift
func userContentController(
    controller: WKUserContentController,
    didReceive message: WKScriptMessage
) async -> (Any?, String?)
```

WKScriptMessage's properties like body must be accessed on the main thread. But since WKScriptMessageHandlerWithReply is not @MainActor, neither can this method be so marked (same for the conforming type). At the same time, WKScriptMessage is not Sendable, so I can't handle it in a Task { @MainActor in ... } in this method, because that leads to:

```
Capture of 'message' with non-sendable type 'WKScriptMessage' in a `@Sendable` closure
```

That leaves me with @preconcurrency import - is that the way to go? Should I file a feedback for this or is it somehow working as intended?
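For what it's worth, a sketch of one workaround under strict checking: copy the message payload into a Sendable value before any suspension point, so the non-Sendable WKScriptMessage never crosses a Task or actor boundary. Whether @preconcurrency import WebKit is still needed depends on the compiler version; this is a sketch under that assumption, not a confirmed recommendation.

```swift
import WebKit

final class EchoHandler: NSObject, WKScriptMessageHandlerWithReply {
    func userContentController(_ userContentController: WKUserContentController,
                               didReceive message: WKScriptMessage) async -> (Any?, String?) {
        // WebKit delivers this on the main thread; read the non-Sendable
        // WKScriptMessage immediately and keep only Sendable data afterwards.
        guard let text = message.body as? String else {
            return (nil, "expected a string body")
        }
        let reply = await process(text)   // only the String crosses the await
        return (reply, nil)
    }

    private func process(_ text: String) async -> String {
        "echo: \(text)"
    }
}
```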
3
0
322
May ’24
Migration from Cocoapods to Swift Package Manager
My app was already live before the privacy manifest was introduced. Now I want to migrate from CocoaPods to Swift Package Manager. Will this be treated as adding the third-party SDKs as new ones, or will they be considered existing ones? So far I have not received any emails from Apple regarding the privacy manifest, and I do not want any issues with it.
0
0
233
May ’24
Read and write permission error in FilterDataProvider Network Extension class
Hi, in my extension's FilterDataProvider class (inherited from NEFilterDataProvider) I am trying to insert logs into my Core Data entity, but the insert gives me the error "NSCocoaErrorDomain: -513", "reason": Unable to write to file opened Readonly. Any suggestions for updating the read/write permission? I have already tried this, but no luck:

```swift
let description = NSPersistentStoreDescription(url: storeURL)
description.shouldInferMappingModelAutomatically = true
description.shouldMigrateStoreAutomatically = true
description.setOption(false as NSNumber, forKey: NSReadOnlyPersistentStoreOption)
```
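One common cause of that error is the store URL resolving to a location the extension can't write to (for example, the app's own container). A sketch of placing the store in an App Group container that both the app and the extension share; the group identifier and model name are placeholders, and both targets would need the App Group capability.

```swift
import CoreData
import Foundation

func makeSharedContainer() -> NSPersistentContainer {
    let container = NSPersistentContainer(name: "Logs")   // assumed model name

    if let groupURL = FileManager.default.containerURL(
        forSecurityApplicationGroupIdentifier: "group.com.example.myapp") {
        // Both the app and the extension can read and write here.
        let storeURL = groupURL.appendingPathComponent("Logs.sqlite")
        let description = NSPersistentStoreDescription(url: storeURL)
        description.shouldMigrateStoreAutomatically = true
        description.shouldInferMappingModelAutomatically = true
        container.persistentStoreDescriptions = [description]
    }

    container.loadPersistentStores { _, error in
        if let error = error {
            print("Store failed to load: \(error)")
        }
    }
    return container
}
```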
2
0
298
May ’24
Unable to access logs and data from Network extension class
Hello, I am trying to record logs in my network extension class, and then I want to read them in my application, i.e. in a view model. However, I am unable to read the data. I have tried different ways, such as UserDefaults, Keychain, FileManager, NotificationCenter, and Core Data. I have also used App Groups, but there is still a blocker for reading data outside the scope of the extension class.
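A minimal sketch of one shared channel, assuming both targets have the same App Group capability; the group identifier and key are placeholders. UserDefaults(suiteName:) returning nil is a quick sign that the group is missing from one target's entitlements, which is worth checking before anything else.

```swift
import Foundation

let appGroupID = "group.com.example.myapp"   // placeholder, must match both targets

// Called from the extension.
func appendLog(_ line: String) {
    guard let defaults = UserDefaults(suiteName: appGroupID) else {
        print("App Group not available to this target")
        return
    }
    var lines = defaults.stringArray(forKey: "extensionLogs") ?? []
    lines.append(line)
    defaults.set(lines, forKey: "extensionLogs")
}

// Called from the app / view model.
func readLogs() -> [String] {
    UserDefaults(suiteName: appGroupID)?.stringArray(forKey: "extensionLogs") ?? []
}
```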
7
0
379
May ’24
Large array failure
I have a Swift project with some C code in it. The C code creates a byte array with about 600K elements. Compiling under Xcode, the compilation takes a really long time. When I try to run the code, it fails immediately upon startup. When I cut this large array out of the build, everything else works fine. Does anyone know what's going on here, and what I might do about it?
1
0
237
Apr ’24
Barcode reader not working in landscape mode on iPad 6th generation running iOS 17.4.1
I'm encountering an issue with the barcode reader on my iPad 6th generation running iOS 17.4.1. Specifically, when I attempt to use the barcode reader in landscape mode, I do not receive any output or response. However, when I rotate my iPad to portrait mode, the barcode is successfully scanned. I've tried restarting my iPad, checking for software updates, and adjusting the settings within the barcode scanning app, but the issue persists. I've also tested with different barcode scanning apps, and the problem remains consistent across apps. This issue seems to be specific to my iPad model and iOS version, as I haven't encountered it on other devices or with previous iOS versions. Has anyone else experienced a similar issue with barcode scanning in landscape mode on the iPad 6th generation running iOS 17.4.1? Are there any known solutions or workarounds for this problem?
0
0
143
Apr ’24
Camera Feed is not working using AVFoundation
In this code, I aim to enable users to select an image from their phone gallery and display it with less opacity on top of the z-index. The selected image should appear on top of the user's phone camera feed, allowing them to see the canvas on which they are drawing as well as the low-opacity image. The app's purpose is to enable users to trace an image on the canvas while simultaneously seeing the camera feed. CameraView.swift:

```swift
import SwiftUI
import AVFoundation

struct CameraView: View {
    let selectedImage: UIImage

    var body: some View {
        ZStack {
            CameraPreview()
            Image(uiImage: selectedImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .opacity(0.5) // Adjust the opacity as needed
                .edgesIgnoringSafeArea(.all)
        }
    }
}

struct CameraPreview: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        let cameraPreview = CameraPreviewView()
        return cameraPreview
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access camera")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.frame = bounds
                layer.addSublayer(previewLayer)
                captureSession.startRunning()
            }
        } catch {
            print("Error setting up camera input:", error.localizedDescription)
        }
    }
}
```

Thanks for helping and your time.
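Two things that commonly keep an AVCaptureSession preview black, shown below as a hedged variation on the CameraPreviewView above rather than a confirmed diagnosis: startRunning() is blocking and is better called off the main thread, and the view has zero size when setupCamera() runs, so the preview layer's frame needs to track bounds in layoutSubviews. The app also needs an NSCameraUsageDescription entry in Info.plist.

```swift
import UIKit
import AVFoundation

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer?

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        // The view had no size at init time; keep the layer in sync here.
        previewLayer?.frame = bounds
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: backCamera),
              captureSession.canAddInput(input) else {
            print("Unable to access camera")
            return
        }
        captureSession.addInput(input)

        let layer = AVCaptureVideoPreviewLayer(session: captureSession)
        layer.videoGravity = .resizeAspectFill
        self.layer.addSublayer(layer)
        previewLayer = layer

        // startRunning() blocks; keep it off the main thread.
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
        }
    }
}
```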
2
0
380
Apr ’24
Swift performance, debug build hundreds of times slower than release
As an exercise in learning Swift, I rewrote a toy C++ command line tool in Swift. After switching to an UnsafeRawBufferPointer in a critical part of the code, the Release build of the Swift version was a little faster than the Release build of the C++ version. But the Debug build took around 700 times as long. I expect a Debug build to be somewhat slower, but by that much? Here's the critical part of the code, a function that gets called many thousands of times. The two string parameters are always 5-letter words in plain ASCII (it's related to Wordle). By the way, if I change the loop ranges from 0..<5 to [0,1,2,3,4], then it runs about twice as fast in Debug, but twice as slow in Release.

```swift
func Score( trial: String, target: String ) -> Int {
    var score = 0
    withUnsafeBytes(of: trial.utf8) { rawTrial in
        withUnsafeBytes(of: target.utf8) { rawTarget in
            for i in 0..<5 {
                let trial_i = rawTrial[i];
                if trial_i == rawTarget[i] // strong hit
                {
                    score += kStrongScore
                }
                else // check for weak hit
                {
                    for j in 0..<5 {
                        if j != i {
                            let target_j = rawTarget[j];
                            if (trial_i == target_j) && (rawTrial[j] != target_j) {
                                score += kWeakScore
                                break
                            }
                        }
                    }
                }
            }
        }
    }
    return score
}
```
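For comparison, a sketch of a variant that precomputes each word's bytes once so the hot loop compares plain [UInt8] values; whether it actually narrows the Debug/Release gap is not verified here, and the score constants are placeholders standing in for the ones defined elsewhere in the original project.

```swift
let kStrongScore = 3   // placeholder values for a self-contained example
let kWeakScore = 1

// Same scoring logic as the original, operating on precomputed byte arrays.
func score(trial: [UInt8], target: [UInt8]) -> Int {
    var score = 0
    for i in 0..<5 {
        let t = trial[i]
        if t == target[i] {                       // strong hit
            score += kStrongScore
        } else {                                  // check for weak hit
            for j in 0..<5 where j != i {
                if t == target[j] && trial[j] != target[j] {
                    score += kWeakScore
                    break
                }
            }
        }
    }
    return score
}

// Usage: convert each word to bytes once, outside the hot loop.
let trialBytes = Array("crane".utf8)
let targetBytes = Array("cigar".utf8)
print(score(trial: trialBytes, target: targetBytes))
```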
2
0
386
Apr ’24
MLUpdateTask returning no model
Hello, I have created a Neural Network → K Nearest Neighbors classifier with Python:

```python
# followed by k-Nearest Neighbors for classification.
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
import copy

# Take the SqueezeNet feature extractor from the Turi Create model.
base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
base_spec = base_model._spec
layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)

# Delete the softmax and innerProduct layers. The new last layer is
# a "flatten" layer that outputs a 1000-element vector.
del layers[-1]
del layers[-1]

preprocessing = base_spec.neuralNetworkClassifier.preprocessing

# The Turi Create model is a classifier, which is treated as a special
# model type in Core ML. But we need a general-purpose neural network.
del base_spec.neuralNetworkClassifier.layers[:]
base_spec.neuralNetwork.layers.extend(layers)

# Also copy over the image preprocessing options.
base_spec.neuralNetwork.preprocessing.extend(preprocessing)

# Remove other classifier stuff.
base_spec.description.ClearField("metadata")
base_spec.description.ClearField("predictedFeatureName")
base_spec.description.ClearField("predictedProbabilitiesName")

# Remove the old classifier outputs.
del base_spec.description.output[:]

# Add a new output for the feature vector.
output = base_spec.description.output.add()
output.name = "features"
output.type.multiArrayType.shape.append(1000)
output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

# Connect the last layer to this new output.
base_spec.neuralNetwork.layers[-1].output[0] = "features"

# Create the k-NN model.
knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                 output_name="label",
                                                 number_of_dimensions=1000,
                                                 default_class_label="???",
                                                 number_of_neighbors=3,
                                                 weighting_scheme="inverse_distance",
                                                 index_type="linear")

knn_spec = knn_builder.spec
knn_spec.description.input[0].shortDescription = "Input vector"
knn_spec.description.output[0].shortDescription = "Predicted label"
knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"

knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))

# Use the same name as in the neural network models, so that we
# can use the same code for evaluating both types of model.
knn_spec.description.predictedProbabilitiesName = "labelProbability"
knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName

# Put it all together into a pipeline.
pipeline_spec = coremltools.proto.Model_pb2.Model()
pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
pipeline_spec.isUpdatable = True
pipeline_spec.description.input.extend(base_spec.description.input[:])
pipeline_spec.description.output.extend(knn_spec.description.output[:])
pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName

# Add inputs for training.
pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
pipeline_spec.description.trainingInput[1].shortDescription = "True label"

pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])

coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")
```

It is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/ It works, and I was able to include it in my project. I want to train the model via MLUpdateTask:

```swift
var batchInputs: [MLFeatureProvider] = []
let imageconstraint = (model.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint)
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue]
var featureProviders = [MLFeatureProvider]()

// URLs where images are stored
let trainingData = ImageManager.getImagesAndLabel()
for data in trainingData {
    let label = data.key
    for imgURL in data.value {
        let featureValue = try MLFeatureValue(imageAt: imgURL, constraint: imageconstraint!, options: imageOptions)
        if let pixelBuffer = featureValue.imageBufferValue {
            let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
            batchInputs.append(featureProvider)
        }
    }
}

let trainingData = MLArrayBatchProvider(array: batchInputs)
```

When calling the MLUpdateTask as follows, the context.model from the completionHandler is null. Unfortunately there is no other information available from the compiler.

```swift
do {
    debugPrint(context)
    try context.model.write(to: ModelManager.targetURL)
} catch {
    debugPrint("Error saving the model \(error)")
}
})
updateTask.resume()
```

I get the following error when I want to access the context.model: Thread 5: EXC_BAD_ACCESS (code=1, address=0x0). Can someone more experienced tell me how to fix this? It seems like I am missing some parameters? I am currently not splitting the data into train and test sets, and the only preprocessing I'm doing is scaling the image down to 227x227 pixels. Thanks!
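Since the snippet doesn't show how the task itself is created, here is a hedged sketch of the usual shape; the URL, batch provider, and save location are placeholders. MLUpdateTask generally wants the URL of the compiled model (.mlmodelc), and when an update fails, context.task.error is typically where the reason (such as training input names not matching the pipeline's) shows up.

```swift
import CoreML

func updateModel(compiledModelURL: URL, trainingBatch: MLBatchProvider, saveTo targetURL: URL) {
    do {
        let updateTask = try MLUpdateTask(forModelAt: compiledModelURL,
                                          trainingData: trainingBatch,
                                          configuration: nil) { context in
            // Check for a failure before touching context.model.
            if let error = context.task.error {
                print("Update failed: \(error)")
                return
            }
            do {
                try context.model.write(to: targetURL)
                print("Updated model written to \(targetURL.path)")
            } catch {
                print("Error saving the model: \(error)")
            }
        }
        updateTask.resume()
    } catch {
        print("Could not create MLUpdateTask: \(error)")
    }
}
```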
0
0
346
Apr ’24
IndexedDB in WebView, get deleted?
Hi. I plan to use a WebView in an iOS app (Swift), and it will run a web app with WASM that uses IndexedDB for permanent credentials. I found rumors and information about Apple deleting data in IndexedDB and localStorage after 7 days (see links below), but no official information telling me whether this applies to a WebView in an ordinary mobile app (not a PWA). A test cycle over a week to find out is hard to do... Is there any reliable and clear information on this, and am I affected? Thank you!

Links about this topic:
https://news.ycombinator.com/item?id=28158407
https://www.reddit.com/r/javascript/comments/foqxp9/webkit_will_delete_all_local_storage_including/
https://searchengineland.com/what-safaris-7-day-cap-on-script-writeable-storage-means-for-pwa-developers-332519
0
1
395
Apr ’24
Complete Beginner - Any help is greatly appreciated
Hello everyone, I will start off by saying I am a very amateur developer with some experience, mostly in C++. Over the summer I want to build an app similar to a board game and launch it on the App Store for me and my friends to play when we don't have the game's physical board. Basically, one person would host a "game" while everyone else joins through a code or something like that (maybe there's an easier way if you know everyone will be playing in person together). Once a game begins, I want cards to show up on people's screens and that's it, no fancy graphics or anything like that. So, to the root of my issue: I am brand new to Swift and Xcode. I began googling and tinkering with it and made a little app where a user can add names and then pick letters from the names to display (very, very basic stuff). I also figured out how to import and manipulate images a little bit. My question is about the process of making a game, connecting it to GameKit/Game Center, and then how to actually launch it on the App Store so my friends can also download it. If anyone has any resources they found particularly useful when starting out with Swift, please let me know. I really, really don't like reading straight from the documentation (although who does, honestly). Anything helps!! Thank you!
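Since the post asks about Game Center specifically, here is a tiny sketch of the usual first step, authenticating the local player; matchmaking, invites, and App Store submission are separate topics covered in the GameKit documentation, and the function and presenting view controller here are assumptions for illustration.

```swift
import GameKit
import UIKit

// Minimal Game Center authentication; call this early, e.g. from the root view controller.
func authenticateLocalPlayer(presentingFrom viewController: UIViewController) {
    GKLocalPlayer.local.authenticateHandler = { loginViewController, error in
        if let loginViewController = loginViewController {
            // Game Center needs the player to sign in; present its UI.
            viewController.present(loginViewController, animated: true)
        } else if let error = error {
            print("Game Center authentication failed: \(error.localizedDescription)")
        } else {
            print("Authenticated as \(GKLocalPlayer.local.displayName)")
        }
    }
}
```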
1
0
300
Apr ’24