Swift is a powerful and intuitive programming language for Apple platforms and beyond.

Swift Documentation

Posts under Swift tag

2,026 Posts
Post not yet marked as solved
0 Replies
43 Views
I have a container view implementation that reads preference values from child views:

```swift
public struct Reader<Content>: View where Content: View {
    public var content: () -> Content

    public init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }

    public var body: some View {
        content()
            .onPreferenceChange(NumericPreferenceKey.self) { value in
                // ...
            }
    }
}
```

This works fine until the content passed in to the container view is a Group. At that point the onPreferenceChange modifier is applied to every child of the group, which leads to bugs in my situation. One thing I can do is simply put the content in a VStack:

```swift
public var body: some View {
    VStack(content: content)
        .onPreferenceChange(NumericPreferenceKey.self) { value in
            // ...
        }
}
```

And that works fine to "ungroup" the content before applying the onPreferenceChange modifier. However, is this best practice? Is there a better way to apply a modifier to the content as a whole instead of to each member of a potential group? Is it concerning that I might have an extra VStack in the view hierarchy with this fix?
Posted by rolson. Last updated.
Post not yet marked as solved
1 Reply
243 Views
```swift
@Query(
    filter: #Predicate<Note> { note in
        note.isDeleted == false && (note.title != "" || note.content != "")
    },
    sort: [
        SortDescriptor(\Note.isPinned, order: .reverse),
        SortDescriptor(\Note.createdAt, order: .reverse)
    ],
    animation: .smooth(duration: 0.3)
)
private var notes: [Note]
```

If I delete the filter part, the NavigationLink works properly.
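Not from the original post, but since removing the filter makes the NavigationLink work, one workaround to sketch (against the same hypothetical Note model) is to keep the @Query unfiltered and apply the predicate in memory:

```swift
// Workaround sketch: query unfiltered, filter in Swift instead of in #Predicate.
@Query(sort: [SortDescriptor(\Note.isPinned, order: .reverse),
              SortDescriptor(\Note.createdAt, order: .reverse)],
       animation: .smooth(duration: 0.3))
private var allNotes: [Note]

private var notes: [Note] {
    allNotes.filter { !$0.isDeleted && (!$0.title.isEmpty || !$0.content.isEmpty) }
}
```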
Posted by AzizK. Last updated.
Post not yet marked as solved
0 Replies
39 Views
Hello, I am trying to record logs in my network extension class and then read them in my application, i.e. in the view model. However, I am unable to read the data. I have tried different approaches: UserDefaults, Keychain, FileManager, NotificationCenter, and Core Data. I have also used App Groups, but there is still a blocker when reading the data outside the scope of the extension class.
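A sketch of the usual App Group route, in case the blocker is where the data lives rather than which API is used: both targets must carry the same App Group capability, and the group identifier and file name below are hypothetical.

```swift
import Foundation

// Assumed: both the app and the extension have the App Group
// "group.com.example.myapp" (hypothetical) enabled in Signing & Capabilities.
let groupID = "group.com.example.myapp"

// In the network extension: append a log line to a file in the shared container.
func writeLog(_ line: String) {
    guard let url = FileManager.default
        .containerURL(forSecurityApplicationGroupIdentifier: groupID)?
        .appendingPathComponent("extension.log") else { return }
    let data = Data((line + "\n").utf8)
    if let handle = try? FileHandle(forWritingTo: url) {
        handle.seekToEndOfFile()
        handle.write(data)
        try? handle.close()
    } else {
        try? data.write(to: url)
    }
}

// In the app (e.g. a view model): read the whole log back.
func readLog() -> String? {
    guard let url = FileManager.default
        .containerURL(forSecurityApplicationGroupIdentifier: groupID)?
        .appendingPathComponent("extension.log"),
          let data = try? Data(contentsOf: url) else { return nil }
    return String(decoding: data, as: UTF8.self)
}
```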
Posted by mabubakar. Last updated.
Post marked as solved
4 Replies
183 Views
On the shop and content views of my app, I have a shopping cart SF Symbol that I've modified with a conditional to show the number of items in the cart when that number is above zero. However, whenever I change tabs and back again, that badge disappears even though there should still be an item in the cart. I have a video of the error, but I have no idea how to post it. Here is some of the code; let me know if you need to see more of it:

CartManager.swift

```swift
import Foundation
import SwiftUI

@Observable
class CartManager {
    /*private(set)*/ var products: [Product] = []
    private(set) var total: Int = 0
    private(set) var numberofproducts: Int = 0

    func count() -> Int {
        numberofproducts = products.count
        return numberofproducts
    }

    func addToCart(product: Product) {
        products.append(product)
        total += product.price
        numberofproducts = products.count
    }

    func removeFromCart(product: Product) {
        products = products.filter { $0.id != product.id }
        total -= product.price
        numberofproducts = products.count
    }
}
```

ShopPage.swift

```swift
import SwiftUI

struct ShopPage: View {
    @Environment(CartManager.self) private var cartManager

    var columns = [GridItem(.adaptive(minimum: 135), spacing: 0)]

    @State private var searchText = ""

    let items = ["LazyHeadphoneBean", "ProperBean", "BabyBean", "RoyalBean",
                 "SpringBean", "beanbunny", "CapBean"]

    var filteredItems: [Bean] {
        // Show everything while the search text is empty.
        guard !searchText.isEmpty else {
            return beans
        }
        return beans.filter { $0.imageName.localizedCaseInsensitiveContains(searchText) }
    }

    var body: some View {
        NavigationStack {
            ZStack(alignment: .top) {
                Color.white
                    .ignoresSafeArea(edges: .all)

                VStack {
                    AppBar()
                        .environment(cartManager)

                    ScrollView {
                        LazyVGrid(columns: columns, spacing: 20) {
                            ForEach(productList, id: \.id) { product in
                                NavigationLink {
                                    beanDetail(product: product)
                                        .environment(cartManager)
                                } label: {
                                    ProductCardView(product: product)
                                        .environment(cartManager)
                                }
                            }
                        }
                    }
                }
            }
            .searchable(text: $searchText, placement: .navigationBarDrawer(displayMode: .always))
        }
        .environment(cartManager)
    }

    var searchResults: [String] {
        if searchText.isEmpty {
            return items
        } else {
            return items.filter { $0.contains(searchText) }
        }
    }
}

#Preview {
    ShopPage()
        .environment(CartManager())
}

struct AppBar: View {
    @Environment(CartManager.self) private var cartManager

    var body: some View {
        NavigationStack {
            VStack(alignment: .leading) {
                HStack {
                    Spacer()
                    NavigationLink(destination: CartView()
                        .environment(cartManager)
                    ) {
                        CartButton(numberOfProducts: cartManager.products.count)
                    }
                }
                Text("Shop for Beans")
                    .font(.largeTitle.bold())
            }
        }
        .padding()
        .environment(CartManager())
    }
}
```

CartButton.swift

```swift
import SwiftUI

struct CartButton: View {
    var numberOfProducts: Int

    var body: some View {
        ZStack(alignment: .topTrailing) {
            Image(systemName: "cart.fill")
                .foregroundStyle(.black)
                .padding(5)

            if numberOfProducts > 0 {
                Text("\(numberOfProducts)")
                    .font(.caption2).bold()
                    .foregroundStyle(.white)
                    .frame(width: 15, height: 15)
                    .background(Color(hue: 1.0, saturation: 0.89, brightness: 0.835))
                    .clipShape(RoundedRectangle(cornerRadius: 50))
            }
        }
    }
}

#Preview {
    CartButton(numberOfProducts: 1)
}
```
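An editing-time guess, sketched rather than confirmed: AppBar applies .environment(CartManager()) with a brand-new manager, so views under it can end up reading a different, empty cart after a tab change. Injecting one shared instance at the root (type names below are hypothetical) would rule that out:

```swift
import SwiftUI

// Sketch: a single CartManager for the whole app, injected once at the root
// so every tab and every subview reads the same cart.
@main
struct BeansApp: App {                      // hypothetical app type
    @State private var cartManager = CartManager()

    var body: some Scene {
        WindowGroup {
            ContentView()                   // hypothetical root containing the TabView
                .environment(cartManager)
        }
    }
}
```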
Posted by KittyCat. Last updated.
Post not yet marked as solved
0 Replies
49 Views
Hello, I have a class file which should save data to Core Data, but so far I am only able to save data via the UI. Do you have an example of how I can save data to Core Data from a class file? Greetings, Fabian
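A minimal sketch of saving from a plain class, assuming a model file named "Model" with an entity "Item" that has a "name" attribute (all of these names are hypothetical):

```swift
import CoreData

// Sketch: a plain class that owns a Core Data stack and saves objects
// without any UI involved.
final class DataStore {
    let container: NSPersistentContainer

    init() {
        container = NSPersistentContainer(name: "Model")  // hypothetical model name
        container.loadPersistentStores { _, error in
            if let error {
                fatalError("Store failed to load: \(error)")
            }
        }
    }

    func saveItem(named name: String) {
        let context = container.viewContext
        // Insert a new "Item" (hypothetical entity) and set its attribute.
        let item = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
        item.setValue(name, forKey: "name")
        do {
            try context.save()
        } catch {
            print("Save failed: \(error)")
        }
    }
}
```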
Post not yet marked as solved
0 Replies
52 Views
I am getting this error, only on iOS 17 (all versions), in old code that had been working since iOS 15. The error occurs whenever the collectionView needs to increase its height beyond its initial height. The collectionView is inside a tableView. All Auto Layout constraints are set, and everything used to work fine in previous versions of iOS.

```
*** Assertion failure in -[MYAPP.MYCollectionView _updateLayoutAttributesForExistingVisibleViewsFadingForBoundsChange:], UICollectionView.m:6218
```
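Not from the thread, just an editing-time sketch of a workaround sometimes suggested for this assertion: invalidate the collection view's layout when changing its bounds, so stale layout attributes are not animated across the bounds change. All names below are illustrative, and this is not a confirmed fix.

```swift
// Illustrative only: heightConstraint and collectionView stand in for the
// post's own views inside the table view cell.
func grow(to newHeight: CGFloat) {
    heightConstraint.constant = newHeight
    collectionView.collectionViewLayout.invalidateLayout()
    collectionView.layoutIfNeeded()
}
```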
Posted by 2018code. Last updated.
Post not yet marked as solved
1 Reply
109 Views
Is the timeout for session-level authentication challenge handling documented somewhere? For example, if I get the urlSession(_:didReceive:) callback for server trust authentication, how long do I have to invoke the completion handler (or return from the callback if using Swift Concurrency)? Or is this completely dependent on the server's settings?
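For reference, a minimal sketch of the session-level callback in question, in its completion-handler form (the Swift Concurrency variant instead returns the disposition and credential); the body here is a placeholder:

```swift
import Foundation

final class SessionDelegate: NSObject, URLSessionDelegate {
    // The question is how long the system waits for completionHandler
    // to be invoked after this callback fires.
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        // Placeholder: evaluate challenge.protectionSpace.serverTrust here.
        completionHandler(.performDefaultHandling, nil)
    }
}
```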
Posted by Aqua_Geek. Last updated.
Post marked as solved
1 Reply
113 Views
I have a Swift project with some C code in it. The C code creates a byte array with about 600K elements. Under Xcode, compiling this takes a really long time. When I try to run the code, it fails immediately upon startup. When I cut this large array out of the build, everything else works fine. Does anyone know what's going on here, and what I might do about it?
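Not from the post: if the array is pure data, a commonly suggested alternative is to ship the bytes as a bundled binary file and load them at runtime, which sidesteps both the huge literal the compiler has to process and any oversized stack allocation at startup. A sketch, with a hypothetical resource name:

```swift
import Foundation

// Sketch: load 600K bytes from a bundled resource instead of compiling
// them in as a literal. "table.bin" is a hypothetical file name.
func loadTable() -> [UInt8] {
    guard let url = Bundle.main.url(forResource: "table", withExtension: "bin"),
          let data = try? Data(contentsOf: url) else {
        return []
    }
    return [UInt8](data)
}
```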
Post not yet marked as solved
1 Reply
129 Views
In this code, I aim to enable users to select an image from their phone gallery and display it with reduced opacity on top of the z-index. The selected image should appear on top of the user's phone camera feed, allowing them to see both the canvas they are drawing on and the low-opacity image. The app's purpose is to let users trace an image on the canvas while simultaneously seeing the camera feed.

CameraView.swift

```swift
import SwiftUI
import AVFoundation

struct CameraView: View {
    let selectedImage: UIImage

    var body: some View {
        ZStack {
            CameraPreview()

            Image(uiImage: selectedImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .opacity(0.5) // Adjust the opacity as needed
                .edgesIgnoringSafeArea(.all)
        }
    }
}

struct CameraPreview: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        let cameraPreview = CameraPreviewView()
        return cameraPreview
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer?

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Keep the preview layer sized to the view; bounds are still zero at init time.
        previewLayer?.frame = bounds
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access camera")
            return
        }

        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)

                let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = .resizeAspectFill
                layer.addSublayer(previewLayer)
                self.previewLayer = previewLayer

                // startRunning() blocks, so call it off the main thread.
                DispatchQueue.global(qos: .userInitiated).async { [captureSession] in
                    captureSession.startRunning()
                }
            }
        } catch {
            print("Error setting up camera input:", error.localizedDescription)
        }
    }
}
```

Thanks for your help and your time.
Posted by jhems. Last updated.
Post not yet marked as solved
0 Replies
49 Views
I'm encountering an issue with the barcode reader on my iPad 6th generation running iOS 17.4.1. Specifically, when I attempt to use the barcode reader in landscape mode, I do not receive any output or response. However, when I rotate my iPad to portrait mode, the barcode is successfully scanned. I've tried restarting my iPad, checking for software updates, and adjusting the settings within the barcode scanning app, but the issue persists. I've also tested with different barcode scanning apps, and the problem remains consistent across apps. This issue seems to be specific to my iPad model and iOS version, as I haven't encountered it on other devices or with previous iOS versions. Has anyone else experienced a similar issue with barcode scanning in landscape mode on the iPad 6th generation running iOS 17.4.1? Are there any known solutions or workarounds for this problem?
Post not yet marked as solved
2 Replies
405 Views
Hi, I am running into an error on Xcode 15 (iOS 17+) when I try to play an iframe in the app. I see this error pop up:

```
Warning: -[BETextInput attributedMarkedText] is unimplemented
Failed to request allowed query parameters from WebPrivacy.
```

How do I fix this issue? I never saw it before, so I am sure it is new. The app used to run fine as well.
Post not yet marked as solved
2 Replies
155 Views
As an exercise in learning Swift, I rewrote a toy C++ command line tool in Swift. After switching to an UnsafeRawBufferPointer in a critical part of the code, the Release build of the Swift version was a little faster than the Release build of the C++ version. But the Debug build took around 700 times as long. I expect a Debug build to be somewhat slower, but by that much?

Here's the critical part of the code, a function that gets called many thousands of times. The two string parameters are always 5-letter words in plain ASCII (it's related to Wordle). By the way, if I change the loop ranges from 0..<5 to [0,1,2,3,4], it runs about twice as fast in Debug, but twice as slow in Release.

```swift
func Score(trial: String, target: String) -> Int {
    var score = 0
    withUnsafeBytes(of: trial.utf8) { rawTrial in
        withUnsafeBytes(of: target.utf8) { rawTarget in
            for i in 0..<5 {
                let trial_i = rawTrial[i]
                if trial_i == rawTarget[i] {
                    // strong hit
                    score += kStrongScore
                } else {
                    // check for weak hit
                    for j in 0..<5 {
                        if j != i {
                            let target_j = rawTarget[j]
                            if (trial_i == target_j) && (rawTrial[j] != target_j) {
                                score += kWeakScore
                                break
                            }
                        }
                    }
                }
            }
        }
    }
    return score
}
```
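An editing aside, hedged: withUnsafeBytes(of:) exposes the raw bytes of the String.UTF8View struct value itself, which only happens to contain the code units inline for small ASCII strings like these 5-letter words. A sketch of an equivalent that does not rely on that layout (the score constants here are illustrative, not from the post):

```swift
let kStrongScore = 3, kWeakScore = 1  // illustrative values

// Sketch: copy the UTF-8 code units into arrays instead of reading the
// raw bytes of the UTF8View struct.
func score(trial: String, target: String) -> Int {
    let t = Array(trial.utf8)
    let g = Array(target.utf8)
    var score = 0
    for i in 0..<5 {
        if t[i] == g[i] {
            score += kStrongScore           // strong hit
        } else {
            for j in 0..<5 where j != i {   // check for weak hit
                if t[i] == g[j] && t[j] != g[j] {
                    score += kWeakScore
                    break
                }
            }
        }
    }
    return score
}
```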
Posted by JWWalker. Last updated.
Post not yet marked as solved
0 Replies
108 Views
Hello, I have created a neural network → k-Nearest Neighbors classifier with Python:

```python
# SqueezeNet feature extractor followed by k-Nearest Neighbors for classification.
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
import copy

# Take the SqueezeNet feature extractor from the Turi Create model.
base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
base_spec = base_model._spec
layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)

# Delete the softmax and innerProduct layers. The new last layer is
# a "flatten" layer that outputs a 1000-element vector.
del layers[-1]
del layers[-1]

preprocessing = base_spec.neuralNetworkClassifier.preprocessing

# The Turi Create model is a classifier, which is treated as a special
# model type in Core ML. But we need a general-purpose neural network.
del base_spec.neuralNetworkClassifier.layers[:]
base_spec.neuralNetwork.layers.extend(layers)

# Also copy over the image preprocessing options.
base_spec.neuralNetwork.preprocessing.extend(preprocessing)

# Remove other classifier stuff.
base_spec.description.ClearField("metadata")
base_spec.description.ClearField("predictedFeatureName")
base_spec.description.ClearField("predictedProbabilitiesName")

# Remove the old classifier outputs.
del base_spec.description.output[:]

# Add a new output for the feature vector.
output = base_spec.description.output.add()
output.name = "features"
output.type.multiArrayType.shape.append(1000)
output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

# Connect the last layer to this new output.
base_spec.neuralNetwork.layers[-1].output[0] = "features"

# Create the k-NN model.
knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                 output_name="label",
                                                 number_of_dimensions=1000,
                                                 default_class_label="???",
                                                 number_of_neighbors=3,
                                                 weighting_scheme="inverse_distance",
                                                 index_type="linear")

knn_spec = knn_builder.spec
knn_spec.description.input[0].shortDescription = "Input vector"
knn_spec.description.output[0].shortDescription = "Predicted label"
knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"

knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))

# Use the same name as in the neural network models, so that we
# can use the same code for evaluating both types of model.
knn_spec.description.predictedProbabilitiesName = "labelProbability"
knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName

# Put it all together into a pipeline.
pipeline_spec = coremltools.proto.Model_pb2.Model()
pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
pipeline_spec.isUpdatable = True

pipeline_spec.description.input.extend(base_spec.description.input[:])
pipeline_spec.description.output.extend(knn_spec.description.output[:])
pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName

# Add inputs for training.
pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
pipeline_spec.description.trainingInput[1].shortDescription = "True label"

pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])

coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")
```

It is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/

It works, and I was able to include it in my project. I want to train the model via MLUpdateTask:

```swift
var batchInputs: [MLFeatureProvider] = []
let imageConstraint = model.model.modelDescription
    .inputDescriptionsByName["image"]?
    .imageConstraint
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue
]

// URLs where the training images are stored
let trainingData = ImageManager.getImagesAndLabel()
for data in trainingData {
    let label = data.key
    for imgURL in data.value {
        let featureValue = try MLFeatureValue(imageAt: imgURL,
                                              constraint: imageConstraint!,
                                              options: imageOptions)
        if let pixelBuffer = featureValue.imageBufferValue {
            let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
            batchInputs.append(featureProvider)
        }
    }
}
let trainingBatch = MLArrayBatchProvider(array: batchInputs)
```

When calling the MLUpdateTask as follows, the context.model in the completionHandler is nil. Unfortunately there is no other information available from the compiler.

```swift
do {
    debugPrint(context)
    try context.model.write(to: ModelManager.targetURL)
} catch {
    debugPrint("Error saving the model \(error)")
}
})
updateTask.resume()
```

I get the following error when I want to access context.model: Thread 5: EXC_BAD_ACCESS (code=1, address=0x0). Can someone more experienced tell me how to fix this? It seems like I am missing some parameters? I am currently not splitting the data into training and test sets; the only preprocessing I am doing is scaling the images down to 227x227 pixels. Thanks!
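For orientation, a sketch of how the full MLUpdateTask call around that completion handler might look; ModelManager.compiledModelURL is a hypothetical URL for the compiled model, and trainingBatch is the batch provider built above. One thing worth checking under this assumption: MLUpdateTask expects the compiled .mlmodelc, not the .mlmodel file.

```swift
import CoreML

// Sketch only: ModelManager.compiledModelURL and targetURL are placeholders.
let updateTask = try MLUpdateTask(
    forModelAt: ModelManager.compiledModelURL,  // URL of the compiled .mlmodelc
    trainingData: trainingBatch,
    configuration: nil,
    completionHandler: { context in
        do {
            debugPrint(context)
            try context.model.write(to: ModelManager.targetURL)
        } catch {
            debugPrint("Error saving the model \(error)")
        }
    }
)
updateTask.resume()
```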
Post not yet marked as solved
0 Replies
132 Views
I've found a strange leak that looks like a bug. When we open two sheets (or fullScreenCovers) and the last one has a TextField, then after closing both, the @StateObject property is not released. If you delete the TextField, there is no memory leak. Memory is released correctly on iOS 16 built with Xcode 15 (simulators); it leaks and is not released on iOS 17 built with Xcode 15 (simulators, device 17.4.1).

```swift
import SwiftUI

struct ContentView: View {
    @State var isFirstOpen: Bool = false

    var body: some View {
        Button("Open first") {
            isFirstOpen = true
        }
        .sheet(isPresented: $isFirstOpen) {
            FirstView()
        }
    }
}

struct FirstView: View {
    @StateObject var viewModel = LeakedViewModel()

    var body: some View {
        ZStack {
            Button("Open second") {
                viewModel.isSecondOpen = true
            }
        }
        .sheet(isPresented: $viewModel.isSecondOpen) {
            SecondView(onClose: { viewModel.isSecondOpen = false })
        }
    }
}

final class LeakedViewModel: ObservableObject {
    @Published var isSecondOpen: Bool = false

    init() { print("LeakedViewModel init") }
    deinit { print("LeakedViewModel deinit") }
}

struct SecondView: View {
    @State private var text: String = ""
    private let onClose: () -> Void

    init(onClose: @escaping () -> Void) {
        self.onClose = onClose
    }

    var body: some View {
        Button("Close second") {
            onClose()
        }
        // Comment out this TextField and the leak disappears; viewModel deinit is called.
        TextField("text: $text", text: $text)
    }
}

@main
struct LeaksApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```

May be related to https://forums.developer.apple.com/forums/thread/738840
Posted by Andreynt. Last updated.
Post not yet marked as solved
1 Reply
103 Views
Hello together, I want to use some classes to manage information fetched from a Firestore database. My idea was to have different classes for the different states of the information. The information that is common to the different states should have the same properties, so it made sense to me to have a superclass that stores the main information and to derive a subclass with extra properties for the additional information. My question is how to define the initializer properly, so that I can store the information fetched from Firestore at once, without any loss.

Superclass (reduced to a minimum, just to show the principal problem):

```swift
class GameInfo: Codable, Identifiable {
    @DocumentID var id: String? // -> UUID of the Firestore document
    var league: String
    var homeTeam: String
    var guestTeam: String

    enum CodingKeys: String, CodingKey {
        case league
        case homeTeam
        case guestTeam
    }

    init(league: String, homeTeam: String, guestTeam: String) {
        self.league = league
        self.homeTeam = homeTeam
        self.guestTeam = guestTeam
    }
}
```

The subclass should contain the GameInfo properties plus some others:

```swift
class Game: GameInfo {
    var startTime: Date?

    enum CodingKeys: String, CodingKey {
        case startTime
    }

    init(league: String, homeTeam: String, guestTeam: String, startTime: Date) {
        self.startTime = startTime
        super.init(league: league, homeTeam: homeTeam, guestTeam: guestTeam)
    }

    required init(from decoder: any Decoder) throws {
        let values = try decoder.container(keyedBy: CodingKeys.self)
        self.startTime = try values.decodeIfPresent(Date.self, forKey: .startTime)
        super.init(league: "", homeTeam: "", guestTeam: "")
    }
}
```

With the required init method, the information gets decoded and stored (the data from Firestore contains the league, homeTeam, guestTeam, and startTime values). The super.init() method as defined results in empty strings. But what I want is for the league, homeTeam, and guestTeam values to also be decoded from the Firestore data, and I don't know how. If I use super.init(league: league, homeTeam: homeTeam, guestTeam: guestTeam) within the required init(), I get the compiler error message:

'self' used in property access 'league' before 'super.init' call

What is wrong in my thinking? Any help appreciated. Thanks and best regards, Peter
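One standard Codable-inheritance pattern that avoids both the empty strings and the compiler error: decode the subclass's own keys first, then hand the same decoder to the superclass so it decodes league, homeTeam, and guestTeam with its own CodingKeys. Sketched against the classes above:

```swift
// Sketch: let the superclass decode its own properties from the same decoder
// instead of passing placeholder values to the memberwise super.init.
required init(from decoder: any Decoder) throws {
    let values = try decoder.container(keyedBy: CodingKeys.self)
    self.startTime = try values.decodeIfPresent(Date.self, forKey: .startTime)
    try super.init(from: decoder)
}
```

The quoted compiler error arises because the bare league inside super.init(league: league, ...) actually refers to self.league, which may not be read before super.init has run; decoding into local constants, or delegating to super.init(from:) as above, sidesteps that.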
Posted by Luggi71. Last updated.
Post not yet marked as solved
0 Replies
96 Views
I'm stuck on the Dynamic Island detail content:

```swift
dynamicIsland: { context in
    DynamicIsland {
        expandedContent(context: context)
    } compactLeading: {
        // ....
    } compactTrailing: {
        // ...
    }
}
```

I want to show different content based on the context:

```swift
private func expandedContent(context: ActivityViewContext<xxxx>) -> DynamicIslandExpandedContent<some View> {
    if context.state.style == 0 {
        return expandedControlContent1(context: context)
    } else if context.state.style == 1 {
        return expandedControlContent2(context: context)
    } else {
        return expandedControlContent3(context: context)
    }
}
```

This fails to compile with the error:

```
Function declares an opaque return type 'some View', but the return statements in its body do not have matching underlying types
```
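A hedged sketch of one way around this: mark the helper with WidgetKit's @DynamicIslandExpandedContentBuilder so the result builder unifies the branches, instead of returning a different concrete type from each return statement. MyAttributes stands in for the post's ActivityAttributes type ("xxxx"), and the region contents are placeholders:

```swift
import ActivityKit
import SwiftUI
import WidgetKit

// Sketch: the result-builder attribute lets each branch produce expanded
// regions directly, so no explicit `return` of mismatched types is needed.
@DynamicIslandExpandedContentBuilder
private func expandedContent(context: ActivityViewContext<MyAttributes>)
    -> DynamicIslandExpandedContent<some View> {
    if context.state.style == 0 {
        DynamicIslandExpandedRegion(.center) { Text("style 0") }   // placeholder
    } else if context.state.style == 1 {
        DynamicIslandExpandedRegion(.center) { Text("style 1") }   // placeholder
    } else {
        DynamicIslandExpandedRegion(.center) { Text("other") }     // placeholder
    }
}
```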
Posted by Highmore. Last updated.