Swift is a powerful and intuitive programming language for Apple platforms and beyond.

Posts under Swift tag

200 Posts


Clarification on App Tracking Transparency (ATT) and Cookie Banner Integration
We are currently using Single Sign-On (SSO) for user authentication within our app, presented through a web view. This web view includes a cookie banner that allows users to accept all, reject all, or manage cookies. In some reviews, Apple suggests implementing App Tracking Transparency (ATT) if cookies are used. In other reviews, Apple refers to guideline 5.1.2, which states: “Revise the app so that users are not required to enable tracking in order to access the app's content and functionality.” I have a few questions regarding the interaction between ATT and the cookie banner:
1. Is App Tracking Transparency required for the cookie banner? If so, iOS developers have no direct control over the cookies used on the web page when the user selects "Ask App Not to Track" or "Allow". Despite this selection, the cookie banner still appears, prompting the user to accept or reject cookies.
2. How should App Tracking Transparency be implemented when a cookie banner is presented on a web page within an iOS app? Since iOS developers do not control the cookies stored in the web view, is there a way to manage this interaction so that users aren't repeatedly prompted by the cookie banner after selecting their tracking preference in ATT?
I would appreciate any guidance on how to properly implement ATT in this scenario, particularly when a web page within the app displays a cookie consent banner.
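A minimal sketch of one ordering that keeps the two prompts from fighting each other, assuming the page can read a signal from the app; the URL and the X-ATT-Status header are illustrative assumptions, not a documented Apple pattern:

import AppTrackingTransparency
import WebKit

// Hypothetical flow: resolve the ATT prompt first, then load the SSO page.
func presentSSO(in webView: WKWebView) {
    ATTrackingManager.requestTrackingAuthorization { status in
        DispatchQueue.main.async {
            var request = URLRequest(url: URL(string: "https://sso.example.com/login")!)
            // Let the page's cookie banner preselect "reject" when tracking was denied.
            request.setValue(status == .authorized ? "granted" : "denied",
                             forHTTPHeaderField: "X-ATT-Status")
            webView.load(request)
        }
    }
}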
2 replies · 2 boosts · 681 views · last activity Jan ’25
Scrolling to a `Section` cuts off header
Hello, I was wondering whether this is expected behavior, or whether there is a way to get the behavior I expect. I have a Form with many Sections, and when the content of a Section is selected I would like that Section to be scrolled to the top, to make it easier for the user to know which section they selected. But when the Section is selected, it is anchored to the top of the Form with the header of the Section cut off. When the Section is anchored to the top I would like the whole Section to be visible (header, content, and footer). I also tried applying an ID to the Section and scrolling to that, which also didn't work. Any help would be appreciated. Here is some code to reproduce this:

struct ContentView: View {
    @State private var selectionSectionContent: SectionContent?

    var body: some View {
        ScrollViewReader { proxy in
            Form {
                ForEach(contents, id: \.self) { content in
                    Section {
                        Text(content.text)
                            .onTapGesture {
                                selectionSectionContent = content
                            }
                    } header: {
                        Text("Header")
                    } footer: {
                        Text("Footer")
                    }
                }
            }
            .onChange(of: selectionSectionContent) { _, newValue in
                if let newValue {
                    // When text is tapped, scroll that section to the top.
                    withAnimation {
                        proxy.scrollTo(newValue, anchor: .top)
                    }
                }
            }
            .padding()
        }
    }

    let contents: [SectionContent] = [
        SectionContent(), SectionContent(), SectionContent(), SectionContent(), SectionContent(),
        SectionContent(), SectionContent(), SectionContent(), SectionContent(), SectionContent(),
        SectionContent(), SectionContent(), SectionContent(), SectionContent(), SectionContent()
    ]
}

class SectionContent: Hashable {
    let text = "Fun Section"

    public var id: ObjectIdentifier { ObjectIdentifier(self) }

    static func == (lhs: SectionContent, rhs: SectionContent) -> Bool {
        lhs.id == rhs.id
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(id)
    }
}

Here is a GIF of the header getting cut off when it is pinned to the top.
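One workaround worth trying, as an untested sketch rather than a confirmed fix: give each header its own scroll ID and scroll to the header instead of the row content, so the anchor lands on the header itself:

import SwiftUI

// Untested workaround sketch: scroll to an id attached to the header view.
struct SectionScrollWorkaround: View {
    @State private var selected: Int?

    var body: some View {
        ScrollViewReader { proxy in
            Form {
                ForEach(0..<15, id: \.self) { index in
                    Section {
                        Text("Fun Section")
                            .onTapGesture { selected = index }
                    } header: {
                        Text("Header").id("header-\(index)")
                    } footer: {
                        Text("Footer")
                    }
                }
            }
            .onChange(of: selected) { _, newValue in
                if let newValue {
                    withAnimation { proxy.scrollTo("header-\(newValue)", anchor: .top) }
                }
            }
        }
    }
}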
1 reply · 2 boosts · 358 views · last activity Jan ’25
Best `AVMediaType` for depth data.
Dear Apple Developer Forum, I have a question regarding AVCaptureDevice on iOS. We're trying to capture photos in the best quality possible, along with depth data of the highest accuracy possible. We were delighted to see that AVCaptureDevice can be initialized with AVMediaType.depthData, which works as expected (depthData is part of the AVCapturePhoto). When setting AVMediaType.video, we still receive depth data (of the same quality, according to our own internal tests). That confused us. Mind you, we set the device format and depth format as well:

private func getDeviceFormat() throws -> AVCaptureDevice.Format {
    // Ensures a high photo quality format and an appropriate color profile.
    let format = camera?.formats.first(where: {
        $0.isHighPhotoQualitySupported &&
        $0.supportedDepthDataFormats.count > 0 &&
        $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    })

    // Check and see if it's available.
    guard format != nil else {
        throw CaptureDeviceError.necessaryFormatNotAvailable
    }

    return format!
}

private func getDepthDataFormat(for format: AVCaptureDevice.Format) throws -> AVCaptureDevice.Format {
    // Access the depth format.
    let depthDataFormat = format.supportedDepthDataFormats.first(where: {
        $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat32
    })

    // Check if it exists.
    guard depthDataFormat != nil else {
        throw CaptureDeviceError.necessaryFormatNotAvailable
    }

    // Return it.
    return depthDataFormat!
}

We're wondering what steps we can take to ensure the best quality photo along with the most accurate depth data. Which properties are the most important? Which have an effect, and which don't? Are there any ways we can optimize our current configuration? We find it difficult, as there are very limited guides and explanations on the media subtypes, for example kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. Is it the best? Is it the best for our use case of a high quality photo plus the most accurate depth data? Important comment: our app only runs on iPhone 14 Pro, iPhone 15 Pro, and iPhone 16 Pro on the latest iOS versions. We hope someone with greater knowledge at Apple can help and guide us on how we can get photos of the best quality and depth data with the most accuracy. Thank you very much! Kind regards.
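Not an authoritative recipe, but a sketch of the photo-settings knobs that usually matter for this trade-off; it assumes an AVCapturePhotoOutput already attached to the session, with maxPhotoQualityPrioritization raised to .quality during configuration:

import AVFoundation

// Hedged sketch: the main per-capture settings for quality + depth delivery.
func makePhotoSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    if photoOutput.isDepthDataDeliverySupported {
        settings.isDepthDataDeliveryEnabled = true  // attaches AVDepthData to the AVCapturePhoto
        settings.isDepthDataFiltered = false        // raw, unsmoothed depth, if accuracy matters most
    }
    settings.photoQualityPrioritization = .quality  // bounded by photoOutput.maxPhotoQualityPrioritization
    return settings
}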
0 replies · 0 boosts · 364 views · last activity Jan ’25
Content Filter Permission Prompt Not Appearing in TestFlight
I added a Content Filter to my app, and when running it in Xcode (Debug/Release), I get the expected permission prompt: "Would like to filter network content (Allow / Don't Allow)". However, when I install the app via TestFlight, this prompt doesn’t appear at all, and the feature doesn’t work. Is there a special configuration required for TestFlight? Has anyone encountered this issue before? Thanks!
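For reference, a minimal sketch of the enable path whose save triggers the filter prompt; the display name is a placeholder. If this path prompts in Debug but not in the TestFlight build, the difference is usually in the distributed build's entitlements and provisioning profile rather than the code:

import NetworkExtension

// Hedged sketch: saving an enabled filter configuration is what surfaces
// the "…Would Like to Filter Network Content" prompt.
func enableContentFilter() {
    let manager = NEFilterManager.shared()
    manager.loadFromPreferences { loadError in
        guard loadError == nil else { return }
        let configuration = NEFilterProviderConfiguration()
        configuration.filterSockets = true          // macOS socket-level filtering
        manager.providerConfiguration = configuration
        manager.localizedDescription = "MyFilter"   // hypothetical display name
        manager.isEnabled = true
        manager.saveToPreferences { saveError in
            if let saveError { print("Save failed: \(saveError)") }
        }
    }
}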
1 reply · 0 boosts · 275 views · last activity Jan ’25
Using protocols with XPC C API instead of dictionaries for sending and receiving messages
I have followed this post for creating a Launch Agent that provides an XPC service on macOS using Swift: https://rderik.com/blog/creating-a-launch-agent-that-provides-an-xpc-service-on-macos/. In the Swift code the interface of the XPC service is defined by protocols, which makes the code nice and neat. I want to implement the XPC service using the C APIs for XPC, but the C APIs send and receive messages using dictionaries, which need manual handling with conditional statements. I want to know if it's possible to take the protocol-based approach with the C APIs.
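As far as I know, the C API has no equivalent of NSXPCConnection's protocol-based proxies, so the closest substitute is hand-rolled dispatch on a method-name key. A sketch under that assumption (the service name and dictionary keys are illustrative):

import XPC

// Manual "protocol": each case in the switch plays the role of a protocol method.
let listener = xpc_connection_create_mach_service(
    "com.example.cool-agent", nil, UInt64(XPC_CONNECTION_MACH_SERVICE_LISTENER))

xpc_connection_set_event_handler(listener) { peer in
    guard xpc_get_type(peer) == XPC_TYPE_CONNECTION else { return }
    xpc_connection_set_event_handler(peer as! xpc_connection_t) { message in
        guard xpc_get_type(message) == XPC_TYPE_DICTIONARY,
              let rawMethod = xpc_dictionary_get_string(message, "method") else { return }
        switch String(cString: rawMethod) {
        case "ping":
            if let reply = xpc_dictionary_create_reply(message) {
                xpc_dictionary_set_string(reply, "result", "pong")
                xpc_connection_send_message(peer as! xpc_connection_t, reply)
            }
        default:
            break
        }
    }
    xpc_connection_resume(peer as! xpc_connection_t)
}
xpc_connection_resume(listener)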
2 replies · 0 boosts · 454 views · last activity Jan ’25
Releasing a TextureResource from memory
Hi there! I'm trying to make a 360° image carousel in RealityView/SwiftUI with very large textures. I've managed to load one 12K 360° image and show it on an inverted sphere with a ShaderGraphMaterial made in Reality Composer Pro. When I try to load the next image I get an out-of-memory error. The carousel works fine with smaller textures. My question is: how do I release the memory of the current texture before loading the next? In theory the garbage collector should erase it eventually? Hope someone can help =) Thanks in advance! Best regards, Kim
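A minimal sketch of the usual approach, assuming a ShaderGraphMaterial whose texture parameter is named "image" (both the parameter name and the entity are placeholders). One correction to the premise: RealityKit resources are freed by ARC reference counting, not a garbage collector, so the old texture goes away only once nothing strongly references it:

import RealityKit

// The old 12K texture is released only when the material that held it is
// dropped and no other strong references (e.g. a cache) remain.
func swapTexture(named name: String, on sphere: ModelEntity) throws {
    guard var material = sphere.model?.materials.first as? ShaderGraphMaterial else { return }
    let next = try TextureResource.load(named: name)
    try material.setParameter(name: "image", value: .textureResource(next))
    // Reassigning the materials array replaces the material holding the old texture.
    sphere.model?.materials = [material]
}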
6 replies · 0 boosts · 673 views · last activity Jan ’25
Learn Metal
I am interested in learning the Metal framework for rendering development. However, most of Apple’s official documentation uses Objective-C code. Therefore, I am seeking guidance on whether it is more advantageous for me to focus solely on learning Swift to gain proficiency in Metal.
2 replies · 0 boosts · 797 views · last activity Jan ’25
Detection of Sync Drives such as OneDrive, Dropbox, etc.
I'm working on a cross-platform application that needs to access file attributes, specifically for files and directories in sync drives like OneDrive. On Windows, I use the GetFileInformationByHandle API to retrieve attributes such as FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS and FILE_ATTRIBUTE_RECALL_ON_OPEN to identify files that are stored remotely or in the cloud. Is there an equivalent API or mechanism on macOS to achieve the same? Specifically, I'm looking for a way to:
- Identify attributes similar to cloud/offline storage status for files in synced drives (e.g., OneDrive, iCloud Drive).
- Retrieve metadata to distinguish files/folders stored locally versus those stored remotely and downloaded on access.
If there's a preferred macOS framework (like Core Services or FileManager in Swift) for such operations, examples would be greatly appreciated!
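There is no direct GetFileInformationByHandle analogue, but two signals approximate it on macOS; a sketch, with the SF_DATALESS check being an assumption about how a given provider materializes its placeholders:

import Foundation

// 1) iCloud items expose explicit download status via URL resource keys.
// 2) File Provider placeholders (OneDrive, Dropbox) are typically dataless
//    files, flagged with SF_DATALESS in the stat(2) flags.
func isLikelyCloudPlaceholder(_ url: URL) -> Bool {
    if let values = try? url.resourceValues(forKeys: [.ubiquitousItemDownloadingStatusKey]),
       let status = values.ubiquitousItemDownloadingStatus {
        return status != .current
    }
    var st = stat()
    if lstat(url.path, &st) == 0 {
        return (st.st_flags & UInt32(SF_DATALESS)) != 0
    }
    return false
}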
1 reply · 0 boosts · 311 views · last activity Jan ’25
Equivalent macOS API for GetFileInformationByHandle to Retrieve File Attributes (e.g., Sync Drive Attributes)
I'm working on a cross-platform application that needs to access file attributes, specifically for files and directories in sync drives like OneDrive. On Windows, I use the GetFileInformationByHandle API to retrieve attributes such as FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS and FILE_ATTRIBUTE_RECALL_ON_OPEN to identify files that are stored remotely or in the cloud. Is there an equivalent API or mechanism on macOS to achieve the same? Specifically, I'm looking for a way to:
- Identify attributes similar to cloud/offline storage status for files in synced drives (e.g., OneDrive, Dropbox, etc.).
- Retrieve metadata to distinguish files/folders stored locally versus those stored remotely and downloaded on access.
If there's a preferred macOS framework (like Core Services or FileManager in Swift) for such operations, examples would be greatly appreciated!
1 reply · 0 boosts · 404 views · last activity Jan ’25
Monitoring when Picture in Picture is hidden at the device edge
I am developing a custom Picture in Picture (PiP) app that plays videos. The video continues to play even when the app goes to the background, but I would like to get a Bool value when the PiP window is hidden at the edge of the screen, as shown in the attached image. We need this because we don't want the user to consume extra network bandwidth, so we want to pause the video while the PiP window is hidden at the edge. Please let me know if this is possible, either in the foreground or in the background.
1 reply · 0 boosts · 318 views · last activity Jan ’25
Persistent "Framework 'Flutter' Not Found" Error When Building iOS Simulator
I'm currently facing a recurring issue while attempting to build my Flutter app for the iOS simulator. The build process fails with the following error:

Error (Xcode): Framework 'Flutter' not found
Error (Xcode): Linker command failed with exit code 1

Steps I've taken:

Recreated the ios/ folder and cleared derived data:
- Used flutter clean to clean the project.
- Reinstalled CocoaPods with pod deintegrate followed by pod install.

Verified configuration:
- Checked AppDelegate and framework paths within Xcode.
- Set the deployment target to 14.0 in the Podfile.

Additional actions:
- Performed flutter clean again, followed by removal of Pods, .symlinks, and Flutter.framework under ios/.
- Updated CocoaPods and ensured all dependencies in pubspec.yaml are current.
- Added FirebaseCore initialization in AppDelegate.swift to resolve previous Firebase integration issues.

Despite these efforts, the "Framework 'Flutter' not found" error persists. Here are the relevant parts of my AppDelegate.swift and Podfile:

swift:

import Flutter
import UIKit

@main
@objc class AppDelegate: FlutterAppDelegate {
  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
  ) -> Bool {
    GeneratedPluginRegistrant.register(with: self)
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }
}

ruby:

platform :ios, '14.0'

# CocoaPods analytics sends network stats synchronously, affecting flutter build latency.
ENV['COCOAPODS_DISABLE_STATS'] = 'true'

project 'Runner', {
  'Debug' => :debug,
  'Profile' => :release,
  'Release' => :release,
}

def flutter_root
  generated_xcode_build_settings_path = File.expand_path(File.join('..', 'Flutter', 'Generated.xcconfig'), __FILE__)
  unless File.exist?(generated_xcode_build_settings_path)
    raise "#{generated_xcode_build_settings_path} must exist. If you're running pod install manually, make sure flutter pub get is executed first"
  end

  File.foreach(generated_xcode_build_settings_path) do |line|
    matches = line.match(/FLUTTER_ROOT=(.*)/)
    return matches[1].strip if matches
  end
  raise "FLUTTER_ROOT not found in #{generated_xcode_build_settings_path}. Try deleting Generated.xcconfig, then run flutter pub get"
end

require File.expand_path(File.join('packages', 'flutter_tools', 'bin', 'podhelper'), flutter_root)

flutter_ios_podfile_setup

target 'Runner' do
  use_frameworks!
  use_modular_headers!

  flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__))

  target 'RunnerTests' do
    inherit! :search_paths
  end
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
      xcconfig_path = config.base_configuration_reference.real_path
      xcconfig = File.read(xcconfig_path)
      xcconfig_mod = xcconfig.gsub(/DT_TOOLCHAIN_DIR/, "TOOLCHAIN_DIR")
    end
  end
end

Error log from flutter run:

[ +278 ms] Failed to build iOS app
[  +42 ms] Error (Xcode): Framework 'Flutter' not found
[   +8 ms] Error (Xcode): Linker command failed with exit code 1 (use -v to see invocation)
[   +7 ms] Could not build the application for the simulator.
[   +1 ms] Error launching application on iPhone 16 Pro Max.
[   +6 ms] "flutter run" took 88,663ms.
[ +164 ms] #0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)
#1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:860:9)
#2 FlutterCommand.run. (package:flutter_tools/src/runner/flutter_command.dart:1450:27)
#3 AppContext.run. (package:flutter_tools/src/base/context.dart:153:19)
#4 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
#5 FlutterCommandRunner.runCommand. (package:flutter_tools/src/runner/flutter_command_runner.dart:421:9)
#6 AppContext.run. (package:flutter_tools/src/base/context.dart:153:19)
#7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
#8 run.. (package:flutter_tools/runner.dart:131:9)
#9 AppContext.run. (package:flutter_tools/src/base/context.dart:153:19)
#10 main (package:flutter_tools/executable.dart:94:3)

Environment:
Flutter: Version 3.27.3, Channel stable
Xcode: Version 16.2, Build 16C5032a
CocoaPods: Version 1.16.2
macOS: Version 15.2 (24C101)

Additional context: initially, the issue was resolved by the sequence of cleanups and reinstalls listed above, but it re-emerged after integrating Firebase authentication. After adding FirebaseCore to AppDelegate.swift, the Firebase issue was resolved, but the framework error returned. I'm seeking guidance to resolve this issue permanently. Any insights or suggestions would be greatly appreciated!
1 reply · 0 boosts · 1k views · last activity Jan ’25
Unexpected Insertion of U+2004 (Space) When Using UITextView with Pinyin Input on iOS 18
I encountered an issue with UITextView on iOS 18 where, when typing Pinyin, extra Unicode characters such as U+2004 are inserted unexpectedly. This occurs when using a Chinese input method.

Steps to reproduce:
1. Set up a UITextView with a standard delegate implementation.
2. Use a Pinyin input method to type the character “ㄨ”.
3. Observe that after the character “ㄨ” is typed, extra spaces (U+2004) are inserted automatically between the characters.

Code example:

class ViewController: UIViewController {
    @IBOutlet weak var textView: UITextView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }
}

extension ViewController: UITextViewDelegate {
    func textView(_ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool {
        print("shouldChangeTextIn: range \(range)")
        print("shouldChangeTextIn: replacementText \(text)")
        return true
    }

    func textViewDidChange(_ textView: UITextView) {
        let currentText = textView.text ?? ""
        let unicodeValues = currentText.unicodeScalars.map { String(format: "U+%04X", $0.value) }.joined(separator: " ")
        print("textViewDidChange: textView.text: \(currentText)")
        print("textViewDidChange: Unicode Scalars: \(unicodeValues)")
    }
}

Output:

shouldChangeTextIn: range {0, 0}
shouldChangeTextIn: replacementText ㄨ
textViewDidChange: textView.text: ㄨ
textViewDidChange: Unicode Scalars: U+3128
------------------------
shouldChangeTextIn: range {1, 0}
shouldChangeTextIn: replacementText ㄨ
textViewDidChange: textView.text: ㄨ ㄨ
textViewDidChange: Unicode Scalars: U+3128 U+2004 U+3128
------------------------
shouldChangeTextIn: range {3, 0}
shouldChangeTextIn: replacementText ㄨ
textViewDidChange: textView.text: ㄨ ㄨ ㄨ
textViewDidChange: Unicode Scalars: U+3128 U+2004 U+3128 U+2004 U+3128

This issue may affect text processing, especially in cases where precise text manipulation is required, such as calculating ranges in shouldChangeTextIn.
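If downstream logic cannot tolerate the inserted characters, one defensive option (a sketch, not a fix for the underlying iOS 18 behavior, and potentially unsafe while marked text is still active) is to normalize U+2004, THREE-PER-EM SPACE, when reading the text:

import Foundation

// Replaces the U+2004 spaces the keyboard inserts with ordinary spaces.
extension String {
    var withoutThreePerEmSpaces: String {
        replacingOccurrences(of: "\u{2004}", with: " ")
    }
}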
5 replies · 0 boosts · 1k views · last activity Jan ’25
Converted Model Preview Issues in Xcode
Hello! I have a TrackNet model that I have converted to Core ML (.mlpackage) using coremltools, and the conversion process appears to go smoothly: I get the .mlpackage file I am looking for, with the weights and the model.mlmodel file in the folder. However, when I drag it into Xcode, it just shows up as 4 script tags (as pictured) instead of the model "interface" that is typically expected. I was initially concerned that my model was not compatible with Core ML, but upon logging the conversion, everything seems to be converted properly. Here is some code that may be relevant in debugging this issue.

How I use the model:

model = BallTrackerNet()  # this is the model architecture, which will be referenced later
device = self.device  # cpu
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location=device))  # balltrackerbest is the weights
model = model.to(device)
model.eval()

Here is the BallTrackerNet() model itself:

import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pad=1, stride=1, bias=True):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=bias),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return self.block(x)

class BallTrackerNet(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.out_channels = out_channels
        self.conv1 = ConvBlock(in_channels=9, out_channels=64)
        self.conv2 = ConvBlock(in_channels=64, out_channels=64)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = ConvBlock(in_channels=64, out_channels=128)
        self.conv4 = ConvBlock(in_channels=128, out_channels=128)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5 = ConvBlock(in_channels=128, out_channels=256)
        self.conv6 = ConvBlock(in_channels=256, out_channels=256)
        self.conv7 = ConvBlock(in_channels=256, out_channels=256)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv8 = ConvBlock(in_channels=256, out_channels=512)
        self.conv9 = ConvBlock(in_channels=512, out_channels=512)
        self.conv10 = ConvBlock(in_channels=512, out_channels=512)
        self.ups1 = nn.Upsample(scale_factor=2)
        self.conv11 = ConvBlock(in_channels=512, out_channels=256)
        self.conv12 = ConvBlock(in_channels=256, out_channels=256)
        self.conv13 = ConvBlock(in_channels=256, out_channels=256)
        self.ups2 = nn.Upsample(scale_factor=2)
        self.conv14 = ConvBlock(in_channels=256, out_channels=128)
        self.conv15 = ConvBlock(in_channels=128, out_channels=128)
        self.ups3 = nn.Upsample(scale_factor=2)
        self.conv16 = ConvBlock(in_channels=128, out_channels=64)
        self.conv17 = ConvBlock(in_channels=64, out_channels=64)
        self.conv18 = ConvBlock(in_channels=64, out_channels=self.out_channels)
        self.softmax = nn.Softmax(dim=1)
        self._init_weights()

    def forward(self, x, testing=False):
        batch_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ups1(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.ups2(x)
        x = self.conv14(x)
        x = self.conv15(x)
        x = self.ups3(x)
        x = self.conv16(x)
        x = self.conv17(x)
        x = self.conv18(x)
        # x = self.softmax(x)
        out = x.reshape(batch_size, self.out_channels, -1)
        if testing:
            out = self.softmax(out)
        return out

    def _init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.uniform_(module.weight, -0.05, 0.05)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)
            elif isinstance(module, nn.BatchNorm2d):
                nn.init.constant_(module.weight, 1)
                nn.init.constant_(module.bias, 0)

Here is also the metadata of my model:

[
  {
    "metadataOutputVersion" : "3.0",
    "storagePrecision" : "Float16",
    "outputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float32",
        "formattedType" : "MultiArray (Float32 1 × 256 × 230400)",
        "shortDescription" : "",
        "shape" : "[1, 256, 230400]",
        "name" : "var_462",
        "type" : "MultiArray"
      }
    ],
    "modelParameters" : [ ],
    "specificationVersion" : 6,
    "mlProgramOperationTypeHistogram" : {
      "Cast" : 2,
      "Conv" : 18,
      "Relu" : 18,
      "BatchNorm" : 18,
      "Reshape" : 1,
      "UpsampleNearestNeighbor" : 3,
      "MaxPool" : 3
    },
    "computePrecision" : "Mixed (Float16, Float32, Int32)",
    "isUpdatable" : "0",
    "availability" : {
      "macOS" : "12.0",
      "tvOS" : "15.0",
      "visionOS" : "1.0",
      "watchOS" : "8.0",
      "iOS" : "15.0",
      "macCatalyst" : "15.0"
    },
    "modelType" : {
      "name" : "MLModelType_mlProgram"
    },
    "userDefinedMetadata" : {
      "com.github.apple.coremltools.source_dialect" : "TorchScript",
      "com.github.apple.coremltools.source" : "torch==2.5.1",
      "com.github.apple.coremltools.version" : "8.1"
    },
    "inputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float32",
        "formattedType" : "MultiArray (Float32 1 × 9 × 360 × 640)",
        "shortDescription" : "",
        "shape" : "[1, 9, 360, 640]",
        "name" : "input_frames",
        "type" : "MultiArray"
      }
    ],
    "generatedClassName" : "BallTracker",
    "method" : "predict"
  }
]

I have been struggling with this conversion for almost two weeks now, so any help, ideas, or pointers would be greatly appreciated! Let me know if any other information would be helpful to see as well. Thanks! Michael
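One way to separate "the model converted badly" from "Xcode is just not previewing the package": compile and load the .mlpackage directly with Core ML, bypassing the model editor entirely. A hedged debugging sketch; the path is a placeholder:

import CoreML

// Compiling and loading outside Xcode confirms the converted model is sound.
let packageURL = URL(fileURLWithPath: "/path/to/BallTracker.mlpackage")
let compiledURL = try MLModel.compileModel(at: packageURL) // produces a .mlmodelc
let model = try MLModel(contentsOf: compiledURL)
print(model.modelDescription) // should list input_frames and var_462 from the metadata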
1 reply · 0 boosts · 634 views · last activity Jan ’25
Displaying a toast message and a loader (activity indicator) in CarPlay
Hello, could you please help me with the following?
1. How do I display a toast message to the user in CarPlay after a successful operation?
2. How do I show a spinner or an activity indicator just before performing some operation?
I have referred to the CarPlay design guidelines PDF, in which I couldn't find support for the above two, but I could see a loader within a button in one of the built-in apps in the CarPlay simulator. Kindly help me with these queries.
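The templates offer no toast or standalone spinner, but a CPAlertTemplate presented briefly can approximate a toast; a sketch, assuming you keep the CPInterfaceController handed to your scene delegate:

import CarPlay

// Present an action-less alert, then dismiss it after a short delay.
func showToast(_ text: String, on interfaceController: CPInterfaceController) {
    let alert = CPAlertTemplate(titleVariants: [text], actions: [])
    interfaceController.presentTemplate(alert, animated: true) { _, _ in
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
            interfaceController.dismissTemplate(animated: true) { _, _ in }
        }
    }
}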
1 reply · 0 boosts · 432 views · last activity Jan ’25
WKWebView: Fullscreen API very unreliable on iPadOS 18.x
Since iPadOS 18.x, WKWebView seems to have a bug in its Fullscreen API (which can be enabled via WKPreferences.isElementFullscreenEnabled). This bug means that websites trying to make an element (for example a video player) fullscreen fail to do so. This does not always happen; most of the time fullscreen mode works fine, but sometimes (far too often to be ignored) it does not. If an instance of WKWebView shows this issue, it cannot be "fixed" by reloading the page or loading other pages: the issue exists in that instance forever. My app is a web browser app, so I can create and remove WKWebView instances easily (by opening or closing tabs). There are times when I never see this bug, and times when every other tab shows it. It's totally unreliable. The app does not show any issues at all when running under iPadOS 17 or older; the issue is only present under iPadOS 18.x. After some testing I've found that once the bug has affected an instance of WKWebView, the JavaScript call element.requestFullscreen() still works if the element is a video element, but no longer works for other elements (like a DIV). If an instance of WKWebView is not affected by this bug, element.requestFullscreen() works for all HTML elements. Has anyone experienced this bug as well, and maybe found a workaround? Or perhaps found a pattern that helps pin down what exactly triggers it?
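For context, the configuration the post refers to; since the broken state appears to be per-instance, recreating the affected web view is the only mitigation suggested so far:

import WebKit

// The fullscreen opt-in mentioned above; a fresh instance clears the state.
let configuration = WKWebViewConfiguration()
configuration.preferences.isElementFullscreenEnabled = true
let webView = WKWebView(frame: .zero, configuration: configuration)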
2 replies · 1 boost · 1.1k views · last activity Jan ’25
Swift Package Manager - Package Download Issue
We have developed a custom iOS framework called PaySDK. Earlier we distributed the framework as PaySDK.xcframework.zip through GitHub (private repo), with two dependent xcframeworks. Now one of our clients is asking us to distribute the framework through Swift Package Manager. I created a new private repo in GitHub and created a new release (iOSSDK_SPM_Test), tag 1.0.0. I uploaded the below frameworks as assets, updated the download paths in Package.swift, and pushed to the GitHub main branch:

PaySDK.xcframework.zip
PaySDKDependentOne.xcframework.zip
PaySDKDependentTwo.xcframework.zip

When I try to integrate (testing) https://github.com/YuvaRepo/iOSSDK_SPM_Test in Xcode, I am not able to download the frameworks: the download path points to some old path (maybe a cache: https://github.com/YuvaRepo/iOSSDK_SPM/releases/download/1.2.0/PaySDK.xcframework.zip).

Package.swift:

// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "iOSSDK_SPM_Test",
    platforms: [
        .iOS(.v13)
    ],
    products: [
        // Products define the executables and libraries a package produces, making them visible to other packages.
        .library(
            name: "iOSSDK_SPM_Test",
            targets: ["PaySDK", "PaySDKDependentOne", "PaySDKDependentTwo"]
        )
    ],
    targets: [
        // Targets are the basic building blocks of a package, defining a module or a test suite.
        .binaryTarget(
            name: "PaySDK",
            url: "https://github.com/YuvaRepo/iOSSDK_SPM_Test/releases/download/1.0.0/PaySDK.xcframework.zip",
            checksum: " checksum "
        ),
        .binaryTarget(
            name: "PaySDKDependentOne",
            url: "https://github.com/YuvaRepo/iOSSDK_SPM_Test/releases/download/1.0.0/PaySDKDependentOne.xcframework.zip",
            checksum: " checksum "
        ),
        .binaryTarget(
            name: "PaySDKDependentTwo",
            url: "https://github.com/YuvaRepo/iOSSDK_SPM_Test/releases/download/1.0.0/PaySDKDependentTwo.xcframework.zip",
            checksum: " checksum "
        ),
        .testTarget(
            name: "iOSSDK_SPM_TestTests",
            dependencies: ["PaySDK", "PaySDKDependentOne", "PaySDKDependentTwo"]
        )
    ]
)

Steps I followed: I removed the local repo, cloned fresh, and cleared the caches:

rm -rf ~/Library/Caches/org.swift.swiftpm/
rm -rf ~/Library/Developer/Xcode/DerivedData/*

Can anyone help identify and resolve the issue? Thanks in advance.
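One detail that often explains the "old path": SwiftPM reads the Package.swift committed at the tag you request, not the one on main, so a tag created before the URLs were fixed keeps serving the old URLs until you re-tag or cut a new release. A minimal consumer-manifest sketch pinned to the corrected tag (names are from the post, the app target is illustrative):

// swift-tools-version:5.3
import PackageDescription

// Resolving `exact: 1.0.0` fetches the manifest committed at tag 1.0.0,
// so that tag must already contain the corrected asset URLs.
let package = Package(
    name: "ConsumerApp",
    platforms: [.iOS(.v13)],
    dependencies: [
        .package(url: "https://github.com/YuvaRepo/iOSSDK_SPM_Test.git", .exact("1.0.0"))
    ],
    targets: [
        .target(name: "ConsumerApp", dependencies: ["iOSSDK_SPM_Test"])
    ]
)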
0 replies · 0 boosts · 397 views · last activity Jan ’25
How to detect if picture-in-picture is hidden at the edge of the device
I am developing a custom Picture in Picture (PiP) app that plays videos. The video continues to play even when the app goes to the background, but I would like to get a Bool value when the PiP window is hidden at the edge of the screen, as shown in the attached image. We need this because we don't want the user to consume extra network bandwidth, so we want to pause the video while the PiP window is hidden at the edge. Please let me know if this is possible, either in the foreground or in the background.
1 reply · 1 boost · 251 views · last activity Jan ’25
Affiliate View Struct is probably hidden by Charts on iOS app
The problem is: as per the screenshot below, one can only see the line chart. I have another struct, AffiliateView, coded under this chart:

import UIKit
import SwiftUI
import SnapKit
import Charts
import DGCharts

class AffiliateViewController: UIViewController {
    private lazy var chartView: LineChartView = {
        let chart = LineChartView()
        chart.noDataText = "No data available."
        chart.chartDescription.enabled = false
        chart.xAxis.labelPosition = .bottom
        chart.rightAxis.enabled = false
        chart.legend.enabled = true
        chart.backgroundColor = .lightGray // For debugging visibility
        return chart
    }()

    private lazy var containerView: UIView = {
        let view = UIView()
        view.backgroundColor = .white
        return view
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white

        // Add container view and chart view to the main view
        view.addSubview(containerView)
        view.addSubview(chartView)

        // Add SwiftUI view inside the container view
        let affiliateView = AffiliateView()
        let hostingController = UIHostingController(rootView: affiliateView)
        addChild(hostingController)
        containerView.addSubview(hostingController.view)
        hostingController.view.frame = containerView.bounds
        hostingController.didMove(toParent: self)

        layout()
        setupChartData()
    }

    private func layout() {
        // Layout the container view (SwiftUI content)
        containerView.snp.makeConstraints { make in
            make.top.equalTo(view.safeAreaLayoutGuide.snp.top)
            make.left.right.equalToSuperview()
            make.height.equalTo(350) // Increase the height for the SwiftUI content
        }

        // Layout the chart view below the container view
        chartView.snp.makeConstraints { make in
            make.top.equalTo(containerView.snp.bottom).offset(20) // Space between chart and the affiliate content
            make.left.equalToSuperview().offset(20)
            make.right.equalToSuperview().offset(-20)
            make.height.equalTo(200) // Set a fixed height for the chart
        }
    }

    private func setupChartData() {
        let dataEntries = [
            ChartDataEntry(x: 1, y: 10),
            ChartDataEntry(x: 2, y: 20),
            ChartDataEntry(x: 3, y: 15),
            ChartDataEntry(x: 4, y: 30),
            ChartDataEntry(x: 5, y: 25)
        ]

        let dataSet = LineChartDataSet(entries: dataEntries, label: "Clicks per Day")
        dataSet.colors = [.blue]
        dataSet.valueColors = [.black]
        dataSet.circleColors = [.red]
        dataSet.circleRadius = 4.0

        let data = LineChartData(dataSet: dataSet)
        chartView.data = data
        chartView.notifyDataSetChanged()
    }
}

// SwiftUI view remains in the same file
struct AffiliateView: View {
    @State private var customMessage: String = ""
    @State private var uniqueLink: String = "Your unique link will appear here."
    @State private var clickData: [Double] = [10, 20, 15, 30, 25] // Example data

    var body: some View {
        NavigationView {
            VStack(spacing: 20) {
                // TextField for custom message input
                TextField("Enter your custom message...", text: $customMessage)
                    .textFieldStyle(RoundedBorderTextFieldStyle())
                    .padding(.horizontal)

                // Generate Link button
                Button(action: generateLink) {
                    Text("Generate Sign-Up Link")
                        .font(.headline)
                        .foregroundColor(.white)
                        .frame(maxWidth: .infinity, maxHeight: 50)
                        .background(Color.red)
                        .cornerRadius(10)
                }
                .padding(.horizontal)

                // Generated link label
                Text(uniqueLink)
                    .font(.body)
                    .multilineTextAlignment(.center)
                    .padding(.horizontal)

                // You could add a chart here to show it in SwiftUI too
            }
            .navigationTitle("Affiliate Marketing")
            .navigationBarTitleDisplayMode(.inline)
        }
    }

    private func generateLink() {
        let encodedMessage = customMessage.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? ""
        uniqueLink = "https://affiliate.example.com/referral?message=\(encodedMessage)"
        addClickData()
    }

    private func addClickData() {
        clickData.append(Double.random(in: 0...100))
    }
}

As you can see, AffiliateView is declared outside of the view controller class. The view content was visible before the line chart was added to this code; now it is not visible anymore. I have tried incrementing/decrementing the values in make.height.equalTo(), but to no avail. Could anyone kindly point me in the right direction?
0 replies · 0 boosts · 295 views · last activity Jan ’25
Writing an `NWProtocolFramerImplementation` to run on top of `NWProtocolWebSocket`
Hi all, I am trying to write an NWProtocolFramerImplementation that will run on top of WebSockets. I would like to achieve two goals with this:
1. Handle the application-layer authentication handshake in-protocol, so my external application code can ignore it.
2. Automatically send pings periodically, so my application can ignore keepalive.
I am running into trouble because the NWProtocolWebSocket protocol parses WebSocket metadata into messages, and I don't see how to handle this at the NWProtocolFramerImplementation level. Here's what I have (see comments for questions):

class CoolProtocol: NWProtocolFramerImplementation {
    static let label = "Cool"
    private var tempStatusCode: Int?

    required init(framer: NWProtocolFramer.Instance) {}

    static let definition = NWProtocolFramer.Definition(implementation: CoolProtocol.self)

    func start(framer: NWProtocolFramer.Instance) -> NWProtocolFramer.StartResult {
        return .willMarkReady
    }

    func wakeup(framer: NWProtocolFramer.Instance) { }

    func stop(framer: NWProtocolFramer.Instance) -> Bool {
        return true
    }

    func cleanup(framer: NWProtocolFramer.Instance) { }

    func handleOutput(framer: NWProtocolFramer.Instance, message: NWProtocolFramer.Message, messageLength: Int, isComplete: Bool) {
        // How to write a "message" onto the next protocol handler? I don't want to just write plain data.
        // How to tell the WebSocket protocol framer that it's a ping/pong/text/binary...
    }

    func handleInput(framer: NWProtocolFramer.Instance) -> Int {
        // How to handle getting the input from WebSockets in a message format? I don't want to just get
        // "Data"; I would like to know if that data is a ping, pong, text, binary, ...
        return 0
    }
}

If I were implementing this protocol at the application layer, here's how I would send WebSocket messages:

class Client {
    ...
    func send(string: String) async throws {
        guard let data = string.data(using: .utf8) else { return }
        let metadata = NWProtocolWebSocket.Metadata(opcode: .text)
        let context = NWConnection.ContentContext(
            identifier: "textContext",
            metadata: [metadata]
        )
        self.connection.send(
            content: data,
            contentContext: context,
            isComplete: true,
            completion: .contentProcessed({ [weak self] error in
                ...
            })
        )
    }
}

You can see that at the application layer I have access to this context object and can access NWProtocolMetadata on both the input and output side, but in NWProtocolFramer.Instance I only see final func writeOutput(data: Data), which doesn't seem to include context anywhere. Is this possible? If not, how would you recommend I handle this? I know I could rewrite the entire WebSocket protocol framer, but it feels like I shouldn't have to if framers are supposed to be able to stack.
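Not a framer-level answer, but the keepalive half can live at the connection level today, since send(content:contentContext:isComplete:completion:) accepts WebSocket metadata; a sketch with arbitrary interval and queue choices:

import Foundation
import Network

// Schedule pings and observe pongs via the metadata's pong handler.
func schedulePings(on connection: NWConnection, every interval: TimeInterval) {
    let metadata = NWProtocolWebSocket.Metadata(opcode: .ping)
    metadata.setPongHandler(.main) { error in
        if let error { print("pong error: \(error)") }
    }
    let context = NWConnection.ContentContext(identifier: "ping", metadata: [metadata])
    connection.send(content: "keepalive".data(using: .utf8),
                    contentContext: context,
                    isComplete: true,
                    completion: .contentProcessed { _ in })
    DispatchQueue.main.asyncAfter(deadline: .now() + interval) {
        schedulePings(on: connection, every: interval)
    }
}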
1 reply · 0 boosts · 271 views · last activity Jan ’25