Dear Apple Developer Forum,
I have a question regarding AVCaptureDevice on iOS. We're trying to capture photos in the best quality possible, along with depth data of the highest accuracy possible. We were delighted to see that AVCaptureDevice can be initialized with mediaType: .depthData, which works as expected (the depth data arrives as part of the AVCapturePhoto). However, when we initialize it with mediaType: .video instead, we still receive depth data, of the same quality according to our own internal tests. That confused us.
Mind you, we set the device format and depth format as well:
private func getDeviceFormat() throws -> AVCaptureDevice.Format {
    // Select a format that supports high photo quality, offers depth data,
    // and uses the bi-planar full-range YpCbCr pixel format.
    let format = camera?.formats.first(where: {
        $0.isHighPhotoQualitySupported &&
        !$0.supportedDepthDataFormats.isEmpty &&
        $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    })
    // Check and see if it's available.
    guard let format else {
        throw CaptureDeviceError.necessaryFormatNotAvailable
    }
    return format
}
private func getDepthDataFormat(for format: AVCaptureDevice.Format) throws -> AVCaptureDevice.Format {
    // Pick the 32-bit float depth format supported by the chosen video format.
    let depthDataFormat = format.supportedDepthDataFormats.first(where: {
        $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat32
    })
    // Check that it exists.
    guard let depthDataFormat else {
        throw CaptureDeviceError.necessaryFormatNotAvailable
    }
    return depthDataFormat
}
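For reference, this is roughly how the two formats are then applied (a sketch with error handling trimmed; photoOutput here stands for our AVCapturePhotoOutput):
func applyFormats() throws {
    let format = try getDeviceFormat()
    let depthFormat = try getDepthDataFormat(for: format)
    guard let camera else { throw CaptureDeviceError.necessaryFormatNotAvailable }
    // Format changes require an exclusive configuration lock.
    try camera.lockForConfiguration()
    camera.activeFormat = format
    camera.activeDepthDataFormat = depthFormat
    camera.unlockForConfiguration()
    // Depth delivery must also be opted into on the photo output,
    // and again on each AVCapturePhotoSettings at capture time.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
}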
We're wondering what steps we can take to ensure the best-quality photo along with the most accurate depth data. Which properties matter most, which have an effect, and which don't? Are there ways to optimize our current configuration? We find this difficult because there are very few guides and explanations on the media subtypes, for example kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. Is it the best choice? Is it the best for our use case of highest-quality photo plus most accurate depth data?
Important comment: our app only runs on iPhone 14 Pro, iPhone 15 Pro, and iPhone 16 Pro on the latest iOS versions.
We hope someone with greater knowledge at Apple can guide us on how to achieve the best-quality photos and the most accurate depth data.
Thank you very much!
Kind regards.
Swift
Swift is a powerful and intuitive programming language for Apple platforms and beyond.
Posts under Swift tag
I have followed this post for creating a Launch Agent that provides an XPC service on macOS using Swift:
Post link: https://rderik.com/blog/creating-a-launch-agent-that-provides-an-xpc-service-on-macos/
In the Swift code, the interface of the XPC service is defined by protocols, which makes the code nice and neat. I want to implement the XPC service using the C APIs for XPC, but the C APIs send and receive messages using dictionaries, which need manual handling with conditional statements.
I want to know if it's possible to keep the protocol-based approach with the C APIs.
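For concreteness, this is the kind of manual dictionary plumbing I mean, sketched in Swift against the C XPC calls (the service name and dictionary keys are made up):
import XPC

// Connect to a (hypothetical) Mach service by name.
let conn = xpc_connection_create_mach_service("com.example.agent", nil, 0)
xpc_connection_set_event_handler(conn) { event in
    // Incoming messages arrive as xpc dictionaries; every key has to be
    // read out and dispatched by hand.
    if let cmd = xpc_dictionary_get_string(event, "command") {
        print("received command:", String(cString: cmd))
    }
}
xpc_connection_resume(conn)

// Sending is the same in reverse: build a dictionary, key by key.
let msg = xpc_dictionary_create(nil, nil, 0)
xpc_dictionary_set_string(msg, "command", "ping")
xpc_connection_send_message(conn, msg)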
I am interested in learning the Metal framework for rendering development. However, most of Apple's official documentation uses Objective-C code. I am therefore seeking guidance on whether I can focus solely on learning Swift and still gain proficiency in Metal.
Hello,
Could you please help me with the following:
How to display a toast message to the user in CarPlay after a successful operation?
How to show a spinner or an activity indicator just before performing some operation?
I have referred to the CarPlay design guidelines PDF, in which I couldn't find support for the above two. But I could see a loader within a button in one of the default apps in the CarPlay simulator.
Kindly help me with these queries.
Hello!
I have a Swift program that tracks the location of a ball through the back camera. It seems to be working fine, but the only issue is the run time, particularly my concatenate, normalize, and argmax functions, which are meant to be one-to-one copies of PyTorch's argmax and the following Python lines:
imgs = np.concatenate((img, img_prev, img_preprev), axis=2)
imgs = imgs.astype(np.float32)/255.0
imgs = np.rollaxis(imgs, 2, 0)
inp = np.expand_dims(imgs, axis=0) # used to pass into model
However, I need my program to run in real time, and in an ideal world I want it to run well under real time. Below is a rundown of the run times that result from my code:
Starting model inference
Setup took: 0.0 seconds
Resize took: 0.03741896152496338 seconds
Concatenation took: 0.3359949588775635 seconds
Normalization took: 0.9906361103057861 seconds
Model prediction took: 0.3425499200820923 seconds
Argmax took: 28.17007803916931 seconds
Postprocess took: 0.054128050804138184 seconds
Model inference took 29.934185028076172 seconds
Here are the concatenateBuffers, normalizeBuffer, and argmax functions that I use:
func concatenateBuffers(_ buffers: [CVPixelBuffer?]) -> CVPixelBuffer? {
    guard buffers.count == 3, let first = buffers[0] else { return nil }
    let width = CVPixelBufferGetWidth(first)
    let height = CVPixelBufferGetHeight(first)
    let targetChannels = 9
    var concatenated: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attrs, &concatenated)
    guard let output = concatenated else { return nil }
    CVPixelBufferLockBaseAddress(output, [])
    defer { CVPixelBufferUnlockBaseAddress(output, []) }
    guard let outputData = CVPixelBufferGetBaseAddress(output) else { return nil }
    let outputPtr = UnsafeMutablePointer<UInt8>(OpaquePointer(outputData))
    // Lock all input buffers at once
    buffers.forEach { buffer in
        guard let buffer = buffer else { return }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
    }
    defer {
        buffers.forEach { CVPixelBufferUnlockBaseAddress($0!, .readOnly) }
    }
    // Process each input buffer
    for (frameIdx, buffer) in buffers.enumerated() {
        guard let buffer = buffer,
              let inputData = CVPixelBufferGetBaseAddress(buffer) else { continue }
        let inputPtr = UnsafePointer<UInt8>(OpaquePointer(inputData))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
        let totalPixels = width * height
        // Process all pixels in one go for this frame
        for i in 0..<totalPixels {
            let y = i / width
            let x = i % width
            let inputOffset = y * bytesPerRow + x * 4
            let outputOffset = i * targetChannels + frameIdx * 3
            // BGR order to match numpy
            outputPtr[outputOffset] = inputPtr[inputOffset + 2]     // B
            outputPtr[outputOffset + 1] = inputPtr[inputOffset + 1] // G
            outputPtr[outputOffset + 2] = inputPtr[inputOffset]     // R
        }
    }
    return output
}
func normalizeBuffer(_ buffer: CVPixelBuffer?) -> MLMultiArray? {
    guard let input = buffer else { return nil }
    let width = CVPixelBufferGetWidth(input)
    let height = CVPixelBufferGetHeight(input)
    let channels = 9
    CVPixelBufferLockBaseAddress(input, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(input, .readOnly) }
    guard let inputData = CVPixelBufferGetBaseAddress(input) else { return nil }
    let shape = [1, NSNumber(value: channels), NSNumber(value: height), NSNumber(value: width)]
    guard let output = try? MLMultiArray(shape: shape, dataType: .float32) else { return nil }
    let inputPtr = inputData.assumingMemoryBound(to: UInt8.self)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(input)
    let ptr = UnsafeMutablePointer<Float>(OpaquePointer(output.dataPointer))
    let totalSize = width * height
    for c in 0..<channels {
        for idx in 0..<totalSize {
            let h = idx / width
            let w = idx % width
            let inputIdx = h * bytesPerRow + w * channels + c
            ptr[c * totalSize + idx] = Float(inputPtr[inputIdx]) / 255.0
        }
    }
    return output
}
func argmax(_ array: MLMultiArray) -> MLMultiArray? {
    let shape = array.shape.map { $0.intValue }
    guard shape.count == 3,
          shape[0] == 1,
          shape[1] == 256,
          shape[2] == 230400 else {
        return nil
    }
    guard let output = try? MLMultiArray(shape: [1, NSNumber(value: 230400)], dataType: .int32) else { return nil }
    let ptr = UnsafePointer<Float>(OpaquePointer(array.dataPointer))
    let outputPtr = UnsafeMutablePointer<Int32>(OpaquePointer(output.dataPointer))
    let channelSize = 230400
    for pos in 0..<230400 {
        var maxValue = -Float.infinity
        var maxIndex: Int32 = 0
        for channel in 0..<256 {
            let value = ptr[channel * channelSize + pos]
            if value > maxValue {
                maxValue = value
                maxIndex = Int32(channel)
            }
        }
        outputPtr[pos] = maxIndex
    }
    return output
}
Are there any glaring inefficiencies that can be reduced to allow for under-real-time processing while following the same logic as the Python code exactly? Would using Objective-C speed things up for some reason? Are there any tools I can use so I don't have to write these functions myself?
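For reference, one possible direction for the argmax hot spot: Accelerate's vDSP_maxvi computes a maximum and its index over a strided vector, which maps onto the per-pixel channel scan above. A minimal sketch, assuming the same [1, 256, 230400] float32 layout (note: my understanding is that vDSP reports the index as the post-stride element offset, hence the division by the stride; worth verifying):
import Accelerate
import CoreML

func argmaxVDSP(_ array: MLMultiArray) -> MLMultiArray? {
    let channels = 256
    let positions = 230_400
    guard let output = try? MLMultiArray(shape: [1, NSNumber(value: positions)],
                                         dataType: .int32) else { return nil }
    let input = array.dataPointer.assumingMemoryBound(to: Float.self)
    let out = output.dataPointer.assumingMemoryBound(to: Int32.self)
    for pos in 0..<positions {
        var maxValue: Float = 0
        var maxIndex: vDSP_Length = 0
        // Walk the channel axis for this pixel: stride = positions.
        vDSP_maxvi(input + pos, vDSP_Stride(positions),
                   &maxValue, &maxIndex, vDSP_Length(channels))
        // vDSP reports the element offset including the stride.
        out[pos] = Int32(Int(maxIndex) / positions)
    }
    return output
}
A similar strided approach with vDSP_vfltu8 (byte-to-float conversion) and vDSP_vsdiv (scalar division) could replace the per-element normalization loop. It is also worth confirming the timings come from a Release (-O) build, since unoptimized Swift pointer loops can be dramatically slower.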
Additionally, in the class's init function, I tried to check the compute units being used, since I feel 0.34 seconds for a single model prediction is also far too long, but no print statements are showing for some reason:
init() {
    guard let loadedModel = try? BallTrackerModel() else {
        fatalError("Could not load model")
    }
    let config = MLModelConfiguration()
    config.computeUnits = .all
    guard let configuredModel = try? BallTrackerModel(configuration: config) else {
        fatalError("Could not configure model")
    }
    self.model = configuredModel
    print("model loaded with compute units \(config.computeUnits.rawValue)")
}
Thanks!
I'm working on a cross-platform application that needs to access file attributes, specifically for files and directories in sync drives like OneDrive. On Windows, I use the GetFileInformationByHandle API to retrieve attributes such as FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS and FILE_ATTRIBUTE_RECALL_ON_OPEN to identify files that are stored remotely or in the cloud.
Is there an equivalent API or mechanism on macOS to achieve the same? Specifically, I’m looking for a way to:
Identify attributes similar to cloud/offline storage status for files in synced drives (e.g., OneDrive, Dropbox, etc.).
Retrieve metadata to distinguish files/folders stored locally versus those stored remotely and downloaded on access.
If there’s a preferred macOS framework (like Core Services or FileManager in Swift) for such operations, examples would be greatly appreciated!
Topic: App & System Services
SubTopic: Core OS
Tags: APFS, Files and Storage, Swift, Cloud and Local Storage
We have developed a custom iOS framework called PaySDK. Earlier we distributed the framework as PaySDK.xcframework.zip through GitHub (private repo), along with two dependent xcframeworks.
Now one of our clients is asking us to distribute the framework through Swift Package Manager.
I have created a new private repo on GitHub and created a new release (iOSSDK_SPM_Test), tag 1.0.0. I uploaded the frameworks below as assets, updated the download paths in Package.swift, and pushed to the GitHub main branch.
PaySDK.xcframework.zip
PaySDKDependentOne.xcframework.zip
PaySDKDependentTwo.xcframework.zip
When I try to integrate (for testing) https://github.com/YuvaRepo/iOSSDK_SPM_Test in Xcode, I am not able to download the frameworks; the download path points to some old path (maybe a cache: https://github.com/YuvaRepo/iOSSDK_SPM/releases/download/1.2.0/PaySDK.xcframework.zip).
Package.swift:
// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "iOSSDK_SPM_Test",
    platforms: [
        .iOS(.v13)
    ],
    products: [
        // Products define the executables and libraries a package produces, making them visible to other packages.
        .library(
            name: "iOSSDK_SPM_Test",
            targets: ["PaySDK", "PaySDKDependentOne", "PaySDKDependentTwo"]
        )
    ],
    targets: [
        // Targets are the basic building blocks of a package, defining a module or a test suite.
        // Targets can depend on other targets in this package and products from dependencies.
        .binaryTarget(
            name: "PaySDK",
            url: "https://github.com/YuvaRepo/iOSSDK_SPM_Test/releases/download/1.0.0/PaySDK.xcframework.zip",
            checksum: " checksum "
        ),
        .binaryTarget(
            name: "PaySDKDependentOne",
            url: "https://github.com/YuvaRepo/iOSSDK_SPM_Test/releases/download/1.0.0/PaySDKDependentOne.xcframework.zip",
            checksum: " checksum "
        ),
        .binaryTarget(
            name: "PaySDKDependentTwo",
            url: "https://github.com/YuvaRepo/iOSSDK_SPM_Test/releases/download/1.0.0/PaySDKDependentTwo.xcframework.zip",
            checksum: " checksum "
        ),
        .testTarget(
            name: "iOSSDK_SPM_TestTests",
            dependencies: ["PaySDK", "PaySDKDependentOne", "PaySDKDependentTwo"]
        )
    ]
)
Steps I have tried so far:
Removed the local repo and cloned it fresh.
Cleared the SwiftPM and Xcode caches:
rm -rf ~/Library/Caches/org.swift.swiftpm/
rm -rf ~/Library/Developer/Xcode/DerivedData/*
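Possibly also relevant here (assuming a reasonably recent toolchain): SwiftPM has its own cache purge command, alongside Xcode's File > Packages > Reset Package Caches:
swift package purge-cache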
Can anyone help to identify the issue and resolve? Thanks in advance.
I am developing a custom Picture in Picture (PiP) app that plays videos.
The video continues to play even when the app goes to the background, but I would like to get a Bool value indicating when the PiP window is stashed at the edge of the screen, as shown in the attached image.
The reason we need this is that we don't want the user to consume extra network bandwidth, so we want to pause the video while the PiP window is hidden at the edge of the screen.
Please let me know if this is possible, whether in the foreground or in the background.
I intend to participate in the Swift Student Challenge 25. In the Rules, it is mentioned that a submission should be experienceable within three minutes; however, my work does not meet this requirement:
Create an interactive scene in an app playground that can be experienced within three minutes.
Initially, my work was intended not for the Challenge but for the App Store. However, I decided to submit it to the Challenge, and both my work and I meet the other requirements. My work is a complete application, though, which makes it impossible for the judges to experience it within three minutes; it may take more time. Does this have any impact?
Topic: Community
SubTopic: Swift Student Challenge
Tags: Swift Student Challenge, Swift, Swift Playground, SwiftUI
Background:
We are using Apple WeatherKit in an app to fetch future weather data, specifically cloudCover, condition, and precipitationChance from the overnight forecast in the weatherResponse.dailyForecast.forecast array.
Code sample (Xcode, Swift and SwiftUI, Apple WeatherKit):
task {
    do {
        let weatherResponse = try await self.weatherService.forecast(for: latLong)
        let cloudCover = weatherResponse.dailyForecast.forecast[0].overnightForecast.cloudCover
Issue:
While in debug mode, we can literally see that the overnightForecast.cloudCover data (as well as condition and precipitationChance) is available from index 0 to index 10 of the weatherResponse.dailyForecast.forecast array, but when we try to assign this property, it throws the error:
"overnightforecast object not present under weatherResponse.dailyForecast.forecast array object"
The problem is: as per the screenshot below, one can only see the line chart. I have another struct, AffiliateView, coded under this chart:
import UIKit    // needed for UIViewController (was missing from the snippet)
import SwiftUI  // needed for UIHostingController and AffiliateView
import SnapKit
import Charts
import DGCharts

class AffiliateViewController: UIViewController {
    private lazy var chartView: LineChartView = {
        let chart = LineChartView()
        chart.noDataText = "No data available."
        chart.chartDescription.enabled = false
        chart.xAxis.labelPosition = .bottom
        chart.rightAxis.enabled = false
        chart.legend.enabled = true
        chart.backgroundColor = .lightGray // For debugging visibility
        return chart
    }()

    private lazy var containerView: UIView = {
        let view = UIView()
        view.backgroundColor = .white
        return view
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white
        // Add container view and chart view to the main view
        view.addSubview(containerView)
        view.addSubview(chartView)
        // Add SwiftUI View inside the container view
        let affiliateView = AffiliateView()
        let hostingController = UIHostingController(rootView: affiliateView)
        addChild(hostingController)
        containerView.addSubview(hostingController.view)
        hostingController.view.frame = containerView.bounds // note: containerView.bounds is still .zero here
        hostingController.didMove(toParent: self)
        layout()
        setupChartData()
    }

    private func layout() {
        // Layout the container view (SwiftUI content)
        containerView.snp.makeConstraints { make in
            make.top.equalTo(view.safeAreaLayoutGuide.snp.top)
            make.left.right.equalToSuperview()
            make.height.equalTo(350) // Increase the height for the SwiftUI content
        }
        // Layout the chart view below the container view
        chartView.snp.makeConstraints { make in
            make.top.equalTo(containerView.snp.bottom).offset(20) // Space between chart and the affiliate content
            make.left.equalToSuperview().offset(20)
            make.right.equalToSuperview().offset(-20)
            make.height.equalTo(200) // Set a fixed height for the chart
        }
    }

    private func setupChartData() {
        let dataEntries = [
            ChartDataEntry(x: 1, y: 10),
            ChartDataEntry(x: 2, y: 20),
            ChartDataEntry(x: 3, y: 15),
            ChartDataEntry(x: 4, y: 30),
            ChartDataEntry(x: 5, y: 25)
        ]
        let dataSet = LineChartDataSet(entries: dataEntries, label: "Clicks per Day")
        dataSet.colors = [.blue]
        dataSet.valueColors = [.black]
        dataSet.circleColors = [.red]
        dataSet.circleRadius = 4.0
        let data = LineChartData(dataSet: dataSet)
        chartView.data = data
        chartView.notifyDataSetChanged()
    }
}

// SwiftUI View remains in the same file
struct AffiliateView: View {
    @State private var customMessage: String = ""
    @State private var uniqueLink: String = "Your unique link will appear here."
    @State private var clickData: [Double] = [10, 20, 15, 30, 25] // Example data

    var body: some View {
        NavigationView {
            VStack(spacing: 20) {
                // TextField for custom message input
                TextField("Enter your custom message...", text: $customMessage)
                    .textFieldStyle(RoundedBorderTextFieldStyle())
                    .padding(.horizontal)
                // Generate Link Button
                Button(action: generateLink) {
                    Text("Generate Sign-Up Link")
                        .font(.headline)
                        .foregroundColor(.white)
                        .frame(maxWidth: .infinity, maxHeight: 50)
                        .background(Color.red)
                        .cornerRadius(10)
                }
                .padding(.horizontal)
                // Generated Link Label
                Text(uniqueLink)
                    .font(.body)
                    .multilineTextAlignment(.center)
                    .padding(.horizontal)
                // You can add a chart here if you want to show it in SwiftUI too
                /* LineChartView(data: clickData, title: "Clicks per Day", legend: "Daily Clicks") */
            }
            .navigationTitle("Affiliate Marketing")
            .navigationBarTitleDisplayMode(.inline)
        }
    }

    private func generateLink() {
        let encodedMessage = customMessage.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? ""
        uniqueLink = "https://affiliate.example.com/referral?message=\(encodedMessage)"
        addClickData()
    }

    private func addClickData() {
        clickData.append(Double.random(in: 0...100))
    }
}
As you can see, AffiliateView is declared outside of the view controller class. The view's content was visible before the line chart was added to this code; now it is not visible anymore. I have tried incrementing/decrementing the values in make.height.equalTo() but to no avail.
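One possibly relevant detail: hostingController.view.frame is assigned in viewDidLoad, when containerView.bounds is still .zero, and no constraints are ever attached to the hosting view. A sketch of pinning it with SnapKit instead (same names as in the code above):
// Let the hosting view track containerView once SnapKit resolves its size.
hostingController.view.snp.makeConstraints { make in
    make.edges.equalToSuperview()
}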
Could anyone kindly point me in the right direction?
Hi there!
I'm trying to make a 360° image carousel in RealityView/SwiftUI with very large textures. I've managed to load one 12K 360° image and show it on an inverted sphere with a ShaderGraphMaterial made in Reality Composer Pro. When I try to load the next image I get an out-of-memory error. The carousel works fine with smaller textures.
My question is: how do I release the memory of the current texture before loading the next one?
In theory the garbage collector should erase it eventually?
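My understanding is that Swift uses ARC rather than a tracing garbage collector, so the old texture should be reclaimed as soon as the last strong reference to it goes away. A sketch of the idea (currentTexture is a hypothetical property holding the TextureResource):
// Drop the strong reference so ARC can free the old texture...
currentTexture = nil
// ...then load the next 12K image and assign it to the sphere's material.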
Hope someone can help =)
Thanks in advance!
Best regards,
Kim
Hello everyone,
I'm experiencing an issue with my app, and I would greatly appreciate any guidance. Here's the situation:
My app works flawlessly on the simulator for all iPhone models (16, 16 Pro, 16 Pro Max, SE, etc.).
It runs without issues on physical iPhone 11 devices (both mine and a friend's).
However, when my friend installs the app from the App Store on their iPhone 15 Pro or iPhone 16, it crashes immediately upon opening.
Steps I’ve taken so far:
Verified that the app works on the simulator for the same models where it crashes in real life.
Updated the app's minimum deployment target to iOS 18, but this did not resolve the issue. The app still crashes on the physical iPhone 15 Pro and 16 models while continuing to run fine on previous physical devices and all simulated ones.
Additionally, I had another friend beta test the app through TestFlight on their iPhone 15 Pro, and they received an alert message indicating that the app had crashed when they tried to open it. This confirms the issue still exists after my attempted fixes.
I'm unsure what could be causing this discrepancy between the simulated and physical devices, or what the alert message in TestFlight might signify. Has anyone encountered a similar issue, or does anyone have suggestions on what I should do to fix this?
Thanks in advance for your help!
Hello,
I've been using @Environment(\.dismiss) var dismiss in a SwiftUI app for the last two years, which means it was working as expected in iOS 16, iOS 17, and for the most part iOS 18, up until iOS 18.2.1, in which it causes an endless loop and eventually a crash.
It seems to be something about using @Environment(\.dismiss) together with a NavigationLink that causes this issue.
When I add a log in my SwiftUI views with let _ = Self._printChanges(), I see the following printed out in a loop:
CurrentProjectView: _dismiss changed.
SurveyView: @self changed.
Similar issues have been reported:
https://forums.developer.apple.com/forums/thread/720803
https://forums.developer.apple.com/forums/thread/739512
Any idea how to resolve this?
As per the documentation, the Tab initializer in SwiftUI should allow supplying a custom view as the label. However, the colors used within the Label view are not honored as expected.
I attempted to set custom colors in the Label, but they either default to system-defined styles or are ignored entirely. This behavior does not align with my understanding of how custom views should work in SwiftUI's Label.
Am I missing a step or configuration here, or is this a bug in the current implementation?
import SwiftUI // needed to compile the snippet standalone

struct ContentView: View {
    @State private var activeTab: TabItem = .homeTab

    var body: some View {
        TabView(selection: $activeTab) {
            ForEach(TabItem.allCases) { tabItem in
                Tab(value: tabItem) {
                    getView(for: tabItem)
                } label: {
                    VStack(spacing: 0) {
                        MainTabButtonView(
                            selected: activeTab == tabItem,
                            tabItem: tabItem
                        )
                        Text(tabItem.title)
                    }
                }
            }
        }
    }
}

private extension ContentView {
    @ViewBuilder
    func getView(for tabItem: TabItem) -> some View {
        switch tabItem {
        case .homeTab:
            Text("Home")
        case .searchTab:
            Text("Search")
        case .profileTab:
            Text("Profile")
        case .moreTab:
            Text("More")
        }
    }
}

#Preview {
    ContentView()
}

enum TabItem: String, Identifiable, Hashable, CaseIterable {
    case homeTab
    case searchTab
    case profileTab
    case moreTab

    var tabImage: String {
        switch self {
        case .homeTab:
            "house"
        case .searchTab:
            "magnifying-glass"
        case .profileTab:
            "biographic"
        case .moreTab:
            "hamburger-menu"
        }
    }

    var title: String {
        switch self {
        case .homeTab:
            "Home"
        case .searchTab:
            "Search"
        case .profileTab:
            "Profile"
        case .moreTab:
            "More"
        }
    }

    var id: String {
        rawValue
    }
}

struct MainTabButtonView: View {
    private let selected: Bool
    private let tabItem: TabItem

    init(
        selected: Bool,
        tabItem: TabItem
    ) {
        self.selected = selected
        self.tabItem = tabItem
    }

    var body: some View {
        Image(tabItem.tabImage)
            .renderingMode(.template)
            .resizable()
            .aspectRatio(contentMode: .fill)
            .frame(width: 30, height: 30)
            .foregroundStyle(
                selected ? Color.green : Color.orange
            )
    }
}
Expected Behavior:
The custom colors applied within the Label should render as defined.
Actual Behavior:
The colors are overridden or ignored, defaulting to the system-defined styles.
Environment:
Xcode Version: Xcode 16.2
iOS: 18.2
Swift Version: Swift 6
If I show a text field in my app and set nothing else on it but the following, the keyboard will show an autofill suggestion from a password manager for a one-time code:
textField.keyboardType = .numberPad
In this case, the text field is for typing in a count, so iOS suggesting to autofill a one-time code is incorrect.
Setting textField.textContentType to nil has no effect on the behaviour.
Prerequisites to reproduce:
an app with an associated domain
an entry in a password manager with a one-time code for the domain
a text field with keyboardType set to .numberPad
Hello,
Please guide me if there is a way to show a simple toast to the user indicating that an action has been performed. When a user taps a button, an API returns a status, based on which I need to show an appropriate message as a toast. Is this possible in CarPlay? If not, why? Please suggest any alternative for this.
Awaiting your response
Thanks a lot!!
Hi,
I'd like to call an async function upon a state change or in onAppear(), but I'm not sure how to do so. Below is my code:
.onAppear() {
    if !subscribed {
        await Subscriptions().checkSubscriptionStatus()
    }
}
class Subscriptions {
    var subscribed = UserDefaults.standard.bool(forKey: "subscribed")

    func checkSubscriptionStatus() async {
        if !subscribed {
            await loadProducts()
        }
    }

    func loadProducts() async {
        for await purchaseIntent in PurchaseIntent.intents {
            // Complete the purchase workflow.
            await purchaseProduct(purchaseIntent.product)
        }
    }

    func purchaseProduct(_ product: Product) async {
        // Complete the purchase workflow.
        do {
            try await product.purchase()
        }
        catch {
            // Add your error handling here.
        }
        // Add your remaining purchase workflow here.
    }
}
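For reference, a common pattern for this (a sketch reusing the names above): SwiftUI's .task modifier provides an async context tied to the view's lifetime, which onAppear does not, so the call can be awaited there directly:
.task {
    if !subscribed {
        await Subscriptions().checkSubscriptionStatus()
    }
}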