I have developed a mobile app using SwiftUI that integrates Google Maps. Now I am in the process of building a CarPlay app. I assume CarPlay only supports Apple MapKit, as I could not find any way to integrate Google Maps. Below are a few queries:
Could you please guide me on how I can obtain the user's current location when the CarPlay app launches? Is there a way for CarPlay to get those details from the mobile app (I'm not entirely sure, since it uses Google Maps)?
If the user is logged out of the mobile app, what is the flow in CarPlay? Is there a standard login page asking the user to log in to the mobile app first?
Is there any UI in CarPlay that prompts the user for their location?
This is my first CarPlay app. Kindly point me to documentation that covers these details.
Thanks a ton!!
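Here is roughly what I'm imagining for getting the location at launch — an untested sketch based on my reading of the CarPlay and CoreLocation docs; the class and names are mine, not from any sample:

import CarPlay
import CoreLocation

// Sketch only: request the current location when the CarPlay scene connects.
// My understanding is that the CarPlay scene runs in the same iPhone app
// process, so CLLocationManager returns the phone's location.
final class CarPlaySceneDelegate: UIResponder, CPTemplateApplicationSceneDelegate, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private var interfaceController: CPInterfaceController?

    func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                                  didConnect interfaceController: CPInterfaceController) {
        self.interfaceController = interfaceController
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization() // the prompt appears on the phone, not in the car
        locationManager.requestLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let current = locations.last else { return }
        print("CarPlay launch location: \(current.coordinate)")
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        print("Location error: \(error)")
    }
}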
Hi everyone,
I'm encountering an issue where the background location indicator remains visible in the status bar even though I have set my app's location permission to Never in the system settings. Despite taking all the necessary steps to stop location tracking (including stopping updates, geofencing, and other location-related services), the indicator still appears. This seems to be a bug, since everything has been turned off on my end.
Here’s what I’ve already tried:
Setting location permissions to Never in the settings.
Calling stopUpdatingLocation() and stopMonitoringSignificantLocationChanges(), and stopping geofencing for all monitored regions.
Calling locationManager.showsBackgroundLocationIndicator = false.
Ensuring that the CLLocationManager is fully invalidated.
Despite all of this, the background location indicator still remains in the status bar. I’ve tested it on real devices, as well as in the simulator, with no improvement.
Has anyone experienced something similar, or can suggest why this might be happening? Could this be related to an iOS 18+ issue?
Any insights or guidance would be greatly appreciated.
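For reference, here is a trimmed sketch of the teardown described above (identifiers are illustrative; the last few lines are extras I assume are worth resetting as well):

import CoreLocation

final class LocationController: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()

    func stopEverything() {
        locationManager.stopUpdatingLocation()
        locationManager.stopMonitoringSignificantLocationChanges()

        // Stop geofencing for every region that was ever registered.
        for region in locationManager.monitoredRegions {
            locationManager.stopMonitoring(for: region)
        }

        locationManager.showsBackgroundLocationIndicator = false

        // Not mentioned in the list above, but probably worth resetting too.
        locationManager.allowsBackgroundLocationUpdates = false
        locationManager.delegate = nil
    }
}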
For some reason the image 'Ren' is not being loaded even though it is in the project. How can I resolve this issue in Xcode Playgrounds?
I'm curious about the situation, since Playgrounds hasn't released a new version supporting the numerous new frameworks announced this year at WWDC24. How are we supposed to build with these new frameworks if they haven't even been made available in Playgrounds for the Swift Student Challenge?
How do I implement the same navigation split view with a sidebar in AppKit?
Basically I have this code:
import SwiftUI

struct ContentView: View {
    var body: some View {
        NavigationSplitView {
            // Sidebar
            List {
                NavigationLink("Item 1", value: "Item 1 Details")
                NavigationLink("Item 2", value: "Item 2 Details")
                NavigationLink("Item 3", value: "Item 3 Details")
            }
            .navigationTitle("Items")
        } content: {
            // Main content (detail view for selected item)
            Text("Select an item to see details.")
                .padding()
        } detail: {
            // Detail view (for the selected item)
            Text("Select an item from the sidebar to view details.")
                .padding()
        }
    }
}

struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
and want to somehow convert it to AppKit. I tried using an NSSplitViewController, but I still don't get that sidebar or the button to collapse it. How do I go about this?
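For reference, this is the direction I'm experimenting with — an untested sketch with placeholder view controllers; I assume the collapse button would be an NSToolbarItem wired to NSSplitViewController's toggleSidebar(_:) action, which isn't shown here:

import AppKit

final class MainSplitViewController: NSSplitViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let sidebarVC = SidebarViewController()   // e.g. an NSOutlineView listing the items
        let contentVC = NSViewController()        // main content placeholder
        let detailVC = NSViewController()         // detail placeholder

        // sidebarWithViewController: gives the translucent, collapsible sidebar style.
        let sidebarItem = NSSplitViewItem(sidebarWithViewController: sidebarVC)
        let contentItem = NSSplitViewItem(contentListWithViewController: contentVC)
        let detailItem = NSSplitViewItem(viewController: detailVC)

        addSplitViewItem(sidebarItem)
        addSplitViewItem(contentItem)
        addSplitViewItem(detailItem)
    }
}

final class SidebarViewController: NSViewController {
    override func loadView() {
        view = NSView()
    }
}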
Saw this info: https://developer.apple.com/documentation/contacts/cncontactstore
But I have no idea what I'm doing. This is a pressing matter: I need to determine the date/time my contacts were originally created on my iCloud account. I have tried the Shortcuts method, but for contacts created a while ago it merely shows the date they were loaded onto whichever device I'm logged in on.
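For context, this is the kind of basic CNContactStore usage I understand from the docs (a sketch only; as far as I can tell, none of the fetchable keys expose a creation date, which is exactly the part I'm missing):

import Contacts

func dumpContacts() {
    let store = CNContactStore()
    store.requestAccess(for: .contacts) { granted, error in
        guard granted else {
            print("Access denied: \(String(describing: error))")
            return
        }
        let keys = [CNContactGivenNameKey, CNContactFamilyNameKey] as [CNKeyDescriptor]
        let request = CNContactFetchRequest(keysToFetch: keys)
        do {
            try store.enumerateContacts(with: request) { contact, _ in
                print(contact.givenName, contact.familyName)
            }
        } catch {
            print("Fetch failed: \(error)")
        }
    }
}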
I am trying to create a list of non-rectangular elements, each of which has a context menu. However, I am encountering an issue with the corners when performing a long press.
What is the correct way to use such a combination? I don't want to use List because of its default styling. The issue occurs only while the animation is in progress.
Here's a simplified code example that can be copied, pasted, and run as a single file. The video was recorded on a device running iOS 18.2.
import SwiftUI

@main
struct MyApp: App {
    var body: some Scene { WindowGroup { TestView() } }
}

struct TestView: View {
    let items = ["Item 1", "Item 2", "Item 3"]

    var body: some View {
        VStack {
            ForEach(items, id: \.self) { item in
                HStack {
                    Text(item)
                    Spacer()
                    Image(systemName: "star")
                }
                .padding()
                .background(.yellow)
                // tried all these in different combinations, none works
                .contentShape(RoundedRectangle(cornerRadius: 10))
                .clipShape(RoundedRectangle(cornerRadius: 10))
                .containerShape(RoundedRectangle(cornerRadius: 10))
                .contextMenu {
                    Button {
                        print("Edit \(item)")
                    } label: {
                        Text("Edit")
                        Image(systemName: "pencil")
                    }
                }
            }
        }
        .padding()
    }
}

#Preview {
    TestView()
}
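For completeness, here is a variation I haven't tried yet that scopes the rounded shape specifically to the context-menu preview via ContentShapeKinds.contextMenuPreview (sketch only, pulled out as a standalone row view):

import SwiftUI

struct RowVariant: View {
    let item: String

    var body: some View {
        HStack {
            Text(item)
            Spacer()
            Image(systemName: "star")
        }
        .padding()
        .background(.yellow, in: RoundedRectangle(cornerRadius: 10))
        // Apply the rounded shape only to the long-press preview.
        .contentShape(.contextMenuPreview, RoundedRectangle(cornerRadius: 10))
        .contextMenu {
            Button {
                print("Edit \(item)")
            } label: {
                Label("Edit", systemImage: "pencil")
            }
        }
    }
}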
This is easy to reproduce in dark mode: two UIViewControllers, A and B, where A presents B. Code:
class AAA: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        navigationItem.title = "AAA"
        view.backgroundColor = .systemBackground
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        present(UINavigationController(rootViewController: BBB()), animated: true)
    }
}

class BBB: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        navigationItem.title = "BBB"
        view.backgroundColor = .systemBackground
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        dismiss(animated: true)
    }
}
Screenshots compare the view before and after the present:
Obviously, the backgroundColor of the view has changed.
I guess this is because the view's backgroundColor is the same as the window's, so the system changes the color to distinguish the presented controller from the background, but this brings unexpected changes, which is confusing. I want to know how this happens and how I can control it manually.
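To illustrate what I mean by controlling it manually, here is a sketch of one idea, under my assumption that the presented context resolves dynamic colors at the elevated user-interface level in dark mode (untested):

import UIKit

// Variant of BBB: resolve systemBackground against the base level so the
// presented controller keeps the same background as the presenting one.
// Note that the resolved color is static and won't adapt to later changes.
class BBBBaseLevel: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        navigationItem.title = "BBB"

        let baseTraits = UITraitCollection(traitsFrom: [
            traitCollection,
            UITraitCollection(userInterfaceLevel: .base)
        ])
        view.backgroundColor = UIColor.systemBackground.resolvedColor(with: baseTraits)
    }
}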
I'm currently working on a project in Swift where I need to digitally sign a PDF file. I have the following resources available:
Private Key stored in the iOS Keychain with a tag.
Public Key also stored in the iOS Keychain with a tag.
A valid certificate stored as a PEM string.
I need to digitally sign a PDF file with the above keys and certificate, but I'm struggling to find a clear and straightforward example or guidance on how to achieve this in Swift.
Specifically, I’m looking for help with:
Creating the digital signature using the private key and certificate.
Embedding this signature into the PDF file.
Any considerations I should be aware of regarding the format of the signed PDF (e.g., CMS, PKCS#7, etc.).
If anyone has experience with digitally signing PDFs in Swift, I would greatly appreciate your guidance or code examples.
Thank you in advance!
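For reference, the only piece I have a rough idea about is producing the raw signature with the Keychain key. This sketch assumes an RSA key stored under a tag; it does not cover building the CMS/PKCS#7 structure or embedding it into the PDF:

import Foundation
import Security

// Sign a digest/message with a private key looked up by its Keychain tag.
func signData(_ data: Data, privateKeyTag: String) throws -> Data {
    let query: [String: Any] = [
        kSecClass as String: kSecClassKey,
        kSecAttrApplicationTag as String: privateKeyTag.data(using: .utf8)!,
        kSecAttrKeyType as String: kSecAttrKeyTypeRSA,
        kSecReturnRef as String: true
    ]

    var item: CFTypeRef?
    guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess else {
        throw NSError(domain: "Signing", code: -1,
                      userInfo: [NSLocalizedDescriptionKey: "Private key not found"])
    }
    let privateKey = item as! SecKey

    var error: Unmanaged<CFError>?
    guard let signature = SecKeyCreateSignature(privateKey,
                                                .rsaSignatureMessagePKCS1v15SHA256,
                                                data as CFData,
                                                &error) as Data? else {
        throw error!.takeRetainedValue() as Error
    }
    return signature
}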
Can someone from Apple officially confirm either a delay or a cancellation of this feature? There are 9 days left before 2024 ends, and Apple said "later this year" in June 2024. Apple even mentioned Swift Assist in the MacBook Pro announcement on October 30, 2024, as if it already existed.
Why is Apple not releasing Swift Assist?
I found that when I put a web view on the screen and then remove it, several properties of the table view, including firstResponderView, FirstResponderIndexPath, and FirstResponderViewType, change. These properties are hidden and I cannot modify them. firstResponderView holds a strong reference to my cell, so the cell cannot call didEndDisplayCell as expected when it slides off the screen. What can I do to prevent firstResponderView from strongly holding my cell, or how can I get it released?
I want to understand the utility of AsyncStream now that iOS 17 has introduced the @Observable macro, which lets us directly observe changes to any property of the model (and observation tracking can happen even outside a SwiftUI view). So if I am observing a continuous stream of values, such as the download progress of a file, using AsyncStream in a SwiftUI view, the same values can be observed in that view using onChange(of:initial:) on the download progress (stored as a property in the model object). I am looking for the benefits, drawbacks, and limitations of both approaches.
Specifically, my question concerns Apple's AVCam sample code, where they observe a few pieces of state as follows. This is done in the CameraModel class, which is attached to the SwiftUI view.
// MARK: - Internal state observations

// Set up camera's state observations.
private func observeState() {
    Task {
        // Await new thumbnails that the media library generates when saving a file.
        for await thumbnail in mediaLibrary.thumbnails.compactMap({ $0 }) {
            self.thumbnail = thumbnail
        }
    }

    Task {
        // Await new capture activity values from the capture service.
        for await activity in await captureService.$captureActivity.values {
            if activity.willCapture {
                // Flash the screen to indicate capture is starting.
                flashScreen()
            } else {
                // Forward the activity to the UI.
                captureActivity = activity
            }
        }
    }

    Task {
        // Await updates to the capabilities that the capture service advertises.
        for await capabilities in await captureService.$captureCapabilities.values {
            isHDRVideoSupported = capabilities.isHDRSupported
            cameraState.isVideoHDRSupported = capabilities.isHDRSupported
        }
    }

    Task {
        // Await updates to a person's interaction with the Camera Control HUD.
        for await isShowingFullscreenControls in await captureService.$isShowingFullscreenControls.values {
            withAnimation {
                // Prefer showing a minimized UI when capture controls enter a fullscreen appearance.
                prefersMinimizedUI = isShowingFullscreenControls
            }
        }
    }
}
Looking at the CaptureCapabilities structure below, it is a small struct with two Bool members. These changes could have been observed directly by a SwiftUI view (see the sketch after the struct). I wonder if there is a specific advantage or reason to use AsyncStream here and continuously iterate over the changes in a for loop.
/// A structure that represents the capture capabilities of `CaptureService` in
/// its current configuration.
struct CaptureCapabilities {
    let isLivePhotoCaptureSupported: Bool
    let isHDRSupported: Bool

    init(isLivePhotoCaptureSupported: Bool = false,
         isHDRSupported: Bool = false) {
        self.isLivePhotoCaptureSupported = isLivePhotoCaptureSupported
        self.isHDRSupported = isHDRSupported
    }

    static let unknown = CaptureCapabilities()
}
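For comparison, this is a sketch of the @Observable/onChange(of:) alternative I have in mind (illustrative names, not taken from AVCam):

import SwiftUI
import Observation

// Expose the capability as a plain property on an @Observable model and let
// the view react with onChange(of:) instead of iterating an AsyncStream.
@Observable
final class DemoCameraModel {
    var isHDRVideoSupported = false
}

struct CameraOptionsView: View {
    let model: DemoCameraModel

    var body: some View {
        Toggle("HDR Video", isOn: .constant(model.isHDRVideoSupported))
            .disabled(!model.isHDRVideoSupported)
            .onChange(of: model.isHDRVideoSupported) { _, newValue in
                print("HDR support changed: \(newValue)")
            }
    }
}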
iOS 18.2 / iPhone 16 Pro / Xcode 16.2
'traitCollectionDidChange'
This method has been deprecated since iOS 17.
However, on iOS 18, when I move the app to the background and then back to the foreground, I can confirm the method is called.
It is not called on iOS 17, so why is it called only on iOS 18?
iOS 18.2 / iPhone 16 Pro / Xcode 16.2
'traitCollectionDidChange'
This method has been deprecated in iOS 17.
However, when I debugged it, I confirmed that it is not called on iOS 17 but is called on iOS 18.2.
What is the reason?
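For context, this is what I understand to be the intended iOS 17+ replacement: registering for specific trait changes instead of overriding the deprecated method (untested sketch):

import UIKit

final class TraitAwareViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Called whenever the user interface style trait changes.
        registerForTraitChanges([UITraitUserInterfaceStyle.self]) {
            (self: Self, previousTraitCollection: UITraitCollection) in
            print("Interface style changed to \(self.traitCollection.userInterfaceStyle.rawValue)")
        }
    }
}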
I’m trying to use the Vision framework in a Swift Playground to perform face detection on an image. The following code works perfectly when I run it in a regular Xcode project, but in an App Playground, I get the error:
Thread 12: EXC_BREAKPOINT (code=1, subcode=0x10321c2a8)
Here's the code:
import SwiftUI
import Vision

struct ContentView: View {
    var body: some View {
        VStack {
            Text("Face Detection")
                .font(.largeTitle)
                .padding()

            Image("me")
                .resizable()
                .aspectRatio(contentMode: .fit)
                .onAppear {
                    detectFace()
                }
        }
    }

    func detectFace() {
        guard let cgImage = UIImage(named: "me")?.cgImage else { return }

        let request = VNDetectFaceRectanglesRequest { request, error in
            if let results = request.results as? [VNFaceObservation] {
                print("Detected \(results.count) face(s).")
                for face in results {
                    print("Bounding Box: \(face.boundingBox)")
                }
            } else {
                print("No faces detected.")
            }
        }

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request]) // This line causes the error.
        } catch {
            print("Failed to perform Vision request: \(error)")
        }
    }
}
The error occurs on this line:
try handler.perform([request])
Details:
This code runs fine in a normal Xcode project (.xcodeproj).
I'm using an App Playground instead (.swiftpm).
The image is included in the .xcassets folder.
Is there any way I can mitigate this issue? Please do not recommend switching to .xcodeproj, as I am making a submission for Apple's Swift Student Challenge, and they require that I use .swiftpm.
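One variation I'm planning to try next is running the request off the main thread, though I have no idea yet whether it changes anything in a .swiftpm App Playground (sketch only):

import UIKit
import Vision

func detectFaceOffMain() {
    guard let cgImage = UIImage(named: "me")?.cgImage else { return }
    DispatchQueue.global(qos: .userInitiated).async {
        let request = VNDetectFaceRectanglesRequest { request, _ in
            let faces = request.results as? [VNFaceObservation] ?? []
            print("Detected \(faces.count) face(s).")
        }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request])
        } catch {
            print("Failed to perform Vision request: \(error)")
        }
    }
}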
Here is a relatively simple code fragment:
let attributedQuote: [NSAttributedString.Key: Any] = [.font: FieldFont!, .foregroundColor: NSColor.red]
let strQuote = NSAttributedString(string: "Hello World", attributes: attributedQuote)
strQuote.draw(in: Rect1)
It compiles without an issue, but when I execute it, I get:
"*** -colorSpaceName not valid for the NSColor <NSColor: 0x6000005adfd0>; need to first convert colorspace."
I have tried everything I can think of. What's going on?
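One variation I keep meaning to try (I'm not sure it addresses the underlying issue) converts the color to an explicit color space first; it also assumes the drawing happens inside an active graphics context, such as a view's draw(_:):

import AppKit

func drawQuote(in rect: NSRect, using font: NSFont) {
    // Convert the color to sRGB before building the attributes.
    let solidRed = NSColor.red.usingColorSpace(.sRGB) ?? NSColor.red
    let attributes: [NSAttributedString.Key: Any] = [
        .font: font,
        .foregroundColor: solidRed
    ]
    NSAttributedString(string: "Hello World", attributes: attributes).draw(in: rect)
}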
In the Xcode 16.2 release notes, it says to avoid a memory leak in Swift 6 you should "Pass -checked-async-objc-bridging=off to the Swift compiler using “Other Swift Flags” in Xcode build settings." https://developer.apple.com/documentation/xcode-release-notes/xcode-16_2-release-notes#Swift
However, when I add this value to OTHER_SWIFT_FLAGS (either in the Xcode build settings interface, or in an .xcconfig file), it yields a build error:
error: Driver threw unknown argument: '-checked-async-objc-bridging' without emitting errors.
Does anybody know if there's a trick to get this working that isn't explained in the release notes?
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how one can build just a simple macOS app that will import a scene from a .reality file?
The documentation suggests that the simple act of bringing a .reality file in (what about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for macOS.
I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the visionOS default immersive project.
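For reference, this is the kind of minimal thing I'm after, sketched with RealityKit's generic loading API instead of the generated code (the file name "Scene.reality" is a placeholder, and I haven't verified this builds for macOS):

import SwiftUI
import RealityKit

// Wrap RealityKit's ARView (non-AR on macOS) and load a bundled .reality file.
struct RealityFileView: NSViewRepresentable {
    func makeNSView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // "Scene" is a placeholder; the generated loading methods the docs
        // mention never appeared in my project, so this uses the generic loader.
        if let url = Bundle.main.url(forResource: "Scene", withExtension: "reality"),
           let anchor = try? Entity.loadAnchor(contentsOf: url) {
            arView.scene.addAnchor(anchor)
        }
        return arView
    }

    func updateNSView(_ nsView: ARView, context: Context) {}
}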
Hello Everyone,
I'm currently working on a cross-platform application that uses IP-based multicast for device discovery across both Apple and non-Apple devices running the same app. All devices join a multicast group "X.X.X.X" on port Y.
For Apple devices, I am using NWConnectionGroup for multicast discovery, while for non-Apple devices, I am using BSD sockets.
The issue arises when I attempt to send a multicast message to the group using NWConnectionGroup. The message is sent from a separate ephemeral port rather than from the multicast port Y. As a result, all Apple processes that use NWConnectionGroup successfully receive the multicast message, but the processes running on non-Apple devices (using BSD sockets) do not receive it.
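For reference, here is a trimmed sketch of the Apple-side setup (the group address and port are placeholders; the requiredLocalEndpoint line is only an idea I'm experimenting with, and I haven't confirmed it changes the source port):

import Foundation
import Network

func startMulticast() throws -> NWConnectionGroup {
    // Placeholder group address "X.X.X.X" and port Y.
    let groupEndpoint = NWEndpoint.hostPort(host: "239.1.2.3", port: 5555)
    let descriptor = try NWMulticastGroup(for: [groupEndpoint])

    let parameters = NWParameters.udp
    // Untested idea: request a specific local port instead of an ephemeral one.
    parameters.requiredLocalEndpoint = NWEndpoint.hostPort(host: "0.0.0.0", port: 5555)

    let group = NWConnectionGroup(with: descriptor, using: parameters)
    group.setReceiveHandler(maximumMessageSize: 65_536, rejectOversizedMessages: true) { _, content, _ in
        print("Received \(content?.count ?? 0) bytes")
    }
    group.stateUpdateHandler = { state in
        print("Group state: \(state)")
    }
    group.start(queue: .main)

    // Send a test message to the whole group.
    group.send(content: "hello".data(using: .utf8)) { error in
        print("Send complete, error: \(String(describing: error))")
    }
    return group
}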
My Questions:
Is there a way to configure NWConnectionGroup to send multicast messages from the same multicast port Y rather than an ephemeral port?
Is there any known behavior or limitation in how NWConnectionGroup handles multicast that could explain why non-Apple devices using BSD sockets cannot receive the message?
How can I ensure cross-platform multicast compatibility between Apple devices using NWConnectionGroup and non-Apple devices using BSD sockets?
Any guidance or suggestions would be greatly appreciated!
Thanks,
Harshal