There was no mention of whether the Vision Pro can be used outside. Several other AR/VR headsets prohibit outdoor use (their sensors can be overloaded). Can the Vision Pro be used in sunlight?
Thanks!
WWDC23
Discuss the latest Apple technologies announced at WWDC23.
Posts under WWDC23 tag
55 Posts
Is it possible to use an iPhone running iOS 17 with Xcode 14.3.1?
I tried the old method of copying over the device-support files, but it didn't work. In Xcode 15's DeviceSupport folder there is no iOS 17 folder.
I liked the TipKit presentation -- nice and short and to the point, great introduction!
All the code snippets were in SwiftUI. Will TipKit be available for regular UIKit / AppKit apps as well, or is it restricted to only being used within SwiftUI apps?
thanks
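For context, the SwiftUI-facing API shown in the session looks roughly like the sketch below (FavoriteTip, PostsView, and all strings are made-up names, not from the session):

```swift
import SwiftUI
import TipKit

// A hypothetical tip describing a feature of the app.
struct FavoriteTip: Tip {
    var title: Text { Text("Save as Favorite") }
    var message: Text? { Text("Tap the heart to add a post to your favorites.") }
    var image: Image? { Image(systemName: "heart") }
}

struct PostsView: View {
    private let favoriteTip = FavoriteTip()

    var body: some View {
        List {
            // Shows the tip inline within the list until it's dismissed.
            TipView(favoriteTip)
            // ... rows ...
        }
    }
}
```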
Hi,
I am getting a linking error when building my app to run against an iOS 17 device using Xcode 15. The same project builds and runs fine with Xcode 14 and iOS 16. The linker error just says:
clang: error: unable to execute command: Segmentation fault: 11
clang: error: linker command failed due to signal (use -v to see invocation)
Not sure what I should try to overcome this. I can't run my app on an iOS17 device. It builds, links and runs just fine on a simulator.
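One workaround that has reportedly helped with Xcode 15 beta linker crashes (an assumption worth verifying against your Xcode version's release notes) is falling back to the classic linker via Other Linker Flags:

```
OTHER_LDFLAGS = $(inherited) -Wl,-ld_classic
```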
I'm trying to put together an app intent that allows a user to navigate to a specific part of my app.
I've built a basic intent, and set up an AppEnum with a case for each "screen" in my app a user should be allowed to navigate to (e.g. "All Posts", "Favourite Posts", etc.).
In addition, I'd like to include additional parameters based on the enum selected. For example, I'd like to include an enum case "Post" where a user can configure a specific post to navigate to.
This would mean I can have an enum of "All Posts", "Specific Post", "Favourite Posts" etc. which is cleaner than having a separate intent for "Open Specific Post"...
Is this possible? I can see ParameterSummaryBuilder, AppIntent.Switch etc. but there are no docs or examples using these.
Can you provide more information on whether this is possible, and show an example of Swift code to do this?
Thanks!
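For what it's worth, a minimal sketch of the shape being asked about might look like this, assuming the `Switch`/`Case`/`DefaultCase` parameter-summary builders behave as their names suggest (`Screen`, `OpenScreenIntent`, and `postID` are all made-up names, and a simple string ID stands in for a real `AppEntity`):

```swift
import AppIntents

// Hypothetical enum of navigable screens.
enum Screen: String, AppEnum {
    case allPosts, favouritePosts, specificPost

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Screen"
    static var caseDisplayRepresentations: [Screen: DisplayRepresentation] = [
        .allPosts: "All Posts",
        .favouritePosts: "Favourite Posts",
        .specificPost: "Specific Post"
    ]
}

struct OpenScreenIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Screen"
    static var openAppWhenRun = true

    @Parameter(title: "Screen")
    var screen: Screen

    @Parameter(title: "Post ID")
    var postID: String?

    // Switch the visible summary (and parameters) on the selected case,
    // so "Specific Post" exposes an extra parameter and the others don't.
    static var parameterSummary: some ParameterSummary {
        Switch(\.$screen) {
            Case(.specificPost) {
                Summary("Open \(\.$screen) \(\.$postID)")
            }
            DefaultCase {
                Summary("Open \(\.$screen)")
            }
        }
    }

    func perform() async throws -> some IntentResult {
        // Navigate within the app based on `screen` / `postID`.
        return .result()
    }
}
```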
I'd love to play around with DockKit, but I didn't see anything mentioned about hardware. I'm assuming Apple isn't releasing their own motorized dock and haven't seen anything about how to get hardware recognized by the accessory manager.
I'd like to prototype a dock myself using an ESP32 and some stepper motors. I've already got this working with Bluetooth communication from iOS via CoreBluetooth, but I don't know whether there are specific service and characteristic UUIDs the system looks for to recognize an accessory as DockKit-compatible.
Would really love to start playing with this, anyone got any insights on how to get up and running?
Is there a limit to the size of the object you want to capture with Object Capture? E.g., could it capture a horse or another similarly sized animal?
In the video "The SwiftUI cookbook for focus", a key detail is left out.
https://developer.apple.com/videos/play/wwdc2023/10162/?time=1130
No code is provided for selectRecipe, which leaves out a vital detail: how to handle up and down keyboard presses.
If a LazyVGrid has 4 items per row with the current shape of the window and the user presses the down key, how is the application supposed to know which item is directly underneath the currently focused one? Or if they press up and they need to know which is directly above? What happens when the user resizes the window and the number of items per row changes?
This would seem to require knowing the exact current layout of the window to return the correct recipe ID. The code provided isn't wrapped in a complex GeometryReader so I assume there's some magic I am missing here.
I am trying to create a similar LazyVGrid that can be navigated with the keyboard as with the recipes grid here but have no means of implementing .onMoveCommand in such a way that makes sense.
At the moment, SwiftUI seems to be intentionally built in such a way to defy all attempts to implement keyboard navigation.
I'm trying to create a Swift macro for an initializer, but I can't get the members of the inherited class.
Rapidly tapping on a Button in an interactive widget bypasses the button's AppIntent action, and launches the host app instead.
I've filed a radar for this, but is there any known workaround for this behaviour?
Doesn't seem to happen when using Apple's first party app widgets.
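For reference, a minimal setup of the kind being described might look like this (RefreshIntent and CounterWidgetView are hypothetical names, not from the post):

```swift
import SwiftUI
import AppIntents

// A hypothetical intent that the widget button should run in place,
// without launching the host app.
struct RefreshIntent: AppIntent {
    static var title: LocalizedStringResource = "Refresh"

    func perform() async throws -> some IntentResult {
        // Update shared state here.
        return .result()
    }
}

struct CounterWidgetView: View {
    var body: some View {
        // Button(intent:) is what makes a widget button interactive;
        // the reported bug is that rapid taps fall through to launching the app.
        Button(intent: RefreshIntent()) {
            Label("Refresh", systemImage: "arrow.clockwise")
        }
    }
}
```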
Adding an inspector and toolbar to Xcode's app template, I have:
struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .toolbar {
            Text("test")
        }
        .inspector(isPresented: .constant(true)) {
            Text("this is a test")
        }
    }
}
In the preview canvas, this renders as I would expect:
However, when running the app:
Am I missing something?
(The relevant WWDC video is wwdc2023-10161; I couldn't add that as a tag.)
I attempted to utilize the Background Assets feature for an iOS app. While debugging, I employed the following command to trigger the installation event:
xcrun backgroundassets-debug -b <bundleID> -s --app-install -d <Device ID>
This command worked flawlessly on an iPhone.
However, when I attempted to trigger the installation event on a Mac, I encountered the following error message:
The requested device to send simulation events to is not available.
Verify that the device is connected to this Mac.
Please note that the xcrun backgroundassets-debug -l command only displays a list of connected devices; the Mac is not included in that list.
Related to this StackOverflow post (not mine).
In my chat view:
ScrollView(showsIndicators: false) {
    messagesView
}
.safeAreaInset(edge: .bottom) { composerView }
.scrollDismissesKeyboard(.interactively)
With interactive keyboard dismissal, the safe area size doesn't change interactively, causing a weird UI glitch like the one you can see in the post above.
The keyboard and composer play nicely when I use the composer as a toolbar item, but I want my composer to always be visible (obviously). I've tried playing with FocusState to change the composer's parent:
.toolbar {
    ToolbarItem(placement: isFocused ? .keyboard : .bottomBar) {
        bottomView
    }
}
Not only does it redraw the view each time, it also makes the view lose its focus state, effectively dismissing the keyboard. Plus, it feels like kind of a hack.
What is the right way to make the composer move with the keyboard interactively and stay on screen while the keyboard is gone, like in iMessages?
I'm experiencing an issue trying to install the 'mirroringworkoutssample' app from the official Apple documentation on my Apple Watch. When attempting a direct installation from the Apple Watch, I receive an error stating, "Cannot install this app due to an inability to verify its integrity."
Has anyone else encountered this problem or can provide any solutions or insights?
** I have a 'Development' type certificate that allows for watchOS development (it includes iOS, tvOS, etc.).
** I also added the WKCompanionAppBundleIdentifier key with the value com.example.apple-samplecode.MirroringWorkoutsSample7C76V3X7AB.watchkitapp.
Related to this post.
In my chat view, each time I load a new page (items are added at the top), the ScrollView jumps to the top instead of maintaining the scroll position.
Here is my scroll view:
GeometryReader { geometryProxy in
    ScrollView(showsIndicators: false) {
        VStack(spacing: 0) {
            if viewModel.isLoading {
                LoadingFooter()
            }
            messagesView
                .frame(
                    minHeight: geometryProxy.size.height - loadingFooterHeight - bottomContentMargins,
                    alignment: .bottom
                )
        }
    }
    .scrollDismissesKeyboard(.interactively)
    .defaultScrollAnchor(.bottom)
    .scrollPosition(id: $scrolledId, anchor: .top)
    .contentMargins(.bottom, bottomContentMargins, for: .scrollContent)
    .onChange(of: scrolledId, scrollViewDidScroll)
}
And this is the messages view
@ViewBuilder var messagesView: some View {
    LazyVStack(spacing: 0) {
        ForEach(sectionedMessages) { section in
            Section(header: sectionHeaderView(title: section.id)) {
                ForEach(section, id: \.id) { message in
                    MessageView(message: message)
                        .padding(.horizontal, .padding16)
                        .padding(.bottom, .padding8)
                        .id(message.id)
                }
            }
        }
        .scrollTargetLayout()
    }
}
Printing the scrolledId after a page load, I can see it hasn't changed, but the ScrollView position changes.
The WWDC23 Platforms State of the Union mentioned that using the volume buttons to trigger the camera shutter is coming later this year. This was mentioned at 0:30:15.
Would anyone know when this will be available?
I'd like to implement a fully immersive space that's experienced by multiple Vision Pro users simultaneously via SharePlay. To do this, the multiple Vision Pro users will join a SharePlay-enabled visionOS window that has a button to enter a fully immersive space, which is also SharePlay-enabled. I tried following the WWDC sessions and docs, but they don't provide enough detail about integrating SharePlay into an existing window and immersive space. How can I adjust my SharePlay code so that my visionOS window + fully immersive space become SharePlay-able? Please see the existing code below for a SharePlay visionOS window, thank you.
P.S. WWDC ref. https://developer.apple.com/videos/play/wwdc2023/10087
import SwiftUI
import RealityKit
import RealityKitContent
import GroupActivities
import LinkPresentation
struct SharePlayWorld: View, GroupActivity {
    @Environment(ViewModel.self) private var model
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        @Bindable var model = model
        Toggle(
            model.isShowingPracticeSpace ? "Leave Space" : "Enter Space",
            isOn: $model.isShowingPracticeSpace
        )
        .onChange(of: model.isShowingPracticeSpace) { _, isShowing in
            Task {
                if isShowing {
                    await openImmersiveSpace(id: "SharePlayWorld")
                } else {
                    await dismissImmersiveSpace()
                }
            }
        }
        .toggleStyle(.button)
    }

    // SHAREPLAY CODE
    private func startSharePlaySession() async {
        for await session in SharePlayWorld.sessions() {
            guard let systemCoordinator = await session.systemCoordinator else { continue }

            let isLocalParticipantSpatial = systemCoordinator.localParticipantState.isSpatial

            Task.detached {
                for await localParticipantState in systemCoordinator.localParticipantStates {
                    if localParticipantState.isSpatial {
                        // Start syncing scroll position
                    } else {
                        // Stop syncing scroll position
                    }
                }
            }

            var configuration = SystemCoordinator.Configuration()
            configuration.spatialTemplatePreference = .sideBySide
            systemCoordinator.configuration = configuration

            session.join()
        }

        // Create the activity
        let activity = SharePlayWorld()

        // Register the activity on the item provider
        let itemProvider = NSItemProvider()
        itemProvider.registerGroupActivity(activity)

        // Create the activity items configuration
        let configuration = await UIActivityItemsConfiguration(itemProviders: [itemProvider])

        // Provide the metadata for the group activity
        configuration.metadataProvider = { key in
            guard key == .linkPresentationMetadata else { return nil }
            let metadata = LPLinkMetadata()
            metadata.title = "Explore Together"
            metadata.imageProvider = NSItemProvider(object: UIImage(named: "explore-activity")!)
            return metadata
        }

        self.activityItemsConfiguration = configuration
    }
}

#Preview {
    SharePlayWorld()
        .environment(ViewModel())
}