Posts

Post not yet marked as solved
0 Replies
248 Views
Above Xcode's navigator area and utility area there are buttons for switching between the content views below. To get the same appearance in my macOS app I tried the standard SwiftUI TabView and also a Picker with the segmented style, but both look different. Plain buttons in an HStack above my content area solve the problem, but that feels more like a workaround. Is there a simpler way to implement an Xcode-style tab view that I have overlooked?
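A reusable version of that button-row workaround can be sketched as below. `Pane` and its SF Symbol names are my own invention, not any Apple API; the pure model part sits outside the SwiftUI guard so the sketch compiles anywhere:

```swift
import Foundation
#if canImport(SwiftUI)
import SwiftUI
#endif

// Hypothetical panes, loosely mirroring Xcode's navigator switcher.
// The raw value is the SF Symbol name used for the button icon.
enum Pane: String, CaseIterable {
    case files = "doc"
    case search = "magnifyingglass"
    case issues = "exclamationmark.triangle"
}

#if canImport(SwiftUI)
struct PaneSwitcher: View {
    @Binding var selection: Pane

    var body: some View {
        VStack(spacing: 0) {
            HStack {
                ForEach(Pane.allCases, id: \.self) { pane in
                    Button { selection = pane } label: {
                        Image(systemName: pane.rawValue)
                            .foregroundColor(selection == pane ? .accentColor : .secondary)
                    }
                    .buttonStyle(PlainButtonStyle())
                }
            }
            .padding(4)
            Divider()
            // Content for the selected pane goes below the divider.
        }
    }
}
#endif
```

This is only a sketch of the workaround the question already describes, not a built-in Xcode-style control.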
Post not yet marked as solved
1 Reply
1.6k Views
My goal is to hide the SwiftUI slider's track on macOS by using a clear color, because I set a color gradient as the slider's background view. But whatever color I use (.clear, .red, ...) with the accentColor or foregroundColor modifier, the track does not change and cannot be hidden; it still appears on top of the slider's background view.
Post marked as solved
3 Replies
1.5k Views
Please excuse me if this question is really trivial. I implemented a content view with two buttons on it: one button should sit 8 pt below the top edge, the other 8 pt above the bottom edge. Maybe I am doing something wrong, but I would use a simple ZStack here. The only problem is that a ZStack takes a single alignment, so I can align all views from the top edge or from the bottom edge, but not one from the top and one from the bottom. Is there a way to solve this layout problem with these three views and a ZStack?
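One way to get per-child alignment inside a single ZStack is to give each button its own full-size frame with its own alignment. A minimal sketch; the view and constant names are mine, not from the question:

```swift
import Foundation
#if canImport(SwiftUI)
import SwiftUI
#endif

// The inset used for both buttons (8 points, per the question).
let edgeInset: Double = 8

#if canImport(SwiftUI)
struct TwoButtonOverlay: View {
    var body: some View {
        ZStack {
            // The content view fills the available space.
            Color.clear
            // Each child gets its own full-size frame with its own
            // alignment, instead of relying on the ZStack's single alignment.
            Button("Top") {}
                .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .top)
                .padding(.top, edgeInset)
            Button("Bottom") {}
                .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .bottom)
                .padding(.bottom, edgeInset)
        }
    }
}
#endif
```

Because each `.frame(maxWidth:maxHeight:alignment:)` expands to fill the ZStack, the button aligns against that frame rather than against the ZStack's shared alignment.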
Post not yet marked as solved
4 Replies
2k Views
I have a simple class conforming to the Codable protocol. Here is an example, the same as in the developer documentation:

```swift
class Landmark: Codable {
    var name: String
    var foundingYear: Int
}
```

Because the two properties inside the class body are basic Swift types, the compiler has nothing to complain about. But if I want to edit these properties using SwiftUI, I must also make the class conform to the ObservableObject protocol:

```swift
class Landmark: Codable, ObservableObject {
    @Published var name: String
    @Published var foundingYear: Int
}
```

Now Xcode reports an error that the type Landmark no longer conforms to the Codable protocol. This is something I cannot understand. The property wrapper @Published should not change the types of the properties, String and Int, so imho the class should still get Codable conformance without additional lines of code.
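Behind the scenes `@Published` replaces the stored `String`/`Int` with `Published<String>`/`Published<Int>` wrappers, and `Published` itself is not Codable, which is why synthesis fails. Writing the conformance by hand restores it; a sketch under that assumption, guarded so it also compiles where Combine is unavailable:

```swift
import Foundation
#if canImport(Combine)
import Combine
#endif

class Landmark: Codable {
#if canImport(Combine)
    @Published var name: String
    @Published var foundingYear: Int
#else
    var name: String
    var foundingYear: Int
#endif

    private enum CodingKeys: String, CodingKey {
        case name, foundingYear
    }

    init(name: String, foundingYear: Int) {
        self.name = name
        self.foundingYear = foundingYear
    }

    // Synthesized Codable is lost once @Published wraps the stored
    // properties, so decode/encode the wrapped values by hand.
    required init(from decoder: Decoder) throws {
        let values = try decoder.container(keyedBy: CodingKeys.self)
        name = try values.decode(String.self, forKey: .name)
        foundingYear = try values.decode(Int.self, forKey: .foundingYear)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(name, forKey: .name)
        try container.encode(foundingYear, forKey: .foundingYear)
    }
}

#if canImport(Combine)
extension Landmark: ObservableObject {}
#endif
```

A round trip through JSONEncoder/JSONDecoder then works as before.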
Post not yet marked as solved
1 Reply
744 Views
Inside the app delegate I use a simple SwiftUI view as the root of my macOS main window:

```swift
func applicationDidFinishLaunching(_ aNotification: Notification) {
    // Create the SwiftUI view that provides the window contents.
    let contentView = ContentView().touchBar(myTouchbar)

    // Create the window and set the content view.
    window = NSWindow(
        contentRect: NSRect(x: 0, y: 0, width: 480, height: 300),
        styleMask: [.titled, .closable, .miniaturizable, .resizable, .fullSizeContentView],
        backing: .buffered, defer: false)
    window.center()
    window.setFrameAutosaveName("Main Window")
    window.contentView = NSHostingView(rootView: contentView)
    window.makeKeyAndOrderFront(nil)
}
```

This contentView has a touch bar assigned to it. If contentView is a simple view that can hold focus (like a TextField), the touch bar becomes visible on my MacBook. But in my application the contentView (the root view of the NSWindow) is a layout container such as an HSplitView. In that case the window appears but the touch bar is not visible. How can I use a SwiftUI touch bar with a window that has no focusable or input elements, so that the touch bar is always visible together with the window?
Post not yet marked as solved
14 Replies
6.4k Views
Can someone explain how to use the CarPlay simulator in Xcode 11 GM? Maybe I am doing something wrong. In the Simulator I chose Hardware > External Displays, and a small rectangular window appears, but the screen stays black. My goal is to start the iOS app in the simulator and see the CarPlay screen in this window side by side. On the internet I saw a video where this simulator scenario was working, but I only get the black screen and an iPhone simulator with no CarPlay option in the settings. Some people write that you must first enter some commands in Terminal before you can use the CarPlay simulator. I am not sure whether this is still necessary in Xcode 11, but here is what I used:

```shell
defaults write com.apple.iphonesimulator CarPlay -bool YES
defaults write com.apple.iphonesimulator CarPlayExtraOptions -bool YES
defaults write com.apple.iphonesimulator CarPlayProtocols -array-add com.brand.carplay.feature
```

Must I first request the MFi profile from Apple even when testing only in the simulator? Or does the CarPlay simulator simply not work in Xcode 11? I saw another thread in the developer forum where someone said he also gets the black screen.
Post not yet marked as solved
3 Replies
1k Views
Does someone know how the GPU channels work exactly in Metal? I implemented a blit command encoder in two different ways, and Metal System Trace showed me that one blit command was scheduled on the GPU's blit channel while the other ran on the gfx channel.

- First blit copy: shared MTLBuffer -> private MTLBuffer
- Second blit copy: CVMetalTexture -> private MTLTexture

Both blit commands were committed to the queue in separate command buffers and on separate threads. A blit encoder running generateMipmaps() on the private MTLTexture from above also runs on the gfx channel and not on the blit channel. Copying parts of one private texture to another region of a destination private texture also runs on the gfx channel. So only the blit copy from one buffer to another buffer seems to run on the blit channel, or is this wrong?
Post not yet marked as solved
4 Replies
1.9k Views
I created two threads in my macOS app: a main thread that dispatches work every 40 ms to a worker thread. This worker thread causes a memory leak. After some debugging it seems that the MTLCommandBuffer is the reason for the leak:

```swift
if let commandBuffer = commandQueue.makeCommandBuffer() {
    // some code here
    commandBuffer.commit()
}
```

I commented out all the code in the worker thread and there was no memory leak anymore. But when I add the lines above, creating only an empty command buffer and committing it to the queue, the CPU memory increases over time. The app runs on macOS and is compiled with Xcode 10.3 (and Xcode 11 beta, with the same effect). Instruments cannot find any leaks, and persistent allocations stay constant over a long time. Only the Xcode debugger shows this steady memory increase, and only if I create a command buffer (with commands encoded, or also when it is empty).

Edit: I created a new project in Xcode from the game template and selected Metal. Same effect: the debugger's memory gauge shows the same increase over time (nearly 0.1 MB each second).
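One thing worth ruling out (a common cause of exactly this pattern, though I cannot confirm it is the cause here): command buffers are autoreleased objects, and a long-lived worker thread that never drains an autorelease pool accumulates them until the thread exits. A sketch of wrapping each 40 ms iteration in `autoreleasepool`; the Metal calls are commented out and hypothetical, and the counter exists only so the sketch does something observable:

```swift
import Foundation
#if canImport(Metal)
import Metal
#endif

// Number of simulated 40 ms ticks handled in this sketch.
var processedFrames = 0

// The per-frame work; with Metal this is where the command buffer
// would be created, encoded, and committed.
func encodeFrame() {
    #if canImport(Metal)
    // Hypothetical queue created once at startup:
    // if let commandBuffer = commandQueue.makeCommandBuffer() {
    //     // encode work here
    //     commandBuffer.commit()
    // }
    #endif
    processedFrames += 1
}

func renderTick() {
    #if canImport(ObjectiveC)
    // Drain autoreleased objects (command buffers among them) every
    // iteration, so per-frame allocations do not pile up on a
    // long-lived worker thread.
    autoreleasepool { encodeFrame() }
    #else
    encodeFrame()
    #endif
}

// Simulate three frame ticks.
for _ in 0..<3 { renderTick() }
```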
Post not yet marked as solved
3 Replies
1.9k Views
Hi guys, I am working on a video application that needs realtime performance. First I used GCD, but then I saw a WWDC video where the priority decay mechanism was explained. My application is quite simple: I have an external API call that waits for the next incoming video frame. So instead of using GCD I implemented a pthread that opts out of the priority decay mechanism and sets the priority very high, as shown in the WWDC video. This dispatcher thread only waits for the next frame sync and starts several worker threads using condition flags. All the worker threads also opt out of priority decay with the same high priority as the dispatcher. They do not do much; some encode Metal commands, others read some UDP data or fetch the new video frame from the hardware board. When I used Instruments I saw that all worker threads were scheduled across all available cores, blocking them all for 10 ms. The UI gets unresponsive as well. Here is a short version of the code. Maybe I understand something wrong about pthreads, so please excuse me if this is a silly mistake in my code. GCD would be much nicer to use, but from the WWDC video it seems there is no way to opt out of the priority decay problem with GCD.

```c
void *dispatcherThread(void *params) {
    while (!forcedExit) {
        waitForNextFrame();
        pthread_mutex_lock(&mutex);
        needsCaptureFrame = true;
        needsProcessFrame = true;
        needsPlayoutFrame = true;
        pthread_cond_broadcast(&condition);
        pthread_mutex_unlock(&mutex);
    }
    pthread_exit(NULL);
}

void *workerThreadProcessFrame(void *params) {
    while (!forcedExit) {
        pthread_mutex_lock(&mutex);
        while (!needsProcessFrame && !forcedExit)
            pthread_cond_wait(&condition, &mutex);
        needsProcessFrame = false;
        pthread_mutex_unlock(&mutex);
        if (!forcedExit) {
            processFrame();
        }
    }
    pthread_exit(NULL);
}
```

The C function processFrame itself is bound to a Swift function. This works pretty well. The only problem is that every 40 ms all worker threads block all cores of the Mac for 10 ms, even when their Swift functions return within a few microseconds. Here is also the code snippet showing how the pthreads are created:

```c
void startThread(void * _Nullable (* _Nonnull start_routine)(void * _Nullable)) {
    pthread_t thread;
    pthread_attr_t attr;
    int returnVal;

    // create attributes with the standard values
    returnVal = pthread_attr_init(&attr);
    assert(!returnVal);

    // set the detach state attribute (we don't need return values and therefore no pthread_join)
    returnVal = pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    assert(!returnVal);

    // set the scheduling policy to round robin to avoid priority decay (this is very important!!!)
    pthread_attr_setschedpolicy(&attr, SCHED_RR);

    // a thread priority of 45 seems to be a good value on the Mac
    struct sched_param param = {.sched_priority = 45};
    pthread_attr_setschedparam(&attr, &param);

    int threadError = pthread_create(&thread, &attr, start_routine, NULL);
    assert(!threadError);

    returnVal = pthread_attr_destroy(&attr);
    assert(!returnVal);
}
```

I would be really happy if someone has an idea why this dispatcher/worker mechanism does not work, or if there is a solution with GCD that avoids the priority decay problem.
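For comparison, a GCD-based sketch of the same dispatcher/worker split, with hypothetical stand-ins for the frame work. `.userInteractive` requests the highest QoS class, although it is not equivalent to opting out of priority decay with SCHED_RR:

```swift
import Dispatch
import Foundation

// One serial queue per worker, so each worker only occupies a core
// while it actually has work (names are hypothetical, not from the post).
let captureQueue = DispatchQueue(label: "capture", qos: .userInteractive)
let processQueue = DispatchQueue(label: "process", qos: .userInteractive)

var processedCount = 0
let done = DispatchSemaphore(value: 0)

func frameArrived() {
    // Dispatcher: fan the new frame out to the workers.
    processQueue.async {
        processedCount += 1   // stand-in for processFrame()
        done.signal()
    }
    captureQueue.async {
        // stand-in for captureFrame()
        done.signal()
    }
}

// Simulate three frame syncs (the real code would block in waitForNextFrame()).
for _ in 0..<3 { frameArrived() }

// Wait for all six work items to finish.
for _ in 0..<6 { done.wait() }
```

The condition-variable broadcast is replaced by enqueuing one work item per worker, which avoids waking threads that then immediately contend on a shared mutex.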
Post not yet marked as solved
0 Replies
476 Views
Is it possible to initialize the content of a stencil texture from a compute kernel function? In my case I want to fill the stencil buffer with zeros in the even rows and ones in the odd rows. When I use .stencil8 as the pixel format for this texture, Xcode gives me an error that the pixel format .stencil8 has no write access for a compute function, even though the usage property of the texture descriptor contains the .shaderWrite flag.
Post not yet marked as solved
1 Reply
2.2k Views
I have third-party driver software with a lot of static libraries and a whole lot of headers. My goal is to wrap those libraries and headers into a Cocoa framework, use an umbrella header for all the headers that I need, and load this framework as a module into my Swift app. What I want to avoid is copying all the headers into the public headers section of the build settings. When I import the C++ headers in the framework header I get a lot of error messages: "Include of non-modular header inside framework module". On the internet they say you can use the target's build settings to allow non-modular headers, but that didn't work either. Some background on why I want to do this: sometimes we must switch back to older versions of the driver software, so my goal is to use a git branch for each version and build a Cocoa framework from each one that I can use in my Swift app. Has someone already managed to build frameworks containing C++ libraries and their headers for use within Swift projects? I checked the internet and it seems to be not as easy as I thought.
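For reference, the build setting usually suggested for this error is CLANG_ALLOW_NON_MODULAR_INCLUDES_IN_FRAMEWORK_MODULES ("Allow Non-modular Includes in Framework Modules"). Another avenue is a custom module map that exposes only the umbrella header; a minimal sketch with hypothetical names:

```
framework module DriverWrapper {
    umbrella header "DriverWrapper.h"

    export *
    module * { export * }
}
```

The umbrella header would then include the driver headers the Swift side needs; whether clang accepts them depends on those headers being self-contained, which third-party C++ headers often are not.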
Post not yet marked as solved
0 Replies
458 Views
First of all I want to thank the Apple team for their good WWDC videos about CloudKit. But there is one thing where I still struggle with CloudKit; maybe someone can give me some good best practices. It was mentioned in the WWDC video that when you send a write/modify CKOperation to the CloudKit server, you won't get an updated change token back. How does Apple handle this situation in their apps when they fetch new data from the server? That was missing from the WWDC videos. Let me explain in more detail: you have locally changed objects on your device (say you add an item to a shopping list), and the next time you fetch data from the CloudKit server (triggered either by a push or by an application launch) you will get that same data back, together with modified data from other devices. How do you compare these records? I would store my local objects together with their recordIDs and compare those, but maybe there is a simpler solution. It was also mentioned that their example to-do list app uses Core Data for its persistent store. Is Core Data the way to go for caching local data from CloudKit, or are Codable objects just as easy to use?
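The CloudKit types themselves aren't needed to sketch the comparison. Assuming a hypothetical local cache keyed by the record name (CKRecord.ID.recordName in the real app) with the server's modificationDate stored alongside, a last-writer-wins merge could look like:

```swift
import Foundation

// Hypothetical local representation of a CloudKit record.
struct CachedItem {
    let recordName: String      // CKRecord.ID.recordName in the real app
    var title: String
    var modificationDate: Date  // server-side modification date
}

// Merge fetched server records into the local cache:
// the copy with the newer modification date wins.
func merge(local: [String: CachedItem],
           fetched: [CachedItem]) -> [String: CachedItem] {
    var result = local
    for item in fetched {
        if let existing = result[item.recordName],
           existing.modificationDate >= item.modificationDate {
            continue // local copy is as new or newer; keep it
        }
        result[item.recordName] = item
    }
    return result
}
```

This is only one possible policy; a real app may need per-field merging or conflict resolution instead of last-writer-wins.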
Post not yet marked as solved
1 Reply
696 Views
I have several dependencies (frameworks) consisting of either pure Swift or C code. With CocoaPods I can create frameworks that my main apps link against. My main apps are macOS Cocoa Xcode workspaces containing the executable project and a playground for testing. Is it possible to switch to the Swift Package Manager when you have such existing Xcode workspaces, and to load/update the dependencies that were built with the Swift Package Manager? I found many articles on the internet, but all of them seem to be based on server-side applications. The question is: is it possible to take an existing Cocoa project (workspace) and link dependencies to it as Swift Package Manager modules? Sorry if this question is maybe stupid, but I really haven't found a good tutorial on the internet yet. I know CocoaPods (and also Carthage) quite well but have no idea how to switch to the Swift Package Manager, or whether this is even possible for Cocoa apps when the directory structure (Xcode > New Project) already exists.
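For what it's worth, Xcode 11 can attach Swift packages to an existing project or workspace via File > Swift Packages > Add Package Dependency, so the app itself does not have to become a package; only the dependencies do. A dependency's manifest might look like this (all names hypothetical):

```swift
// swift-tools-version:5.1
import PackageDescription

let package = Package(
    name: "MyDriverSupport",
    platforms: [.macOS(.v10_14)],
    products: [
        // The library product the Cocoa app links against.
        .library(name: "MyDriverSupport", targets: ["MyDriverSupport"])
    ],
    targets: [
        .target(name: "MyDriverSupport", path: "Sources")
    ]
)
```

Each dependency repository gets such a Package.swift at its root; the existing workspace then references the packages instead of the CocoaPods-built frameworks.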