Apple Silicon

Build apps, libraries, frameworks, plug-ins, and other executable code that run natively on Apple silicon.

Posts under Apple Silicon tag

64 Posts

M2 - Upgrade from Xcode 15.2 to 15.3, can no longer run on simulator
I am working on an Apple M2 Pro, macOS Sonoma 14.3.1. Today Xcode automatically updated from 15.2 to 15.3 and downloaded the new 17.4 simulators and runtime tools. I can no longer run my apps in the simulator. On Xcode 15.2, there was an option to choose the destination architecture under Product -> Destination -> Destination Architectures -> Rosetta (the one I have been required to select in order to run apps for the last few versions). On Xcode 15.3, the option to choose the destination architecture is missing. I am still able to build successfully to my phone directly, but every simulator build fails with the same linker error.

I have tried:
- rebooting the laptop
- deleting derived data
- deleting the local Podfile.lock
- deleting the Pods folder and running pod install
- reopening Xcode
- running on device - works!
- running on the 17.2 simulator - fails with the error
- running on the 17.4 simulator - fails with the error

Our Podfile looks like this:

```ruby
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'
require_relative '../node_modules/react-native-permissions/scripts/setup'

platform :ios, '13.4'
prepare_react_native_project!

setup_permissions([
  'AppTrackingTransparency',
  'Camera',
  'LocationAlways',
  'LocationWhenInUse',
  'Notifications',
])

target 'myapp' do
  config = use_native_modules!

  # @react-native-firebase/app requirement:
  use_frameworks! :linkage => :static
  $RNFirebaseAsStaticFramework = true

  use_react_native!(
    :path => config[:reactNativePath],
    # to enable hermes on iOS, change `false` to `true` and then install pods
    :hermes_enabled => false
  )

  # Pods for GoogleMaps on iOS
  rn_maps_path = '../node_modules/react-native-maps'
  pod 'react-native-google-maps', :path => "#{rn_maps_path}"
  pod 'react-native-camera', path: '../node_modules/react-native-camera', subspecs: [
    'BarcodeDetectorMLKit'
  ]
  pod 'RNSquareInAppPayments', :path => '../node_modules/react-native-square-in-app-payments'

  target 'myappTests' do
    inherit! :complete
    # Pods for testing
  end

  # Enables Flipper.
  #
  # Note that if you have use_frameworks! enabled, Flipper will not work and
  # you should disable the next line.
  # use_flipper!("Flipper-DoubleConversion" => "1.1.7")

  # avoid duplicate symbols for architecture x86_64 for Folly
  post_install do |installer|
    react_native_post_install(installer)
    installer.pods_project.targets.each do |target|
      if target.name == "RCT-Folly"
        target.build_configurations.each do |config|
          config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= ['$(inherited)', 'FOLLY_HAVE_CLOCK_GETTIME=1']
        end
      end
    end
  end
end
```

I'm open to additional suggestions; at this point, I can't see a way to tell Xcode that we need the other build option (like specifying Rosetta, which I used to be able to do). Also, if anyone can help me understand why it is doing this: I'm busy on Google but not finding what I'm looking for, so I wonder if I'm searching for the right things. Thanks so much! Error:
2 replies · 1 boost · 5.4k views · Mar ’24
Audio Workgroups: Aux threads joined to workgroup executed on E-Cores when App is in background
We develop virtual instruments for Mac/AU and are trying to get our AU plug-ins and our standalone player to work with Audio Workgroups. When the standalone app or Logic Pro is in the foreground and active, all is well and as expected. However, when the app or Logic Pro is not in focus, all my auxiliary threads are running on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are no longer met. The processing thread itself stays on a P-core but has to wait for the other threads to finish. How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though they currently have a different app in focus.
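For context, the join pattern being described looks roughly like this: a minimal sketch (not the poster's code) that fetches the default output device's workgroup and joins it, assuming the macOS 11+ CoreAudio property kAudioDevicePropertyIOThreadOSWorkgroup. The question above is about scheduler policy once joined, which this call alone does not control.

```c
// Minimal sketch: fetch a device's audio workgroup and join/leave it.
// Build: clang wg.c -framework CoreAudio
#include <CoreAudio/CoreAudio.h>
#include <os/workgroup.h>
#include <stdio.h>

int main(void) {
    // Get the default output device.
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    AudioDeviceID dev = kAudioObjectUnknown;
    UInt32 size = sizeof(dev);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &dev);

    // Get the device IO thread's os_workgroup (macOS 11+).
    addr.mSelector = kAudioDevicePropertyIOThreadOSWorkgroup;
    os_workgroup_t wg = NULL;
    size = sizeof(wg);
    if (AudioObjectGetPropertyData(dev, &addr, 0, NULL, &size, &wg) != noErr || !wg) {
        fprintf(stderr, "no workgroup for device\n");
        return 1;
    }

    // Each auxiliary render thread joins before doing its slice of DSP work
    // and leaves before blocking on anything outside the workload.
    os_workgroup_join_token_s token;
    if (os_workgroup_join(wg, &token) == 0) {
        // ... one render quantum's worth of DSP work here ...
        os_workgroup_leave(wg, &token);
        printf("joined and left the device workgroup\n");
    }
    return 0;
}
```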
0 replies · 0 boosts · 482 views · Mar ’24
SwiftUI Stepper Crashes (EXC_BAD_ACCESS) on My Mac (Designed for iPhone) but works fine on iOS device/simulator?
I've been working on an iOS project for the iPhone and would like to support running it on macOS computers with Apple Silicon. In Targets / Supported Destinations we added "Mac (Designed for iPhone)" but experienced Thread 1: EXC_BAD_ACCESS crashes immediately when we tried to run it. We've isolated it down to Stepper UI elements in our view. Starting a new project and just trying to present a single Stepper in the ContentView, we get the same crash. Here is code that presents the issue:

```swift
// ContentView.swift
import SwiftUI

struct ContentView: View {
    @State var someValue = 5

    var body: some View {
        VStack {
            Stepper("Stepper", value: $someValue, in: 0...10)
        }
    }
}
```

When run from Xcode on an iOS device or the simulator, it runs fine. Trying to run it on the Mac, it crashes here:

```swift
// Stepper_01App.swift
import SwiftUI

@main // <-- Thread 1: EXC_BAD_ACCESS (code=2, address=0x16a643f70)
struct Stepper_01App: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```

Xcode 14.3 (14E222b), macOS Ventura 13.3.1 (a), Mac mini M2. Target: Mac (Designed for iPhone). We have verified that the same code crashes on all the Apple Silicon Macs we have access to. Searching the Internet and the Apple Developer Forums, I don't find other reports, so I feel there must be some level of either user error or system/project misconfiguration going on. If any iOS app that used Steppers just crashed when run on a Mac, it seems like this would be a big deal. If anyone has input or can point out what we need to do differently, it would be appreciated!
12 replies · 1 boost · 1.3k views · Feb ’24
SMC keys for M3 Pro chip, returning temperature values
I'm honestly a bit lost and looking for general pointers. Here is the general flow of my project. I have an Xcode project where I want to return and convert the temperature values accessed from the Apple SMC, and I found a GitHub repo with all the SMC key sensors for the M3 Pro/Max chips: https://github.com/exelban/stats/issues/1703

Basically, I have all these keys stored in an array in Obj-C like so:

```objc
NSArray *smcKeys = @[
    @"Tp01", @"Tp05", @"Tp09", @"Tp0D", @"Tp0b", @"Tp0f", @"Tp0j", @"Tp0n",
    @"Tp0h", @"Tp0L", @"Tp0S", @"Tp0V", @"Tp0z", @"Tp0v", @"Tp17", @"Tp1F",
    @"Tp1J", @"Tp1p", @"Tp1h", @"Tp1R",
];
```

I am passing all these keys by passing smcKeys into a regular C code file I have here that is meant to open, close and read the data, shown here:

```c
#include "smc.h"
#include <mach/mach.h>
#include <IOKit/IOKitLib.h>
#include "smckeys.h"

io_connect_t conn;

kern_return_t openSMC(void) {
    kern_return_t result;
    kern_return_t service;
    io_iterator_t iterator;

    service = IOServiceGetMatchingServices(kIOMainPortDefault, IOServiceMatching("AppleSMC"), &iterator);
    if (service == 0) {
        printf("error: could not match dictionary");
        return 0;
    }
    result = IOServiceOpen(service, mach_task_self(), 0, &conn);
    IOObjectRelease(service);
    return 0;
}

kern_return_t closeSMC(void) {
    return IOServiceClose(conn);
}

kern_return_t readSMC(char *smcKeys, SMCVal_t *val) {
    kern_return_t result;
    uint32_t keyCode = *(uint32_t *)smcKeys;
    SMCVal_t inputStruct;
    SMCVal_t outputStruct;

    inputStruct.datasize = sizeof(SMCVal_t);
    inputStruct.datatype = 'I' << 24; // a left shift operation, turning the I into an int by shifting the ASCII value 24 bits to the left
    inputStruct.data[0] = keyCode;

    result = IOConnectCallStructMethod(conn, 5, &inputStruct, sizeof(SMCVal_t), &outputStruct, (size_t *)&inputStruct.datasize);
    if (result == kIOReturnSuccess) {
        if (val->datasize > 0) {
            if (val->datatype == ('f' << 24 | 'l' << 16 | 't' << 8)) { // bit shifting from a 32-bit operation associated with the ASCII characters 'f', 'l', and 't'; sets the datatype field
                double temp = *(double *)val->data;
                return temp;
            }
        }
    }
    return 0.0;
}
```

I am then calling the functions from this file in a Swift file and converting the values to Fahrenheit, but no data is being printed in my console:

```swift
import IOKit

public class getTemperature {

    public struct SMCVal_t {
        var datasize: UInt32
        var datatype: UInt32
        var data: (UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8)
    }

    @_silgen_name("openSMC")
    func openSMC() -> kern_return_t

    @_silgen_name("closeSMC")
    func closeSMC() -> kern_return_t

    @_silgen_name("readSMC")
    func readSMC(key: UnsafePointer<CChar>?, val: UnsafeMutablePointer<SMCVal_t>) -> kern_return_t

    func convertAndPrintTempValue(key: UnsafePointer<CChar>?, scale: Character, showTemp: Bool) -> kern_return_t {
        let openSM = openSMC()
        guard openSM == 0 else {
            print("Failed to open SMC: \(openSM)")
            return kern_return_t()
        }

        let closeSM = closeSMC()
        guard closeSM == 0 else {
            print("could not close SMC: \(closeSM)")
            return IOServiceClose(conn)
        }

        func convertAndPrint(val: SMCVal_t) -> Double {
            if val.datatype == (UInt32("f".utf8.first!) << 24 | UInt32("l".utf8.first!) << 16 | UInt32("t".utf8.first!) << 8) {
                let extractedTemp = Double(val.data.0)
                return extractedTemp * 9.0 / 5.0 + 32.0
            }
            return 0.0
        }

        let smcValue = SMCVal_t(datasize: 0, datatype: 0, data: (0, 0, 0, 0, 0, 0, 0, 0))
        let convertedVal = convertAndPrint(val: smcValue)
        print("Temperature: \(convertedVal)F°")
        return kern_return_t()
    }
}
```

I know this is a lot, but I am honestly looking for any tips to fill in any gaps in my knowledge, from anyone who's built a similar application meant to extract any sort of data from Mac hardware.
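One detail worth double-checking in the C code above, offered as a sketch rather than a full fix: SMC keys are four-character codes packed big-endian into a uint32_t, so reinterpreting the string bytes with *(uint32_t *)smcKeys on a little-endian arm64 Mac produces the reversed code. A shift-based encoder avoids that:

```c
#include <stdint.h>
#include <stdio.h>

// Pack a 4-character SMC key (e.g. "Tp01") into the big-endian uint32
// the AppleSMC call expects; this avoids the byte-order surprise of
// casting the string pointer to uint32_t* on little-endian arm64.
static uint32_t smc_key_from_string(const char *key) {
    return ((uint32_t)key[0] << 24) |
           ((uint32_t)key[1] << 16) |
           ((uint32_t)key[2] << 8)  |
           ((uint32_t)key[3]);
}

int main(void) {
    printf("Tp01 -> 0x%08x\n", smc_key_from_string("Tp01"));
    return 0;
}
```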
1 reply · 0 boosts · 522 views · Feb ’24
Extract GPU usage percentage on Apple Silicon M-series Mac in Xcode (graphics card monitoring with Swift)
I'm working on an app for macOS where it would be very useful to display the GPU (graphics card) workload usage as a percentage. CPU usage monitoring is easy, but GPU monitoring on Apple Silicon is next to impossible: Apple only seems to give us our own app's GPU usage, which is not what we want, since we want the total GPU workload for the whole system. I'm using the latest version of Xcode and Swift. Any ideas how to achieve this?
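One approach seen in open-source monitoring tools, offered as an assumption to verify rather than a supported API: the IOAccelerator registry entry publishes a PerformanceStatistics dictionary whose "Device Utilization %" value reflects system-wide GPU load on Apple Silicon. These keys are private and undocumented, so they may change or vanish in any macOS release. A hedged C sketch:

```c
// Hedged sketch: read "Device Utilization %" from the IOAccelerator
// service's PerformanceStatistics dictionary (private, undocumented keys).
// Build: clang gpu_usage.c -framework IOKit -framework CoreFoundation
#include <IOKit/IOKitLib.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

int main(void) {
    io_iterator_t it;
    if (IOServiceGetMatchingServices(kIOMainPortDefault,
                                     IOServiceMatching("IOAccelerator"),
                                     &it) != KERN_SUCCESS) {
        fprintf(stderr, "no IOAccelerator services\n");
        return 1;
    }

    io_object_t entry;
    while ((entry = IOIteratorNext(it))) {
        CFMutableDictionaryRef props = NULL;
        if (IORegistryEntryCreateCFProperties(entry, &props,
                                              kCFAllocatorDefault, 0) == KERN_SUCCESS) {
            CFDictionaryRef stats =
                CFDictionaryGetValue(props, CFSTR("PerformanceStatistics"));
            if (stats) {
                CFNumberRef util =
                    CFDictionaryGetValue(stats, CFSTR("Device Utilization %"));
                if (util) {
                    int percent = 0;
                    CFNumberGetValue(util, kCFNumberIntType, &percent);
                    printf("GPU utilization: %d%%\n", percent);
                }
            }
            CFRelease(props);
        }
        IOObjectRelease(entry);
    }
    IOObjectRelease(it);
    return 0;
}
```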
0 replies · 0 boosts · 593 views · Feb ’24
otool doesn't list "Load" commands?
I'm struggling with compiling libopus so that it works in the simulator on Apple silicon. I found a thread on the forums that seems to address part of the issue, but I am unable to build the static lib so that it shows the platform it is targeting. The thread mentions that I should be able to run otool and see "load commands" that indicate the platform. When I run otool against the static library that we have created, it doesn't list any load commands; I don't see LC_BUILD_VERSION or LC_VERSION_MIN_***. Why would there not be any "Load command" entries?

```
% otool -l -arch arm64 dependencies/lib/libopus.a
Archive : dependencies/lib/libopus.a
dependencies/lib/libopus.a(bands.o): is an LLVM bit-code file
dependencies/lib/libopus.a(celt.o): is an LLVM bit-code file
dependencies/lib/libopus.a(celt_encoder.o): is an LLVM bit-code file
dependencies/lib/libopus.a(celt_decoder.o): is an LLVM bit-code file
...
dependencies/lib/libopus.a(mlp.o): is an LLVM bit-code file
dependencies/lib/libopus.a(mlp_data.o): is an LLVM bit-code file
```

The static library has the two architectures embedded in it, but when compiling the framework for the simulator platform, the linking phase complains that we are building for the simulator but linking object code built for iOS.

```
% lipo -info dependencies/lib/libopus.a
Architectures in the fat file: dependencies/lib/libopus.a are: x86_64 arm64
```

In case you are curious, I'm just piggybacking on this project that has a build-libopus.sh script in the root directory that builds the official open source Opus library files. My hope is to build this static library for the ios, ios-simulator, and mac-catalyst platforms and then include them in an xcframework.
1 reply · 0 boosts · 578 views · Jan ’24
Detect if iOS app is running on Apple Silicon Mac using pure C
Is there a way, in pure C (not Objective-C, and certainly not Swift), to detect if an iOS app is running on a Mac? I'm aware of iOSAppOnMac, but AFAICT I'd need to write Objective-C code to use it. My application is quite old and large, with portions going back to the 1980s, so the burden of moving onto anything that isn't accessible directly from C is quite high.
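One pattern that keeps the call site in a .c file, sketched under the assumption that linking the Objective-C runtime (while writing no Objective-C source) is acceptable: call NSProcessInfo's isiOSAppOnMac through objc_msgSend.

```c
// Hedged sketch: query [[NSProcessInfo processInfo] isiOSAppOnMac] from
// plain C via the Objective-C runtime. Compile as C, but link with
// -lobjc -framework Foundation. isiOSAppOnMac is available on iOS 14+,
// hence the respondsToSelector: guard for older systems.
#include <objc/runtime.h>
#include <objc/message.h>
#include <stdbool.h>
#include <stdio.h>

static bool is_ios_app_on_mac(void) {
    Class cls = objc_getClass("NSProcessInfo");
    if (!cls) return false;

    // id info = [NSProcessInfo processInfo];
    id (*msg_id)(Class, SEL) = (id (*)(Class, SEL))objc_msgSend;
    id info = msg_id(cls, sel_registerName("processInfo"));
    if (!info) return false;

    // if (![info respondsToSelector:@selector(isiOSAppOnMac)]) return false;
    SEL sel = sel_registerName("isiOSAppOnMac");
    bool (*msg_resp)(id, SEL, SEL) = (bool (*)(id, SEL, SEL))objc_msgSend;
    if (!msg_resp(info, sel_registerName("respondsToSelector:"), sel))
        return false;

    // return [info isiOSAppOnMac];
    bool (*msg_bool)(id, SEL) = (bool (*)(id, SEL))objc_msgSend;
    return msg_bool(info, sel);
}

int main(void) {
    printf("iOS app on Mac: %s\n", is_ios_app_on_mac() ? "yes" : "no");
    return 0;
}
```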
2 replies · 0 boosts · 505 views · Jan ’24
I am unable to run my iPad app on the Vision Pro simulator
My device has an M2 Max chip, and I am using Xcode 15.1 Beta 3. My app runs normally in the iOS and iPad simulators, but when I attempt to run it in the Vision Pro simulator, even though the compilation is successful, a dialog box appears stating: 'AppName's architectures (Intel 64-bit) include none that Apple Vision Pro can execute (arm64).' Consequently, the app is not successfully installed in the Vision Pro simulator. Additionally, my project uses CocoaPods for dependency management. I would appreciate any help, thank you!
5 replies · 1 boost · 2.6k views · Jan ’24
/usr/bin/leaks Still Reachable Leak Detection
Consider the following program, memory-leak.c:

```c
#include <stdlib.h>

void *p;

int main() {
    p = malloc(7);
    p = 0; // The memory is leaked here.
    return 0;
}
```

If I compile this with clang memory-leak.c and test the output with the built-in macOS memory leak detector leaks using leaks -quiet -atExit -- ./a.out, I get (partly) the following output:

1 leak for 16 total leaked bytes.

However, if I remove the 'leaking' line like so:

```c
#include <stdlib.h>

void *p;

int main() {
    p = malloc(7);
    return 0;
}
```

Compiling this file and again running leaks now (partly) returns:

0 leaks for 0 total leaked bytes.

The man page for leaks shows that only unreachable memory is considered a leak. Is there a configuration to detect un-free'd but still reachable malloc segments?
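As far as I know, leaks has no switch for still-reachable blocks, so one workaround is to do the bookkeeping yourself; a minimal sketch, assuming you can route the allocations you care about through wrappers:

```c
// Minimal allocation tracker: wrap malloc/free at the call sites you care
// about and report still-unfreed blocks when the process exits. A sketch
// for small programs, not a general-purpose interposer.
#include <stdio.h>
#include <stdlib.h>

#define MAX_TRACKED 1024

static void *tracked[MAX_TRACKED];
static size_t tracked_size[MAX_TRACKED];

static void report_unfreed(void) {
    for (int i = 0; i < MAX_TRACKED; i++)
        if (tracked[i])
            fprintf(stderr, "unfreed: %p (%zu bytes)\n", tracked[i], tracked_size[i]);
}

static void *t_malloc(size_t n) {
    void *ptr = malloc(n);
    for (int i = 0; ptr && i < MAX_TRACKED; i++)
        if (!tracked[i]) { tracked[i] = ptr; tracked_size[i] = n; break; }
    return ptr;
}

static void t_free(void *ptr) {
    for (int i = 0; i < MAX_TRACKED; i++)
        if (tracked[i] == ptr) { tracked[i] = NULL; break; }
    free(ptr);
}

void *p;

int main(void) {
    atexit(report_unfreed);
    p = t_malloc(7); // never freed: reported at exit even though still reachable
    return 0;
}
```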
2 replies · 0 boosts · 773 views · Jan ’24
Xcode 13 on Apple silicon not running app with error A build only device cannot be used to run this target.
On Xcode 13, I have a macOS project that runs fine on an Intel machine. On Apple silicon (M1 Plus), I get the error "A build only device cannot be used to run this target." when I try to run from Xcode. This seems to be an iOS error; all the Google-suggested fixes involve picking a new device, which is an iOS fix, right? It builds fine, and the archived app runs fine. I get the error for both Intel and arm64 architectures. I have tried adjusting the target's deployment device families and setting the deployment target to the macOS 12.0 SDK. Any suggestions?
6 replies · 0 boosts · 3.2k views · Jan ’24
Incorrect Registers Values with Rosetta 2
```c
#include <stdio.h>

int main() {
    unsigned long a, d;
    __asm__ volatile (
        "\n\t"
        "movl $0x77777777, %%eax\n\t"
        "movl $0xffffffff, %%ecx\n\t"
        "xorl %%edx, %%edx\n\t"
        "divl %%ecx\n\t"
        "cwtd\n\t"
        "movq %%rax, %0\n\t"
        "movq %%rdx, %1\n\t"
        : "=r"(a), "=r"(d)
        :: "rax", "rdx"
    );
    printf("rax: %lx, rdx: %lx", a, d);
}
```

The minimal program above was expected to give rax: 0, rdx: 77770000, but on macOS with Rosetta 2 it gave rax: 0, rdx: 0. This causes specific programs (e.g. Genshin Impact) to crash.
3 replies · 12 boosts · 1.4k views · Jan ’24
Building OpenCL code for Apple Silicon
I am asking this more in hope than expectation, but would greatly appreciate any help or suggestions (with apologies for a rather lengthy post). The problem I have with my existing OpenCL code is, quite simply, that I am unable to get it to build in Xcode (I have always used Xcode without problems in the past). So my question, quite simply, is: can anyone advise how to configure and use Xcode in order to successfully build OpenCL code for Apple Silicon?

Background: Having just received a shiny new M3 MacBook Pro, I would really like to try out one or two of my GPU programs. They were all written several years ago using OpenCL, before Apple decided to give up on it in favour of Metal. (In fact, I have since converted one of them to use CUDA, but that is not useful here.) Now, I completely understand that the right thing to do is to convert them to use Metal directly, and will do this when I have time, but I suspect that it will take me several days, if not weeks (I have never had reason to use Metal until now, so I will also have to learn how to convert my code; there are quite a few kernels). I don't have time to do that at the moment. Meanwhile, I would very much like to try the programs right now, using OpenCL, simply to find out how they run on Apple Silicon (I have previously only used them on older Intel Macs with AMD GPUs). It would be great to see my code running on the M3's GPU!

The reasons I think this must still be possible are (a) there are plenty of Geekbench OpenCL results for the M3 chips; and (b) I have managed to compile and run a really trivial OpenCL program (but only using clang from the command line; I have been unable to work out how to compile individual .cl files containing OpenCL kernels).

The problem I am getting is that, having cloned one of my sets of programs into Xcode on my new M3 Mac, I am unable to get any of the kernels even to build. The failure is that Xcode is trying to run a version of openclc in the directory /System/Library/Frameworks/OpenCL.framework/Libraries/, which gives the error condition Bad CPU type in executable when Xcode tries to use it. It seems that this is an x86_64 version of openclc. There is a universal binary version in /System/Library/PrivateFrameworks/GPUCompiler.framework/Versions/A/Libraries/, but I have been unable to find a way to configure (or force ...) Xcode to use that one. It may well be, of course, that if I manage to get past this problem, another one will present itself. Nonetheless, if any of you can suggest anything that I might try, I would be most grateful.

One secondary question, if I may: using openclc to compile a .cl file (containing a kernel) from the command line, is there a parameter (e.g. a value to specify with -arch) or combination of parameters that will cause it to produce a .bc file for an Apple Silicon GPU and also the .cl.h header file that has to be #included in the C or C++ code that will dispatch the kernel?

Thanks, Andrew

PS. I've also posted this question on MacRumors, because there seem to be quite a number of people there who understand Apple Silicon, but I rather suspect there's a better chance of getting the help I need here.
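For what it's worth, the kind of trivial command-line test mentioned above can be extended into a sanity check that avoids openclc entirely, because kernels can be compiled at runtime with clBuildProgram. A minimal sketch, assuming clang test.c -framework OpenCL still links on current SDKs:

```c
// Sanity check: enumerate the GPU device and build a trivial kernel at
// runtime via clCreateProgramWithSource/clBuildProgram, so no offline
// openclc invocation is involved.
// Build: clang ocl_check.c -framework OpenCL  (deprecated but still present)
#include <OpenCL/opencl.h>
#include <stdio.h>

int main(void) {
    cl_device_id dev;
    if (clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no GPU device\n");
        return 1;
    }
    char name[256];
    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("GPU device: %s\n", name);

    const char *src =
        "__kernel void noop(__global int *x) { x[get_global_id(0)] = 42; }";
    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    err = clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    printf("runtime kernel build: %s\n", err == CL_SUCCESS ? "ok" : "failed");

    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
```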
0 replies · 0 boosts · 860 views · Dec ’23
How to get macOS version and device model from an iOS app running on a Mac with Apple Silicon?
Hello, we're trying to run a Unity game built for iPhones and iPads on a Mac, just like this: https://developer.apple.com/documentation/apple-silicon/running-your-ios-apps-in-macos Getting the device model with the Unity API works on iPhone and iPad, but on a Mac we get "iPad8,1" (or iPad8,2/3/4/..., one of the model identifiers of the 3rd-generation iPad Pro) for the device model, and "iPadOS 16.6" for the operating system. Neither is the Mac's device information. How do we get the Mac device model and macOS version if we are running on a Mac? (Additionally, this is not Mac Catalyst.)
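One diagnostic worth trying, offered purely as an assumption to verify, since iOS-app-on-Mac shims much of the device information: the BSD sysctls that report the model and OS version on native macOS. Whether they are also shimmed for an iOS app on a Mac is exactly what a quick on-device test would reveal.

```c
// Diagnostic sketch (assumption, not a confirmed answer): on native macOS,
// sysctl "hw.model" returns the Mac model identifier and
// "kern.osproductversion" the OS version. Verify on-device whether these
// are shimmed for an iOS app running on Apple Silicon.
#include <sys/sysctl.h>
#include <stdio.h>

static void print_sysctl_string(const char *name) {
    char buf[256];
    size_t len = sizeof(buf);
    if (sysctlbyname(name, buf, &len, NULL, 0) == 0)
        printf("%s = %s\n", name, buf);
    else
        printf("%s unavailable\n", name);
}

int main(void) {
    print_sysctl_string("hw.model");              // e.g. "Mac14,5" on native macOS
    print_sysctl_string("kern.osproductversion"); // e.g. "14.3" on native macOS
    return 0;
}
```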
1 reply · 0 boosts · 507 views · Nov ’23
SecureTransport Generates SSL Continuation Message Instead of TLS Client Hello on M1
I maintain a cross-platform client-side network library for persistent TCP connections targeting Win32, Darwin and FreeBSD platforms. I recently upgraded to a Mac Studio w/ M1 Max (Ventura 13.1) from a late 2015 Intel MacBook Pro (Monterey 12.6.2) and I've encountered a discrepancy between the two. For secure TCP connections my lib uses WolfSSL across all platforms but also supports use of system-provided Security libraries; on Darwin platforms this is SecureTransport. Yes, I am aware SecureTransport is deprecated in favor of Network. I intend to attempt to integrate with Network later, but for now my architecture dictates that I use similar C-style callbacks akin to WolfSSL, OpenSSL, MbedTLS etc. On the first call to SSLHandshake, the SecureTransport write callback generates 151 bytes for my TLS 1.2 connection to example.com:443 on both platforms. However, while on the Intel MBP I am able to continue with the full handshake, on the M1 I immediately receive 0 bytes with EOF. In Wireshark on the Intel MBP the 151 bytes are observed as a TLS 1.2 Client Hello, while on the M1 they are observed as an SSL continuation message, and that is the last message observed.
11 replies · 0 boosts · 1.6k views · Nov ’23
Problem with variadic C functions
I am the author of the open-source Dynace, an OO extension to C. It has been in production use for around 20 years. It has been used on DOS, all Windows, Linux, macOS/Intel, VMS, PLAN9, COSMIC, SUNOS, etc., all without a problem. However, it does not run on Apple M1/M2 machines. I traced the problem to variadic function calls. I am creating a va_list in a ... function and then passing the va_list to a second function. I wrote something to copy the va_list (via va_copy) to see what I am getting. The first function works okay, but the second function does not. (I know you can't re-use a va_list.) I have spent a couple of days on this and can't find a problem with my code. I tried creating a simple example, but it worked; apparently the problem is situational. Anyway, I have no idea what is wrong or what to do next. I'd sure appreciate any help! Thanks!
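For reference, a minimal sketch of the pattern described: create a va_list in a variadic function and hand it to a second function, taking a fresh va_copy for each traversal. On Apple's arm64 ABI a va_list is a simple pointer into stacked arguments, so usage that happens to work on x86-64 (such as traversing a list twice without copying) can fail on M1/M2:

```c
// Minimal va_list forwarding: the variadic entry point creates the list,
// the helper consumes a private va_copy. Each full traversal needs its
// own copy; reusing a consumed list is undefined and often "works" on
// x86-64 only by accident.
#include <stdarg.h>
#include <stdio.h>

static int sum_ints(int count, va_list ap_in) {
    va_list ap;
    va_copy(ap, ap_in); // private copy: never consume the caller's list
    int total = 0;
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

static int sum_twice(int count, ...) {
    va_list ap;
    va_start(ap, count);
    int a = sum_ints(count, ap); // first traversal via va_copy
    int b = sum_ints(count, ap); // second traversal: safe, helper copies again
    va_end(ap);
    return a + b;
}

int main(void) {
    printf("%d\n", sum_twice(3, 1, 2, 3)); // prints 12
    return 0;
}
```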
2 replies · 0 boosts · 608 views · Nov ’23
Python code runs on other OSes but not on macOS
I have Python code that parses an XML string. When I run that code on macOS, it shows me an error, but when I run the same code on my Linux machine or on Windows, it works completely fine. I'm wondering how it's possible for code in a platform-independent language to run on Linux but not on Mac.

```python
from lxml.etree import fromstring

s = """<Abstract><AbstractText>The genus Cinchona is known for a range of alkaloids, such as quinine, quinidine, cinchonine, and cinchonidine. Cinchona bark has been used as an antimalarial agent for more than 400 years. Quinine was first isolated in 1820 and is still acknowledged in the therapy of chloroquine-resistant falciparum malaria; in lower dosage quinine has been used as treatment for leg cramps since the 1940s. Here we report the effects of the quinoline derivatives quinine, quinidine, and chloroquine on human adult and fetal muscle nicotinic acetylcholine receptors (nAChRs). It could be demonstrated that the compounds blocked acetylcholine (ACh)-evoked responses in Xenopus laevis oocytes expressing the adult nAChR composed of αβεδ subunits in a concentration-dependent manner, with a ranked potency of quinine (IC50 = 1.70 µM), chloroquine (IC50 = 2.22 µM) and quinidine (IC50 = 3.96 µM). At the fetal nAChR composed of αβγδ subunits, the IC50 for quinine was found to be 2.30 µM. The efficacy of the block by quinine was independent of the ACh concentration. Therefore, quinine is proposed to inhibit ACh-evoked currents in a non-competitive manner. The present results add to the pharmacological characterization of muscle nAChRs and indicate that quinine is effective at the muscular nAChRs close to therapeutic blood concentrations required for the therapy and prophylaxis of nocturnal leg cramps, suggesting that the clinically proven efficacy of quinine could be based on targeting nAChRs.</AbstractText></Abstract>"""

print(fromstring(s))
```
1 reply · 1 boost · 517 views · Nov ’23
Debug a process by hand from a c program on an Apple Silicon CPU
Hello,

My purpose is to understand how macOS works. Here is what I've done: I wrote a C program on an M1 CPU with these lines:

```c
printf("Before breakpoint\n");
asm volatile("brk #0");
printf("After breakpoint\n");
```

When I run this program with lldb, a breakpoint is hit on the second line. So I suppose lldb writes a "brk #0" instruction when we put a breakpoint manually. I can't continue to the next line with lldb's "c" command; PC stays on the brk instruction. I need to manually set PC to the next instruction in lldb.

Now, what I want to do is create my own debugger (I want to understand how lldb works). I have managed to ptrace the target program, and I was able to catch an event with waitpid when "brk #0" is hit. But I don't know how I can increase the PC value and continue execution, because I can't do this on a Silicon CPU:

```c
ptrace(PTRACE_GETREGS, child_pid, NULL, &regs);
ptrace(PTRACE_SETREGS, child_pid, NULL, &regs);
kill(child_pid, SIGCONT);
```

So my question is: how does lldb manage to change the ARM64 registers of a remote process?

Thanks
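On macOS, register access goes through Mach rather than ptrace: a debugger obtains the task port (task_for_pid, which requires the debugger entitlement or root) and uses thread_get_state/thread_set_state. A hedged sketch of stepping past a brk, assuming a plain arm64 (non-pointer-auth) target and a thread port already in hand; execution is then resumed with ptrace(PT_CONTINUE, ...) or task_resume:

```c
// Sketch: advance a stopped arm64 thread past a 4-byte brk instruction
// using Mach thread state APIs. Assumes `thread` is a valid thread port
// obtained via task_for_pid()/task_threads(), which requires the
// com.apple.security.cs.debugger entitlement (or root), and that the
// target is plain arm64 (the __pc field is opaque under pointer auth).
#include <mach/mach.h>

static kern_return_t skip_brk(thread_act_t thread) {
    arm_thread_state64_t state;
    mach_msg_type_number_t count = ARM_THREAD_STATE64_COUNT;

    kern_return_t kr = thread_get_state(thread, ARM_THREAD_STATE64,
                                        (thread_state_t)&state, &count);
    if (kr != KERN_SUCCESS) return kr;

    // arm64 instructions are fixed-width: step over the 4-byte brk.
    state.__pc += 4;

    return thread_set_state(thread, ARM_THREAD_STATE64,
                            (thread_state_t)&state, count);
}
```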
2 replies · 0 boosts · 782 views · Nov ’23
ProRes encoding on M1 Max fails for high bit depth buffers
I have code that has worked for many years for writing ProRes files, and it is now failing on the new M1 Max MacBook. Specifically, if I construct buffers with the pixel type kCVPixelFormatType_64ARGB, after a few frames of writing, the pixel buffer pool becomes nil. This code works just fine on non-Max processors (Intel and base M1, natively). Here's a sample main that demonstrates the problem. Am I doing something wrong here?

```objc
//  main.m
//  TestProresWriting
//
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        int timescale = 24;
        int width = 1920;
        int height = 1080;

        NSURL *url = [NSURL URLWithString:@"file:///Users/diftil/TempData/testfile.mov"];
        NSLog(@"Output file = %@", [url absoluteURL]);

        NSFileManager *fileManager = [NSFileManager defaultManager];
        NSError *error = nil;
        [fileManager removeItemAtURL:url error:&error];

        // Set up the writer
        AVAssetWriter *trackWriter = [[AVAssetWriter alloc] initWithURL:url
                                                               fileType:AVFileTypeQuickTimeMovie
                                                                  error:&error];

        // Set up the track
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecTypeAppleProRes4444, AVVideoCodecKey,
                                       [NSNumber numberWithInt:width], AVVideoWidthKey,
                                       [NSNumber numberWithInt:height], AVVideoHeightKey,
                                       nil];

        AVAssetWriterInput *track = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                       outputSettings:videoSettings];

        // Set up the adapter
        NSDictionary *attributes = [NSDictionary
                                    dictionaryWithObjects:
                                    [NSArray arrayWithObjects:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_64ARGB], // This pixel type causes problems on M1 Max, but works on everything else
                                     [NSNumber numberWithUnsignedInt:width], [NSNumber numberWithUnsignedInt:height],
                                     nil]
                                    forKeys:
                                    [NSArray arrayWithObjects:(NSString *)kCVPixelBufferPixelFormatTypeKey,
                                     (NSString *)kCVPixelBufferWidthKey, (NSString *)kCVPixelBufferHeightKey,
                                     nil]];
        /*
        NSDictionary *attributes = [NSDictionary
                                    dictionaryWithObjects:
                                    [NSArray arrayWithObjects:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], // This pixel type works on M1 Max
                                     [NSNumber numberWithUnsignedInt:width], [NSNumber numberWithUnsignedInt:height],
                                     nil]
                                    forKeys:
                                    [NSArray arrayWithObjects:(NSString *)kCVPixelBufferPixelFormatTypeKey,
                                     (NSString *)kCVPixelBufferWidthKey, (NSString *)kCVPixelBufferHeightKey,
                                     nil]];
        */

        AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                                    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:track
                                                                    sourcePixelBufferAttributes:attributes];

        // Add the track and start writing
        [trackWriter addInput:track];
        [trackWriter startWriting];

        CMTime startTime = CMTimeMake(0, timescale);
        [trackWriter startSessionAtSourceTime:startTime];
        while (!track.readyForMoreMediaData);

        int frameTime = 0;
        CVPixelBufferRef frameBuffer = NULL;
        for (int i = 0; i < 100; i++)
        {
            NSLog(@"Frame %@", [NSString stringWithFormat:@"%d", i]);
            CVPixelBufferPoolRef PixelBufferPool = pixelBufferAdaptor.pixelBufferPool;
            if (PixelBufferPool == nil)
            {
                NSLog(@"PixelBufferPool is invalid.");
                exit(1);
            }
            CVReturn ret = CVPixelBufferPoolCreatePixelBuffer(nil, PixelBufferPool, &frameBuffer);
            if (ret != kCVReturnSuccess)
            {
                NSLog(@"Error creating framebuffer from pool");
                exit(1);
            }
            CVPixelBufferLockBaseAddress(frameBuffer, 0);
            // This is where we would put image data into the buffer.  Nothing right now.
            CVPixelBufferUnlockBaseAddress(frameBuffer, 0);
            while (!track.readyForMoreMediaData);
            CMTime presentationTime = CMTimeMake(frameTime + (i * timescale), timescale);
            BOOL result = [pixelBufferAdaptor appendPixelBuffer:frameBuffer
                                           withPresentationTime:presentationTime];
            if (result == NO)
            {
                NSLog(@"Error appending to track.");
                exit(1);
            }
            CVPixelBufferRelease(frameBuffer);
        }

        // Close everything
        if (trackWriter.status == AVAssetWriterStatusWriting)
            [track markAsFinished];
        NSLog(@"Completed.");
    }
    return 0;
}
```
21 replies · 0 boosts · 5.7k views · Oct ’23
Xcode 15 linker error: not 8-byte aligned, which cannot be encoded as a target of LDR/STR
In an old project we just moved to a Mac on ARM (Apple silicon) with Sonoma and Xcode 15 (fresh install), we get this link error:

```
'_yytext' from '.../Objects-normal/arm64/html.lexer.o' not 8-byte aligned, which cannot be encoded as a target of LDR/STR in '_PHPyylex' from '.../Objects-normal/arm64/php.lexer.o'
```

We don't get this error with Xcode 15 on Mac Intel, nor with Xcode 14 on Mac ARM. Does anyone know what we could do to fix that? Thanks!
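Two workarounds commonly suggested for this class of Xcode 15 linker error, both assumptions to verify against your project: fall back to the classic linker by adding -ld_classic to OTHER_LDFLAGS, or align the offending symbol explicitly. The latter, assuming yytext is a plain char array in generated lexer code you can patch (the array size here is hypothetical):

```c
// Hypothetical patch to the generated lexer: give _yytext the 8-byte
// alignment the new Xcode 15 linker wants when encoding LDR/STR targets.
// The array name/size come from the lex-generated file, so adjust to match.
#include <stdio.h>

__attribute__((aligned(8))) char yytext[8192];

int main(void) {
    printf("yytext at %p (8-byte aligned: %s)\n",
           (void *)yytext, ((unsigned long)yytext % 8 == 0) ? "yes" : "no");
    return 0;
}
```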
4 replies · 0 boosts · 639 views · Oct ’23