Explore the power of machine learning within apps. Discuss integrating machine learning features, share best practices, and explore the possibilities for your app.

Post

Replies

Boosts

Views

Activity

AppIntentVocabulary (INPlayMediaIntent) is unstable.
I am developing an iOS app that supports INPlayMediaIntent. We are trying to increase the recognition rate of content names (song titles) using AppIntentVocabulary. A sample extract is shown below.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>IntentPhrases</key>
    <array>
        <dict>
            <key>IntentName</key>
            <string>INPlayMediaIntent</string>
            <key>IntentExamples</key>
            <array>
                <string>Mezamashi Appで湖畔の朝を再生</string>
                <string>湖畔の朝をMezamashi Appで再生して</string>
            </array>
        </dict>
    </array>
    <key>ParameterVocabularies</key>
    <array>
        <dict>
            <key>ParameterNames</key>
            <array>
                <string>INPlayMediaIntent.playlistTitle</string>
            </array>
            <key>ParameterVocabulary</key>
            <array>
                <dict>
                    <key>VocabularyItemIdentifier</key>
                    <string>ID1</string>
                    <key>VocabularyItemSynonyms</key>
                    <array>
                        <dict>
                            <key>VocabularyItemPronunciation</key>
                            <string>aogamagaeru</string>
                            <key>VocabularyItemPhrase</key>
                            <string>青ガマガエル</string>
                        </dict>
                    </array>
                </dict>
                <dict>
                    <key>VocabularyItemIdentifier</key>
                    <string>ID2</string>
                    <key>VocabularyItemSynonyms</key>
                    <array>
                        <dict>
                            <key>VocabularyItemPronunciation</key>
                            <string>kohon no asa</string>
                            <key>VocabularyItemPhrase</key>
                            <string>湖畔の朝</string>
                        </dict>
                    </array>
                </dict>
                <dict>
                    <key>VocabularyItemIdentifier</key>
                    <string>ID3</string>
                    <key>VocabularyItemSynonyms</key>
                    <array>
                        <dict>
                            <key>VocabularyItemPronunciation</key>
                            <string>kumageratachi no uta</string>
                            <key>VocabularyItemPhrase</key>
                            <string>クマゲラたちの歌</string>
                        </dict>
                    </array>
                </dict>
            </array>
        </dict>
    </array>
</dict>
</plist>

When running on the iOS 17.5 simulator in Xcode 15.4, the result is mediaName = VocabularyItemIdentifier and mediaIdentifier = nil:

<INMediaSearch: 0x6000026212c0> {
    reference = 0;
    mediaType = 0;
    sortOrder = 0;
    albumName = <null>;
    mediaName = ID1;
    genreNames = ( );
    artistName = <null>;
    moodNames = ( );
    releaseDate = <null>;
    mediaIdentifier = <null>;
}

However, when running on an iOS 17.5 device, the result is mediaName = VocabularyItemPhrase and mediaIdentifier = VocabularyItemIdentifier:

<INMediaSearch: 0x301efd9e0> {
    reference = 0;
    mediaType = 5;
    sortOrder = 0;
    albumName = <null>;
    mediaName = 青ガマガエル;
    genreNames = ( );
    artistName = <null>;
    moodNames = ( );
    releaseDate = <null>;
    mediaIdentifier = ID1;
}

The results are not stable; sometimes, for example, everything else comes back null. I have tried everything I can think of, and it is taking a long time. Does anyone have any advice on this?
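Until the behavior is consistent, one defensive sketch for the handler side is to resolve against both fields, since the simulator puts the vocabulary identifier in mediaName while the device puts the phrase there and the identifier in mediaIdentifier. This is a workaround sketch, not a fix for the underlying instability; lookupByIdentifier and lookupByTitle are hypothetical helpers into your own song catalog.

import Intents

// Workaround sketch: accept either shape of INMediaSearch.
// lookupByIdentifier / lookupByTitle are hypothetical catalog helpers.
func resolveSongID(from search: INMediaSearch) -> String? {
    if let id = search.mediaIdentifier {
        return id                                  // device path: identifier populated
    }
    guard let name = search.mediaName else { return nil }
    // Simulator path: mediaName may carry the VocabularyItemIdentifier;
    // on device it carries the VocabularyItemPhrase instead.
    return lookupByIdentifier(name) ?? lookupByTitle(name)
}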
1
0
219
Jun ’24
How to add support for Siri / Apple Intelligence to my existing AppEntity?
iOS 18 adds a dedicated macro for exposing your search app intent, app entities, and so on to Siri, but how are you meant to adopt it on your existing types without removing them entirely for users on iOS versions earlier than 18? For example, I get the following error: AssistantIntent(schema:) is only available in iOS 18 or newer. Add @available attribute to enclosing struct. I don't want to do that, since I still want to support iOS 17 users with my existing shortcuts. Do I need to duplicate my entire shortcuts model to add the new macro?
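For what it's worth, one workaround pattern people reach for (a sketch only, not a confirmed Apple-recommended answer): leave the existing intent untouched for iOS 17, and add a thin iOS 18-only declaration that carries the macro and delegates to the same underlying model, so only the wrapper is duplicated rather than the whole shortcuts model. The names and schema path below are illustrative, not from the docs.

import AppIntents

// Existing intent stays as-is and keeps serving iOS 17 users.
struct SearchIntent: AppIntent {
    static var title: LocalizedStringResource = "Search"
    @Parameter(title: "Criteria") var criteria: String
    func perform() async throws -> some IntentResult { .result() }
}

// Thin iOS 18-only declaration carrying the macro; @available satisfies
// the compiler's demand without gating the original type.
@available(iOS 18.0, *)
@AssistantIntent(schema: .system.search)   // illustrative schema path
struct AssistantSearchIntent {
    var criteria: StringSearchCriteria      // schema-defined property (illustrative)
    func perform() async throws -> some IntentResult { .result() }
}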
1
0
541
Jun ’24
Siri enters loop of requesting parameter when running AppIntent
I want to add Shortcuts and Siri support using the new AppIntents framework. Running my intent from Shortcuts or from Spotlight works fine, as the touch-based UI for the disambiguation is shown. However, when I ask Siri to perform this action, she gets into a loop of asking me the question to set the parameter.

My AppIntent is implemented as follows:

struct StartSessionIntent: AppIntent {
    static var title: LocalizedStringResource = "start_recording"

    @Parameter(title: "activity", requestValueDialog: IntentDialog("which_activity"))
    var activity: ActivityEntity

    @MainActor
    func perform() async throws -> some IntentResult & ProvidesDialog {
        let activityToSelect: ActivityEntity = self.activity
        guard let selectedActivity = Activity[activityToSelect.name] else {
            return .result(dialog: "activity_not_found")
        }
        ...
        return .result(dialog: "recording_started \(selectedActivity.name.localized())")
    }
}

The ActivityEntity is implemented like this:

struct ActivityEntity: AppEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "activity")

    typealias DefaultQuery = MobilityActivityQuery
    static var defaultQuery: MobilityActivityQuery = MobilityActivityQuery()

    var id: String
    var name: String
    var icon: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(self.name.localized())", image: .init(systemName: self.icon))
    }
}

struct MobilityActivityQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [ActivityEntity] {
        Activity.all()?.compactMap({ activity in
            identifiers.contains(where: { $0 == activity.name })
                ? ActivityEntity(id: activity.name, name: activity.name, icon: activity.icon)
                : nil
        }) ?? []
    }

    func suggestedEntities() async throws -> [ActivityEntity] {
        Activity.all()?.compactMap({ activity in
            ActivityEntity(id: activity.name, name: activity.name, icon: activity.icon)
        }) ?? []
    }
}

Does anyone have an idea what might be causing this, and how I can fix this behavior? Thanks in advance.
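In case it helps while debugging: one commonly suggested cause for this kind of loop (an assumption here, not confirmed) is that Siri has no way to match the user's spoken value against the entities, because the query only resolves exact identifiers. Conforming the query to EntityStringQuery gives Siri a string-matching path; a minimal sketch, reusing the types from the post above:

import AppIntents

// Sketch: lets Siri resolve a spoken activity name to an entity.
// Assumes Activity.all() and ActivityEntity exist as shown above.
extension MobilityActivityQuery: EntityStringQuery {
    func entities(matching string: String) async throws -> [ActivityEntity] {
        (Activity.all() ?? [])
            .filter { $0.name.localizedCaseInsensitiveContains(string) }
            .map { ActivityEntity(id: $0.name, name: $0.name, icon: $0.icon) }
    }
}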
3
3
1k
Jun ’23
Image Playground
Are there going to be any sessions on the Image Playground API for iOS? "Explore machine learning on Apple platforms" mentions Writing Tools and points to sessions, but only mentions Image Playground without pointing to any sessions.
0
5
410
Jun ’24
"Error: Intent of type INStartCallIntent is not supported for this app category"
I am trying to make a VoIP CarPlay app using Siri.

let assistant = CPAssistantCellConfiguration(position: .top, visibility: .always, assistantAction: .startCall)
let siriTemplate = CPListTemplate(title: "Siri", sections: [sectionItems, loadingSection], assistantCellConfiguration: assistant)
siriTemplate.tabSystemItem = .recents
siriTemplate.showsTabBadge = false

Using the above code gives me the error "Error: Intent of type INStartCallIntent is not supported for this app category" on app launch. I have INStartCallIntent in my app's Info.plist, I have all the entitlements, and I have "business" as the app category, but I can find zero help online with this. What does this error really mean, and how can I fix it?
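One hedged guess, since the post doesn't show the entitlements file: the .startCall assistant action is tied to which CarPlay app type your entitlement grants (a communication/VoIP app), not to the App Store category, so a calling app would carry something like the following in its .entitlements file. Treat this as a pointer to verify, not a confirmed diagnosis:

<key>com.apple.developer.carplay-communication</key>
<true/>

If the app currently holds only the carplay-audio entitlement, that mismatch would be consistent with the "not supported for this app category" wording.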
2
0
333
Jun ’24
"accelerate everyday tasks" in apps without intents?
From https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/: Powered by Apple Intelligence, Siri becomes more deeply integrated into the system experience. With richer language-understanding capabilities, Siri is more natural, more contextually relevant, and more personal, with the ability to simplify and accelerate everyday tasks. From https://developer.apple.com/apple-intelligence/: Siri is more natural, more personal, and more deeply integrated into the system. Apple Intelligence provides Siri with enhanced action capabilities, and developers can take advantage of pre-defined and pre-trained App Intents across a range of domains to not only give Siri the ability to take actions in your app, but to make your app’s actions more discoverable in places like Spotlight, the Shortcuts app, Control Center, and more. SiriKit adopters will benefit from Siri’s enhanced conversational capabilities with no additional work. And with App Entities, Siri can understand content from your app and provide users with information from your app from anywhere in the system. Based on this, as well as the video at https://developer.apple.com/videos/play/wwdc2024/10133/ , my understanding is that in order for Siri to be able to execute tasks in applications, those applications must implement the Siri Intents API. Can someone at Apple please clarify: will it be possible for Siri or some other aspect of Apple Intelligence / Core ML / Create ML to take actions in applications which do not support these APIs (e.g. web apps, Citrix apps, legacy apps)? Thank you!
1
1
413
Jun ’24
Custom Dataset Training With YOLO model on Mac M1 Pro
Hi, I have only recently started working on ML on my Mac M1 Pro; previously I was working on a Windows platform. I am having difficulty getting my machine set up so that it's ready for the fast training I was hoping for when I got it. Please help me with this and let me know if and where I am going wrong.

I tried custom-dataset training using a YOLOv8 model, and I want to train for 100 epochs. The same dataset and hyperparameters take about 2.5 hours on a T4 GPU on Google Colab, whereas I was only at around 60 epochs after 24 hours on my M1 Pro. I have Homebrew, Miniconda, and the PyTorch nightly build for macOS installed, and I set the device to mps when training the YOLO model. This feels really slow. What should I be doing differently?

Thank you,
Lakshmi
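A first sanity check worth running before digging deeper (a sketch; it assumes Ultralytics is driving training through PyTorch, as described above):

import torch

# If either of these prints False, training is silently falling back to
# the CPU, which would explain Colab-vs-M1 gaps far larger than expected.
print(torch.backends.mps.is_available())  # Metal device visible?
print(torch.backends.mps.is_built())      # PyTorch built with MPS support?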
0
1
429
May ’24
jax installation
I followed the instructions at https://developer.apple.com/metal/jax/ and got

Successfully installed importlib-metadata-7.1.0 jax-0.4.28 jax-metal-0.0.7 jaxlib-0.4.28 opt-einsum-3.3.0 scipy-1.13.0 six-1.16.0 zipp-3.18.2

but the test failed:

python -c 'import jax; print(jax.numpy.arange(10))'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/__init__.py", line 37, in <module>
    import jax.core as _core
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/core.py", line 18, in <module>
    from jax._src.core import (
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/core.py", line 39, in <module>
    from jax._src import dtypes
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/dtypes.py", line 33, in <module>
    from jax._src import config
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/config.py", line 27, in <module>
    from jax._src import lib
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/lib/__init__.py", line 84, in <module>
    cpu_feature_guard.check_cpu_features()
RuntimeError: This version of jaxlib was built using AVX instructions, which your CPU and/or operating system do not support. You may be able work around this issue by building jaxlib from source.
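A hedged reading of that error: AVX is an Intel-only instruction set, so seeing this on Apple silicon usually means an x86_64 (Rosetta) Python is running and pip pulled Intel wheels. A one-line check, assuming nothing else about the environment:

import platform

# 'arm64' means a native build; 'x86_64' means this Python runs under
# Rosetta — reinstall a native arm64 Python/conda before retrying jax-metal.
print(platform.machine())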
0
0
620
May ’24
Crash due to SNMLModelFactory "try!" expression
Hello there,

We currently have a crash in prod when executing the following line:

let classificationRequest = try SNClassifySoundRequest(classifierIdentifier: .version1)

It appears to only happen on iOS 17+, and only when regaining audio focus after an interruption while in a background state. We are aware this call probably fails because it is happening from a background state; however, I would expect the SNClassifySoundRequest initializer to throw some kind of error, since it is already an initializer that throws. If it is possible for the call to fail under certain circumstances, could SNMLModelFactory throw an error instead of using try!?

Full trace below:

SoundAnalysis/SNMLModelFactory.swift:112: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Failed to build the model execution plan using a model architecture file '/System/Library/Frameworks/SoundAnalysis.framework/SNSoundClassifierVersion1Model.mlmodelc/model1/model.espresso.net' with error code: -1." UserInfo={NSLocalizedDescription=Failed to build the model execution plan using a model architecture file '/System/Library/Frameworks/SoundAnalysis.framework/SNSoundClassifierVersion1Model.mlmodelc/model1/model.espresso.net' with error code: -1.}
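For anyone needing a stopgap, a defensive sketch under the assumption that the background state really is the trigger (this works around the crash; it does not fix the try! inside SNMLModelFactory):

import SoundAnalysis
import UIKit

@MainActor
func makeClassificationRequest() throws -> SNClassifySoundRequest? {
    // Defer creation while backgrounded; retry once the app is active again.
    guard UIApplication.shared.applicationState != .background else { return nil }
    return try SNClassifySoundRequest(classifierIdentifier: .version1)
}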
1
3
405
May ’24
Random crash from AVFAudio library
Hi everyone! I'm getting random crashes when using the Speech Recognizer functionality in my app. This is an old bug (reported on the Apple forums for eight years), and I would really appreciate it if anyone from Apple could find a fix for these crashes. Can anyone also help me understand what I could do to keep the Speech Recognizer functionality available in my app while avoiding these crashes (whether with another native library or a CocoaPods library)?

Here is my code, along with the crash log for it.

Code:

func startRecording() {
    startStopRecordBtn.setImage(UIImage(#imageLiteral(resourceName: "microphone_off")), for: .normal)
    if UserDefaults.standard.bool(forKey: Constants.darkTheme) {
        commentTextView.textColor = .white
    } else {
        commentTextView.textColor = .black
    }
    commentTextView.isUserInteractionEnabled = false
    recordingLabel.text = Constants.recording

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }

    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSession.Category.record)
        try audioSession.setMode(AVAudioSession.Mode.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        showAlertWithTitle(message: Constants.error)
    }

    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let inputNode = audioEngine.inputNode
    guard let recognitionRequest = recognitionRequest else {
        fatalError(Constants.error)
    }
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        if result != nil {
            self.commentTextView.text = result?.bestTranscription.formattedString
            isFinal = (result?.isFinal)!
        }
        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            self.startStopRecordBtn.isEnabled = true
        }
    })

    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in // CRASH HERE
        self?.recognitionRequest?.append(buffer)
    }

    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
}

Here is the crash log:

Thanks very much for reading this!
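Since the crash log isn't shown above, one assumption-laden lead to check: installTap(onBus:) raises an uncatchable NSException when the input node reports an invalid format (0 Hz or 0 channels), which can happen right after an audio-session interruption or when the microphone is unavailable. A cheap guard before installing the tap, as a sketch:

// Sketch: bail out instead of installing a tap on an invalid format.
let recordingFormat = inputNode.outputFormat(forBus: 0)
guard recordingFormat.sampleRate > 0, recordingFormat.channelCount > 0 else {
    showAlertWithTitle(message: Constants.error)
    return
}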
2
0
395
May ’24
Cannot assign a device for operation RandomUniform on M3 MacBook Pro, macOS 14.4.1
Cannot assign a device for operation encoder/down1/downs_0/conv1/weight/Initializer/random_uniform/RandomUniform: Could not satisfy explicit device specification '' because the node {{colocation_node encoder/down1/downs_0/conv1/weight/Initializer/random_uniform/RandomUniform}} was colocated with a group of nodes that required incompatible device '/device:GPU:0'. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0].

Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[])
Identity: GPU CPU
Mul: GPU CPU
AddV2: GPU CPU
Sub: GPU CPU
RandomUniform: GPU CPU
Assign: CPU
VariableV2: GPU CPU
Const: GPU CPU
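A hedged note on what the dump says: the Assign op in this colocation group supports only CPU (supported_device_types_=[CPU]), so pinning the whole group to /device:GPU:0 cannot be satisfied. If this is TF1-style graph code, as the VariableV2/Assign ops suggest, allowing soft placement is one common way out:

import tensorflow.compat.v1 as tf1

# Let TensorFlow fall back to CPU for ops (like Assign here) that have no
# GPU kernel, instead of failing the explicit /device:GPU:0 colocation.
config = tf1.ConfigProto(allow_soft_placement=True)
sess = tf1.Session(config=config)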
0
0
292
May ’24
jax-metal fails to install on M1 clean environment
Hi all, I'm having trouble even getting the latest version of jax-metal to install on my M1 MacBook Pro. In a clean conda environment, I pip install jax-metal and get:

In [1]: import jax; print(jax.numpy.arange(10))
Platform 'METAL' is experimental and not all JAX functionality may be correctly supported!
---------------------------------------------------------------------------
XlaRuntimeError                           Traceback (most recent call last)
    [... skipping hidden 1 frame]

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:977, in _init_backend(platform)
    976 logger.debug("Initializing backend '%s'", platform)
--> 977 backend = registration.factory()
    978 # TODO(skye): consider raising more descriptive errors directly from backend
    979 # factories instead of returning None.

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:666, in register_plugin.<locals>.factory()
    665 if not xla_client.pjrt_plugin_initialized(plugin_name):
--> 666     xla_client.initialize_pjrt_plugin(plugin_name)
    667 updated_options = {}

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jaxlib/xla_client.py:176, in initialize_pjrt_plugin(plugin_name)
    169 """Initializes a PJRT plugin.
    170
    171 The plugin needs to be loaded first (through load_pjrt_plugin_dynamically or
   (...)
    174   plugin_name: the name of the PJRT plugin.
    175 """
--> 176 _xla.initialize_pjrt_plugin(plugin_name)

XlaRuntimeError: INVALID_ARGUMENT: Mismatched PJRT plugin PJRT API version (0.47) and framework PJRT API version 0.51).

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
Cell In[1], line 1
----> 1 import jax; print(jax.numpy.arange(10))

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py:2952, in arange(start, stop, step, dtype)
   2950     ceil_ = ufuncs.ceil if isinstance(start, core.Tracer) else np.ceil
   2951     start = ceil_(start).astype(int)  # type: ignore
-> 2952     return lax.iota(dtype, start)
   2953 else:
   2954     if step is None and start == 0 and stop is not None:

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/lax/lax.py:1282, in iota(dtype, size)
   1277 def iota(dtype: DTypeLike, size: int) -> Array:
   1278   """Wraps XLA's `Iota
   1279   <https://www.tensorflow.org/xla/operation_semantics#iota>`_
   1280   operator.
   1281   """
-> 1282   return broadcasted_iota(dtype, (size,), 0)

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/lax/lax.py:1292, in broadcasted_iota(dtype, shape, dimension)
   1289 static_shape = [None if isinstance(d, core.Tracer) else d for d in shape]
   1290 dimension = core.concrete_or_error(
   1291     int, dimension, "dimension argument of lax.broadcasted_iota")
-> 1292 return iota_p.bind(*dynamic_shape, dtype=dtype, shape=tuple(static_shape),
   1293                    dimension=dimension)

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:387, in Primitive.bind(self, *args, **params)
    384 def bind(self, *args, **params):
    385   assert (not config.enable_checks.value or
    386           all(isinstance(arg, Tracer) or valid_jaxtype(arg) for arg in args)), args
--> 387   return self.bind_with_trace(find_top_trace(args), args, params)

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:391, in Primitive.bind_with_trace(self, trace, args, params)
    389 def bind_with_trace(self, trace, args, params):
    390   with pop_level(trace.level):
--> 391     out = trace.process_primitive(self, map(trace.full_raise, args), params)
    392   return map(full_lower, out) if self.multiple_results else full_lower(out)

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:879, in EvalTrace.process_primitive(self, primitive, tracers, params)
    877   return call_impl_with_key_reuse_checks(primitive, primitive.impl, *tracers, **params)
    878 else:
--> 879   return primitive.impl(*tracers, **params)

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/dispatch.py:86, in apply_primitive(prim, *args, **params)
    84 prev = lib.jax_jit.swap_thread_local_state_disable_jit(False)
    85 try:
---> 86   outs = fun(*args)
    87 finally:
    88   lib.jax_jit.swap_thread_local_state_disable_jit(prev)

    [... skipping hidden 17 frame]

File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:902, in backends()
    900 else:
    901   err_msg += " (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)"
--> 902   raise RuntimeError(err_msg)
    904 assert _default_backend is not None
    905 if not config.jax_platforms.value:

RuntimeError: Unable to initialize backend 'METAL': INVALID_ARGUMENT: Mismatched PJRT plugin PJRT API version (0.47) and framework PJRT API version 0.51). (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)

jax.__version__ is 0.4.27.
3
0
720
May ’24
Mismatch between CPU inference and GPU inference using metal-trained GRU
Regardless of the installation version combinations of tensorflow & metal (2.14, 2.15, 2.16), I find a metal/non-metal incompatibility for some layer types. For the GRU layer, for example, metal-trained weights (model.save_weights()/load_weights()) are not compatible with inference using the CPU. That is, train a model using metal, run inference using metal, then run inference again after uninstalling metal, and the results differ -- sometimes a night and day difference. This essentially eliminates the usefulness of tensorflow-metal for me. From my limited testing, models using other, simple combinations of layer types including Dense and LSTM do not show this problem. Just the GRU. And by "testing" I mean really simple models, like one GRU layer. Apple Framework Metal Team: You are doing very useful work, and I kindly ask, please address this bug :)
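For anyone trying to reproduce, a minimal sketch along the lines described (assumes tensorflow and tensorflow-metal are installed; run once with the plugin and once after uninstalling it, comparing the printed values — the first run saves the weights, later runs reload them):

import os.path
import numpy as np
import tensorflow as tf

# Fixed input so runs are comparable across backends.
rng = np.random.default_rng(0)
x = rng.random((8, 10, 4)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10, 4)),
    tf.keras.layers.GRU(8),
])

# Share one set of weights across the metal and non-metal runs.
if os.path.exists("gru.weights.h5"):
    model.load_weights("gru.weights.h5")
else:
    model.save_weights("gru.weights.h5")

# These numbers should match with and without tensorflow-metal;
# per the report above, for GRU they do not.
print(model.predict(x, verbose=0)[0])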
1
1
347
May ’24
Tensor Metal Plugin installation error
I am using a MacBook Pro with an M2 Pro chip. I was trying to work with TensorFlow, but I encountered an illegal hardware instruction error. To resolve it I initiated the installation of the Metal plugin, which is throwing the following error:

Expected end or semicolon (after version specifier)
    awscli>=1.16.100boto3>=1.9.100
           ~~~~~~~~~~~^
Unable to locate awscli
[end of output]
0
0
393
Apr ’24
Every second epoch takes zero seconds using ImageDataGenerator and TensorFlow
When fitting a CNN model, every second epoch takes zero seconds and produces OUT_OF_RANGE warnings. I'm using structured folders of categorical images for training and validation. Here is the warning message that occurs after every second epoch; the fitting looks like this:

37/37 ━━━━━━━━━━━━━━━━━━━━ 14s 337ms/step - accuracy: 0.5255 - loss: 1.0819 - val_accuracy: 0.2578 - val_loss: 2.4472
Epoch 4/20
37/37 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.5312 - loss: 1.1106 - val_accuracy: 0.1250 - val_loss: 3.0711
Epoch 5/20
2024-04-19 09:22:51.673909: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node IteratorGetNext}}]]
2024-04-19 09:22:51.673928: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node IteratorGetNext}}]]
[[IteratorGetNext/_59]]
2024-04-19 09:22:51.673940: I tensorflow/core/framework/local_rendezvous.cc:422] Local rendezvous recv item cancelled. Key hash: 10431687783238222105
2024-04-19 09:22:51.673944: I tensorflow/core/framework/local_rendezvous.cc:422] Local rendezvous recv item cancelled. Key hash: 17360824274615977051
2024-04-19 09:22:51.673955: I tensorflow/core/framework/local_rendezvous.cc:422] Local rendezvous recv item cancelled. Key hash: 10732905483452597729

My setup is:

TensorFlow version: 2.16.1
Python 3.9.19 (main, Mar 21 2024, 12:07:41) [Clang 14.0.6]
Pandas 2.2.2
Scikit-Learn 1.4.2
GPU is available

My generators are:

train_generator = datagen.flow_from_directory(
    scalp_dir_train,          # directory
    target_size=(256, 256),   # all images found will be resized
    batch_size=32,
    class_mode='categorical'
    # subset='training'       # specify the subset as training
)
n_samples = train_generator.samples  # gets the number of samples

validation_generator = datagen.flow_from_directory(
    scalp_dir_test,           # directory path
    target_size=(256, 256),
    batch_size=32,
    class_mode='categorical'
    # subset='validation'     # specifying the subset as validation
)

Here is my model:

early_stopping_monitor = EarlyStopping(patience=10, restore_best_weights=True)

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers import SGD

optimizer = Adam(learning_rate=0.01)

model = Sequential()
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', input_shape=(256, 256, 3)))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.3))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Dense(256, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(4, activation='softmax'))  # defined by the number of classes

model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

Here is the fit:

history = model.fit(
    train_generator,
    steps_per_epoch=37,
    epochs=20,
    validation_data=validation_generator,
    validation_steps=12,
    callbacks=[early_stopping_monitor]
    # verbose=2
)
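A hedged guess at the cause, given TF 2.16 and a Keras generator: model.fit can exhaust a finite iterator, and when steps_per_epoch doesn't line up with the actual number of batches, every other epoch starts on an already-drained iterator and ends instantly with OUT_OF_RANGE. One sketch of a fix (assuming the generators above) is to drop the hard-coded 37/12 and derive the steps from the generators themselves:

import math

steps_per_epoch = math.ceil(train_generator.samples / train_generator.batch_size)
validation_steps = math.ceil(validation_generator.samples / validation_generator.batch_size)

history = model.fit(
    train_generator,
    epochs=20,
    steps_per_epoch=steps_per_epoch,
    validation_data=validation_generator,
    validation_steps=validation_steps,
    callbacks=[early_stopping_monitor],
)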
0
0
539
Apr ’24
jax.numpy.insert returning incorrect results when jitted
Hi, I just noticed that the jax.numpy.insert() function returns an incorrect result (zero-padding the array instead of inserting) when compiled with jax.jit. When not jitted, the results are correct.

Config: M1 Pro MacBook Pro 2021; Python 3.12.3; jax-metal 0.0.6; jax 0.4.26; jaxlib 0.4.23

MWE:

import jax
import jax.numpy as jnp

x = jnp.arange(20).reshape(5, 4)
print(f"{x=}\n")

def return_arr_with_ins(arr, ins):
    return jnp.insert(arr, 2, ins, axis=1)

x2 = return_arr_with_ins(x, 99)
print(f"{x2=}\n")

return_arr_with_ins_jit = jax.jit(return_arr_with_ins)
x3 = return_arr_with_ins_jit(x, 99)
print(f"{x3=}\n")

Output: x2 (computed with the non-jitted function) is correct; x3 just has zero-padding instead of a column of 99s:

x=Array([[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15],
         [16, 17, 18, 19]], dtype=int32)

x2=Array([[ 0,  1, 99,  2,  3],
          [ 4,  5, 99,  6,  7],
          [ 8,  9, 99, 10, 11],
          [12, 13, 99, 14, 15],
          [16, 17, 99, 18, 19]], dtype=int32)

x3=Array([[ 0,  1,  2,  3,  0],
          [ 4,  5,  6,  7,  0],
          [ 8,  9, 10, 11,  0],
          [12, 13, 14, 15,  0],
          [16, 17, 18, 19,  0]], dtype=int32)

The same code run on a non-Metal machine gives the correct results:

x=Array([[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15],
         [16, 17, 18, 19]], dtype=int32)

x2=Array([[ 0,  1, 99,  2,  3],
          [ 4,  5, 99,  6,  7],
          [ 8,  9, 99, 10, 11],
          [12, 13, 99, 14, 15],
          [16, 17, 99, 18, 19]], dtype=int32)

x3=Array([[ 0,  1, 99,  2,  3],
          [ 4,  5, 99,  6,  7],
          [ 8,  9, 99, 10, 11],
          [12, 13, 99, 14, 15],
          [16, 17, 99, 18, 19]], dtype=int32)

Not sure if this is the correct channel for bug reports; please feel free to let me know if there's a more appropriate place!
0
0
325
Apr ’24
Seeking Assistance with Bulk Input of Pronunciation Corrections for iOS
Hello iOS Developer Community, I hope this message finds you healthy and happy. I am reaching out to seek your expertise and assistance with a particular challenge I’ve encountered while using the Speak Screen and Speak Selection features on iOS. As you may know, these features are incredibly useful for reading text aloud, but they sometimes struggle with the correct pronunciation of homographs—words that are spelled the same but have different meanings and pronunciations. An example of this is the word “live,” which can be pronounced differently based on the context of the sentence. To enhance my user experience, I am looking to input corrections for the pronunciation of “live” in its “happening now” context, such as in “live broadcast” or “live event.” However, the current process requires manual entry for each phrase, which is quite labor-intensive. I am wondering if there is a way to automate or streamline this process, perhaps through a shortcut or script that allows for bulk input of these corrections. Additionally, if anyone has already compiled a list of common phrases with homographs and their correct pronunciations, I would greatly appreciate it if you could share it or guide me on where to find such resources. Your insights and guidance on this matter would be invaluable, and I believe any solutions could benefit not just myself but many other users facing similar issues. Thank you for your time and consideration. I look forward to any suggestions or advice you may have. Best regards, Alec
0
0
311
Apr ’24