preferredMetalDevice shows null for MLBoostedTreeRegressor

I have code that ran about 7x faster on Ventura than it now runs on Sonoma.

For the basic model training I used

let pmst = MLBoostedTreeRegressor.ModelParameters(validation: .split(strategy: .automatic), maxIterations: 10000)

let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)

This took around 2 seconds on Ventura and now takes between 10 and 14 seconds on Sonoma.

I have tried to investigate why, and noticed that when I inspect the model's configuration I see these results:

 useWatchSPIForScribble: NO,
 allowLowPrecisionAccumulationOnGPU: NO,
 allowBackgroundGPUComputeSetting: NO,
 preferredMetalDevice: (null),
 enableTestVectorMode: NO,
 parameters: (null),
 rootModelURL: (null),
 profilingOptions: 0,
 usePreloadedKey: NO,
 trainWithMLCompute: NO,
 parentModelName: ,
 modelName: Unnamed_Model,
 experimentalMLE5EngineUsage: Enable,
 preparesLazily: NO,
 predictionConcurrencyHint: 0,

Why is the preferred Metal Device null?

If I do


let devices = MTLCopyAllDevices()
for device in devices {
    config.preferredMetalDevice = device
    print(device.name)
}

I can see that the M1 chipset is available but not selected (although, from reading the documentation, the default is supposed to be nil?).

Is this the reason why it is so slow? Is there a way to force a change in the config or elsewhere? Why has the default changed, if it has?
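For reference, this is a sketch of how one might try to pin the GPU explicitly through MLModelConfiguration. Note that MLModelConfiguration is the Core ML prediction-side configuration; whether the CreateML training path honors preferredMetalDevice at all is exactly what I am unsure about, and the device-selection logic below is an assumption on my part:

```swift
import CoreML
import Metal

// Sketch only: ask Core ML to prefer a non-low-power GPU.
// It is an assumption that training (rather than prediction)
// picks up this configuration.
let config = MLModelConfiguration()
config.computeUnits = .all  // allow CPU, GPU, and Neural Engine

if let gpu = MTLCopyAllDevices().first(where: { !$0.isLowPower }) {
    config.preferredMetalDevice = gpu
    print("Requesting Metal device: \(gpu.name)")
}

// Hypothetical compiled model class, shown only to illustrate
// where the configuration would be passed:
// let model = try SomeCompiledModel(configuration: config)
```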

  • Not sure what happened to the code above, but it should have read

    print(model.model.configuration)

    which is how I noticed that preferredMetalDevice is null.


Replies

Can you please provide more details? How many rows are you training on? How many and what kind of features? If you prefer, please file a Feedback Assistant issue and include details of your dataset. You can also try profiling with Instruments and attaching the results.

Thanks for reporting.

All datasets are doubles with around 20-50 columns and around 1000 rows. I have noticed the significant slowdown across the board, whether training with just 50 rows or up to 1000. All exhibit the same degree of slowdown. The slowdown scales linearly with the number of iterations performed.
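To quantify "scales linearly with the number of iterations", I used a small timing harness along these lines (a sketch; the DataFrame and target-column parameters stand in for my real data):

```swift
import CreateML
import TabularData
import Foundation

// Sketch: time one training run for a given iteration budget.
// `data` and `target` are placeholders for the real training set.
func timeTraining(data: DataFrame, target: String,
                  maxIterations: Int) throws -> TimeInterval {
    let params = MLBoostedTreeRegressor.ModelParameters(
        validation: .split(strategy: .automatic),
        maxIterations: maxIterations)
    let start = Date()
    _ = try MLBoostedTreeRegressor(trainingData: data,
                                   targetColumn: target,
                                   parameters: params)
    return Date().timeIntervalSince(start)
}

// If the slowdown is linear in iterations, elapsed time should grow
// roughly in proportion to maxIterations on Sonoma:
// for iters in [1000, 5000, 10000] {
//     print(iters, try timeTraining(data: frame, target: "y",
//                                   maxIterations: iters))
// }
```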

  • Can you share your specific macOS versions of Ventura and Sonoma?


Also, what is the training data you use? Is it an MLDataTable or a DataFrame (from the TabularData framework)?

I am using a DataFrame. When I accidentally upgraded to Sonoma the code was using MLDataTable; I switched to a DataFrame after seeing the results, to check whether that was causing the issue. The speeds are the same regardless: a huge degradation of performance.
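A minimal synthetic repro along these lines shows the same behavior for me (the column names and sizes below are made up to match the shape described earlier, around 20-50 Double columns and 1000 rows):

```swift
import TabularData
import CreateML

// Sketch: build a synthetic DataFrame of the same shape as my data.
var frame = DataFrame()
frame.append(column: Column(name: "target",
                            contents: (0..<1000).map { Double($0) }))
for i in 0..<20 {
    frame.append(column: Column(name: "f\(i)",
                                contents: (0..<1000).map { _ in
                                    Double.random(in: 0...1)
                                }))
}

// Same training call as in the original post.
let params = MLBoostedTreeRegressor.ModelParameters(
    validation: .split(strategy: .automatic),
    maxIterations: 10000)
let model = try MLBoostedTreeRegressor(trainingData: frame,
                                       targetColumn: "target",
                                       parameters: params)
```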

Is the preferredMetalDevice meant to be (null)? All documentation I read says it should be nil.

The specific macOS version of Sonoma is 14.0 (23A344). I had kept up with the updates on Ventura, so it would have been the latest version before the rollout.