Hello!
I have a TrackNet model that I have converted to CoreML (.mlpackage) using coremltools, and the conversion process appears to go smoothly: I get the .mlpackage I expect, with the weights and the model.mlmodel file inside the package. However, when I drag it into Xcode, it just shows up as 4 script tags instead of the model "interface" that is typically expected. I was initially concerned that my model was not compatible with CoreML, but from logging the conversion, everything seems to be converted properly.
I have some code that may be relevant in debugging this issue.
How I use the model:
model = BallTrackerNet() # this is the model architecture which will be referenced later
device = self.device # cpu
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location=device)) # balltrackerbest is the weights
model = model.to(device)
model.eval()
Here is the BallTrackerNet() model itself:
import torch.nn as nn
import torch
class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pad=1, stride=1, bias=True):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=bias),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return self.block(x)

class BallTrackerNet(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.out_channels = out_channels
        self.conv1 = ConvBlock(in_channels=9, out_channels=64)
        self.conv2 = ConvBlock(in_channels=64, out_channels=64)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = ConvBlock(in_channels=64, out_channels=128)
        self.conv4 = ConvBlock(in_channels=128, out_channels=128)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5 = ConvBlock(in_channels=128, out_channels=256)
        self.conv6 = ConvBlock(in_channels=256, out_channels=256)
        self.conv7 = ConvBlock(in_channels=256, out_channels=256)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv8 = ConvBlock(in_channels=256, out_channels=512)
        self.conv9 = ConvBlock(in_channels=512, out_channels=512)
        self.conv10 = ConvBlock(in_channels=512, out_channels=512)
        self.ups1 = nn.Upsample(scale_factor=2)
        self.conv11 = ConvBlock(in_channels=512, out_channels=256)
        self.conv12 = ConvBlock(in_channels=256, out_channels=256)
        self.conv13 = ConvBlock(in_channels=256, out_channels=256)
        self.ups2 = nn.Upsample(scale_factor=2)
        self.conv14 = ConvBlock(in_channels=256, out_channels=128)
        self.conv15 = ConvBlock(in_channels=128, out_channels=128)
        self.ups3 = nn.Upsample(scale_factor=2)
        self.conv16 = ConvBlock(in_channels=128, out_channels=64)
        self.conv17 = ConvBlock(in_channels=64, out_channels=64)
        self.conv18 = ConvBlock(in_channels=64, out_channels=self.out_channels)
        self.softmax = nn.Softmax(dim=1)
        self._init_weights()

    def forward(self, x, testing=False):
        batch_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ups1(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.ups2(x)
        x = self.conv14(x)
        x = self.conv15(x)
        x = self.ups3(x)
        x = self.conv16(x)
        x = self.conv17(x)
        x = self.conv18(x)
        # x = self.softmax(x)
        out = x.reshape(batch_size, self.out_channels, -1)
        if testing:
            out = self.softmax(out)
        return out

    def _init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.uniform_(module.weight, -0.05, 0.05)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)
            elif isinstance(module, nn.BatchNorm2d):
                nn.init.constant_(module.weight, 1)
                nn.init.constant_(module.bias, 0)
Here is also the meta data of my model:
[
  {
    "metadataOutputVersion" : "3.0",
    "storagePrecision" : "Float16",
    "outputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float32",
        "formattedType" : "MultiArray (Float32 1 × 256 × 230400)",
        "shortDescription" : "",
        "shape" : "[1, 256, 230400]",
        "name" : "var_462",
        "type" : "MultiArray"
      }
    ],
    "modelParameters" : [
    ],
    "specificationVersion" : 6,
    "mlProgramOperationTypeHistogram" : {
      "Cast" : 2,
      "Conv" : 18,
      "Relu" : 18,
      "BatchNorm" : 18,
      "Reshape" : 1,
      "UpsampleNearestNeighbor" : 3,
      "MaxPool" : 3
    },
    "computePrecision" : "Mixed (Float16, Float32, Int32)",
    "isUpdatable" : "0",
    "availability" : {
      "macOS" : "12.0",
      "tvOS" : "15.0",
      "visionOS" : "1.0",
      "watchOS" : "8.0",
      "iOS" : "15.0",
      "macCatalyst" : "15.0"
    },
    "modelType" : {
      "name" : "MLModelType_mlProgram"
    },
    "userDefinedMetadata" : {
      "com.github.apple.coremltools.source_dialect" : "TorchScript",
      "com.github.apple.coremltools.source" : "torch==2.5.1",
      "com.github.apple.coremltools.version" : "8.1"
    },
    "inputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float32",
        "formattedType" : "MultiArray (Float32 1 × 9 × 360 × 640)",
        "shortDescription" : "",
        "shape" : "[1, 9, 360, 640]",
        "name" : "input_frames",
        "type" : "MultiArray"
      }
    ],
    "generatedClassName" : "BallTracker",
    "method" : "predict"
  }
]
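For reference, the conversion follows the standard TorchScript trace-and-convert flow; here is a minimal sketch that matches the metadata above (simplified, and not necessarily the exact script used; the input name, shape, and deployment target are taken from the metadata):

import coremltools as ct
import numpy as np
import torch

model = BallTrackerNet()
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location="cpu"))
model.eval()

# Trace with a dummy 9-channel 360x640 input
example_input = torch.rand(1, 9, 360, 640)
traced = torch.jit.trace(model, example_input)

# Convert to an ML Program (.mlpackage); iOS 15 corresponds to specificationVersion 6
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_frames", shape=(1, 9, 360, 640))],
    minimum_deployment_target=ct.target.iOS15,
    convert_to="mlprogram",
)
mlmodel.save("BallTracker.mlpackage")

# Quick sanity check from Python before dragging the package into Xcode
out = mlmodel.predict({"input_frames": np.random.rand(1, 9, 360, 640).astype(np.float32)})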
I have been struggling with this conversion for almost 2 weeks now so any help, ideas or pointers would be greatly appreciated! Let me know if any other information would be helpful to see as well.
Thanks!
Michael
I am trying to run TinyLlama directly using Swift Playgrounds for iOS. I have tried multiple solutions, such as libraries (LLM.swift, swift-transformers, ...), which never worked due to import issues, and I also tried importing an exported mlmodel.
For the latter, I followed the article about Llama 3.1 on CoreML. It was hard to understand how to do inference with it, but I was able to export an mlpackage, which I then placed in an Xcode project to generate the mlmodelc (compiled model) and the model class. I had to go with the first version described in the article, without optimizations, as I got errors during model loading with the flexible input shapes. I was able to run the model for one-token generation.
But my biggest problem is that, though the mlmodelc is only 550 MiB, the model loads 24+ GiB of memory, largely exceeding what I can have on an iOS device.
Is there a way to do LLM inference in Swift Playgrounds at a reasonable speed (even 1 token/s would be sufficient)?
I see the suggested solution is simply to "just change the language in the build settings," but build settings are not a thing in an App Playground project. It also reports duplicated tasks.
Hi all,
I'm working on an app to classify dog breeds via CoreML, but when I try training a model using Image Feature Print v2, I get the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
Strangely, when I switch back to Image Feature Print v1, the model trains perfectly fine. I've verified that there aren't any invalid or broken images in my dataset. Is there a fix for this? Thanks!
I have a smallish image classifier I've been working on using the Create ML app. For a while everything was going fine, but lately, as the dataset has gotten larger, Create ML seems to stop during the testing phase with no error or test results.
You can see here that there is no score in the result box, even though there are testing started and completed messages:
No error message is shown in the Create ML app, but I do see these messages in the log:
default 14:25:36.529887-0500 MLRecipeExecutionService [0x6000012bc000] activating connection: mach=false listener=false peer=false name=com.apple.coremedia.videodecoder
default 14:25:36.529978-0500 MLRecipeExecutionService [0x41c5d34c0] activating connection: mach=false listener=true peer=false name=(anonymous)
default 14:25:36.530004-0500 MLRecipeExecutionService [0x41c5d34c0] Channel could not return listener port.
default 14:25:36.530364-0500 MLRecipeExecutionService [0x429a88740] activating connection: mach=false listener=false peer=true name=com.apple.xpc.anonymous.0x41c5d34c0.peer[1167].0x429a88740
default 14:25:36.534523-0500 MLRecipeExecutionService [0x6000012bc000] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534537-0500 MLRecipeExecutionService [0x41c5d34c0] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534544-0500 MLRecipeExecutionService [0x429a88740] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
error 14:25:36.558788-0500 MLRecipeExecutionService CreateWithURL:342: *** ERROR: err=24 (Too many open files) - could not open '<CFURL 0x60000079b540 [0x1fdd32240]>{string = file:///Users/kevin/Library/Mobile%20Documents/com~apple~CloudDocs/Binary%20Formations/Under%20My%20Roof/Core%20ML%20Training%20Data/Household%20Items/Output/2025.01.23_12.55.16/Test/Stove/Test480.webp, encoding = 134217984, base = (null)}'
default 14:25:36.559030-0500 MLRecipeExecutionService Error: <private>
default 14:25:36.559077-0500 MLRecipeExecutionService Error: <private>
Of particular interest is the "Too many open files" message from MLRecipeExecutionService referencing one of the test images.
There are a total of 2,555 test images, which I wouldn't think would be a very large set. The system doesn't seem to be running out of memory or anything like that.
Near the end of the test run, the MLRecipeExecutionService process had 2,934 file descriptors open according to lsof.
Has anyone else run into this or know of a workaround? So far I've tried rebooting and recreating the Create ML project.
Currently using Create ML Version 6.1 (150.3) on macOS 15.2 (24C101) running on a Mac Studio.
import coremltools as ct
from coremltools.models.neural_network import quantization_utils
# load full precision model
model_fp32 = ct.models.MLModel(modelPath)
model_fp16 = quantization_utils.quantize_weights(model_fp32, nbits=16)
model_fp16.save("reduced-model.mlmodel")
I'm testing it with the model from one of Apple's source codes(GameBoardDetector), and it works fine, reduces the model size by half.
But there are several problems with my model (trained in the Create ML app using Full Network):
Quantizing to float 16 does not work (a new file gets created, but its size is reduced by only 0.1 MB).
Quantizing below 16 bits causes errors, and no file gets created.
Here are additional metadata and precisions of models.
Working model's additional metadata and precision:
Mine's additional metadata and precision:
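For completeness, the sub-16-bit attempt is essentially the same quantize_weights call with a lower nbits value; here is a sketch of the kind of call that errors out (the exact arguments may differ from what was actually run, and quantization_mode "linear" is just the default):

import coremltools as ct
from coremltools.models.neural_network import quantization_utils

model_fp32 = ct.models.MLModel(modelPath)  # same modelPath as in the snippet above

# 8-bit linear weight quantization; calls like this fail for the Create ML model
model_int8 = quantization_utils.quantize_weights(
    model_fp32, nbits=8, quantization_mode="linear"
)
model_int8.save("reduced-model-8bit.mlmodel")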
Hey everyone, I am a beginner with developing and using Artificial Intelligence models.
How do I integrate my Create ML image classification model with Swift?
I already have an ML model and I want to integrate it into a SwiftUI app.
If anyone could help, that would be great.
Thank you, O3DP
Hey everyone, I want to add an if statement that would do something along the lines of this:
if confidence = 100% {
}
How could I do this?
I already have a createML model.
Thank you,
Oliver
I would like to make use of Create ML to classify a motion. However, it seems to require at least 2 classes to train or test. What should I do, as I only have 1 class (the target motion)?
Also, how should I interpret the 'Recall' and 'F1 Score' metrics?
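For the metrics question, these are the standard per-class definitions (with TP = true positives, FP = false positives, FN = false negatives), which is what these columns report as far as I know:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

So Recall is the fraction of the actual target motions that the model correctly detects, and F1 is the harmonic mean of Precision and Recall, which is only high when both are high.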
We are using VNRecognizeTextRequest to detect text in documents, and we have noticed that even in some very clear and well-formatted documents, there are still instances where text blocks are missed. Live Text also has the same issue.
When using CoreML for VAE model prediction, the predicted output is visually distorted, with no error messages. How can this issue be addressed?
In this thread, I asked about adding parameters to App Shortcuts. The conclusion that I've drawn so far is that for App Shortcuts, there cannot be any parameters in the prompt, otherwise the system cannot find the AppShortcutsProvider. While this is fine for Shortcuts and non-voice interaction, I'd like to find a way to add parameters to the prompt. Here is the scenario:
My app controls a device that displays some content on "pages." The pages are defined in an AppEnum, which I use for Shortcuts integration via App Intents. The App Intent functions as expected and is able to change the page based on the user's selection within Shortcuts (or prompts for it if using the App Shortcut). What I'd like to do is allow the user to say "Siri, open [app name] with [page]."
So far, the closest I've come to understanding how this works is through the .intentsdefinition file you can create (and SiriKit in general); however, the part that really confused me there is a button in the File Editor that says "Convert to App Intent." To me, this means that I should be able to use the App Intent I've already authored and hook that into Siri, rather than making an entirely new function/code block that does exactly the same thing. Ideally, that's what I want to do.
What's the right way to define this behavior?
p.s. If I had to pick an intent schema in the context of AssistantSchemas, I'd say it's closest to the "Open File" one, if that helps. I'd ultimately like to make the "pages" user-customizable so in the long run, that would be what I'd do.
I've been following along with "App Shortcuts" development but cannot get Siri to run my Intent. The intent on its own works in Shortcuts, along with a couple of others that aren't in the AppShortcutsProvider structure.
I keep getting the following two errors, but cannot figure out why this is occurring with documentation or other forum posts.
No ConnectionContext found for 12909953344
Attempted to fetch App Shortcuts, but couldn't find the AppShortcutsProvider.
Here are the relevant snippets of code -
(1) The AppIntent definition
struct SetBrightnessIntent: AppIntent {
    static var title = LocalizedStringResource("Set Brightness")
    static var description = IntentDescription("Set Glass Display Brightness")

    @Parameter(title: "Level")
    var level: Int?

    static var parameterSummary: some ParameterSummary {
        Summary("Set Brightness to \(\.$level)%")
    }

    func perform() async throws -> some IntentResult {
        guard let level = level else {
            throw $level.needsValueError("Please provide a brightness value")
        }
        if level > 100 || level <= 0 {
            throw $level.needsValueError("Brightness must be between 1 and 100")
        }
        // do stuff with level
        return .result()
    }
}
(2) The AppShortcutsProvider (defined in my iOS app target, there are no other targets)
struct MyAppShortcuts: AppShortcutsProvider {
    static var shortcutTileColor: ShortcutTileColor = .grayBlue

    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: SetBrightnessIntent(),
            phrases: [
                "set \(.applicationName) brightness to \(\.$level)",
                "set \(.applicationName) brightness to \(\.$level) percent"
            ],
            shortTitle: LocalizedStringResource("Set Glass Brightness"),
            systemImageName: "sun.max"
        )
    }
}
Does anything here look wrong? Is there some magical key that I need to specify in Info.plist to get Siri to recognize the AppShortcutsProvider?
On Xcode 16.2 and iOS 18.2 (non-beta).
Note: I posted this to Feedback Assistant but haven't gotten a response for 3 months =( FB13482199
I am trying to train a large image classifier. I have a training run with ~300,000 images. Each class has its own folder, and the file names within the folders are somewhat random; there are 381 classes. I am on an M2 Pro, Sonoma 14.0, running CreateML Version 5.0 (121.1). I would prefer not to pursue the PyTorch/HF -> coremltools route.
CreateML seems to consistently crash ~25,000-30,000 images in, during the feature extraction phase, with "Unexpected Error". It does not seem to be due to an out-of-memory issue. I am looking for some guidance, since it seems impossible to debug why this is consistently crashing.
My initial assumption was that it could be due to blank/corrupt files. I do not think that is the case. I also checked whether there were any special characters in the data/folders. I wasn't able to go through everything, but I did try some programmatic regex checks. I don't think this is the case either.
I attached the sysdiagnose results in Feedback Assistant after the crash happened. I did notice, when going into /var/logs, a write issue saying that the Mac had written too much to disk. Note: I also tried Xcode 15.2-beta this time and the associated CoreML version.
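For reference, the kind of programmatic check meant here for blank/corrupt files is roughly the following (a sketch using Pillow; the dataset path is a placeholder, and the actual script used may have differed):

from pathlib import Path
from PIL import Image  # Pillow

dataset_root = Path("TrainingData")  # placeholder path; one sub-folder per class

bad_files = []
for path in dataset_root.rglob("*"):
    if not path.is_file():
        continue
    try:
        with Image.open(path) as img:
            img.verify()   # integrity check of the encoded data
        with Image.open(path) as img:
            img.load()     # fully decode; catches truncated images
    except Exception as exc:
        bad_files.append((path, exc))

print(f"{len(bad_files)} unreadable files")
for path, exc in bad_files:
    print(path, exc)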
My questions:
How can I fix this?
How should I go about debugging CreateML errors in the future?
'Unexpected Error' - where can I get the exact Create ML logs on my device? This is far too broad an error message.
Please let me know. As a note, I did successfully train a past model on ~100,000 images, and I am planning to 10-15x that if this run is successful. Please help; I've spent a lot of time gathering the extra data and to date have been an occasional power user of Create ML. I haven't heard back from Apple since December =/. I assume I'm not the only one with this problem, so I'm looking for any instructions on how to debug this hands-on and help others. Thx!
It appears that there is a size limit when training a Tabular Classification model in Create ML. When the training data is small, training completes smoothly after the expected period. However, as the data volume increases, the following issue occurs: the training process initially indicates that it is in progress, but after approximately 24 hours it is automatically terminated about an hour later. I am certain that this is not a manual termination by myself or others, but rather an automatic termination by the machine. This issue persists despite numerous attempts, and the only message displayed is "Training Canceled." I would appreciate it if someone could explain the reason behind this behavior and provide a solution. Thank you for your assistance.
Some of my users are experiencing crashes on instantiation of a CoreML model I've bundled with my app. I haven't been able to reproduce the crash on any of my devices. Crashes happen across all iOS 18 releases. Seems like something internal in CoreML is causing an issue.
Full stack trace:
6646631296fb42128ddc340b2d4322f7-symbolicated.crash
Hi,
I want to develop an app which makes use of Image Playground.
However, I am located in Europe, which makes this impossible for me, as Image Playground is not available here, even though I would like to distribute the app in the US.
Both the simulator and a physical device always return that Image Playground is not supported
(@Environment(\.supportsImagePlayground) private var supportsImagePlayground).
How can I set up my environment so that I can test this feature in my iOS application?
I'm trying to use the Spatial model to perform Object Tracking on a .usdz file that I create.
After loading the file, which I can view correctly in the console, I start the training.
Initially, I notice that the disk usage on my PC increases. After several GB, the usage stops, but the training progress remains at 0.00% for hours, with the message "About 8hr."
How can I understand what the issue is? Has anyone else experienced the same problem?
Thanks
Diego
I'm using Numbers to build a spreadsheet that I'm exporting as a CSV. I then import this file into Create ML to train a word tagger model. Everything has been working fine for all the models I've trained so far, but now I'm coming across a use case that has been breaking the import process: commas within the training data. This is a case that none of Apple's examples show.
My project takes Navajo text that has been tokenized by syllables and labels the parts-of-speech.
Case that works...
Raw text:
Naaltsoos yídéeshtah.
Tokens column:
Naal,tsoos, ,yí,déesh,tah,.
Labels column:
NObj,NObj,Space,Verb,Verb,VStem,Punct
Case that breaks...
Raw text:
óola, béésh łigaii, tłʼoh naadą́ą́ʼ, wáin, akʼah, dóó á,shįįh
Tokens column with tokenized text (commas quoted):
óo,la,",", ,béésh, ,łi,gaii,",", ,tłʼoh, ,naa,dą́ą́ʼ,",", ,wáin,",", ,a,kʼah,",", ,dóó, ,á,shįįh
(Create ML reports mismatched columns)
Tokens column with tokenized text (commas escaped):
óo,la,\,, ,béésh, ,łi,gaii,\,, ,tłʼoh, ,naa,dą́ą́ʼ,\,, ,wáin,\,, ,a,kʼah,\,, ,dóó, ,á,shįįh
(Create ML reports mismatched columns)
Tokens column with tokenized text (commas escape-quoted):
óo,la,\",\", ,béésh, ,łi,gaii,\",\", ,tłʼoh, ,naa,dą́ą́ʼ,\",\", ,wáin,\",\", ,a,kʼah,\",\", ,dóó, ,á,shįįh
(record not detected by Create ML)
Tokens column with tokenized text (commas double-quoted):
óo,la,"","", ,béésh, ,łi,gaii,"","", ,tłʼoh, ,naa,dą́ą́ʼ,"","", ,wáin,"","", ,a,kʼah,"","", ,dóó, ,á,shįįh
(Create ML reports mismatched columns)
Labels column:
NSub,NSub,Punct,Space,NSub,Space,NSub,NSub,Punct,Space,NSub,Space,NSub,NSub,Punct,Space,NSub,Punct,Space,NSub,NSub,Punct,Space,Conj,Space,NSub,NSub
Sample From Spreadsheet
Solution Needed
It's simple enough to escape commas within CSV files, but the format needed by Create ML essentially combines entire CSV records into single columns, so I end up needing a CSV record that contains a mixture of commas used as separators and commas used as character literals. That's where this gets complicated.
For this particular use case (which seems like it would frequently arise when training a word tagger model), how should I properly escape a comma literal?
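In case it helps frame the problem, here is a sketch of the same record written out as JSON instead of CSV; my understanding of the Create ML documentation is that the word tagger also accepts JSON training data with the tokens and labels as arrays, which keeps separator commas and literal comma tokens from colliding (the file name and the shortened token list here are just illustrative):

import json

# One training record with tokens and labels as parallel arrays (shortened here)
record = {
    "tokens": ["óo", "la", ",", " ", "béésh", " ", "łi", "gaii", ",", " ", "tłʼoh"],
    "labels": ["NSub", "NSub", "Punct", "Space", "NSub", "Space", "NSub", "NSub", "Punct", "Space", "NSub"],
}

with open("word_tagger_training.json", "w", encoding="utf-8") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)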