Migrating from Cocoa

If you are a Cocoa developer, many of the frameworks available in iOS should already seem familiar to you. The basic technology stack in iOS is identical in many respects to the one in OS X. Despite the similarities, however, the frameworks in iOS are not exactly the same as their OS X counterparts.

This chapter describes the differences you may encounter as you create iOS apps and explains how you can adjust to some of the more significant differences.

General Migration Notes

If your Cocoa app is already factored using the Model-View-Controller design pattern, it should be relatively easy to migrate key portions of your app to iOS.

Migrating Your Data Model

Cocoa apps whose data model is based on classes in the Foundation, Core Foundation, or Core Data frameworks can be brought over to iOS with little or no modification. All three frameworks are supported in iOS and are virtually identical to their OS X counterparts. Most of the differences that do exist are relatively minor or are related to features that would need to be removed in the iOS version of your app anyway. For example, Core Data in iOS supports binary and SQLite data stores (not XML data stores) and supports migration from existing Cocoa apps. For a detailed list of framework differences, see “Foundation Framework Differences.”
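
For example, a minimal sketch of opening a SQLite-backed store on iOS might look like the following. The model name (MyModel) and the choice of the Documents directory are assumptions for illustration, and the method could live in your app delegate or a data controller.

    #import <CoreData/CoreData.h>

    // Sketch: open a SQLite store on iOS. "MyModel" and the Documents
    // directory are illustrative choices, not requirements.
    - (NSPersistentStoreCoordinator *)makeCoordinator
    {
        NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"MyModel" withExtension:@"momd"];
        NSManagedObjectModel *model = [[NSManagedObjectModel alloc] initWithContentsOfURL:modelURL];
        NSPersistentStoreCoordinator *psc =
            [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

        NSURL *docsURL = [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory
                                                                 inDomains:NSUserDomainMask] lastObject];
        NSURL *storeURL = [docsURL URLByAppendingPathComponent:@"MyModel.sqlite"];

        NSError *error = nil;
        // NSSQLiteStoreType or NSBinaryStoreType are valid in iOS; NSXMLStoreType is not available.
        if (![psc addPersistentStoreWithType:NSSQLiteStoreType
                               configuration:nil
                                         URL:storeURL
                                     options:nil
                                       error:&error]) {
            NSLog(@"Failed to open store: %@", error);
        }
        return psc;
    }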

If your Cocoa app displays lots of data on the screen, you may want to simplify your data model when migrating it to iOS. Although you can create rich apps with lots of data in iOS, keep in mind that doing so may not serve your users’ needs. Mobile users typically want only the most important information, in the least amount of time. Providing the user with too much data all at once can be impractical because of the more limited screen space. It could also slow down your app because of the extra work required to load that data. Refactoring your Cocoa app’s data structures may be worthwhile if refactoring results in better performance and a better user experience in iOS.

Migrating Your User Interface

The user interface in iOS is very different from that in OS X—in both structure and implementation. Take, for example, the objects that represent views and windows in Cocoa. Although iOS and Cocoa both have objects representing views and windows, the way those objects work differs slightly on each platform. In addition, you must be more selective about what you display in your iOS views because screen size is limited and views that handle touch events must be large enough to provide an adequate target for a user’s finger.

In addition to differences in the view objects themselves, there are significant differences in how you display those views at runtime. For example, when you want to display a lot of data in a Cocoa app, you might increase the window size, use multiple windows, or use tab views to manage that data. In iOS apps, there is only one window, and its size is fixed, so apps must break information into reasonably sized chunks and present those chunks on different sets of views. When you want to present a new chunk of information, you push a new set of views onto the screen, replacing the previous set. Although this approach makes your interface design somewhat more complex, iOS provides considerable support for this type of organization because presenting information within a single fixed window is so central to the platform.

View controllers in iOS are a critical part of managing your user interface. You use view controllers to structure your visual content, to present that content onto the screen, and to handle device-specific behaviors such as orientation changes. View controllers also manage views and work with the system to load and unload those views at appropriate times. Understanding the role of view controllers and how you use them in your app is therefore critical to the design of your user interface.
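
As a rough illustration of this navigation pattern, one view controller can present the next chunk of content by pushing another onto the navigation stack. Both classes below (ListViewController and DetailViewController) and the item property are hypothetical.

    #import <UIKit/UIKit.h>

    // Hypothetical view controller that displays the next chunk of information.
    @interface DetailViewController : UIViewController
    @property (nonatomic, strong) id item;
    @end

    @implementation DetailViewController
    @end

    // Hypothetical list controller that pushes the detail screen onto the
    // navigation stack, replacing the visible set of views.
    @interface ListViewController : UIViewController
    @end

    @implementation ListViewController
    - (void)showDetailForItem:(id)item
    {
        DetailViewController *detail = [[DetailViewController alloc] init];
        detail.item = item;
        [self.navigationController pushViewController:detail animated:YES];
    }
    @end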

For information about view controllers and how you use them to organize and manage your user interface, see View Controller Programming Guide for iOS. For general information about the user interface design principles of iOS, see iOS Human Interface Guidelines. For additional information about the windows and views you use to build your interface, and the underlying architecture on which they are built, see View Programming Guide for iOS.

Memory Management

In iOS and OS X, you manage memory using automatic reference counting (ARC). With this model, the compiler manages memory for you by automatically deallocating objects when they are no longer used by your code. All you have to do is maintain strong references to the objects you want to keep and set those references to nil when you no longer need the objects.
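
A minimal sketch of this model in practice (the DownloadController class is hypothetical):

    #import <Foundation/Foundation.h>

    // Under ARC there are no retain/release calls: a strong reference keeps
    // the object alive, and setting it to nil lets the compiler-inserted
    // release free it.
    @interface DownloadController : NSObject
    @property (nonatomic, strong) NSMutableData *receivedData;   // owned by this object
    @end

    @implementation DownloadController
    - (void)startDownload
    {
        self.receivedData = [NSMutableData data];   // retained automatically
    }

    - (void)finishDownload
    {
        self.receivedData = nil;                    // released automatically
    }
    @end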

For more information on how to use ARC, see Transitioning to ARC Release Notes.

Framework Differences

Although most of the iOS frameworks are also present in OS X, there are differences in how those frameworks are implemented and used. The following sections call out some of the key differences that existing Cocoa developers might notice as they develop iOS apps.

UIKit Versus AppKit

In iOS, the UIKit framework provides the infrastructure for building graphical apps, managing the event loop, and performing other interface-related tasks. The UIKit framework is distinct from the AppKit framework, however, and should be treated as such when designing your iOS apps. Therefore, when migrating a Cocoa app to iOS, you must replace a significant number of interface-related classes and logic. Table 5-1 lists some of the specific differences between the frameworks to help you understand what is required of your app in iOS.

Table 5-1  Differences in interface technologies for iOS and OS X

View classes

UIKit provides a focused set of custom views and controls for you to use. Many of the views and controls found in AppKit would not work well on iOS devices. Other views have more iOS-specific behaviors or variants. For example, instead of the NSBrowser class, iOS uses an entirely different paradigm (navigation controllers) to manage the display of hierarchical information.

For a description of the views and controls available in iOS, along with information on how to use them, see iOS Human Interface Guidelines.

View coordinate systems

The drawing model for UIKit views is nearly identical to the model in AppKit, with one exception. AppKit views use a y-up coordinate system, where the origin for windows and views is in the lower-left corner by default and y values increase moving up. By contrast, UIKit uses a y-down coordinate system, where the origin point is in the upper-left corner and y values increase moving down.

For more information about view coordinate systems, see View Programming Guide for iOS.
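
For example, in UIKit’s y-down system a subview placed at a larger y value appears lower on the screen. The 320-by-480-point sizes in this sketch are illustrative only.

    #import <UIKit/UIKit.h>

    // The origin is the upper-left corner, so y = 0 is the top edge and
    // larger y values move toward the bottom of the view.
    static UIView *MakeContainerView(void)
    {
        UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];

        UIView *header = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 44.0)];    // top of the container
        UIView *footer = [[UIView alloc] initWithFrame:CGRectMake(0.0, 436.0, 320.0, 44.0)];  // bottom of the container

        [container addSubview:header];
        [container addSubview:footer];
        return container;
    }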

Core Animation (and layer-backed views)

Every view in iOS is backed by a Core Animation CALayer object. In OS X, views are not backed by layers automatically; instead, you must explicitly specify which views should have layers. Layer backing is optional in OS X because there are different performance and behavior implications to consider.

For more information about layers and layer-backed views, see Core Animation Programming Guide.
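
Because the layer is always present, you can configure it directly without opting in, as in this short sketch (the method is assumed to live in one of your own classes):

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Every UIView has a CALayer; there is no equivalent of AppKit's
    // -setWantsLayer: step.
    - (void)decorateView:(UIView *)view
    {
        view.layer.cornerRadius = 8.0;
        view.layer.borderWidth  = 1.0;
        view.layer.borderColor  = [UIColor grayColor].CGColor;
    }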

Windows as views

Conceptually, windows and views represent the same constructs in UIKit as they do in AppKit. In implementation terms, however, the two platforms implement windows and views quite differently. In AppKit, the NSWindow class is a subclass of NSResponder, but in UIKit, the UIWindow class is actually a subclass of UIView. This change in inheritance means that windows in UIKit are backed by Core Animation layers and can perform most of the same tasks that views do.

The main reason for having window objects at all in UIKit is to support the layering of windows within the operating system. For example, the system displays the status bar in a separate window that floats above your app’s window.

Another difference between iOS and OS X relates to the use of windows. Whereas an OS X app can have any number of windows, most iOS apps have only one. When you want to change the content displayed by your app, you swap out the views of your window rather than create a new window.
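
A typical single-window setup in the app delegate looks something like the following sketch; RootViewController is a hypothetical UIViewController subclass, and later changes of content replace view controllers rather than windows.

    #import <UIKit/UIKit.h>

    // In the app delegate: create the one window, give it a root view
    // controller, and make it visible.
    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
        self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
        self.window.rootViewController = [[RootViewController alloc] init];
        [self.window makeKeyAndVisible];
        return YES;
    }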

Event handling

The UIKit event-handling model is significantly different from the one found in AppKit. Instead of delivering mouse and keyboard events, UIKit delivers touch and motion events to your views. These events require you to implement a different set of methods and also require you to make some changes to your overall event-handling code. For example, you would never track a touch event by extracting queued events from a local tracking loop.

Gesture recognizers provide a target-action model for responding to standard touch-based gestures, such as taps, swipes, pinches, and rotations. You can also define your own gesture recognizers for custom gestures.

For more information about handling events in iOS apps, see Event Handling Guide for iOS.
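
The following sketch, inside a hypothetical UIViewController subclass, shows both styles: a gesture recognizer wired with target-action and a direct override of a touch method.

    #import <UIKit/UIKit.h>

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        // Target-action for a standard gesture.
        UITapGestureRecognizer *tap =
            [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
        [self.view addGestureRecognizer:tap];
    }

    - (void)handleTap:(UITapGestureRecognizer *)recognizer
    {
        NSLog(@"Tap at %@", NSStringFromCGPoint([recognizer locationInView:self.view]));
    }

    // Touch events are delivered to the responder chain instead of mouse events.
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        UITouch *touch = [touches anyObject];
        NSLog(@"Touch began at %@", NSStringFromCGPoint([touch locationInView:self.view]));
    }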

Target-action model

UIKit supports three variant forms for action methods, as opposed to just one in AppKit. Controls in UIKit can invoke actions for different phases of an interaction, and they can have more than one target assigned for the same interaction. Thus, in UIKit a control can deliver multiple distinct actions to multiple targets over the course of a single interaction cycle.

For more information about the target-action model in iOS apps, see Event Handling Guide for iOS.
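
The three action forms, wired to one button for different control events, might look like the following sketch; the method names are illustrative.

    #import <UIKit/UIKit.h>

    - (void)configureButton:(UIButton *)button
    {
        [button addTarget:self action:@selector(refresh)
         forControlEvents:UIControlEventTouchUpInside];                // no arguments
        [button addTarget:self action:@selector(buttonTapped:)
         forControlEvents:UIControlEventTouchUpInside];                // sender only
        [button addTarget:self action:@selector(buttonDragged:forEvent:)
         forControlEvents:UIControlEventTouchDragInside];              // sender and event
    }

    - (void)refresh { }
    - (void)buttonTapped:(id)sender { }
    - (void)buttonDragged:(id)sender forEvent:(UIEvent *)event { }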

Drawing and printing support

The drawing capabilities of UIKit are scaled to support the rendering needs of the UIKit classes. This support includes image loading and display, string display, color management, font management, and a handful of functions for rendering rectangles and getting the graphics context. For printing, apps can deliver content wirelessly to a nearby printer. UIKit does not include a general-purpose set of drawing classes because several other alternatives (namely, Quartz and OpenGL ES) are already present in iOS.

For more information about graphics and drawing, see Drawing and Printing Guide for iOS.
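
For example, a drawRect: override in a hypothetical UIView subclass can combine UIKit-level calls with direct access to the Quartz context:

    #import <UIKit/UIKit.h>

    - (void)drawRect:(CGRect)rect
    {
        // UIKit-level drawing: colors and rectangle functions.
        [[UIColor whiteColor] setFill];
        UIRectFill(rect);

        [[UIColor blackColor] setStroke];
        UIRectFrame(CGRectInset(self.bounds, 10.0, 10.0));

        // Drop down to Quartz for anything more elaborate.
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetLineWidth(context, 2.0);
    }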

Text support

UIKit provides sophisticated text handling using Text Kit, which is a text system derived largely from existing AppKit classes. You can use this support with the built-in text views or with custom views you create yourself.

Both iOS and OS X support the Core Text framework, which provides sophisticated text handling and typography using a C-based interface.

For more information about text support, see Text Programming Guide for iOS.

The use of accessor methods versus properties

UIKit makes extensive use of properties throughout its class declarations. Properties were introduced to OS X in version 10.5 and thus came along after the creation of many classes in the AppKit framework. Rather than simply mimicking AppKit’s getter and setter methods, UIKit uses properties as a way to simplify its class interfaces.

For information about how to use properties, see Programming with Objective-C.
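
A sketch of the difference, using a hypothetical BadgeView class:

    #import <UIKit/UIKit.h>

    // UIKit style: state is declared as properties and accessed with dot
    // syntax (badge.title = @"New";). An AppKit-era class would instead
    // declare explicit accessors such as -title and -setTitle:.
    @interface BadgeView : UIView
    @property (nonatomic, copy) NSString *title;
    @property (nonatomic, getter=isHighlighted) BOOL highlighted;
    @end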

Controls and cells

Controls in UIKit do not use cells. AppKit uses cells in many places as a lightweight alternative to views. Because views in UIKit are themselves very lightweight objects, cells are not needed. Despite the naming convention, the cells designed for use with the UITableView class are themselves subclasses of UIView.

Table views

The UITableView class in UIKit can be thought of as a cross between the NSTableView and NSOutlineView classes in the AppKit framework. UITableView uses features from both of those AppKit classes to create a more appropriate tool for displaying data on a smaller screen. The UITableView class displays a single column at a time and allows you to group related rows together into sections. It is also a means for displaying and editing hierarchical lists of information.

For multicolumn layouts, iOS apps can use the UICollectionView class. This class provides similar features to table views and the NSCollectionView class in OS X but also supports custom layouts that do not follow a grid pattern.

For more information about creating and using table views, see Table View Programming Guide for iOS.
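
A minimal data source for a single-section table might look like the following sketch; the items array is an assumed NSArray property on the data source object.

    #import <UIKit/UIKit.h>

    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
    {
        return [self.items count];
    }

    - (UITableViewCell *)tableView:(UITableView *)tableView
             cellForRowAtIndexPath:(NSIndexPath *)indexPath
    {
        static NSString * const CellIdentifier = @"Cell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                          reuseIdentifier:CellIdentifier];
        }
        cell.textLabel.text = self.items[indexPath.row];
        return cell;
    }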

Menus

iOS apps are driven primarily by the direct manipulation of objects. For this reason, menu bars are not supported in iOS (and are generally unnecessary anyway). For those few commands that are needed, a toolbar or set of buttons is usually more appropriate. For data-based menus, a picker or navigation controller interface is often more appropriate. For context-sensitive commands, you can add items to the edit menu in addition to (or in lieu of) the standard Cut, Copy, and Paste commands.

For information about the classes of UIKit, see UIKit Framework Reference.

Foundation Framework Differences

A version of the Foundation framework is available in both OS X and iOS, and most of the classes you would expect to be present are available in both. Both frameworks provide support for managing values, strings, collections, threads, and many other common types of data. There are, however, some technologies that are not included in iOS. These technologies are listed in Table 5-2, along with the reasons the related classes are not available. Wherever possible, the table lists alternative technologies that you can use instead.

Table 5-2  Foundation technologies unavailable in iOS

Metadata and predicate management

The use of metadata queries in iOS is supported only for locating files in the user’s iCloud storage.

Distributed objects and port name server management

The Distributed Objects technology is not available in iOS, but you can still use the NSPort family of classes to interact with ports and sockets. You can also use the Core Foundation and CFNetwork frameworks to handle your networking needs.

Cocoa bindings

Cocoa bindings are not supported in iOS. Instead, iOS uses a slightly modified version of the target-action model that adds flexibility in how you handle actions in your code.

AppleScript support

AppleScript is not supported in iOS.

The Foundation framework provides support for XML parsing through the NSXMLParser class. However, other XML parsing classes (including NSXMLDocument, NSXMLNode, and NSXMLElement) are not available in iOS. In addition to using the NSXMLParser class, you can also use the libxml2 library, which provides a C-based XML parsing interface.
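
Event-driven parsing with NSXMLParser might look like the following sketch; the FeedParser delegate class is hypothetical.

    #import <Foundation/Foundation.h>

    @interface FeedParser : NSObject <NSXMLParserDelegate>
    @end

    @implementation FeedParser
    - (void)parseData:(NSData *)xmlData
    {
        NSXMLParser *parser = [[NSXMLParser alloc] initWithData:xmlData];
        parser.delegate = self;
        [parser parse];
    }

    // Delegate callbacks arrive as the parser walks the document.
    - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
      namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
        attributes:(NSDictionary *)attributeDict
    {
        NSLog(@"Started element: %@", elementName);
    }

    - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
    {
        NSLog(@"Characters: %@", string);
    }
    @end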

For a list of the specific classes that are available in OS X but not in iOS, see the class hierarchy diagram in “The Foundation Framework” in Foundation Framework Reference.

Changes to Other Frameworks

Table 5-3 lists the key differences in frameworks that are common to iOS and OS X.

Table 5-3  Differences in frameworks common to iOS and OS X

AddressBook.framework

This framework contains the C-level interfaces for accessing user contacts. Although it shares the same name, the iOS version of this framework is very different from its OS X counterpart.

In iOS, you can also use the classes of the Address Book UI framework to present standard picker and editing interfaces for contacts.

For more information, see Address Book Framework Reference for iOS.

AudioToolbox.framework

AudioUnit.framework

CoreAudio.framework

The iOS versions of these frameworks provide support primarily for recording, playing, and mixing of single and multichannel audio content. More advanced audio processing features and custom audio unit plug-ins are not supported.

For information on how to use the audio support, see Multimedia Programming Guide.

CFNetwork.framework

This framework contains the Core Foundation Network interfaces. In iOS, the CFNetwork framework is a top-level framework, not a subframework. Still, most of the actual interfaces remain unchanged.

For more information, see CFNetwork Framework Reference.

CoreGraphics.framework

This framework contains the Quartz interfaces. In iOS, the Core Graphics framework is a top-level framework, not a subframework. You can use Quartz to create paths, gradients, shadings, patterns, colors, images, and bitmaps in exactly the same way you do in OS X. There are a few Quartz features that are not present in iOS, including PostScript support, image sources and destinations, Quartz Display Services support, and Quartz Event Services support.

For more information, see Core Graphics Framework Reference.

CoreLocation.framework

The OS X version of this framework does not include support for determining the current heading.

For more information, see Core Location Framework Reference.

OpenGLES.framework

OpenGL ES is a version of OpenGL designed specifically for embedded systems. If you are an existing OpenGL developer, the OpenGL ES interface should be familiar to you. However, the OpenGL ES interface still differs in several significant ways. First, it is a much more compact interface, supporting only those features that can be performed efficiently using the available graphics hardware. Second, many of the extensions you might normally use in desktop OpenGL may not be available to you in OpenGL ES. Despite these differences, you can perform most of the same operations you would normally perform on the desktop. But if you are migrating existing OpenGL code, you may have to rewrite some parts of your code to use different rendering techniques in iOS.

For information about the OpenGL ES support in iOS, see OpenGL ES Programming Guide for iOS.

QuartzCore.framework

In iOS, this framework contains only the Core Animation interfaces. Core Image is separated out into its own framework, and iOS does not support the Core Video interfaces. For Core Animation, most of the interfaces are the same for iOS and OS X.

For more information, see Quartz Core Framework Reference and Core Image Reference Collection.

Security.framework

This framework contains the security interfaces. In iOS, it focuses on securing your app data by providing support for encryption and decryption, pseudorandom number generation, and the keychain. The framework does not contain authentication or authorization interfaces and has no support for displaying the contents of certificates. In addition, the keychain interfaces are a simplified version of the ones used in OS X.

For information about the security support, see iOS App Programming Guide.

SystemConfiguration.framework

This framework contains networking-related interfaces. In iOS, it contains only the reachability interfaces. You use these interfaces to determine how a device is connected to the network, for example, whether it’s connected using EDGE, GPRS, or Wi-Fi.
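
For example, a rough synchronous check (production code would typically register for notifications instead of polling) might look like the following sketch; the helper function name is illustrative.

    #import <Foundation/Foundation.h>
    #import <SystemConfiguration/SystemConfiguration.h>

    static BOOL HostIsReachable(const char *hostName)
    {
        SCNetworkReachabilityRef reachability =
            SCNetworkReachabilityCreateWithName(kCFAllocatorDefault, hostName);
        if (reachability == NULL) {
            return NO;
        }

        SCNetworkReachabilityFlags flags = 0;
        BOOL reachable = NO;
        if (SCNetworkReachabilityGetFlags(reachability, &flags)) {
            reachable = (flags & kSCNetworkReachabilityFlagsReachable) != 0;
            BOOL cellular = (flags & kSCNetworkReachabilityFlagsIsWWAN) != 0;   // cellular (EDGE/GPRS) vs. Wi-Fi
            NSLog(@"Reachable: %d, cellular: %d", reachable, cellular);
        }
        CFRelease(reachability);
        return reachable;
    }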