iOS Developers and Designers: Stickin’ together is what good waffles do!

We’ve all heard developers say it: “I’m a terrible drawer” or “I’ve got no design skills”. Perhaps we’re even guilty of saying it ourselves – I know I am. But after attending this year’s Swipe Conference I now subscribe to the opinion that this is no longer acceptable. We are all responsible for the design of the app we are building; whether developer, designer, tester, or producer, every member of the team is accountable for helping shape the app’s design and interactions.

Swipe Conference Highlights: Using gestures as shortcuts within iOS apps

Yesterday was the last day of Swipe Conference, so I thought I would take this time to reiterate one of the points I took from the first presentation, by Josh Clark. Josh covered quite a few topics, and if you haven’t already, you should check out his book Tapworthy: Designing Great iPhone Apps.

In short:

  • Gestures can be brilliant … if the context they are used in feels natural
  • If you’re using gestures make sure your users will find them
  • If you don’t think your users will figure out your gestures easily, don’t overload them with lots of help hints all at once; instead, let them “unlock” gestures over time, like a reward for using your app
  • The downside, though, is that there is no consensus on what a three-finger swipe gesture might do – every iOS app that uses such gestures decides for itself

It was obvious from the way Josh presents that he has a great deal of passion for touch and gesture based devices. What I personally took away from his presentation was that gestures can be awesome shortcuts within your app. In a lot of cases gestures are a natural extension of how we interact with real-world items.
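For instance, attaching a three-finger swipe shortcut to a view takes only a few lines using UIKit’s gesture recognizers. Here’s a minimal sketch (the view controller and the handleThreeFingerSwipe: action are hypothetical, and what the shortcut actually does is entirely up to you – which is exactly the consensus problem mentioned above):

// In a UIViewController subclass: attach a three-finger swipe as a shortcut.
- (void)viewDidLoad {
    [super viewDidLoad];
    UISwipeGestureRecognizer *swipe =
        [[UISwipeGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(handleThreeFingerSwipe:)];
    swipe.direction = UISwipeGestureRecognizerDirectionLeft;
    swipe.numberOfTouchesRequired = 3;
    [self.view addGestureRecognizer:swipe];
    [swipe release];
}

- (void)handleThreeFingerSwipe:(UISwipeGestureRecognizer *)recognizer {
    // For example, treat the swipe as a shortcut back to the root screen.
    [self.navigationController popToRootViewControllerAnimated:YES];
}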

New iOS Developers Shouldn’t Use Interface Builder

When I first started learning Objective-C and the iOS SDK 2.x a few years ago, one thing that I constantly struggled to get my head around was Interface Builder. More specifically, why should I use it, and how could it possibly benefit my iOS development – given that I could code a UI programmatically, and know exactly how it all worked? A colleague of mine even wrote a blog entry that mirrored my exact feelings towards Interface Builder back then.

Well, over the past 6–12 months my attitude towards Interface Builder has changed. There are two reasons for this. Firstly, it’s now nicely integrated into Xcode 4. Prior to that, who wanted to have two different apps running (Xcode and Interface Builder) with popup windows spamming your desktop? That was a deal-breaker for me. Secondly, I’m now a more experienced developer. After using Interface Builder on a couple of projects, I am confident in saying I am now more efficient as a developer when using Interface Builder. However, that wasn’t always the case, which leads me to my key message here:

If you’re new to iOS development, don’t touch Interface Builder until you are capable of building UIs programmatically.
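To give a flavour of what that looks like, here’s a minimal sketch of a screen built entirely in code (the label, button, and buttonTapped: action are illustrative, not from a real project):

// In a UIViewController subclass, loadView builds the view hierarchy in code.
- (void)loadView {
    UIView *view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
    view.backgroundColor = [UIColor whiteColor];

    UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(20, 20, 280, 30)];
    label.text = @"Built without Interface Builder";
    [view addSubview:label];
    [label release];

    UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    button.frame = CGRectMake(20, 70, 280, 40);
    [button setTitle:@"Tap me" forState:UIControlStateNormal];
    [button addTarget:self
               action:@selector(buttonTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [view addSubview:button];

    self.view = view;
    [view release];
}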

Continuous Deployment of iOS Apps with Jenkins and TestFlight

I thought it was about time I put together a simple guide on using Jenkins to build your iOS application – and for those of us who use the awesome testflightapp.com website for managing our iOS app distribution for testing, I have included details on creating a Jenkins job to publish the latest successful artifact to testflightapp.com.

Location, Location, Location: Simulating iOS Location Data

Perhaps the most indispensable tool when developing iOS applications is the iOS simulator. However, if you want to test an app whose functionality revolves around utilizing the device’s GPS, then you’re out of luck – Apple’s iOS simulator will only provide you with a single location (the location of Apple’s headquarters). Furthermore, whilst the next version of Xcode promises some progress in this area, it’s still not clear if/how it’ll be able to use recorded data for later playback. In this entry I’m going to detail the process we followed to create a small location simulation framework that can also record data.
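To sketch the general idea (the class and method names below are purely illustrative – they are not the actual framework described in the full post), location simulation boils down to replaying recorded CLLocation objects through the same delegate callbacks that CLLocationManager would normally drive:

#import <Foundation/Foundation.h>
#import <CoreLocation/CoreLocation.h>

// A stand-in for CLLocationManager that replays previously recorded locations.
@interface SimulatedLocationManager : NSObject {
    NSArray *locations;   // CLLocation objects recorded on a real device
    NSUInteger nextIndex;
}
@property (nonatomic, assign) id<CLLocationManagerDelegate> delegate;
- (id)initWithLocations:(NSArray *)recordedLocations;
- (void)startUpdatingLocation;
@end

@implementation SimulatedLocationManager
@synthesize delegate;

- (id)initWithLocations:(NSArray *)recordedLocations {
    if ((self = [super init])) {
        locations = [recordedLocations retain];
    }
    return self;
}

- (void)startUpdatingLocation {
    // Feed one recorded location to the delegate every second.
    [NSTimer scheduledTimerWithTimeInterval:1.0
                                     target:self
                                   selector:@selector(publishNextLocation)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)publishNextLocation {
    if (nextIndex >= [locations count]) return;
    CLLocation *previous = (nextIndex > 0) ? [locations objectAtIndex:nextIndex - 1] : nil;
    CLLocation *current = [locations objectAtIndex:nextIndex];
    nextIndex++;
    if ([delegate respondsToSelector:@selector(locationManager:didUpdateToLocation:fromLocation:)]) {
        // The real callback receives a CLLocationManager as its first argument;
        // passing nil keeps this sketch simple.
        [delegate locationManager:nil didUpdateToLocation:current fromLocation:previous];
    }
}

- (void)dealloc {
    [locations release];
    [super dealloc];
}
@end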

When to use Delegation, Notification, or Observation in iOS

A common problem we often experience when developing iOS applications is how to allow communication between our controllers without excessive coupling. Three common patterns that appear time and time again throughout iOS applications are:

  1. Delegation
  2. Notifications through Notification Center, and
  3. Key-value observing

So why do we need these patterns and when should and shouldn’t they be used?
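As a quick refresher before answering that, here is roughly what wiring up each pattern looks like (the class, property, and notification names below are made up for illustration, and in practice these fragments would live inside your controllers’ methods):

// 1. Delegation: a controller exposes a protocol and calls back to whoever adopts it.
@protocol DetailViewControllerDelegate <NSObject>
- (void)detailViewControllerDidFinish:(id)controller;
@end

// 2. Notifications: broadcast through NSNotificationCenter to any number of observers.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(detailDidFinish:)
                                             name:@"DetailDidFinishNotification"
                                           object:nil];
[[NSNotificationCenter defaultCenter] postNotificationName:@"DetailDidFinishNotification"
                                                    object:self];

// 3. Key-value observing: watch a property ("finished") on another object directly.
[detailController addObserver:self
                   forKeyPath:@"finished"
                      options:NSKeyValueObservingOptionNew
                      context:NULL];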

Why would you use Interface Builder for iPhone Development?

I don’t understand why you’d use Interface Builder to create a UI for an iPhone application.

When I started building my first iPhone application at Shine, my colleagues advised me to avoid using Interface Builder. They’d tried using it when they were first starting out, but found that it just got in the way and made the learning curve steeper than it needed to be.

The problem for me was that many of the available books and tutorials for iPhone development used Interface Builder. Many of the sample apps on Apple’s site use it as well. I was having trouble figuring out how to not use it, so I took a deep breath and gave it a go.

I got confused pretty quickly. After thrashing around for a day or so, a colleague took pity on me and showed me how to bootstrap an iPhone user interface in code. I ditched Interface Builder and never looked back.

Sure, I probably would have figured it out, but why make life harder than it needs to be? As a new iPhone developer, I was already trying to get my head around Objective-C, Cocoa and Xcode. Why add Interface Builder and NIB files to the list, for very little apparent benefit?

My Theory

Perhaps all the iPhone books and tutorials have been written by people who already had experience developing with Interface Builder and Nibs for Mac OS X.

Don’t get me wrong – I don’t have a problem with Interface Builder. It’s just that I wonder whether Interface Builder is more suited to its original purpose: building complex user interfaces that are to be used on a desktop computer.

iPhone applications, on the other hand, have a very limited set of widgets and layouts to choose from. Furthermore, there’s a limited amount of stuff you should put on a single screen.

Consequently it seems like overkill to crack out Interface Builder for an iPhone application.

More controversially, in my experience with GUI builders I’ve found that as soon as you try and build anything non-trivial, you’re going to have to code it by hand anyway. Furthermore, if an interface is so simple that you could build it with a GUI builder, I’ve found that it’s probably quicker to code it yourself. I’m not sure that Interface Builder is any exception to this observation.

To support these assertions, I’d like to point out that one of the more complex (and useful) sample iPhone applications that Apple provide – ‘TheElements’ (which navigates the periodic table) – doesn’t use NIB files.

How to do it

So how does one bootstrap an iPhone interface without a NIB file? It turns out that it’s very easy to do, but there aren’t many examples out there on how to do it. So for the sake of knowledge-dissemination, here’s how you write a main.m that does it:

#import <UIKit/UIKit.h>

int main(int argc, char *argv[]) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // The fourth argument names the app delegate class, so UIKit can
    // instantiate it directly instead of loading it from a NIB.
    int retVal = UIApplicationMain(argc, argv, nil, @"MyAppDelegate");
    [pool release];
    return retVal;
}

The key part is that you provide the name of the AppDelegate you want to use to the UIApplicationMain function, instead of leaving it as nil.

You’d then just code your AppDelegate to bootstrap the UI however you see fit:

#import "MyAppDelegate.h"

@implementation MyAppDelegate

- (void)applicationDidFinishLaunching:(UIApplication *)application {
	UIWindow *window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
	...
              Setup your controllers and views in here.
        ...
	[window addSubview:myViews];
	[window makeKeyAndVisible];
}

Finally, remove the property with the key ‘Main nib file base name’ (the raw key name is ‘NSMainNibFile’) from your Info.plist file.

What do you think?

Of course, as a newbie to iPhone development, perhaps I’m missing something here.
If you’re new to iPhone development, have you found Interface Builder useful? If so, I’d like to hear about it. I can only speak from my own experiences (and those of my colleagues), so would be interested in hearing about the experiences of others.

Run-loops vs. Threads in Cocoa

As a relative newbie to the world of Cocoa programming (on the iPhone in particular), I have spent some time trying to understand if and when you’d use a run-loop instead of launching a separate thread. I was unable to find any definitive answer on the web, so ended up joining the dots myself. What follows is my understanding of when you’d want to use one or the other. Cocoa experts are welcome to comment if I’ve got it wrong.

The Problem

Touches aren’t the only source of input to an iPhone application. For example, another source can be a socket – sometimes you want to listen to a socket for data. But you don’t want the UI to lock up whilst it’s listening – you still want input from the user to be dealt with promptly. Similarly, you might want events to be triggered automatically at certain time intervals, but without locking up the application in the interim.

Coming from other UI frameworks, you might think that the way to deal with this is to use a separate thread. That way, the thread can block on the socket or sleep for a particular time interval. However, as we all know, the introduction of multiple threads immediately introduces a bunch of potential defects that are difficult to reproduce and fix.

The Solution

Enter run loops. Or more specifically, the run loop – each iPhone application has one by default and for our purposes, this is all we need.

So what exactly is a run loop?

Well, first consider this assertion: the vast majority of the time that your Cocoa application is running, it’s doing nothing. More specifically, it’s waiting for input. However, as soon as you touch the screen, an event gets triggered, which may in turn result in some of your code being executed. If some data comes into a socket, or a timer fires, the same applies.

The key thing is that once this code has been executed, the application goes back to waiting for input. Furthermore, in many cases the execution time of your code will be very small relative to the time the application spends waiting for input.

I think of run loops as a mechanism that exploits this fact.

A run loop is essentially an event-processing loop running on a single thread. You register potential input sources on it, pointing it to the code that it should execute whenever input is available on those sources.

Then when input comes into a particular source, the run loop will execute the appropriate code, then go back to waiting for input to come in again to any of its registered sources. If input comes into a registered source whilst the run-loop is executing another piece of code, it’ll finish executing the code before it handles the new input.

The upside of this is that whilst you mightn’t know exactly what order things are going to come in, at least you know that they’ll be processed one after the other instead of in parallel. This means that you avoid all of those nasty multi-threading issues that were described earlier. And that’s why run loops are useful.

Run loop scheduling in action

By default, all touch events received by an iPhone application are queued for processing by the application’s main run loop, so there’s nothing special you need to do for UI components. However, other sources of input require additional coding.

To schedule an NSInputStream on a run loop, you’d do something like this:


...
[iStream setDelegate:self];
[iStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                   forMode:NSDefaultRunLoopMode];
...

This code sets it up so that whenever input is available on ‘iStream’, a ‘stream:handleEvent:’ message will be sent to ‘self’. Note that the stream could be from any sort of source, including a socket.
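
For completeness, here’s a minimal sketch of what that delegate callback on ‘self’ could look like (the buffer handling is purely illustrative):

- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
    if (eventCode == NSStreamEventHasBytesAvailable) {
        uint8_t buffer[1024];
        NSInteger bytesRead = [(NSInputStream *)aStream read:buffer maxLength:sizeof(buffer)];
        if (bytesRead > 0) {
            // Handle the bytes that just arrived; once this returns, the run loop
            // goes straight back to waiting for the next event.
        }
    }
}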

Another object that can be scheduled on a run loop is a timer. For example:


[NSTimer scheduledTimerWithTimeInterval:2.0
                                 target:self
                               selector:@selector(doStuff)
                               userInfo:nil
                                repeats:YES];

will schedule a timer on the current run loop to send a ‘doStuff’ message to ‘self’ every two seconds.

When not to use a run loop

So when wouldn’t you use a run loop? Well, if you had some event-handling code that was going to take a long time to execute (for example, performing some CPU-intensive calculation), then nothing else in the event-handling queue would get handled until it finished. This would cause your application to become unresponsive until the processing has finished. In that sort of scenario, you might want to consider using a separate thread to do the processing.
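
A minimal sketch of that approach (the method names are illustrative): push the heavy work onto a background thread with performSelectorInBackground:withObject:, then hop back onto the main run loop to deliver the result:

- (void)startExpensiveCalculation {
    [self performSelectorInBackground:@selector(runCalculation) withObject:nil];
}

- (void)runCalculation {
    // Each background thread needs its own autorelease pool.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    double answer = 0.0;
    // ... CPU-intensive work goes here ...

    // UI updates must happen on the main thread, so bounce the result back.
    [self performSelectorOnMainThread:@selector(calculationFinished:)
                           withObject:[NSNumber numberWithDouble:answer]
                        waitUntilDone:NO];
    [pool release];
}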

However, for the vast majority of cases, our code for handling events – be they from the screen, sockets or timers – takes a very short time to execute. And that’s why it’s easier (and safer) to just use the main run loop to handle those events.

The trade-offs

The only downside to using a run loop instead of a thread is that instead of just whacking a thread around a whole section of code that you know will block in one or more places, you have to go to each potential blocking point, register the source on the run loop, and implement a callback to process events that are generated from that source.

Whilst this may seem like some effort, it pales in comparison to the pain that can result from poorly-considered threading. So next time you’re tempted to use a thread to read from a blocking input source, consider taking the time to use a run loop. It could well save you a lot of time in the long run.