URL Shortening for Mac OS X and iOS

Today I’m releasing version 1.0.0 of URLKit, a library to find and shorten URLs for Mac OS X and iOS applications.

URLKit was written to provide URL shortening services to messaging applications, where users may include multiple URLs in a single input buffer. I needed the ability to shorten multiple URLs at once and to do so without slowing down the main user interface.

URLKit is implemented on top of Grand Central Dispatch, so your main UI queue is never blocked by URL shortening operations.

Using URLKit is easy:

@interface MyController : NSObject
@end

@implementation MyController

- (void)doSomethingAwesome
{
    URLShortener *shortener = [URLShortener defaultURLShortener];
    NSString *text = @"hello, this is cool: http://newoldage.blogs.nytimes.com/2013/02/01/caregiving-laced-with-humor/?hp but this one isn't that cool http://religion.blogs.cnn.com/2013/01/29/poll-quarter-of-americans-say-god-influences-sporting-events/";
    [shortener shortenTextWithURLs:text observer:self];
}

// Observer callback, invoked once every URL in the text has been shortened.
- (void)textWithURLsShortened:(NSString *)text
{
    // Use the shortened text here, e.g. update the UI.
}

@end

URLKit can be found at https://github.com/dcgibbons/URLKit and is licensed under the Apache 2.0 license. Comments and patches are welcome!

Project Dependencies and Include Files in Xcode

I have read a lot of how-to guides on blogs over the past few years on how to set up Xcode sub-projects so that you can build your project along with a dependent sub-project and have all the header files and libraries found. Invariably, this involves changing Xcode project settings both in your main application and in the sub-project so that everything works out. This is wrong!

A properly designed sub-project requires its users to do nothing more than:

  1. Include the .xcodeproj project file.
  2. Add the project to the top project’s dependencies.
  3. Link against the proper library.

If you have to do more than that, the sub-project was not designed correctly.

As it turns out, Xcode already does this right, but the proliferation of how-to guides usually gets it wrong and makes developers jump through more hoops than they need to. Let’s walk through doing it the right way. Note: this post was written with Xcode 4.4 and 4.5 in mind, so older versions may have had different ways of doing things.

For the purposes of this example we need something simple. Let’s just write a Hello World application that uses an external framework to get the text – it will demonstrate all that we need. Also, we will only cover projects targeted for iOS within this post, but look for a follow-up post that discusses extending these projects to work on Mac OS X as well.

Lessons Learned on using Chef from Development to Production

Over the past six months, my team at Alert Logic started and deployed a project where we made heavy use of the Chef infrastructure automation tool. We decided to use Chef from the ground up in our development cycle, from initial coding and testing all the way through final production deployment. Here are some important lessons we learned from the experience.

Robustness in the Midst of Chaos

I’ve been catching up on technology news this week, after a few months of completely ignoring it. One of my favorites so far has been Netflix’s brief post on 5 Lessons We’ve Learned Using AWS. Within, they discuss the importance of testing your system’s ability to survive failure in an infrastructure that is inherently chaotic and prone to failure (a fundamental characteristic of massively scalable clouds).

One way Netflix has tested their ability to survive failure is to constantly force failure within their system. They built a system component named ChaosMonkey that randomly kills parts of their architecture to ensure that the rest of it can survive and continue offering service to customers. I love this approach. Not only does it show a lot of maturity within the Netflix engineering team about what system failure really means, but it also shows that they understand the importance of constantly ensuring that failures are expected and recoverable; I still see too many enterprises today that have yet to learn that important lesson.

ChaosMonkey reminds me greatly of what I used to do, albeit manually, when I was in charge of systems architecture at a company early in my career. We had a wonderfully horizontally scalable system architecture, back in an era when most enterprises would only use mainframes for systems of such size and importance. It was a critical part of our ability to provide high levels of service and performance at a relatively low cost. We did a good job of giving our operations team plenty of opportunity to test system failure and ensure that we could survive catastrophic failure, but I found it was never quite enough as the system grew more complex.

About the time we began implementing RAID drive subsystems for our database nodes, I began a systematic approach to destroying parts of our architecture on a regular basis. At first, it was a way to make sure the RAID subsystems would act appropriately in a degraded state – and they did, but we had plenty of software modifications to make to allow our systems to perform better in such a state. My experiments quickly became a little more aggressive… a few times a week, I would randomly pull drives out of their carriers, kill processes on production nodes, or even cause an entire system crash.

Remarkably, it was rare for any of this activity even to be noticed. Our architecture was so good at surviving single points of failure that the operations team would often not notice that anything at all had happened. When they did, it was usually a sign of a weak point in our architecture that needed to be fixed. Occasionally, hardware actually failed from my little experiments, and I always felt bad about that, but the reality is that the same hardware would have failed at a much more inconvenient time.

Alas, once our company was purchased by a publicly traded company, development was no longer allowed access to the data center. For the most part, this was a very positive thing, but it also prevented people like myself from twiddling with the system to make sure it would work well. Our operations staff was good, however, and many of the lessons development had learned over the years were picked up by them. Today’s trend of dev-ops is a welcome return to some of that way of thinking, as it really does show that you need developer mindsets when running your operations.

I can distill my experiences doing dev-ops work in the past into some key points:

  1. Build your systems with the assumption that failure happens, and happens regularly.
  2. You must test your ability to fail and recover, and do it constantly.
  3. Hardware & software solutions for vertical scaling are inherently inflexible and likely much more expensive in the long run. This was true in the 1980s, and even more true today now that more shops understand the nature of horizontal scalability.
  4. Beware of the mindset that wants high uptime for individual systems. It’s cool for your server at home, but in an operations scenario that system represents one that might not start up properly if it does fail. The inability of a system to start up properly was, and remains, a hugely common problem. Reboot all of your systems regularly.

You Don’t Need Flash for Rich Graphs on a Web Page

Adobe’s Flash has a lot of uses, but one of the most impressive to me has been the creation of interactive graphs on a web page. One just has to visit Google Finance to see a great example of this in action; it’s fast, effective, and fits seamlessly within the rest of the page.

Many times, however, Flash isn’t an appropriate technology to use. If you’re an open-source product like Zenoss, Flash presents a licensing issue. If you’re targeting mobile platforms like the iPhone, Flash isn’t available. And sometimes you may just not like Flash; it does have its own security problems and overhead, for example. What’s a web developer to do? Enter HTML5 to the rescue…

HTML5 is not yet an approved standard, but it’s well on its way and already surprisingly well supported by the browser community. One of the nice new features provided in HTML5 is the canvas element, which allows for two-dimensional drawing. Between this new feature and JavaScript, we should be able to create a rich graph display.
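To give a taste of the idea, here is a minimal sketch of plotting a data series as a connected line with the canvas 2D API. The function names (`toPixel`, `drawGraph`) and the sample data are illustrative, not from any particular library:

```javascript
// Map a data point (index, value) into canvas pixel space.
function toPixel(i, value, data, width, height) {
  const max = Math.max(...data);
  const min = Math.min(...data);
  const x = (i / (data.length - 1)) * width;
  // Canvas y grows downward, so invert the value axis.
  const y = height - ((value - min) / (max - min)) * height;
  return { x: x, y: y };
}

// Draw the series as a connected line using a 2D canvas context.
function drawGraph(ctx, data, width, height) {
  ctx.beginPath();
  data.forEach(function (value, i) {
    const p = toPixel(i, value, data, width, height);
    if (i === 0) {
      ctx.moveTo(p.x, p.y);
    } else {
      ctx.lineTo(p.x, p.y);
    }
  });
  ctx.stroke();
}
```

In a page, you would wire it up with something like `drawGraph(document.getElementById('graph').getContext('2d'), [3, 7, 4, 9, 6], 300, 150)`; the next step toward a Google Finance-style graph is layering in axes, labels, and mouse interaction.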