Project Dependencies and Include Files in Xcode

I have read a lot of how-to guides on blogs over the past few years on how to set up Xcode sub-projects so that you can build your project along with a dependent sub-project and have all the header files and libraries be found. Invariably, this involves changing Xcode project settings both in your main application and in the sub-project so that everything works out. This is wrong!

The proper way to build sub-projects is for the user of your sub-project to do nothing more than:

  1. Include the .xcodeproj project file.
  2. Add the project to the top project’s dependencies.
  3. Link against the proper library.

If you are having to do more, then the sub-project was not designed correctly.

As it turns out, Xcode already does this right, but the proliferation of how-to guides usually gets it wrong and makes developers jump through more hoops than they need to. Let’s walk through doing it the right way. Note: this post was written with Xcode 4.4 and 4.5 in mind, so older versions may have done things differently.

For the purposes of this example we need something simple. Let’s just write a Hello World application that uses an external framework to get the text – it will demonstrate all that we need. Also, we will only cover projects targeted for iOS within this post, but look for a follow-up post that discusses extending these projects to work on Mac OS X as well.

Lessons Learned on using Chef from Development to Production

Over the past six months, my team at Alert Logic started and deployed a project in which we made heavy use of the Chef infrastructure automation tool. We decided to use Chef from the ground up in our development cycle, from initial coding and testing all the way through final production deployment. Here are some important lessons we learned from the experience.

Robustness in the Midst of Chaos

I’ve been catching up on technology news this week, after a few months of completely ignoring it. One of my favorites so far has been Netflix’s brief post on 5 Lessons We’ve Learned Using AWS. Within, they discuss the importance of testing your system’s ability to survive failure in an infrastructure that is inherently chaotic and prone to failure (a fundamental characteristic of massively scalable clouds).

One way Netflix has tested their ability to survive failure is to constantly force failure within their system. They built a system component named ChaosMonkey that randomly kills part of their architecture to ensure that the rest of it can survive and continue offering service to customers. I love this approach. Not only does it show a lot of maturity within the Netflix engineering team about what systems failure really means, it also shows that they understand the importance of constantly ensuring that failures are expected and recoverable; I still see too many enterprises today that have yet to learn that important lesson.
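The idea is easy to sketch in plain shell. Here’s a minimal, hypothetical illustration – it spawns a handful of dummy “workers” (sleep processes standing in for real services, with arbitrary counts I’ve invented for the example), randomly terminates one the way a chaos tool would, and then confirms the rest survive:

```shell
#!/bin/sh
# Minimal chaos-monkey-style sketch: spawn dummy "workers",
# randomly terminate one, and confirm the rest keep running.

# Spawn five placeholder workers.
pids=""
for i in 1 2 3 4 5; do
    sleep 60 &
    pids="$pids $!"
done

# Pick a random victim and kill it, as a chaos tool would.
victim=$(echo $pids | awk '{srand(); print $(int(rand() * NF) + 1)}')
kill "$victim"
wait "$victim" 2>/dev/null   # reap the dead worker

# Everything else should still be alive.
survivors=0
for pid in $pids; do
    if kill -0 "$pid" 2>/dev/null; then
        survivors=$((survivors + 1))
    fi
done
echo "survivors: $survivors"

# Clean up the remaining workers.
kill $pids 2>/dev/null || true
```

The real value of a tool like ChaosMonkey is not the killing itself, but that it runs continuously, so any service that cannot tolerate the loss is discovered immediately rather than during an outage.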

ChaosMonkey reminds me greatly of what I used to do, albeit manually, when I was in charge of systems architecture at a company in the early part of my career. We had a wonderfully horizontally scalable system architecture, back in an era when most enterprises would only use mainframes for systems of such size and importance. It was a critical part of our ability to provide high levels of service and performance at a relatively low cost. We did a good job of giving our operations team plenty of opportunity to test system failure and ensure that we could survive catastrophic failure, but I found it was never quite enough as the system grew more complex.

About the time we began implementing RAID drive subsystems for our database nodes, I began a systematic effort to destroy parts of our architecture on a regular basis. At first, it was a way to make sure the RAID subsystems would behave appropriately in a degraded state – and they did, but we had plenty of software modifications to make so that our systems would perform better in such a state. My experiments quickly became a little more aggressive… a few times a week, I would randomly pull drives out of their carriers, kill processes on production nodes, or even cause an entire system crash.

Remarkably, it was rare for any of this activity even to be noticed. Our architecture was so good at surviving single points of failure that the operations team would often not notice that anything had happened at all. When they did, it was usually a sign of a weak point in our architecture that needed to be fixed. Occasionally, hardware actually failed from my little experiments, and I always felt bad about that, but the reality is that the same hardware would have failed at a much more inconvenient time.

Alas, once our company was purchased by a publicly-traded company, development was no longer allowed access to the data center. For the most part, this was a very positive thing, but it also prevented people like myself from twiddling with the system to make sure it would work well. Our operations staff was good, however, and they picked up many of the lessons development had learned over the years. Today’s trend of dev-ops is a welcome return to some of that way of thinking, as it really does show that you need developer mindsets when running your operations.

I can distill my experiences doing dev-ops work in the past into some key points:

  1. Build your systems with the assumption that failure happens, and happens regularly.
  2. You must test your ability to fail and recover, and do it constantly.
  3. Hardware & software solutions for vertical scaling are inherently inflexible and likely much more expensive in the long run. This was true in the 1980s, and even more true today now that more shops understand the nature of horizontal scalability.
  4. Beware of the mindset that prizes high uptime for individual systems. It’s cool for your server at home, but in an operations scenario a long-running system is one that might not start up properly if it does fail. The inability of a system to start up properly was, and remains, a hugely common problem. Reboot all of your systems regularly.

You Don’t Need Flash for Rich Graphs on a Web Page

Adobe’s Flash has a lot of uses, but one of the most impressive to me has been the creation of interactive graphs on a web page. One just has to visit Google Finance to see a great example of this in action; it’s fast, effective, and fits seamlessly within the rest of the page.

Many times, however, Flash isn’t an appropriate technology to use. If you’re an open-source product like Zenoss, Flash presents a licensing issue. If you’re targeting mobile platforms like the iPhone, Flash isn’t available. And, sometimes, you may just not like Flash; it does have its own security problems and overhead, for example. What’s a web developer to do? Enter HTML5 to the rescue…

HTML5 is not yet an approved standard, but it’s well on its way and surprisingly well-supported by the browser community already. One of the nice new features provided in HTML5 is the canvas element which allows for two-dimensional drawing functionality. Between this new feature and JavaScript, we should be able to create a rich graph display.

My Zenoss Development Environment – Part 3

In Part 1 of this series we discussed getting an initial Zenoss environment checked out and running on a Mac OS X or Ubuntu system. In Part 2 we discussed how to configure Eclipse to use the Zenoss source. In this part, we’ll discuss how to handle day-to-day operations such as branch management and working with multiple versions.

Branch-Based Development

At Zenoss we do all development, including bug fixes and small maintenance tasks, in a private development branch within the Subversion repository. This allows us to work on changes independently, check them into the repository for safekeeping, and then perform code reviews with team members without having to share files or use pastebin-style tools (even though we do both at times).

  1. Create a branch within your user’s sandbox. In this example, I’ve decided to name the sandbox new-widget to identify what I’m working on. If I were fixing a defect, I’d use the defect number from the defect tracking system. Create the branch by copying the trunk folder to the sandbox branch. In Subversion this is a low-overhead operation and doesn’t actually copy files.
    svn copy http://dev.zenoss.org/svn/trunk http://dev.zenoss.org/svn/sandboxen/cgibbons/new-widget -m " * Copying trunk to sandbox branch."
  2. Switch your working directory to use the new sandbox branch. You can do this either from the command-line or using Eclipse. From the command line, you’d do the following:
    cd $HOME/zenoss/core
    svn switch http://dev.zenoss.org/svn/sandboxen/cgibbons/new-widget
    From within Eclipse, secondary-click on the core project and choose the Switch to Another Branch/Tag/Revision… option from the Team menu. In the dialog that appears, enter the sandbox URL. After switching, Eclipse will show the new location next to the core project item.

Once your development environment has been switched, you can make changes and commit to the Subversion repository as desired. If you’re unsure whether you are in the right branch, you can always use the svn info command to see which branch URL your working copy points at.
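If you find yourself checking this often, a tiny wrapper around svn info makes the current branch easy to see at a glance. This is a hypothetical convenience function, not part of Zenoss – it simply extracts the URL: line that svn info prints:

```shell
#!/bin/sh
# current_branch: print the repository URL a working copy points at.
# A hypothetical helper around `svn info`; pass a working-copy path,
# or no argument to check the current directory.
current_branch() {
    svn info "${1:-.}" | sed -n 's/^URL: //p'
}
```

Running current_branch $HOME/zenoss/core after the switch above should print the sandbox URL rather than the trunk URL.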

Merging Branches

Once you have completed changes in a branch and have had them reviewed by a peer, it is time to merge them into trunk (or another branch, if using a maintenance release). Merging can be tricky, but a consistent process can make it much easier to handle.

  1. Change your working directory to a checked-out and clean version of the branch you want to merge into. For example, I keep a $HOME/zenoss/clean-trunk directory that I never make changes to, except for merging.
  2. Determine the base working revision of your working branch. There are a variety of ways to do this, but one of the best is to view the revision log graph within the Trac system directly. For example, for the branch discussed above we can browse to http://dev.zenoss.org/trac/log/sandboxen/cgibbons/new-widget/ and see that revision 15513 is the base.
  3. Perform a dry run of the merge to get a general idea of what changes the merge will bring into the branch. You should see your expected changes, plus any conflicts from changes made to the other branch while you have been working in the sandbox branch.
    svn merge --dry-run --revision 15513:HEAD http://dev.zenoss.org/svn/sandboxen/cgibbons/new-widget .
  4. If the merge results look satisfactory, rerun the command without the dry-run argument.
  5. Look at the final merge results using svn status and svn diff, and once you’re ready, issue an svn commit.
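Because the dry-run and real merge differ by a single argument, the merge steps above lend themselves to a small wrapper. This is a sketch of a hypothetical helper (the function name and flag are my own invention), assuming you run it from inside the clean checkout as described in step 1; it always dry-runs unless explicitly told otherwise:

```shell
#!/bin/sh
# merge_sandbox: sketch of a helper for the merge workflow above.
# usage: merge_sandbox BASE_REV BRANCH_URL [--for-real]
# Performs a dry-run merge by default; pass --for-real to actually merge.
merge_sandbox() {
    base_rev=$1
    branch_url=$2
    if [ "$3" = "--for-real" ]; then
        svn merge --revision "${base_rev}:HEAD" "$branch_url" .
    else
        svn merge --dry-run --revision "${base_rev}:HEAD" "$branch_url" .
    fi
}
```

For the branch above, merge_sandbox 15513 http://dev.zenoss.org/svn/sandboxen/cgibbons/new-widget reproduces step 3, and adding --for-real performs step 4.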

Multiple Branches and Zenoss Configuration

As you switch between branches you will often render your Zenoss configuration useless. Resetting your database after each branch switch is usually a good practice, and being able to quickly recreate any test data you may need makes this process less painful.

After switching a branch, my process is usually the following:

  1. Shut down zenoss and restart only zeo.
    zenoss stop
    zeoctl start
  2. Run the zenwipe script from the inst source directory.
    $HOME/zenoss/inst/zenwipe.sh --no-prompt
  3. Run zenmigrate to install any database changes available within the current branch.
    zenmigrate
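The whole post-switch reset can be captured in one helper so it becomes a single command. This is a sketch under the assumptions of my own layout ($HOME/zenoss/inst); the commands are exactly the ones from the steps above, and the ZENWIPE variable is just an overridable path for the zenwipe script:

```shell
#!/bin/sh
# reset_zenoss: sketch of the post-branch-switch reset described above.
# Stops Zenoss, restarts only zeo, wipes the database, then migrates.
reset_zenoss() {
    zenoss stop || return 1
    zeoctl start || return 1
    # ZENWIPE defaults to the zenwipe script in my inst checkout.
    "${ZENWIPE:-$HOME/zenoss/inst/zenwipe.sh}" --no-prompt || return 1
    zenmigrate
}
```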

Depending upon the task at hand, I may install additional ZenPacks and add new devices through the command line if they are needed. Helper scripts, such as an install-windows.sh that installs all of the Windows ZenPacks and creates several local test devices in the instance, are useful tools to have for your typical configurations.
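A helper like that might look something like the following. This is only an illustrative sketch – the ZenPack egg name and batch-file name are placeholders I’ve invented, and I’m assuming the standard zenpack --install and zenbatchload commands:

```shell
#!/bin/sh
# install_windows: sketch of a helper like the install-windows.sh
# mentioned above. The egg and batch-file names below are
# placeholders for illustration, not real filenames.
install_windows() {
    # Install each Windows-related ZenPack egg.
    for pack in ZenPacks.example.WindowsPack.egg; do
        zenpack --install "$pack" || return 1
    done
    # Load test devices from a batch file (hypothetical file name).
    zenbatchload test-devices.txt
}
```

Keeping one such script per typical configuration means a wiped instance can be brought back to a useful state in a minute or two.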