Wednesday, March 8, 2017

Visual Studio Mobile Center - First Thoughts

For several weeks now I've been playing with Visual Studio Mobile Center. If you haven't looked at it yet, it is a mobile build server, mobile cloud testing platform, crash reporting system and analytics service all rolled into one. The vision is ambitious and there are very few other products that will give you as many different features as VSMC does in one spot.

Just to set some expectations, this is a product in preview, so not everything you would expect to be there is there yet. For example, you can currently only tie the build process into GitHub for source control. If you want a full, complex build process that you need to modify, or an app distribution method where you can attach release notes or something beyond the basics, it currently won't do enough for you. But what it does do is easy to work with and very intuitive, and I have not encountered any major problems other than trying to tie it to my enterprise MSDN account for the table and identity services.

This appears to be a product that was started before Microsoft purchased Xamarin, as the items implemented first are what was important to Xamarin and not Microsoft as a whole. For example, while there are plans to tie into VSTS for source control, that has not been done yet. While there are plans to support UWP apps, they are not in there yet either. Microsoft may have Cordova tools in Visual Studio (TACO), but compiling for Cordova is nowhere to be seen in VSMC. These things will likely all come in time but weren't the focus of the initial pass.

The general features of VSMC can be summarized as follows:

Build Services - For iOS and Android. Soon to feature UWP.
Cloud Test Platform - Thousands of devices in the cloud that can run your Calabash, Appium or Xamarin.UITest tests. Currently just runs tests from the command line.
App Distribution - Manage different versions of your app and allow testing groups to download and use your app. HockeyApp lite.
Table Services - A lightweight front end to Azure Table storage.
Identity Services - A lightweight front end to Azure identity services.
Crash Reporting - Track when your apps crash; seems to work for iOS, Android and UWP.
Analytics - Write custom events about how users interact with your app; also seems to work for iOS, Android and UWP.

In some ways I question the usefulness of the table and identity services that front end Azure in anything other than very simple apps. There is not a lot of control over what is happening under the covers; I'd probably go directly to Azure Mobile Apps for anything of consequence. However, if you want to stand up a quick app and your needs are really basic, the table and identity services may work for you.

I think it is fair to say the same about many of the things that are currently in VSMC. The most basic functionality is there, but there just isn't much else. Some of this is likely an artifact of the product being in preview, but to some extent I think it is by design. For basic apps, and for people just getting into mobile who don't have the capability to set up CI/CD build servers, HockeyApp, Azure and so on, it's a nice place to get started.

A nice bonus feature for those without Macs who are doing something like a Xamarin.Forms app: it can build the iOS version of the app for you. The only other thing you will need is an Apple developer account for the certificate and provisioning profile; you can then use VSMC to build and deploy the app so you can install and try it on your iPhone/iPad.
I've heard rumors that HockeyApp and Xamarin Test Cloud will at some point be rolled into VSMC. It will need a lot of work and features for that to happen. I've also heard from the team that individual pieces of VSMC may be able to be purchased independently. That would be good if you don't need the build services or cloud app testing but really want the crash reporting and analytics pieces.

Overall the product looks promising. If I were to position it, I would say it is currently intended for indie developers and small startup companies without a lot of mobile infrastructure expertise. Complex features and the ability to do customization aren't there yet, but given this will be under the Azure umbrella those capabilities may come with time. I don't know the pricing yet, but overall this is a product to watch.

Tuesday, February 14, 2017

Using an AutomationId with a Cell in Xamarin.Forms

In Xamarin.Forms version 2.2 the AutomationId was introduced for iOS and Android. AutomationIds are tied to renderers and elements that derive from VisualElement. The CellRenderers in their various flavors implement the IRegisterable interface but do not derive from VisualElementRenderer, nor does the TextCell class derive from VisualElement; instead it inherits directly from the Element class.

The upshot is that while there is an AutomationId property on TextCell, inherited from the parent Element class (as with any other class that derives from Cell), setting it does nothing. If you have a design that uses the TextCell and want to use the AutomationId, what can you do? I came up with this solution.

First I created a class that derives from TextCell. It does nothing; its only purpose is to create a type that we can create our own renderers for. An alternate approach would be to use the default TextCell with an Effect.

 public class AutomationTextCell : TextCell  
 {  
     // Intentionally empty; it exists only to give us a type to register  
     // custom renderers against.  
 }  

Now we can create some custom renderers for it. Before that, I want to talk a little about the AutomationId property. If we examine the Xamarin.Forms Element source code, we see the property defined as:

 public string AutomationId  
 {  
     get { return _automationId; }  
     set  
     {  
         if (_automationId != null)  
             throw new InvalidOperationException("AutomationId may only be set one time");  
         _automationId = value;  
     }  
 }  

There is no BindableProperty backing store for this, which means changes will not be propagated to a custom renderer. The AutomationId that exists when the native control is created is the one that will be used; changes after that will not be honored.
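If you did need an id that responds to changes, one hypothetical alternative would be an attached bindable property, which does raise change notifications. The Automation class and Id name below are invented for illustration; they are not part of Xamarin.Forms:

```csharp
using Xamarin.Forms;

// Hypothetical sketch: an attached BindableProperty that, unlike the
// built-in AutomationId, raises property-changed notifications a custom
// renderer could subscribe to.
public static class Automation
{
    public static readonly BindableProperty IdProperty =
        BindableProperty.CreateAttached(
            "Id", typeof(string), typeof(Automation), default(string));

    public static string GetId(BindableObject view)
    {
        return (string)view.GetValue(IdProperty);
    }

    public static void SetId(BindableObject view, string value)
    {
        view.SetValue(IdProperty, value);
    }
}
```

A renderer could then handle the property-changed notification instead of reading the value only once, at the cost of diverging from the stock AutomationId mechanism.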


On Android we want to set the ContentDescription property. So we will create a custom renderer in the Android project for our AutomationTextCell and override the GetCellCore method to set the view's ContentDescription property to the current AutomationId value.

 using Xamarin.Forms;  
 using Xamarin.Forms.Platform.Android;  
  
 [assembly: ExportRenderer(typeof(AutomationTextCell), typeof(AutomationTextCellRenderer))]  
 namespace YourNamespace  
 {  
     public class AutomationTextCellRenderer : TextCellRenderer  
     {  
         protected override Android.Views.View GetCellCore(Cell item, Android.Views.View convertView, Android.Views.ViewGroup parent, Android.Content.Context context)  
         {  
             var view = base.GetCellCore(item, convertView, parent, context);  
             view.ContentDescription = item.AutomationId;  
             return view;  
         }  
     }  
 }  


We can create another custom renderer in an iOS project to set the AccessibilityIdentifier property of the native view to the AutomationId.

 using Xamarin.Forms;  
 using Xamarin.Forms.Platform.iOS;  
  
 [assembly: ExportRenderer(typeof(AutomationTextCell), typeof(AutomationTextCellRenderer))]  
 namespace YourNamespace  
 {  
     public class AutomationTextCellRenderer : TextCellRenderer  
     {  
         public override UIKit.UITableViewCell GetCell(Cell item, UIKit.UITableViewCell reusableCell, UIKit.UITableView tv)  
         {  
             var tableViewCell = base.GetCell(item, reusableCell, tv);  
             tableViewCell.AccessibilityIdentifier = item.AutomationId;  
             return tableViewCell;  
         }  
     }  
 }  

Now the Marked method in a UI test will be able to find the AutomationIds for cells. It is important to note that the TextCell on both platforms contains multiple underlying views, so once you get a reference to it in your UI tests you may need to do some further querying to get the exact value you are looking for.

In your UI test you can use app.Repl() with the tree command to verify the AutomationIds are being set and to see the native structure of the cells.
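As a sketch of what such a test might look like (assuming Xamarin.UITest with NUnit; "MyCell" is a placeholder for whatever AutomationId you set, and the descendant class name may differ depending on the cell's native structure):

```csharp
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;

[TestFixture]
public class CellTests
{
    IApp app;

    [SetUp]
    public void BeforeEachTest()
    {
        // Assumes an Android test run; use ConfigureApp.iOS for iOS.
        app = ConfigureApp.Android.StartApp();
    }

    [Test]
    public void CanFindCellByAutomationId()
    {
        // Wait for the cell marked with our AutomationId to appear.
        app.WaitForElement(c => c.Marked("MyCell"));

        // The cell contains several native views, so drill into it
        // for the piece you actually want to assert on.
        AppResult[] labels = app.Query(c => c.Marked("MyCell").Descendant("TextView"));
        Assert.IsNotEmpty(labels);
    }
}
```

Running app.Repl() inside the test and issuing `tree` is the quickest way to discover which descendant class to query for.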

Good luck!

Thursday, January 19, 2017

Why Are We So Awful at Estimating Large Software Projects?

Recently I was talking with some guys over at Xamarin about estimating projects. This got me thinking again about why we are so bad at it. Over the years there has been a continued recognition in the software industry that we are awful at estimating large software projects. A lot of people have written on this subject, and at this point I feel I'm crusty enough that I can write on it too.

We've tried to work our way around our poor estimating skills with Agile models where there is no up-front estimating of the project, and that's great work if you can get it. But the reality is that in many organizations there is a CFO or another person in control of the budget who is making the decision to allow a project to be funded. He wants to know, "How much is it going to cost?" and he wants to know that before the first line of code is ever written. So we need that up-front waterfall estimate for them, even if we know how difficult such an estimate is likely to be.

When doing estimation of large software projects, for years I've seen a diagram floating around similar to this:


* I'm sorry about my poor drawing skills. This was done on an iPad Pro with a Pencil. The iPad Pro is outstanding (thanks Greg!!!!), my drawing skills are not.

This diagram is of something referred to as the cone of uncertainty. The story goes that when you start a project you know very little about it and thus you don't know its cost. You have a high degree of uncertainty and may think it is more or less difficult than it really is. Over time as you execute on the project you know more about it and the amount of uncertainty about the project and its cost shrinks until the project is done. When it is done you finally know exactly how much it did cost.

At the start of the project you know nothing or very little. This is when people may make some estimates which are shown by the red dots. Some of the estimates will be too high because the complexity and cost was over estimated and some will be too low because the complexity and cost was underestimated. A nice even distribution.

This is a myth we like to tell ourselves. Here is the reality:

What a difference. Most of the estimates are low, way low. A few may be close. These are likely the estimates that when you go out to bid are way higher than the others. Very rarely do any of them match the actual cost at the end.

I'm sure you are thinking this is terrible. Why does it happen? I believe there are several factors that go into it.

  • We have a tendency to think things we don't know well are simpler than they are
When writing custom software we are doing something that has never been done before. When we think about something we have never done before, we tend to think it is simpler than it is because we don't understand the complexities involved. We don't know what we don't know.

Recently I was working with a colleague and we were asked how much it would cost to add internationalization to an application. My colleague thought it was a reasonably simple task and listed off how the Android system handles string resources. Having just completed a project with internationalization, I replied that it could potentially be a large effort. It was a cross-platform project, so we needed a way to have cross-platform resources; we would have to see if we had to do any currency conversion and decide what exchange rates to use; perhaps we might have unit of measure conversions like pounds to kilograms or miles to kilometers; we might have graphics that need to be different per culture; we might even need to tweak the layout to make space for languages like German with long words. The amount of complexity could be very large, but we don't know all the pieces that could be in there until we have done it before.

Since the software we write is custom by definition, some of what we are doing is unique; things we have never done before. As such, we will tend to underestimate the effort.
  • Many times it is some of our best people doing the estimates
Another problem we run into when creating estimates is that in many cases it is the senior people who do the estimates on large projects. After all, they are our architects and most experienced people. This is great, but they also tend to be the people who work the fastest and may estimate based on how long it would take them to do the task. The person actually doing the work may not have the same experience or velocity.
  • We only think of the primary activity and tend to gloss over all the things around it
When thinking about a feature we tend to think in terms of what it takes to actually code it. What is in many cases forgotten are all the activities around it: creating a feature branch, checking code in and out, merging code, creating pull requests, responding to code reviews and even writing tests (if not estimated elsewhere). Additionally, user experience design and quality assurance iterations tend to be much larger than people think. All of these things add up to real time, which brings us to:
  • Things that seem small and inconsequential add up
Some of the activities I mentioned can sometimes be discounted. After all, it can be argued that it takes almost no time at all to create a feature branch. The problem with that thinking is that it does take time. A couple of minutes here and a couple of minutes there add up to large buckets of time. When they are not included, our estimate is necessarily too low. We don't tend to add things that aren't there, but we almost always discount things that should be, so the estimate is low.
  • People are not machines but many times we estimate as though they are
Sometimes we estimate as though people are going to work eight hours a day, forty hours a week on a project. This is not reality. People need time for bio breaks, admin functions, sprint ceremonies and just time to think. You should probably be thinking more like six hours a day.
  • Lower estimates make people happy and we like to please people
This is one of the more insidious reasons why estimates are low. Many of us are hard-wired to try to please people, and this includes the people who are making the estimates. They know that they will please more people when the estimates are lower: the business is more likely to be won, the project more likely to be funded.

I have a saying: unlike wine, bad news does not improve with age. I fully believe that, but it is not something we take to heart in estimating. In most cases it isn't purposeful; it is just an unconscious desire to please, leading to a bias for lower estimates. We give the lower estimates now because it is good news. But what I said still applies, bad news does not improve with age, and clients don't tend to like finding out a project was underestimated halfway through.
  • We tend to estimate the happy path
Lots of things come up when developing. Sometimes we encounter bugs with our tools, sometimes our machines break, sometimes we have no way to get access to some system, or an external resource will not be available through no fault of our own. We don't normally think about these things when estimating. An underlying assumption in many of our estimates is that our machines will always work, the services will always be up and we will get immediate access to any resource we need.
  • There are many ways to implement a feature
Recently we did a presentation with a prospect on why we couldn't estimate their project based on the information they had given us so far. We went over several of their features and mentioned three ways each could be implemented, with a very different level of effort for each.

People who go out to bid in many cases think their RFP and specifications are much better defined than they actually are. Since we tend to like to please people and know lower estimates will do so, our estimates tend to be for the simpler implementations. The problem with this is there wasn't a meeting of the minds between the prospect and the person doing the estimation. They estimated a Yugo when the prospect may be envisioning a Cadillac.
  • We only think we know what the software should do
One of the reasons to go with a minimum viable product in mobile is because only through experimentation do we find out the optimum formula to achieve business outcomes and that formula will have to be tweaked through the lifetime of the application.

I was recently with a prospect and we let them know it may be several iterations before they really get their application to where it is achieving business outcomes. Incredulously they asked us, "Do you mean to say you can't write it right the first time?" to which we replied, "Unfortunately you don't know what needs to be written yet."

This is the reality of a software product. No matter how well thought out a set of waterfall up front specifications are, they will not survive contact with the implementation intact, they will be changed. This will cause reimplementation work that is almost certainly not part of the estimate.
  • We go out for competitive bids
Another problem arises when we go out for competitive bids. We give our potential vendors ample incentive to come up with best-case scenarios; after all, they want to get the business. They will play with the model, different ways things could be implemented, etc. These changes are not necessarily being made to increase our business outcomes but instead to win the work.

Of course the problem is that much of this work is time and materials. The laws of physics are not being bent with these bids; the effort in the end will be what it will be. But we give our vendors ample incentive to "get creative" in the estimating process. Which brings me to:
  • Sales people like to sell
Sales people like to sell. In all consulting companies there is a desire to get the work. In most cases they are not purposely trying to do anything to undermine their clients, but they do want the work. That leads to a situation where tremendous pressure can be put on the people making the estimates to keep them low by the sales staff.

This problem is very pronounced in sales driven organizations where the sales teams can be very influential. After the business is won the delivery teams have to pick up the pieces and try to deliver on unrealistic estimates. If you are bringing in an outside consulting company be very careful to know if that partner is more delivery focused or sales focused. The estimates of a delivery focused organization are likely to be more realistic but likely higher.

I Don't Just Want to Complain About How We Are Bad at Estimating

If you are trying to estimate a software initiative, particularly if you are trying to bring in a partner, there are things you can do to reduce the impact of all the bad estimating.

  • Understand the bids are all too low, plan accordingly
If you went out to bid based on some specifications you came up with, know that you are at the wide opening of the cone of uncertainty and all the bids are likely clustered at the bottom. Whatever the successful partner's bid is, pad it. With that extra padding the project won't end up as far over budget as it otherwise would have been.
  • Have workshops with potential partners
While the estimates will likely be clustered around the low end of the cone of uncertainty, a way to get them as accurate as possible is to move further along the cone. The more the potential partners know, the more complexity they will understand and the more accurate their estimates are likely to be.

This has the unfortunate side effect of also likely making the estimates higher but by doing these types of workshops you can usually get a feeling for working with the different potential partners and which are better than others. We do such a workshop quite frequently, called a backlog grooming workshop. Our purpose is to move further along the cone of uncertainty and understand more about what the software needs to do while getting a feel for the prospect while they get a feel for us.
  • Examine why bids are different
If you get several bids in, don't be tempted to just take the lowest one. If one is much different from the others, try to understand why that is. It may not be an apples-to-apples comparison; they may be thinking about something the other potential partners are not, or the delivery model could be substantially different.

We do this with planning poker as well. If someone's answer is very different than everyone else's, it pays to find out why. They could very well be thinking of something crucial to why your project may cost a different amount than the other potential partners are thinking.

  • Understand if you are dealing with a sales focused or delivery focused organization

The bids of sales focused organizations may be lower. They also may have cut all kinds of corners in the delivery plan to have that lowest bid. Just be sure you understand what you are getting. Sometimes cheaper is more expensive in the long run. 

In delivery-focused organizations the delivery teams tend to have more input in the estimation process and are always asking themselves if they can implement it, because they have to. The delivery plans of such organizations tend to be better thought out and as such may have fewer cost overruns due to problems with the delivery model itself.
  • Think in terms of product instead of project
Something that we think a lot about with mobile software is whether it is a project or a product. A project has a known scope, and when it is done it's more or less done (see this post by Russ Miller: Mobile Project or Mobile Product). Projects normally have fixed up-front budgets, timetables and scope. Since we know the budget is likely too low, projects usually result in cost overruns.

Products are normally done on a capacity model with a burn rate over a period of time with a variable amount of scope. With a product approach you may not fully know the amount of functionality that will be achieved but with a fixed burn rate, you do know the cost. If there is some flexibility in the scope for an MVP release or the date, a product approach with a capacity team can give you much more accurate period over period cost forecasting and to some extent sidesteps the whole problem with underestimating.
  • Estimate small things
A co-worker has opined that a small 1-2 story point estimate is likely to be 90% accurate while a 40-point epic is likely to be only 50% accurate. The more you know about something and can break it down into smaller chunks, the more accurate the estimate is likely to be. Similarly, if you break your software releases into smaller chunks where an MVP really is the minimum, the more accurate the estimates for them will be as well.
  • Semper Gumby
At the end of the day be a little flexible with your expectations. We know people are poor estimators and we know some of the real reasons that go into it. Go in knowing that the estimates are too low but also with an understanding of what level of budgetary tolerance you can be off before business objectives are really being impacted. If you can, be somewhat flexible on scope, particularly for the first release. Complex systems take time and iterations to build correctly. 

Understand that the estimates are too low, what functionality is really critical to business success and which is not (if you think it all is, you are likely mistaken), know what tolerance you have for cost overruns and just be as flexible as you can within the constraints of meeting your business goals.

Saturday, January 14, 2017

Don't Feel Bad that You Don't Know Everything

It's been a few months since I've written a blog post. I've been busy re-learning things for my presentations, figuring out how to do all kinds of material design concepts around animations, snack bars and expansion panels, working on a podcast, creating a class, preparing for a workshop, learning the latest and greatest about MvvmCross and playing with my Swift demo code to make updates as a result of Swift 3.

That's just a small sample of what I've been trying to keep up with in the past several months. I keep telling myself I just have this one last push and it will be over for a while, but there is always another surge of learning and work right around the corner. So here I am at 1:30 AM on a Saturday morning, trying to know everything there is to know about mobile and realizing I still don't.

Mobile and responsive web weren't the first things I did this with. Before that it was various flavors of ASP.NET, Silverlight, Remoting, general .NET, VB and COM, FoxPro, and before that Clipper and dBase. Most of us understand the idea that we will spend our entire careers learning and re-tooling. But here's the rub: the velocity of change has increased dramatically and the variety of things to know has similarly increased, seemingly exponentially. If you feel confused or guilty because you don't know it all, don't be. You can't; no one can.

If you go to the latest VSLive or Evolve, don't feel guilty if the speaker is giving a presentation on whatever is new and you haven't touched it yet. Here's the reality: the speaker has probably only been working with some of these new technologies for a few months, because that's all the time anyone could have been working with them. Here's another reality check: when I listen to someone else and they know about something I don't, I think to myself, "How do they keep up with all this stuff?" I suspect many other speakers think that about others from time to time.

Why do we feel this way, why are we so hard on ourselves? Partially because we've got it in our heads that we have to know it all.

There was a time, probably around 1999, when I thought I knew everything there was to know about using VB 6 with COM. I even knew a great framework to use with it, CSLA. I thought I had it all locked down. I probably was a bit overconfident, but I wasn't far off in my thinking. The amount there was to know about that topic was comparatively finite. Of course all my illusions were shattered soon after, when I heard rumblings of something new on the horizon that would become known as .NET.

Fast forward 18 years and I don't think anyone fully knows all there is to know about Xamarin for iOS, Android and Mac, much less everything in Forms. Not even close. There is just too much to unpack. When I think I am getting even remotely close to unpacking it, it will be time to start looking at Android O and iOS 11, oh and by the way here are the new changes to C#. Don't even get me started on Azure.

As an industry we have got to stop expecting that we will know it all. We can't know it all. Instead, keep your eyes and ears open, absorb what you can, when you need to, and write your best code. You don't need to know everything to write good code and create things that provide value. If you don't know the latest language keyword they added to C#, that doesn't make your code bad. People were writing good code without that keyword last year and more than likely your code is still just fine without it.

We have to stop being so hard on ourselves.

Sunday, July 17, 2016

Xamarin iOS Autolayout Cheat Sheet

I get a lot of questions about this from people new to iOS development, or new to iOS development using Xamarin, so I've made this cheat sheet to help people learning how to use Auto Layout. Apple's iOS uses a constraint-based layout system called Auto Layout. Unfortunately, it feels bolted on top of an absolute positioning system (which in many ways it was) and feels a little awkward at times. This guide is for anyone new to the constraint system. Using Auto Layout in Xamarin iOS is very similar to what it is in Xcode, but there are some designer differences.

Background Concepts:

Size Classes: A constraint can be made for a particular size class. Size classes are Compact, Any and Regular. Compact is for narrow or short screens while Regular is for everything else. Most constraints are likely to be made with Any, which means the same constraint will be used for any size of screen. To ensure you are creating constraints with Any selected, you can click on the size class selection in the upper left-hand corner of the storyboard designer.

You can also change the size classes to see how they will lay out differently in different sized screens (well compact, any and regular).

Number of Constraints: A control will normally have four constraints that define its X/Y position, width and height. If you have more than four and didn't do it for a very particular reason (such as another control using the position of this control to do its layout), it is probably a mistake.
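To make the "four constraints" idea concrete, here is a sketch in Xamarin.iOS code of the same four constraints the designer would generate for a label. The anchors API assumes iOS 9 or later, and the spacings and sizes are arbitrary values for illustration:

```csharp
using System;
using UIKit;

public static class LayoutExample
{
    // Assumes "label" has already been added as a subview of "view".
    public static void ApplyConstraints(UIView view, UILabel label)
    {
        // Required when creating constraints in code.
        label.TranslatesAutoresizingMaskIntoConstraints = false;

        NSLayoutConstraint.ActivateConstraints(new[]
        {
            label.LeadingAnchor.ConstraintEqualTo(view.LeadingAnchor, 16f), // X position
            label.TopAnchor.ConstraintEqualTo(view.TopAnchor, 40f),         // Y position
            label.WidthAnchor.ConstraintEqualTo(200f),                      // width
            label.HeightAnchor.ConstraintEqualTo(21f)                       // height
        });
    }
}
```

Exactly one horizontal position, one vertical position, one width and one height: any more than that and the constraints must agree with each other or they will conflict.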

Control states: You can click on a control in the designer to toggle its state between resize mode and constraint editing mode.

Resize Mode:

Constraint Editing Mode:

Valid Control State Indication: A control with a blue background is considered to have a valid set of constraints. A control with no constraints at all is also considered valid (it will have a fixed X/Y position and a fixed height and width).

A control with incomplete constraints will have an orange background.

A control with conflicting constraints will have a red background. This normally means you did something wrong.

Constraint editing mode

Normally you will choose a set of horizontal constraints and a set of vertical constraints.

Common Horizontal Constraint Sets:
Start at a fixed amount of space from the left with a fixed width.

Start at a fixed amount of space from the right with a fixed width.

Start and end with a fixed amount of space to the left and the right (control stretches and shrinks as the width of the screen does).

Control a fixed amount of space to the left or right of the center (can even be exact center) with a fixed width.

Common Vertical Constraint Sets:
Start at a fixed amount of space from the top with a fixed height.

Start at a fixed amount of space from the bottom with a fixed height.

Start and end with a fixed amount of space to the top and the bottom (control stretches and shrinks as the height of the screen does).

Control a fixed amount of space to the top or bottom of the center (can even be exact center) with a fixed height.

Control Combinations: Constraints don't just have to constrain controls to the edges of the screen or the center, they can constrain controls against other controls. Take the following example, the button is constrained to the bottom of the view with a fixed height. The image view is constrained at the top of the view and the bottom by the top of the button. So the button will always be the same height and space to the bottom and the image view will shrink and grow in height as the form does, always leaving the same spacing to the button.
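A sketch of that combination in Xamarin.iOS code (assuming both controls are already subviews of the controller's view; the spacings are arbitrary values for illustration):

```csharp
using System;
using UIKit;

public static class CombinationExample
{
    public static void ApplyCombination(UIView view, UIImageView image, UIButton button)
    {
        image.TranslatesAutoresizingMaskIntoConstraints = false;
        button.TranslatesAutoresizingMaskIntoConstraints = false;

        NSLayoutConstraint.ActivateConstraints(new[]
        {
            // Button: fixed height, pinned a fixed distance from the bottom.
            button.HeightAnchor.ConstraintEqualTo(44f),
            button.BottomAnchor.ConstraintEqualTo(view.BottomAnchor, -20f),
            button.LeadingAnchor.ConstraintEqualTo(view.LeadingAnchor, 16f),
            button.TrailingAnchor.ConstraintEqualTo(view.TrailingAnchor, -16f),

            // Image view: pinned to the top of the view and to the top of
            // the button, so its height is whatever space remains.
            image.TopAnchor.ConstraintEqualTo(view.TopAnchor, 20f),
            image.BottomAnchor.ConstraintEqualTo(button.TopAnchor, -12f),
            image.LeadingAnchor.ConstraintEqualTo(view.LeadingAnchor, 16f),
            image.TrailingAnchor.ConstraintEqualTo(view.TrailingAnchor, -16f)
        });
    }
}
```

Note the image view has no height constraint of its own; its height is derived from the button's position, which is exactly what makes it stretch.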

Hiding controls: Controls that are not visible still have their constraints in place. That is to say, they still occupy the same space in the view, and any other controls that are constrained by them will still lay out as if the control were visible. This is very different behavior from other systems like Android XML layouts, where if a control is marked as gone, the other controls re-adjust as if it didn't exist. In order to make an iOS control "gone" and have any controls constrained against it readjust, you may have to set some constraints to a zero height and/or width (the constraint's Constant property) and then reset the height and width when the control becomes visible again.
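A hedged sketch of that technique: keep a reference to the control's height constraint and zero its Constant when hiding. The heightConstraint reference and normalHeight value are assumptions you would wire up yourself, for example as an outlet from the designer:

```csharp
using System;
using UIKit;

public static class CollapseExample
{
    // Collapse (or restore) a control so neighbors constrained against it
    // re-layout, mimicking Android's "gone" behavior.
    public static void SetCollapsed(UIView control, NSLayoutConstraint heightConstraint,
                                    bool collapsed, nfloat normalHeight)
    {
        control.Hidden = collapsed;
        // Hidden alone leaves the space occupied; shrinking the constraint's
        // Constant lets controls constrained to this one move up as if it
        // were not there.
        heightConstraint.Constant = collapsed ? 0f : normalHeight;
    }
}
```

The same idea applies to width constraints for horizontally stacked controls.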

There is certainly more to be learned about constraints but for anyone trying to understand them, I hope it gets you started.

Wednesday, July 6, 2016

Xamarin Build Services - Nuget Restore with VSTS

Recently I wrote a post about setting up a continuous delivery process using VSTS, Xamarin and MacInCloud. One of the things I mentioned was using the restore Nuget packages task to make sure the appropriate files were available to the build server. It seemed to work great in my test project. Then I was working with a real project using the same process and introduced MvvmCross into the mix; suddenly I encountered an error on the build server similar to "'MvvmCross.Core' already has a dependency defined for 'MvvmCross.Platform'." Here is a Stackoverflow discussion on this:

The Problem:
This error is caused by the version of Nuget currently deployed with Xamarin Studio on the Mac, which has problems with certain Nuget restore operations. After conferring with the support group from MacInCloud, they suggested that a solution similar to what I am detailing here might work, and it did.

The Solution:
Do not use the Nuget restore task in VSTS; instead, deploy the latest version of Nuget in your source control and include a shell script to restore the packages using the newer version of Nuget.

1) Download the latest version of Nuget.exe and add it to your source control repository. You can download it from the following site. You should get the latest 3.X version.

Take the downloaded nuget.exe and place it in a directory in your repository. I placed mine under a folder called Nuget off the repository root.

2) Create a shell script file. It is just a text file with a .sh extension. In the file is a single line:

mono nuget/nuget.exe restore $1

I saved this file to the root of the repository.
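As a side note, the one-liner works, but mono gives an opaque error if nuget.exe isn't where the script expects it. Here is a slightly more defensive sketch; the nuget/ folder name matches where I placed nuget.exe, and the guard and quoting are my additions:

```shell
#!/bin/sh
# Restore wrapper - the same call as above, with the solution path quoted
# and a guard for a missing nuget.exe (the guard and quoting are my additions).
# $1 is the solution file path the VSTS shell script task passes in.
NUGET=nuget/nuget.exe
if [ -f "$NUGET" ]; then
  mono "$NUGET" restore "$1"
  RESTORE_RESULT=$?
else
  echo "error: $NUGET not found - is it committed to the repository?" >&2
  RESTORE_RESULT=1
fi
```

In the real script you would finish with exit "$RESTORE_RESULT" so VSTS marks the step as failed when the restore can't run; I left that out of the sketch.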

3) Add a new shell script task to the build script in VSTS. The Script Path points to the location of the file in the repository and the Arguments setting should point to the solution file that Nuget packages need to be restored for.

With this alternative method of restoring Nuget packages on a Mac build server, more complex restores work without the errors that are encountered using the default Nuget restore task.

Tuesday, May 31, 2016

Setting Up Builds With Xamarin Using MacInCloud and VSTS

For many of us doing mobile work the ability to create great continuous integration and continuous delivery solutions has been hampered by the lack of enterprise support for Macs and generally the lack of available build agents that work with iOS and even Android projects. Recently MacInCloud released a Visual Studio Team Services (VSTS) build agent that can work with native, Xamarin and Cordova development for iOS and Android. For Magenic this is exactly what we need so I have been working through creating a repeatable process that we can use project after project to stand up this sort of build service.

To start with I want to talk about the kinds of build services we can set up:

Continuous Integration (CI) - A build designed for when developers check in code. Usually ensures that the code being checked in compiles and that all unit tests pass.

Continuous Delivery (CD) - Happens when code is promoted to an environment, such as QA, built and deployed out to the app deployment service to be used for testing. Code promotion usually happens as code is merged into a branch for that purpose, such as a QA branch.

There is also a concept of Continuous Deployment, which is in many ways similar to Continuous Delivery except that the build is pushed to the production environment. For this blog post I will focus on continuous delivery for Xamarin when code is moved to a QA branch from a Development branch. I hope to follow it up with posts for CI and also for native technologies and Cordova using VSTS and MacInCloud.

The technologies covered by this post:
Visual Studio Team Services (VSTS)
Xamarin Android
Xamarin iOS
MacInCloud VSTS Agent

What do I want to accomplish when doing my CD Build:
Kick off when moving code into the QA branch
Register my Xamarin Account
Get all required dependencies
Change version number of Android and iOS code
Build iOS and Android code
Sign Android and iOS code
Generate required documentation (will cover in a later post)
Deploy out to HockeyApp

I won't go into how to connect the VSTS MacInCloud agent to VSTS. A good post that explains this can be found here: Getting Started with the MacinCloud VSTS Build Agent Plan. I also won't go into branching strategies; I assume multiple branches with at least Development, QA and Release branches. I use git style repositories, but the TFS style repositories will work as well.

What I assume is already done:
VSTS repository created
Branching implemented
MacInCloud VSTS agent purchased
MacInCloud agent created and connected to VSTS
iOS certificates and provisioning profiles uploaded to MacInCloud agent
Note: If you ever want to change the pool name that the agent is connected to, doing so submits a request in the MacInCloud portal, but the request doesn't seem to be processed immediately. It may take a few hours, during which time you cannot build with the agent. Be cognizant of this when changing the pool name after the initial agent setup (which itself seems to happen nearly immediately).
Note: It may happen that the agent gets locked up or otherwise seems to stop responding. Unfortunately, at this time there is no way to manually reset the agent. You will need to create a support ticket with MacInCloud. Luckily they are very responsive and always got back to me later in the day.
Note: For Xamarin, a MacInCloud VSTS build agent can do pretty much all the build types you can do in Xamarin Studio on a Mac. However, that does not cover every type of .Net application. For example, if you want to compile a UWP project as part of a Xamarin Forms solution, that part of the build has to occur on a Windows build agent.

Setting up the build steps

Kick off when moving code into the QA branch

A last note on build steps: I named the branch that triggers the CD build QA, and I used the Ad-Hoc build configuration to set up all the project settings for QA builds. In my VSTS project I created a new build configuration called "QA Build". I named my MacInCloud VSTS agent queue Xamarin. My VSTS configuration for the Repository, Triggers and General tabs to kick off the build looks like this:

Register my Xamarin Account

On the Build Tab we can start adding steps. There is one task that we care about to start: Activate Xamarin License. We need to add it to the build process four times: twice at the beginning to activate the licenses for iOS and Android, and twice at the end to deactivate them.

Note: Xamarin is now free and for many builds these steps may not be necessary. For some build options, such as embedding assemblies in native code for Android, an Enterprise Xamarin subscription or MSDN Enterprise license are still required. As the formal licensing switches from the Xamarin licensing scheme to the MSDN subscription scheme, these steps will likely slightly change.

Get all required dependencies

There are two tasks that can help here, Nuget Installer and Xamarin Component Restore. Put in tasks for one or both of these depending on where your packages come from. Currently, I'm using just Nuget packages and left the default to restore for all solutions in the branch.

Change version number of Android and iOS code 

Each and every build put into HockeyApp needs a unique, ever-increasing build number so HockeyApp can tell which build is the latest. It does not use the upload or compile date as the primary mechanism for this; it uses the build number. A newer build will be considered older by HockeyApp if it has a lower build number than another build, even if its version name looks higher.

Both Android and iOS have two build identifiers: a human-readable version name such as 1.3.2 or 1.3.3, and an ever-increasing number used to tell which build is newer. On Android, in the AndroidManifest, the versionName is the human-readable name and the versionCode is the ever-increasing number. On iOS the equivalent settings are in the info.plist: CFBundleVersion is the ever-increasing number and CFBundleShortVersionString is the human-readable name.
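To make that concrete, here is a small sketch with made-up files showing where each pair of identifiers lives; the grep lines just pull the Android pair back out:

```shell
# Illustrative (made-up) versions of the two files, each holding a
# human-readable name and an ever-increasing number.
cd "$(mktemp -d)"

cat > AndroidManifest.xml <<'EOF'
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          android:versionName="1.3.3"
          android:versionCode="42" />
EOF

cat > Info.plist <<'EOF'
<key>CFBundleShortVersionString</key>
<string>1.3.3</string>
<key>CFBundleVersion</key>
<string>42</string>
EOF

# Pull the Android identifiers back out.
VERSION_NAME=$(grep -o 'versionName="[^"]*"' AndroidManifest.xml)
VERSION_CODE=$(grep -o 'versionCode="[^"]*"' AndroidManifest.xml)
echo "$VERSION_NAME $VERSION_CODE"
```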

To handle this in VSTS I use a task called Version Assemblies that is part of Colin's ALM Corner Build and Release Tools. This is not a standard VSTS task, so you will have to go out to the Marketplace (click Add Build Step, then select "Don't see what you need? Check out our Marketplace."). The Version Assemblies task allows you to search a file in source control for a RegEx match and replace it with part of the VSTS project's build number.

Setup VSTS variables

To specify the version number I created a VSTS variable. This is done on the Variables tab of the VSTS setup. In this case I want the version name to be 1.3.0; when I change the version name, I will come here and change it. There are other ways to do this, but this approach is based on the use of the Version Assemblies task.

Then I set up the build number format on the General tab. This will use the current date, the value in the VersionNumber variable and the BuildID.


Note: Some samples I have seen, including the one for the task, show using the TFS revision number for the build number. I highly recommend against this: the revision number resets on a daily basis, or whenever anything else in the build number changes. That won't give us the ever-increasing number Android and iOS need to tell what the latest build is.
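For clarity, the composition of the build number can be sketched in shell. The pieces (date, the VersionNumber variable, the BuildID) are as described above, though the exact format string lives in the General tab and the values here are illustrative:

```shell
# Compose a build number the way the General tab format does:
# date _ VersionNumber . BuildID  (values here are illustrative).
VERSION_NUMBER="1.3.0"            # the VSTS VersionNumber variable
BUILD_ID=123                      # VSTS BuildID - ever increasing, never resets
BUILD_NUMBER="$(date +%Y%m%d)_${VERSION_NUMBER}.${BUILD_ID}"
echo "$BUILD_NUMBER"              # e.g. 20160531_1.3.0.123
```

Because the BuildID never resets, the trailing portion keeps increasing across days, which is exactly what HockeyApp and the platform version codes need.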

Setup iOS placeholder versions

Due to the structure of the iOS info.plist file, it isn't easy for the task to find the right keys to replace. So we are going to use some placeholder numbers that work fine for development, but that our build service will be able to find and replace. In the info.plist file I set the values as follows:


Add Version Number tasks for iOS

Add two Version Assemblies tasks: one for the CFBundleShortVersionString and one for the CFBundleVersion. Since a regex that finds and replaces the CFBundleShortVersionString will also match the first part of the CFBundleVersion, I recommend replacing the CFBundleVersion first in the build order. Here is how I set those up. With this, the CFBundleShortVersionString will end up with the value in our VersionNumber variable (like 1.3.1) and the CFBundleVersion will end up with the unique and ever-increasing BuildID.

(You will need to open the advanced area)
Build Regex Pattern:  (?:\d\d\d\d\d\d\d\d_)(\d+.\d+.\d+.)(\d+)
Build Regex Group Index: 2
Regex Replace Pattern: (\d+.\d+.\d+.\d+.\d+)

Build Regex Pattern:  (?:\d\d\d\d\d\d\d\d_)(\d+.\d+.\d+)(.\d+)
Build Regex Group Index: 1
Regex Replace Pattern: (\d+.\d+.\d+.\d+)
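Since the placeholder screenshot doesn't reproduce here, my assumption from the two replace patterns is that the CFBundleVersion placeholder holds five dot-separated numbers and CFBundleShortVersionString holds four. A sed sketch of what the two tasks effectively do, in the recommended order:

```shell
# Simulate the two Version Assemblies replacements with sed (the placeholder
# values are my assumption, inferred from the replace patterns above).
cd "$(mktemp -d)"
cat > Info.plist <<'EOF'
<key>CFBundleShortVersionString</key>
<string>1.0.0.0</string>
<key>CFBundleVersion</key>
<string>1.0.0.0.0</string>
EOF

VERSION_NUMBER="1.3.1"
BUILD_ID=123

# CFBundleVersion first: its five-number placeholder is consumed here, so
# the four-number pattern below can't accidentally match part of it.
sed -i.bak -E "s/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/${BUILD_ID}/" Info.plist
# Then CFBundleShortVersionString's four-number placeholder.
sed -i.bak -E "s/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/${VERSION_NUMBER}/" Info.plist

grep '<string>' Info.plist
```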

Setup Android placeholder versions

In the AndroidManifest, ensure the versionName is two numbers separated by a period, such as 1.2, and that the versionCode is a number without a period, like 4.

Add Version Number Tasks for Android

Build Regex Pattern:  (?:\d\d\d\d\d\d\d\d_)(\d+.\d+.\d+)(.\d+)
Build Regex Group Index: 1
Regex Replace Pattern: versionName="\d+.\d
Prefix for Replacements: versionName="

Build Regex Pattern:  (?:\d\d\d\d\d\d\d\d_)(\d+.\d+.\d+.)(\d+)
Build Regex Group Index: 2
Regex Replace Pattern: versionCode="\d+
Prefix for Replacements: versionCode="
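The same idea on the Android side, as a sed sketch against the placeholder values described above (versionName="1.2" and versionCode="4"):

```shell
# Simulate the Android Version Assemblies replacements with sed against
# the placeholders from the manifest.
cd "$(mktemp -d)"
cat > AndroidManifest.xml <<'EOF'
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          android:versionName="1.2"
          android:versionCode="4" />
EOF

VERSION_NUMBER="1.3.0"   # group 1 of the build number regex
BUILD_ID=123             # group 2 of the build number regex

sed -i.bak -E "s/versionName=\"[0-9]+\.[0-9]+\"/versionName=\"${VERSION_NUMBER}\"/" AndroidManifest.xml
sed -i.bak -E "s/versionCode=\"[0-9]+\"/versionCode=\"${BUILD_ID}\"/" AndroidManifest.xml

grep -E 'versionName|versionCode' AndroidManifest.xml
```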

Build iOS and Android code

Build iOS Code

I build the iOS code first by using the Xamarin.iOS build step. I add it and point it to a solution that contains all the iOS projects and any shared PCLs. For the configuration I use the $(BuildConfiguration) variable, which I set to Ad-Hoc. This builds the targeted solution in the Ad-Hoc build configuration with all the appropriate build settings.

Note: I leave the Signing and Provisioning profile section blank. Instead I have defined the proper certificate and profile in the Ad-Hoc configuration project properties for the iOS projects. These correspond to the signing and provisioning profiles that I have already uploaded to my VSTS build agent in MacInCloud.
Note: I encountered an error "user interaction is not allowed" which had to do with the keychain for the certificates I uploaded being locked. I'm not sure how this happened and there is likely nothing you can do to fix it yourself. I submitted a ticket and the MacInCloud support team was able to quickly change something on their end to resolve the issue.
Note: I could have also included the Android projects in the solution and they would have built. However, I wanted direct control over their building and signing, so I created a solution used only for the builds that contains just the shared PCLs and iOS projects, so I wouldn't build the Android projects twice. It also gives me quick, high-level visibility into whether it was the iOS or the Android part of the build that failed.

Build Android Code

Here I used the Xamarin.Android task. Instead of pointing to a solution, it points to a particular project. I also added an additional argument, /t:SignAndroidPackage, that forces it to build an .APK.

Note: I didn't include anything in the project file about signing the APK because that would require me to keep my signing information in the project file itself. Instead I used a separate Android Signing build task and kept the certificate's authentication information in variables in the VSTS build definition.
Note: At first I received an error, "The Android SDK Directory could not be found. Please set via /p:AndroidSdkDirectory." This had to do with the ANDROID_HOME environment variable not being set correctly in the agent. If you put in a ticket, the MacInCloud support team can fix this for you.
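Roughly, the task then issues something like the following command on the agent. The project path and configuration are placeholders, and I haven't captured the agent's exact invocation, so treat this as a sketch:

```shell
# Build the command string the Xamarin.Android task roughly issues
# (project path and configuration are placeholders, not real paths).
PROJECT="MyApp.Droid/MyApp.Droid.csproj"
CONFIGURATION="Ad-Hoc"
BUILD_CMD="xbuild $PROJECT /p:Configuration=$CONFIGURATION /t:SignAndroidPackage"
echo "$BUILD_CMD"
```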

Sign Android and iOS code

There is nothing to do here for iOS, as the code signing occurred as part of the iOS build process. For Android I added an Android Signing task to the build steps. I also added the appropriate keystore file into source control so I could reference it as part of the build.

Setup Keystore Variables

Granted, these are not very good passwords, but they will do for this test. Create these three variables in your build definition with the proper credentials for your keystore.

Add Android Signing Task

Use the following configuration for the Android Signing. The keystore should be located within your source code checked into the branch that is building.

Note: The first time I tried this I received an error, "Path must be a string…". After searching and finding it wasn't a setting on the task, I found it was caused by the JAVA_HOME environment variable not being set up on the VSTS build agent. I submitted a ticket, they got the JDK properly set up with this variable defined, and everything started working.

Deploy out to HockeyApp

Luckily there is already a task for deploying to HockeyApp in VSTS. We need to add two of these tasks to our build definition, one for Android and one for iOS. On the first one you add, you will need to set up the connection between HockeyApp and VSTS. To do this, press the Manage link in the HockeyApp connection section of the task. This will bring up an area where you can create a new connection. It will ask for an API Token.

The API Token can be found in HockeyApp under Account Settings \ API Tokens. I created one with Full Access though only Upload and Release is probably sufficient. Copy the created API token back into the VSTS HockeyApp connection.

Setup Apps in HockeyApp

You will need to set up two apps in HockeyApp, one for Android and one for iOS. I manually created them and set the Bundle Identifier (iOS) and package name (Android). Each of these HockeyApp applications gave me an App Id that I could then copy back into the HockeyApp deployment task in the VSTS build.

Finalize Deployment Settings

For both the Android and iOS HockeyApp deployment tasks I added information on what binaries to deploy. I could also set release notes (later post), debug symbols and restrictions on who can download.



Final Thoughts

This took a fair amount of trial and error to get it right. I'm hoping that this can be used by others as it is moving toward codification as part of Magenic's new recommended CD process. I'm sure there is some cleanup to be done and I wanted to get this down while it was fresh in my mind. Let me know if you have any issues with it. Here is a view of the full list of build steps:

What the builds look like in HockeyApp for Android:

For my simple solution the entire continuous delivery process takes less than 3.5 minutes.