
5 posts tagged with "continuous integration"


Deploying from ASP.Net MVC to GitHub Pages using AppVeyor part 2

"Automation, automation, automation." Those were and are Tony Blair's priorities for keeping open source projects well maintained.

OK, that's not quite true... But what is certainly true is that maintaining an open source project takes time. And there's only so much free time that anyone has. For that reason, wherever you can it makes sense to AUTOMATE!

Last time we looked at how you can take an essentially static ASP.Net MVC site (in this case my jVUNDemo documentation site) and generate an entirely static version using Wget. This static site has been pushed to GitHub Pages and is serving as the documentation for jQuery Validation Unobtrusive Native (and for bonus points is costing me no money at all).

So what next? Well, automation clearly! If I make a change to jQuery Validation Unobtrusive Native then AppVeyor already bounds in and performs a continuous integration build for me. It picks up the latest source from GitHub, pulls in my dependencies, performs a build and runs my tests. Lovely.

So the obvious thing to do is to take this process and plug in the generation of my static site and the publication thereof to GitHub pages. The minute a change is made to my project the documentation should be updated without me having to break sweat. That's the goal.

GitHub Personal Access Token#

In order to complete our chosen mission we're going to need a GitHub Personal Access Token. We're going to use it when we clone, update and push our GitHub Pages branch. To get one we biff over to Settings / Applications in GitHub and click the "Generate New Token" button.

The token I'm using for my project has the following scopes selected:

appveyor.yml#

With our token in hand we turn our attention to AppVeyor build configuration. This is possible using a file called appveyor.yml stored in the root of your repo. You can also use the AppVeyor web UI to do this. However, for the purposes of ease of demonstration I'm using the file approach. The jQuery Validation Unobtrusive Native appveyor.yml looks like this:

#---------------------------------#
#      general configuration      #
#---------------------------------#

# version format
version: 1.0.{build}

#---------------------------------#
#    environment configuration    #
#---------------------------------#

# environment variables
environment:
  GithubEmail: [email protected]
  GithubUsername: johnnyreilly
  GithubPersonalAccessToken:
    secure: T4M/N+e/baksVoeWoYKPWIpfahOsiSFw/+Zc81VuThZmWEqmrRtgEHUyin0vCWhl

branches:
  only:
    - master

install:
  - ps: choco install wget

build:
  verbosity: minimal

after_test:
  - ps: ./makeStatic.ps1 $env:APPVEYOR_BUILD_FOLDER
  - ps: ./pushStatic.ps1 $env:APPVEYOR_BUILD_FOLDER $env:GithubEmail $env:GithubUsername $env:GithubPersonalAccessToken

There are a number of things you should notice from the yml file:

  • We create 3 environment variables: GithubEmail, GithubUsername and GithubPersonalAccessToken (more on this in a moment).
  • We only build the master branch.
  • We use Chocolatey to install Wget which is used by the makeStatic.ps1 Powershell script.
  • After the tests have completed we run 2 Powershell scripts. First makeStatic.ps1 (https://github.com/johnnyreilly/jQuery.Validation.Unobtrusive.Native/blob/master/makeStatic.ps1) which builds the static version of our site. This is the exact same script we discussed in the previous post - we're just passing it the build folder this time (one of AppVeyor's environment variables). Second, we run pushStatic.ps1 (https://github.com/johnnyreilly/jQuery.Validation.Unobtrusive.Native/blob/master/pushStatic.ps1) which publishes the static site to GitHub Pages.

We pass 4 arguments to pushStatic.ps1: the build folder, my email address, my username and my personal access token. For the sake of security the GithubPersonalAccessToken has been encrypted as indicated by the secure keyword. This is a capability available in AppVeyor here.

This allows me to mask my personal access token rather than have it available as free text for anyone to grab.
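Incidentally, because the decrypted token only surfaces as an environment variable inside the build, a cheap sanity check in a build script can confirm it actually arrived without ever echoing the value. This is a minimal sketch only, not something the build above uses:

# Sketch only: fail fast if the secure variable wasn't decrypted into the environment
if (-not $env:GithubPersonalAccessToken) {
    throw "GithubPersonalAccessToken environment variable is missing"
}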

pushStatic.ps1#

Finally we can turn our attention to how our Powershell script pushStatic.ps1 goes about pushing our changes up to GitHub Pages:

param([string]$buildFolder, [string]$email, [string]$username, [string]$personalAccessToken)

Write-Host "- Set config settings...."
git config --global user.email $email
git config --global user.name $username
git config --global push.default matching

Write-Host "- Clone gh-pages branch...."
cd "$($buildFolder)\..\"
mkdir gh-pages
git clone --quiet --branch=gh-pages https://$($username):$($personalAccessToken)@github.com/johnnyreilly/jQuery.Validation.Unobtrusive.Native.git .\gh-pages\
cd gh-pages
git status

Write-Host "- Clean gh-pages folder...."
Get-ChildItem -Attributes !r | Remove-Item -Recurse -Force

Write-Host "- Copy contents of static-site folder into gh-pages folder...."
Copy-Item -Path ..\static-site\* -Destination $pwd.Path -Recurse

git status
$thereAreChanges = git status | Select-String -Pattern "Changes not staged for commit:","Untracked files:" -SimpleMatch
if ($thereAreChanges -ne $null) {
    Write-Host "- Committing changes to documentation..."
    git add --all
    git status
    git commit -m "skip ci - static site regeneration"
    git status
    Write-Host "- Push it...."
    git push --quiet
    Write-Host "- Pushed it good!"
}
else {
    Write-Host "- No changes to documentation to commit"
}

So what's happening here? Let's break it down:

  • Git is configured with the passed in username and email address.
  • A folder is created that sits alongside the build folder called "gh-pages".
  • We clone the "gh-pages" branch of jQuery Validation Unobtrusive Native into our "gh-pages" directory. You'll notice that we are using our GitHub personal access token in the clone URL itself.
  • We delete the contents of the "gh-pages" directory leaving it empty.
  • We copy across the contents of the "static-site" folder (created by makeStatic.ps1) into the "gh-pages" folder.
  • We use git status to check if there are any changes. (This method is completely effective but a little crude to my mind - there are probably better approaches; one possibility is sketched just after this list.... shout me in the comments if you have a suggestion.)
  • If we have no changes then we do nothing.
  • If we have changes then we stage them, commit them and push them to GitHub Pages. Then we sign off with an allusion to 80's East Coast hip-hop... 'Cos that's how we roll.
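One possible refinement of that change detection, offered purely as an untested sketch rather than what the build above actually uses: git status --porcelain emits machine-readable output and prints nothing at all when the working tree is clean, so the check can become a simple test for any output:

# Sketch only: --porcelain output is empty when there is nothing to commit
$porcelainStatus = git status --porcelain
if ($porcelainStatus) {
    Write-Host "- Committing changes to documentation..."
    git add --all
    git commit -m "skip ci - static site regeneration"
    git push --quiet
}
else {
    Write-Host "- No changes to documentation to commit"
}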

With this in place, any changes to the docs will be automatically published out to our "gh-pages" branch. Our documentation will always be up to date thanks to the goodness of AppVeyor's Continuous Integration service.

Running JavaScript Unit Tests in AppVeyor

With a little help from Chutzpah...#

AppVeyor (if you're not aware of it) is a Continuous Integration provider. If you like, it's plug-and-play CI for .NET developers. It's lovely. And what's more it's "free for open-source projects with public repositories hosted on GitHub and BitBucket". Boom! I recently hooked up 2 of my GitHub projects with AppVeyor. It took me all of... 10 minutes. If that? It really is *that* good.

But.... There had to be a "but" otherwise I wouldn't have been writing the post you're reading. For a little side project of mine called Proverb there were C# unit tests and there were JavaScript unit tests. And the JavaScript unit tests weren't being run... No fair!!!

Chutzpah is a JavaScript test runner which at this point runs QUnit, Jasmine and Mocha JavaScript tests. I use the Visual Studio extension to run Jasmine tests on my machine during development. I've also been able to use Chutzpah for CI purposes with Visual Studio Online / Team Foundation Server. So what say we try and do the triple and make it work with AppVeyor too?

NuGet me?#

To run Chutzpah I needed it to be installed on the build machine. So I had 2 choices:

  1. Add Chutzpah directly to the repo
  2. Add the Chutzpah NuGet package to the solution

Unsurprisingly I chose #2 - much cleaner.
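For reference, pulling the package in is the usual NuGet dance - something like the following from the Package Manager Console should do it (assuming the package id is simply Chutzpah):

# Assumption: the Chutzpah console runner ships in a NuGet package with the id "Chutzpah"
Install-Package Chutzpah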

Now to use Chutzpah#

Time to dust down the PowerShell. I created myself a "before tests script" and added it to my build. It looked a little something like this:

# Locate Chutzpah
$ChutzpahDir = Get-ChildItem chutzpah.console.exe -Recurse | Select-Object -First 1 | Select-Object -ExpandProperty Directory

# Run tests using Chutzpah and export results as JUnit format to chutzpah-results.xml
$ChutzpahCmd = "$($ChutzpahDir)\chutzpah.console.exe $($env:APPVEYOR_BUILD_FOLDER)\AngularTypeScript\Proverb.Web.Tests.JavaScript /junit .\chutzpah-results.xml"
Write-Host $ChutzpahCmd
Invoke-Expression $ChutzpahCmd

# Upload results to AppVeyor one by one
$testsuites = [xml](Get-Content .\chutzpah-results.xml)

$anyFailures = $FALSE
foreach ($testsuite in $testsuites.testsuites.testsuite) {
    Write-Host " $($testsuite.name)"
    foreach ($testcase in $testsuite.testcase) {
        $failed = $testcase.failure
        $time = $testsuite.time
        if ($testcase.time) { $time = $testcase.time }
        if ($failed) {
            Write-Host "Failed $($testcase.name) $($testcase.failure.message)"
            Add-AppveyorTest $testcase.name -Outcome Failed -FileName $testsuite.name -ErrorMessage $testcase.failure.message -Duration $time
            $anyFailures = $TRUE
        }
        else {
            Write-Host "Passed $($testcase.name)"
            Add-AppveyorTest $testcase.name -Outcome Passed -FileName $testsuite.name -Duration $time
        }
    }
}

if ($anyFailures -eq $TRUE) {
    Write-Host "Failing build as there are broken tests"
    $host.SetShouldExit(1)
}

What this does is:

  1. Run Chutzpah from the installed NuGet package location, passing in the location of my Jasmine unit tests. In the case of my project there is a chutzpah.json file in the project which dictates how Chutzpah should run the tests. The /junit flag is also passed so that Chutzpah creates a chutzpah-results.xml file of test results in the JUnit format.
  2. We iterate through the test results and tell AppVeyor about the test passes and failures using the Build Worker API (a possible shortcut for this step is sketched just after this list).
  3. If there have been any failed tests then we fail the build. If you look here you can see a deliberately failed build which demos that this works as it should.
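As an aside, and hedged because I haven't tried it as part of this build: AppVeyor exposes a REST endpoint for uploading a whole test results XML file in one go, so if and when the JUnit format is accepted there (see the thanks below) the per-test loop above could collapse to something like:

# Untested sketch: upload the entire JUnit results file to AppVeyor in a single call
$wc = New-Object System.Net.WebClient
$wc.UploadFile("https://ci.appveyor.com/api/testresults/junit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\chutzpah-results.xml))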

That's a wrap - we now have CI which includes our JavaScript tests! That's right, we get to see beautiful screens like these:

Thanks to...#

Thanks to Dan Jones, whose comments on this discussion provided a number of useful pointers which moved me in the right direction. And thanks to Feodor Fitzner who has generously said AppVeyor will support JUnit in the future, which may simplify use of Chutzpah with AppVeyor even further.

Team Foundation Server, Continuous Integration and separate projects for JavaScript unit tests

Do you like to separate out your unit tests from the project you are testing? I imagine so. My own practice when creating a new project in Visual Studio is to create a separate unit test project alongside it, whose responsibility is to house the unit tests for that new project.

When I check in code for that project I expect the continuous integration build to kick off and, as part of that, the unit tests to be run. When it comes to running .NET tests then Team Foundation Server (and its cloud counterpart Visual Studio Online) has your back. When it comes to running JavaScript tests then... not so much.

This post will set out:

  1. How to get JavaScript tests to run on TFS / VSO in a continuous integration scenario.
  2. How to achieve this *without* having to include your tests as part of your web project.

To do this I will lean heavily (that's fancy language for "rip off entirely") on an excellent blog post by Mathew Aniyan which covers point #1. My contribution is point #2.

Points #1 and #2 in short order#

First of all, install Chutzpah on TFS / VSO. You can do this by following Steps 1 - 6 from Mathew Aniyan's post. Instead of following steps 7 and 8 create a new unit test project in your solution.

Edit 29/05/2014: Matthew Manela (creator of Chutzpah) has confirmed that this is the correct approach - thanks chap!

@johnny_reilly Nope that is pretty much what you need to do.

— Matthew Manela (@mmanela) May 15, 2014

To our unit test project add your JavaScript unit tests. These should be marked in Visual Studio with a Build Action of "Content" and a Copy to Output Directory of "Do not copy". You should be able to run these tests locally using the Visual Studio Chutzpah extension - or indeed in some other JavaScript test runner. Then, and this is the important part, edit the csproj file of your unit test project and add this Import Project statement:

<Import Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets" Condition="'$(VSToolsPath)' != ''" />

Ordering is important in this case. It matters that this new statement sits after the other Import Project statements. So you should end up with a csproj file that looks in part like this: (comments added by me for clarity)

<!-- Pre-existing Import Project statements start -->
<Import Project="$(VSToolsPath)\TeamTest\Microsoft.TestTools.targets" Condition="Exists('$(VSToolsPath)\TeamTest\Microsoft.TestTools.targets')" />
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
<!-- Pre-existing Import Project statements end -->
<!-- New addition start -->
<Import Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets" Condition="'$(VSToolsPath)' != ''" />
<!-- New addition end -->

Check in your amended csproj and the unit tests to TFS / VSO. You should see the JavaScript unit tests being run as part of the build.

The Surprisingly Happy Tale of Visual Studio Online, Continuous Integration and Chutzpah

Going off piste#

The post that follows is a slightly rambly affair which is pretty much my journal of the first steps of getting up and running with JavaScript unit testing. I will not claim that much of this blog is down to me. In fact in large part it is me working my way through Mathew Aniyan's excellent blog post on integrating Chutzpah with TFS. But a few deviations from this post have made me think it worth keeping hold of this record for my benefit (if no-one else's).

That's the disclaimers out of the way now...

...Try, try, try again...#

Getting started with JavaScript unit testing has not been the breeze I’d expected. Frankly I’ve found the docs out there not particularly helpful. But if at first you don't succeed then try, try, try again.

So after a number of failed attempts I’m going to give it another go. Rushaine McBean says Jasmine is easiest so I'm going to follow her lead and give it a go.

Let’s new up a new (empty) ASP.NET project. Yes, I know ASP.NET has nothing to do with JavaScript unit testing but my end goal is to be able to run JS unit tests in Visual Studio and as part of Continuous Integration. Further to that, I'm anticipating a future where I have a solution that contains JavaScript unit tests and C# unit tests as well. It is indeed a bold and visionary Brave New World. We'll see how far we get.

First up, download Jasmine from GitHub - I'll use v2.0. Unzip it and fire up SpecRunner.html and whaddya know... It works!

As well it might. I’d be worried if it didn’t. So I’ll move the contents of the release package into my empty project. Now let’s see if we can get those tests running inside Visual Studio. I’d heard of Chutzpah which describes itself thusly:

“Chutzpah is an open source JavaScript test runner which enables you to run unit tests using QUnit, Jasmine, Mocha, CoffeeScript and TypeScript.”

What I’m after is the Chutzpah test adapter for Visual Studio 2012/2013 which can be found here. I download the VSIX and install. Pretty painless. Once I restart Visual Studio I can see my unit tests in the test explorer. Nice! Run them and...

All fail. This makes me sad. All the errors say “Can’t find variable: Player in file”. Hmmm. Why? Dammit I’m actually going to have to read the documentation... It turns out the issue can be happily resolved by adding these 3 references to the top of PlayerSpec.js:

/// <reference path="../src/Player.js" />
/// <reference path="../src/Song.js" />
/// <reference path="SpecHelper.js" />

Now the tests pass:

The question is: can we get this working with Visual Studio Online?

Fortunately another has gone before me. Mathew Aniyan has written a superb blog post called "Javascript Unit Tests on Team Foundation Service with Chutzpah". Using this post as a guide (it was written 18 months ago which is frankly aeons in the world of the web) I'm hoping that I'll be able to, without too many tweaks, get JavaScript unit tests running on Team Foundation Service / Visual Studio Online ( / insert this week's rebranding here).

First of all in Visual Studio Online I’ll create a new project called "GettingStartedWithJavaScriptUnitTesting" (using all the default options). Apparently “Your project is created and your team is going to absolutely love this.” Hmmmm... I think I’ll be judge of that.

Let's navigate to the project. I'll fire up Visual Studio by clicking on the “Open in Visual Studio” link. Once fired up and all the workspace mapping is sorted I’ll move my project into the GettingStartedWithJavaScriptUnitTesting folder that now exists on my machine and add this to source control.

Back to Mathew's blog. It suggests renaming Chutzpah.VS2012.vsix to Chutzpah.VS2012.zip and checking certain files into TFS. I think Chutzpah has changed a certain amount since this was written. To be on the safe side I'll create a new folder in the root of my project called Chutzpah.VS2012 and put the contents of Chutzpah.VS2012.zip in there and add it to TFS (being careful to ensure that no DLLs are excluded).
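If you fancy scripting that rename-and-extract step, something along these lines ought to do it - a hedged sketch only: a .vsix is just a zip under the covers, Expand-Archive needs PowerShell 5 or later, and the paths are examples rather than anything from the original post:

# Treat the VSIX as the zip it really is, then extract it ready for checking into TFS
Copy-Item .\Chutzpah.VS2012.vsix .\Chutzpah.VS2012.zip
Expand-Archive .\Chutzpah.VS2012.zip -DestinationPath .\Chutzpah.VS2012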

Then I'll follow steps 3 and 4 from the blog post:

*3. In Visual Studio, open Team Explorer & connect to Team Foundation Service. Bring up the Manage Build Controllers dialog [Build –> Manage Build Controllers]. Select Hosted Build Controller. Click on the Properties button to bring up the Build Controller Properties dialog.

4. Change Version Control Path to custom Assemblies to refer to the folder where you checked in the binaries in step 2.

In step 5 the blog tells me to edit my build definition. Well I don’t have one in this new project so let’s click on “New Build Definition”, create one and then follow step 5:

*5. In Team Explorer, go to the Builds section and Edit your Build Definition which will run the javascript tests. Click on the Process tab. Select the row named Automated Tests. Click on the … button next to the value.

Rather than following step 6 (which essentially nukes the running of any .NET tests you might have) I chose to add another row by clicking "Add". In the dialog presented I changed the Test assembly specification to **\*.js and checked the "Fail build on test failure" checkbox.

Step 7 says:

*7. Create your Web application in Visual Studio and add your Qunit or Jasmine unit tests to them. Make sure that the js files (that contain the tests) are getting copied to the build output directory.

The picture below step 7 suggests that you should be setting your test / spec files to have a Copy to Output Directory setting of Copy always. This did not work for me!!! Instead, setting a Build Action of Content and a Copy to Output Directory setting of Do not copy did work.

Finally I checked everything into source control and queued a build. I honestly did not expect this to work. It couldn’t be this easy could it? And...

Wow! It did! Here’s me cynically expecting some kind of “permission denied” error and it actually works! Brilliant! Look up in the cloud it says the same thing!

Fantastic!

I realise that I haven’t yet written a single JavaScript unit test of my own and so I’ve still a way to go. What I have done is quietened those voices in my head that said “there’s not too much point having a unit test suite that isn’t plugged into continuous integration”. Although it's not documented here I'm happy to be able to report that I have been able to follow the self-same instructions for Team Foundation Service / Visual Studio Online and get CI working with TFS 2012 on our build server as well.

Having got up and running off the back of other people's hard work I'd best try and write some of my own tests now....

Getting TypeScript Compile-on-Save and Continuous Integration to play nice

Well sort of... Perhaps this post should more accurately be called "How to get CI to ignore your TypeScript whilst Visual Studio still compiles it..."

Once there was Web Essentials#

When I first started using TypeScript, I was using it in combination with Web Essentials. Those were happy days. I saved my TS file and Web Essentials would kick off TypeScript compilation. Ah bliss. But the good times couldn't last forever and sure enough when version 3.0 of Web Essentials shipped it pulled support for TypeScript.

This made me, and others, very sad. Essentially we were given the choice between sticking with an old version of Web Essentials (2.9 - the last release before 3.0) and keeping our Compile-on-Save *or* keeping with the latest version of Web Essentials and losing it. And since I understood that newer versions of TypeScript had differences in the compiler flags which slightly broke compatibility with WE 2.9 the latter choice seemed the most sensible...

But there is still Compile on Save hope!#

The information was that we need not lose our Compile on Save. We just need to follow the instructions here. Or to quote them:

Then additionally add (or replace if you had an older PreBuild action for TypeScript) the following at the end of your project file to include TypeScript compilation in your project.

...

For C#-style projects (.csproj):

<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
  <TypeScriptTarget>ES5</TypeScriptTarget>
  <TypeScriptIncludeComments>true</TypeScriptIncludeComments>
  <TypeScriptSourceMap>true</TypeScriptSourceMap>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <TypeScriptTarget>ES5</TypeScriptTarget>
  <TypeScriptIncludeComments>false</TypeScriptIncludeComments>
  <TypeScriptSourceMap>false</TypeScriptSourceMap>
</PropertyGroup>
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" />

I followed these instructions (well, I had to tweak the Import Project location) and I was in business again. But when I came to check my code into TFS I came unstuck. The automated build kicked off and then, in short order, kicked me:

C:\Builds\1\MyApp\MyApp Continuous Integration\src\MyApp\MyApp.csproj (1520): The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\TypeScript\Microsoft.TypeScript.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
C:\Builds\1\MyApp\MyApp Continuous Integration\src\MyApp\MyApp.csproj (1520): The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\TypeScript\Microsoft.TypeScript.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

That's right, TypeScript wasn't installed on the build server. And since TypeScript was now part of the build process my builds were now failing. Ouch.
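Incidentally, a quick way to check whether a given machine has the TypeScript targets available is simply to test for the file the error complains about - a minimal sketch, assuming the VS 2012 (v11.0) path from the error above; adjust the version folder for other editions:

# Returns $true if the TypeScript MSBuild targets are installed in the default VS 2012 location
Test-Path "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\TypeScript\Microsoft.TypeScript.targets"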

So what now?#

I did a little digging and found this issue report on the TypeScript CodePlex site. To quote the issue, it seemed there were 2 possible solutions to get continuous integration and TypeScript playing nice:

  1. Install TypeScript on the build server
  2. Copy the required files for Microsoft.TypeScript.targets to a different source-controlled folder and change the path references in the csproj file to this folder.

#1 wasn't an option for us - we couldn't install on the build server. And covering both #1 and #2, I wasn't particularly inclined to have TypeScript compilation running on the build server anyway, since I was wary of reported problems with memory leaks etc. in the TS compiler. I may feel differently later when TS is no longer in Alpha and has stabilised, but it didn't seem like the right time.

A solution#

So, to sum up, what I wanted was to be able to compile TypeScript in Visual Studio on my machine, and indeed in VS on the machine of anyone else working on the project. But I *didn't* want TypeScript compilation to be part of the build process on the server.

The solution in the end was pretty simple - I replaced the .csproj changes with the code below:

<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
  <TypeScriptTarget>ES5</TypeScriptTarget>
  <TypeScriptRemoveComments>false</TypeScriptRemoveComments>
  <TypeScriptSourceMap>false</TypeScriptSourceMap>
  <TypeScriptModuleKind>AMD</TypeScriptModuleKind>
  <TypeScriptNoImplicitAny>true</TypeScriptNoImplicitAny>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <TypeScriptTarget>ES5</TypeScriptTarget>
  <TypeScriptRemoveComments>false</TypeScriptRemoveComments>
  <TypeScriptSourceMap>false</TypeScriptSourceMap>
  <TypeScriptModuleKind>AMD</TypeScriptModuleKind>
  <TypeScriptNoImplicitAny>true</TypeScriptNoImplicitAny>
</PropertyGroup>
<Import Project="$(VSToolsPath)\TypeScript\Microsoft.TypeScript.targets" Condition="Exists('$(VSToolsPath)\TypeScript\Microsoft.TypeScript.targets')" />

What this does is enable TypeScript compilation *only* if TypeScript is installed. So when I'm busy developing with Visual Studio on my machine with the plugin installed I can compile TypeScript. But when I check in, the TypeScript compilation is *not* performed on the build server. This is because TypeScript is not installed on the build server and we are only compiling if it is installed. (Just to completely labour the point.)

Final thoughts#

I do consider this an interim solution. As I mentioned earlier, when TypeScript has stabilised I think I'd like TS compilation to be part of the build process. Like with any other code I think compiling on check-in to catch bugs early is an excellent idea. But I think I'll wait until there's some clearer guidance on the topic from the TypeScript team before I take this step.