
5 posts tagged with "c#"


Snapshot Testing for C#

If you're a user of Jest, you've no doubt heard of and perhaps made use of snapshot testing.

Snapshot testing is an awesome tool that is generally discussed in the context of JavaScript React UI testing. But snapshot testing has a wider application than that. Essentially it is profoundly useful where you have functions which produce a complex structured output. It could be a React UI, it could be a list of FX prices. The type of data is immaterial; it's the amount of it that's key.

Typically there's a direct correlation between the size and complexity of the output of a method and the length of the tests that will be written for it. Let's say you're outputting a class that contains 20 properties. Congratulations! You get to write 20 assertions in one form or another for each test case. Or a single assertion whereby you supply the expected output by hand specifying each of the 20 properties. Either way, that's not going to be fun. And just imagine the time it would take to update multiple test cases if you wanted to change the behaviour of the method in question. Ouchy.

Time is money kid. What you need is snapshot testing. Say goodbye to handcrafted assertions and hello to JSON serialised output checked into source control. Let's unpack that a little bit. The usefulness of snapshot testing that I want in C# is predominantly about removing the need to write and maintain multiple assertions. Instead you write tests that compare the output of a call to your method with JSON serialised output you've generated on a previous occasion.

This approach takes less time to write, less time to maintain and the solid readability of JSON makes it more likely you'll pick up on bugs. It's so much easier to scan JSON than it is a list of assertions.

Putting the Snapshot into C#

Now if you're writing tests in JavaScript or TypeScript then Jest already has your back with CLI snapshot generation and `toMatchSnapshot`. However, getting to nearly the same place in C# is delightfully easy. What are we going to need?

First up, a serializer which can take your big bad data structures and render them as JSON. Also we'll use it to rehydrate our data structure into an object ready for comparison. We're going to use Json.NET.

Next up we need a way to compare our outputs with our rehydrated snapshots - we need a C# `toMatchSnapshot`. There are many choices out there, but for my money Fluent Assertions is king of the hill.

Finally we're going to need Snapshot, a little helper utility I put together:

```cs
using System;
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

namespace Test.Utilities {
    public static class Snapshot {
        private static readonly JsonSerializer StubSerializer = new JsonSerializer {
            ContractResolver = new CamelCasePropertyNamesContractResolver(),
            NullValueHandling = NullValueHandling.Ignore
        };

        private static JsonTextWriter MakeJsonTextWriter(TextWriter sw) => new JsonTextWriter(sw) {
            Formatting = Formatting.Indented,
            IndentChar = ' ',
            Indentation = 2
        };

        /// <summary>
        /// Make yourself some JSON! Usage looks like this:
        /// Snapshot.Make($"{System.AppDomain.CurrentDomain.BaseDirectory}..\\..\\..\\data.json", myData);
        /// </summary>
        public static void Make<T>(string stubPath, T data) {
            try {
                if (string.IsNullOrEmpty(stubPath))
                    throw new ArgumentNullException(nameof(stubPath));
                if (data == null)
                    throw new ArgumentNullException(nameof(data));

                using (var sw = new StreamWriter(stubPath))
                using (var writer = MakeJsonTextWriter(sw)) {
                    StubSerializer.Serialize(writer, data);
                }
            } catch (Exception exc) {
                throw new Exception($"Failed to make {stubPath}", exc);
            }
        }

        public static string Serialize<T>(T data) {
            using (var sw = new StringWriter())
            using (var writer = MakeJsonTextWriter(sw)) {
                StubSerializer.Serialize(writer, data);
                return sw.ToString();
            }
        }

        public static string Load(string filename) {
            using (var reader = new StreamReader(File.OpenRead(filename))) {
                return reader.ReadToEnd();
            }
        }
    }
}
```

Let's look at the methods: Make and Load. Make is what we're going to use to create our snapshots. Load is what we're going to use to, uh, load our snapshots.

What does usage look like? Great question. Let's go through the process of writing a C# snapshot test.

Taking Snapshot for a Spin

First of all, we're going to need a method to test that outputs a data structure which is more than just a scalar value. Let's use this:

```cs
public class Leopard {
    public string Name { get; set; }
    public int Spots { get; set; }
}

public class LeopardService {
    public Leopard[] GetTheLeopards() {
        return new Leopard[] {
            new Leopard { Spots = 42, Name = "Nimoy" },
            new Leopard { Spots = 900, Name = "Dotty" }
        };
    }
}
```

Yes - our trusty LeopardService. As you can see, the GetTheLeopards method returns an array of Leopards. For now, let's write a test using Snapshot (ours is an xUnit test, but Snapshot is agnostic of the test framework):

```cs
[Fact]
public void GetTheLeopards_should_return_expected_Leopards() {
    // Arrange
    var leopardService = new LeopardService();

    // Act
    var leopards = leopardService.GetTheLeopards();

    // UNCOMMENT THE LINE BELOW *ONLY* WHEN YOU WANT TO GENERATE THE SNAPSHOT
    Snapshot.Make($"{System.AppDomain.CurrentDomain.BaseDirectory}..\\..\\..\\Snapshots\\leopardsSnapshot.json", leopards);

    // Assert
    var snapshotLeopards = JsonConvert.DeserializeObject<Leopard[]>(Snapshot.Load("Snapshots/leopardsSnapshot.json"));
    snapshotLeopards.Should().BeEquivalentTo(leopards);
}
```

Before we run this for the first time we need to set up our testing project to be ready for snapshots. First of all we add a Snapshots folder to the test project. Then we also add the following to the .csproj:

```xml
<ItemGroup>
  <Content Include="Snapshots\**">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```

This includes the snapshots in the compile output for when tests are being run.

Now let's run the test. It will generate a leopardsSnapshot.json file:

```json
[
  {
    "name": "Nimoy",
    "spots": 42
  },
  {
    "name": "Dotty",
    "spots": 900
  }
]
```

With our snapshot in place, we comment out the Snapshot.Make... line and we have a passing test. Let's commit our code, push and go about our business.

Time Passes...

Someone decides that the implementation of GetTheLeopards needs to change. Defying expectations it seems that Dotty the leopard should now have 90 spots. I know... Business requirements, right?

If we make that change we'd ideally expect our trusty test to fail. Let's see what happens:

```
----- Test Execution Summary -----
Leopard.Tests.Services.LeopardServiceTests.GetTheLeopards_should_return_expected_Leopards:
Outcome: Failed
Error Message:
Expected item[1].Spots to be 90, but found 900.
```

Boom! We are protected!

Since this is a change we're completely happy with we want to update our leopardsSnapshot.json file. We could make our test pass by manually updating the JSON. That'd be fine. But why work when you don't have to? Let's uncomment our Snapshot.Make... line and run the test the once.

```json
[
  {
    "name": "Nimoy",
    "spots": 42
  },
  {
    "name": "Dotty",
    "spots": 90
  }
]
```

That's right, we have an updated snapshot! Minimal effort.

Next Steps

This is a basic approach to getting the goodness of snapshot testing in C#. It could be refined further. To my mind the uncommenting / commenting of code is not the most elegant way to approach this and so there's some work that could be done around this area.
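One possible refinement is to gate snapshot regeneration on an environment variable, in the style of Jest's `--updateSnapshot` flag. This is just a sketch of mine, not part of the Snapshot utility above - the `UPDATE_SNAPSHOTS` variable name and `SnapshotMode` class are my own inventions:

```cs
using System;

// Sketch: a switch the test can consult instead of commented-out code.
// UPDATE_SNAPSHOTS is an assumed convention, not an established one.
public static class SnapshotMode {
    // True when the environment variable UPDATE_SNAPSHOTS is set to "1"
    public static bool UpdateRequested =>
        Environment.GetEnvironmentVariable("UPDATE_SNAPSHOTS") == "1";
}
```

The generation line in the test then becomes `if (SnapshotMode.UpdateRequested) Snapshot.Make(...);` and running the tests with `UPDATE_SNAPSHOTS=1` refreshes the snapshot files with no commenting or uncommenting required.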

Happy snapshotting!

Getting up to speed with Bloomberg's Open API...

A good portion of any dev's life is usually spent playing with APIs. If you need to integrate some other system into the system you're working on (and it's rare to come upon a situation where this doesn't happen at some point) then it's API time.

Some APIs are well documented and nice to use. Some aren't. I recently spent a goodly period of time investigating Bloomberg's Open API and it was a slightly painful experience. So much so that I thought it best to write up my own experiences and maybe I can save others time and a bit of pain.

Also, as I investigated the Bloomberg Open API I found myself coming up with my own little mini-C#-API. (It's generally a sure sign you've found an API you don't love if you end up writing your own wrapper.) This mini API did the heavy lifting for me and just handed back nicely structured data to deal with. I have included this wrapper here as well.

Research

The initial plan was to, through code, extract Libor and Euribor rates from Bloomberg. I had access to a Bloomberg terminal and I had access to the internet - what could stop me? After digging around for a little while I found some useful resources that could be accessed from the Bloomberg terminal:

  1. Typing "WAPI<GO>" into Bloomberg led me to the Bloomberg API documentation.
  2. Typing "DOCS 2055451<GO>" into Bloomberg (I know - it's a bit cryptic) provided me with samples of how to use the Bloomberg API in VBA.

To go with this I found some useful documentation of the Bloomberg Open API here and I found the .NET Bloomberg Open API itself here.

Hello World?

The first goal when getting up to speed with an API is getting it to do something. Anything. Just stick a fork into it and see if it croaks. Sticking a fork into Open API was achieved by taking the 30-odd example apps included in the Bloomberg Open API and running each in turn on the Bloomberg box until I had my "he's alive!!" moment. (I did find it surprising that not all of the examples worked - I don't know if there's a good reason for this...)

However, when I tried to write my own C# console application to interrogate the Open API it wasn't as plain sailing as I'd hoped. I'd write something that looked correct, compiled successfully and deploy it onto the Bloomberg terminal only to have it die a sad death whenever I tried to fire it off.

I generally find the fastest way to get up and running with an API is to debug it. To make calls to the API and then examine, field by field and method by method, what is actually there. This wasn't really an option with my console app though. I was using a shared Bloomberg terminal with very limited access. No Visual Studio on the box and no remote debugging enabled.

It was then that I had something of a eureka moment. I realised that the code in the VBA samples I'd downloaded from Bloomberg looked quite similar to the C# code samples that shipped with Open API. Hmmmm.... Shortly after this I found myself sat at the Bloomberg machine debugging the Bloomberg API using the VBA IDE in Excel. (For the record, these debugging tools aren't too bad at all - they're nowhere near as slick as their VS counterparts but they do the job.) This was my Rosetta Stone - I could take what I'd learned from the VBA samples and translate that into equivalent C# / .NET code (bearing in mind what I'd learned from debugging in Excel and in fact sometimes bringing along the VBA comments themselves if they provided some useful insight).

He's the Bloomberg, I'm the Wrapper

So I'm off and romping... I have something that works. Hallelujah! Now that that hurdle had been crossed I found myself examining the actual Bloomberg API code itself. It functioned just fine but it did a couple of things that I wasn't too keen on:

  1. The Bloomberg API came with custom data types. I didn't want to use these unless it was absolutely necessary - I just wanted to stick to the standard .NET types. This way if I needed to hand data onto another application I wouldn't be making each of these applications dependent on the Bloomberg Open API.
  2. To get the data out of the Bloomberg API there was an awful lot of boilerplate. Code which handled the possibilities of very large responses that might be split into several packages. Code which walked the element tree returned from Bloomberg parsing out the data. It wasn't a beacon of simplicity.

I wanted an API that I could simply invoke with security codes and required fields, and in return be handed nicely structured data. Since, as I've already mentioned, I didn't want to introduce unnecessary dependencies, nested Dictionaries seemed a good fit. I came up with a simple C# console project / application which had a reference to the Bloomberg Open API. It contained my wrapper class for Open API operations (please note this was deliberately a very "bare-bones" implementation).
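The original listing isn't reproduced here, but a minimal sketch can show the shape such a wrapper might take. The class and method names below are my own, and the actual Bloomberg Open API calls (session / request / response-walking boilerplate) are stubbed out, since the point is the dependency-free return type:

```cs
using System;
using System.Collections.Generic;

// Sketch only: stands in for the wrapper described above.
// Real Bloomberg Open API calls are omitted; names are assumptions.
public static class BloombergData {
    // Returns data keyed by security, then by field - standard .NET types only,
    // so consumers take no dependency on the Bloomberg Open API.
    public static Dictionary<string, Dictionary<string, object>> GetFields(
        string[] securities, string[] fields) {
        var results = new Dictionary<string, Dictionary<string, object>>();
        foreach (var security in securities) {
            var fieldValues = new Dictionary<string, object>();
            foreach (var field in fields) {
                // In the real wrapper this value would be parsed out of the
                // element tree returned by a Bloomberg reference data request.
                fieldValues[field] = RequestFieldFromBloomberg(security, field);
            }
            results[security] = fieldValues;
        }
        return results;
    }

    // Placeholder for the Open API boilerplate (session, service, request,
    // multi-package responses) that the wrapper hides from its callers.
    private static object RequestFieldFromBloomberg(string security, string field) =>
        $"{security}:{field}";
}
```

A call then looks like `var data = BloombergData.GetFields(new[] { "VOD LN Equity" }, new[] { "PX_LAST" });` and the consumer just indexes into `data["VOD LN Equity"]["PX_LAST"]`.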

The project also contained this class which demonstrates how I made use of my wrapper:

And here's what the output looked like:

This covered my bases. It was simple, it was easy to consume and it didn't require any custom types. My mini-API is only really catering for my own needs (unsurprisingly). However, there's lots more to the Bloomberg Open API and I may end up taking this further in the future if I encounter use cases that my current API doesn't cover.

Update (07/12/2012)

Finally, a PS. I found in the Open API FAQs that "Testing any of that functionality currently requires a valid Bloomberg Desktop API (DAPI), Server API (SAPI) or Managed B-Pipe subscription. Bloomberg is planning on releasing a stand-alone simulator which will not require a subscription." There isn't any word yet on this stand-alone simulator. I emailed Bloomberg at [email protected] to ask about this. They kindly replied that "Unfortunately it is not yet available. We understand that this makes testing API applications somewhat impractical, so we're continuing to work on this tool." Fingers crossed for something we can test soon!

Note to self (because I keep forgetting)

If you're looking to investigate what data is available about a security in Bloomberg it's worth typing "FLDS<GO>" into Bloomberg. This is the Bloomberg Fields Finder. Likewise, if you're trying to find a security you could try typing "SECF<GO>" into Bloomberg as this is the Security Finder.

Making PDFs from HTML in C# using WKHTMLtoPDF

Update 03/01/2013

I've written a subsequent post which builds on the work of this original post. The new post exposes this functionality via a WCF service and can be found here.

Making PDFs from HTML

I wanted to talk about an approach I've discovered for making PDFs directly from HTML. I realise that in these wild and crazy days of PDF.js and the like, techniques like this must seem very old hat. That said, this technique works and more importantly it solves a problem I was faced with, without forcing the users to move to the "newest hottest version of X". Much as many of us would love to solve problems this way, alas many corporations move slower than that and in the meantime we still have to deliver - we still have to meet requirements. Rather than just say "I did this" I thought I'd record how I got to this point in the first place. I don't know about you but I find the reasoning behind why different technical decisions get made quite an interesting topic...

For some time I've been developing / supporting an application which is used in an intranet environment where the company mandated browser is still IE 6. It was a requirement that there be "print" functionality in this application. As is well known (even by Microsoft themselves) the print functionality in IE 6 was never fantastic. But the requirement for usable printouts remained.

The developers working on the system before me decided to leverage Crystal Reports (remember that?). Essentially there was a reporting component to the application at the time which created custom reports using Crystal and rendered them to the user in the form of PDFs (which have been eminently printable for as long as I care to remember). One of the developers working on the system realised that it would be perfectly possible to create some "reports" within Crystal which were really "print to PDF" screens for the app.

It worked well and this solution stayed in place for a very long time. However, some years down the line Crystal Reports was discarded as the reporting mechanism for the app. But we were unable to decommission Crystal entirely because we still needed it for printing.

I'd never really liked the Crystal solution for a number of reasons:

  1. We needed custom stored procs to drive the Crystal print screens which were near duplicates of the main app procs. This duplication of effort never felt right.
  2. We had to switch IDEs whenever we were maintaining our print screens. And the Crystal IDE is not a joy to use.
  3. Perhaps most importantly, for certain users we needed to hide bits of information from the print. The version of Crystal we were using did not make the dynamic customisation of our print screens a straightforward proposition. (In its defence we weren't really using it for what it was designed for.) As a result the developers before me had ended up creating various versions of each print screen revealing different levels of information. As you can imagine, this meant that the effort involved in making changes to the print screens had increased exponentially

It occurred to me that finding some way of generating our own PDF reports without using Crystal would be a step forward. It was shortly after this that I happened upon WKHTMLtoPDF. This is an open source project which describes itself as a "Simple shell utility to convert html to pdf using the webkit rendering engine, and qt." I tested it out on various websites and it worked. It wasn't by any stretch of the imagination a perfect HTML to PDF tool but the quality it produced greatly outstripped the presentation currently in place via Crystal.

This was just the ticket. Using WKHTMLtoPDF I could have simple web pages in the application which could be piped into WKHTMLtoPDF to make a PDF as needed. It could be dynamic - because ASP.NET is dynamic. We wouldn't need to write and maintain custom stored procs anymore. And happily we would no longer need to use Crystal.

Before we could rid ourselves of Crystal though, I needed a way that I could generate these PDFs on the fly within the website. For this I ended up writing a simple wrapper class for WKHTMLtoPDF which could be used to invoke it on the fly. In fact a good portion of this was derived from various contributions on a post on StackOverflow. It ended up looking like this:
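Something along these lines, at any rate - what follows is a minimal sketch of mine rather than the original listing. The default install path, the timestamped filename scheme and the `BuildArguments` helper are all assumptions:

```cs
using System;
using System.Diagnostics;
using System.IO;

// Sketch of a wkhtmltopdf wrapper; class name, default install path and
// filename scheme are assumptions, not the original implementation.
public static class PdfGenerator {
    // wkhtmltopdf takes one or more source URLs followed by the output file
    public static string BuildArguments(string[] urls, string outputPath) =>
        string.Join(" ", urls) + " \"" + outputPath + "\"";

    public static string HtmlToPdf(string pdfOutputLocation,
                                   string outputFilenamePrefix,
                                   string[] urls,
                                   string wkhtmlToPdfExePath =
                                       @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe") {
        // e.g. GeneratedPDF-20121123-103015.pdf
        var outputFilename = outputFilenamePrefix + "-" +
            DateTime.Now.ToString("yyyyMMdd-HHmmss") + ".pdf";
        var outputPath = Path.Combine(pdfOutputLocation, outputFilename);

        var startInfo = new ProcessStartInfo(wkhtmlToPdfExePath,
                                             BuildArguments(urls, outputPath)) {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(startInfo)) {
            process.WaitForExit();
            if (!File.Exists(outputPath))
                throw new Exception("wkhtmltopdf did not produce " + outputPath);
        }
        return outputPath;
    }
}
```

Note that in a real ASP.NET app a virtual path like "~/PDFs/" would need mapping to a physical path (e.g. via Server.MapPath) before being handed to the process.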

With this wrapper I could pass in URLs and extract out PDFs. Here's a couple of examples of me doing just that:

```cs
// Create PDF from a single URL
var pdfUrl = PdfGenerator.HtmlToPdf(pdfOutputLocation: "~/PDFs/",
    outputFilenamePrefix: "GeneratedPDF",
    urls: new string[] { "http://news.bbc.co.uk" });

// Create PDF from multiple URLs
var pdfUrl = PdfGenerator.HtmlToPdf(pdfOutputLocation: "~/PDFs/",
    outputFilenamePrefix: "GeneratedPDF",
    urls: new string[] { "http://www.google.co.uk", "http://news.bbc.co.uk" });
```

As you can see from the second example above it's possible to pipe a number of URLs into the wrapper all to be rendered to a single PDF. Most of the time this was surplus to our requirements but it's good to know it's possible. Take a look at the BBC website PDF generated by the first example:

Pretty good, no? As you can see from the titles (a bit squashed) it's not perfect, but I deliberately picked a more complicated page to show what WKHTMLtoPDF was capable of. The print screens I had in mind to build would be significantly simpler than this.

Once this was in place I was able to scrap the Crystal solution. It was replaced with a couple of "print to PDF" ASPXs in the main web app which would be customised when rendering to hide the relevant bits of data from the user. These ASPXs would be piped into the HtmlToPdf method as needed and then the user would be redirected to that PDF. If for some reason the PDF failed to render the users would see the straight "print to PDF" ASPX - just not as a PDF if you see what I mean. I should say that it was pretty rare for a PDF to not render but this was my failsafe.

This new solution had a number of upsides from our perspective:

  • Development maintenance time (and consequently cost for our customers) for print screens was significantly reduced. This was due to the print screens being part of the main web app. This meant they shared styling etc with all the other web screens and the dynamic nature of ASP.NET made customising a screen on the fly simplicity itself.
  • We were now able to regionalise our print screens for the users in the same way as we did with our main web app. This just wasn't realistic with the Crystal solution because of the amount of work involved.
  • I guess this is kind of a DRY solution :-)

You can easily make use of the above approach yourself. All you need do is download and install WKHTMLtoPDF on your machine. I advise using version 0.9.9 as the later release candidates appear slightly buggy at present.

Couple of gotchas:

  1. Make sure that you pass the correct installation path to the HtmlToPdf method if you installed it anywhere other than the default location. You'll see that the class assumes the default if it wasn't passed
  2. Ensure that Read and Execute rights are granted to the wkhtmltopdf folder for the relevant process
  3. Ensure that Write rights are granted for the location you want to create your PDFs for the relevant process

In our situation we are invoking this directly in our web application on demand. I have no idea how this would scale - perhaps not well. This is not really an issue for us as our user base is fairly small and this functionality isn't called excessively. I think if this was used much more than it is I'd be tempted to hive off this functionality into a separate app. But this works just dandy for now.

Using the PubSub / Observer pattern to emulate constructor chaining without cluttering up global scope

Yes, the title of this post is *painfully* verbose. Sorry about that. A couple of questions for you:

  • Have you ever liked the way you can have base classes in C# which can then be inherited and subclassed in a different file / class?
  • Have you ever thought; gosh, it'd be nice to do something like that in JavaScript...
  • Have you then looked at JavaScript's prototypical inheritance and thought "right.... I'm sure it's possible but this is going to end up like War and Peace"?
  • Have you then subsequently thought "and hold on a minute... even if I did implement this using the prototype and split things between different files / modules wouldn't I have to pollute the global scope to achieve that? And wouldn't that mean that my code was exposed to the vagaries of any other scripts on the page? Hmmm..."
  • Men! Are you skinny? Do bullies kick sand in your face? (Just wanted to see if you were still paying attention...)

The Problem

Well, the above thoughts occurred to me just recently. I had a situation where I was working on an MVC project and needed to build up quite large objects within JavaScript representing various models. The models in question were already implemented on the server side using classes and made extensive use of inheritance, because many of the properties were shared between the various models. That is to say, we would have models which were implemented through the use of a class inheriting a base class which in turn inherits a further base class. With me? Good. Perhaps I can make it a little clearer with an example. Here are my 3 classes. First BaseReilly.cs:

```cs
public class BaseReilly
{
    public string LastName { get; set; }

    public BaseReilly()
    {
        LastName = "Reilly";
    }
}
```
Next BoyReilly.cs (which inherits from BaseReilly):

```cs
public class BoyReilly : BaseReilly
{
    public string Sex { get; set; }

    public BoyReilly()
        : base()
    {
        Sex = "It is a manchild";
    }
}
```

And finally JohnReilly.cs (which inherits from BoyReilly which in turn inherits from BaseReilly):

```cs
public class JohnReilly : BoyReilly
{
    public string FirstName { get; set; }

    public JohnReilly()
        : base()
    {
        FirstName = "John";
    }
}
```
Using the above I can create myself my very own "JohnReilly" like so:

```cs
var johnReilly = new JohnReilly();
```

And it will look like this: an object with LastName "Reilly", Sex "It is a manchild" and FirstName "John".

I was looking to implement something similar on the client and within JavaScript. I was keen to ensure code reuse. And my inclination to keep things simple made me wary of making use of the prototype. It is undoubtedly powerful but I don't think even the mighty Crockford would consider it "simple". Also I had the reservation of exposing my object to the global scope. So what to do? I had an idea....

The Big Idea

For a while I've been making explicit use of the Observer pattern in my JavaScript, better known by most as the publish/subscribe (or "PubSub") pattern. There's a million JavaScript libraries that facilitate this and after some experimentation I finally settled on higgins' implementation as it's simple and I saw a JSPerf which demonstrated it as either the fastest or second fastest in class. Up until now my main use for it had been to facilitate loosely coupled GUI interactions. If I wanted one component on the screen to influence another's behaviour I simply needed to get the first component to publish out the relevant events and the second to subscribe to these self-same events. One of the handy things about publishing out events this way is that with them you can also include data. This data can be useful when driving the response in the subscribers. However, it occurred to me that it would be equally possible to pass an object when publishing an event. **And the subscribers could enrich that object with data as they saw fit.**

Now this struck me as a pretty useful approach. It's not rock solid secure as it's always possible that someone could subscribe to your events and get access to your object as you published out. However, that's pretty unlikely to happen accidentally; certainly far less likely than someone else's global object clashing with your global object.

What might this look like in practice?

So this is what it ended up looking like when I turned my 3 classes into JavaScript files / modules. First BaseReilly.js:

```js
$(function () {
    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.LastName = "Reilly";
    });
});
```

Next BoyReilly.js:

```js
$(function () {
    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.Sex = "It is a manchild";
    });
});
```

And finally JohnReilly.js:

```js
$(function () {
    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.FirstName = "John";
    });
});
```

If the above scripts have been included in a page I can create myself my very own "JohnReilly" in JavaScript like so:

```js
var oJohnReilly = {}; // Empty object
$.publish("PubSub.Inheritance.Emulation", [oJohnReilly]); // Empty object "published" so it can be enriched by subscribers
console.log(JSON.stringify(oJohnReilly)); // Show me this thing you call "JohnReilly"
```

And it will look like this: `{"LastName":"Reilly","Sex":"It is a manchild","FirstName":"John"}`

And it works. Obviously the example I've given above is somewhat naive - in reality my object properties are driven by GUI components rather than hard-coded. But I hope this illustrates the point. This technique allows you to simply share functionality between different JavaScript files and so keep your codebase tight. I certainly wouldn't recommend it for all circumstances but when you're doing something as simple as building up an object to be used to pass data around (as I am) then it works very well indeed.

A Final Thought on Script Ordering

A final thing that may be worth mentioning is script ordering. The order in which functions are called is driven by the order in which subscriptions are made. In my example I was registering the scripts in this order:

```html
<script src="/Scripts/PubSubInheritanceDemo/BaseReilly.js"></script>
<script src="/Scripts/PubSubInheritanceDemo/BoyReilly.js"></script>
<script src="/Scripts/PubSubInheritanceDemo/JohnReilly.js"></script>
```

So when my event was published out the functions in the above JS files would be called in this order:

  1. BaseReilly.js
  2. BoyReilly.js
  3. JohnReilly.js

If you were so inclined you could use this to emulate inheritance in behaviour. E.g. you could set a property in `BaseReilly.js` which was subsequently overridden in `JohnReilly.js` or `BoyReilly.js` if you so desired. I'm not doing that myself but it occurred as a possibility.

PS

If you're interested in learning more about JavaScript stabs at inheritance you could do far worse than look at Bob Ince's in-depth StackOverflow [answer](http://stackoverflow.com/a/1598077/761388).

JavaScript - getting to know the beast...

So it's 2010 and I've started using jQuery. jQuery is a JavaScript library. This means that I'm writing JavaScript... Gulp! I should say that at this point in time I *hated* JavaScript (I have mentioned this previously). But what I know now is that I barely understood the language at all. All the JavaScript I knew was the result of copying and pasting after I'd hit "view source". I don't feel too bad about this - not because my ignorance was laudable but because I certainly wasn't alone in this. It seems that up until recently hardly anyone knew anything about JavaScript. It puzzles me now that I thought this was okay. I suppose like many people I didn't think JavaScript was capable of much and hence felt time spent researching it would be wasted. Just to illustrate where I was then, here is 2009 John's idea of some pretty "advanced" JavaScript:

```js
function GiveMeASum(iNum1, iNum2) {
    var dteDate = new Date();
    var iTotal = iNum1 + iNum2;
    return "This is your total: " + iTotal + ", at this time: " + dteDate.toString();
}
```

I know - I'm not too proud of it... Certainly if it was a horse you'd shoot it. Basically, at that point I knew the following:

  • JavaScript had functions (but I knew only one way to use them - see above)

  • It had some concept of numbers (but I had no idea of the type of numbers I was dealing with; integer / float / decimal / who knows?)
  • It had some concept of strings
  • It had a date object

This was about the limit of my knowledge. If I was right, and that's all there was to JavaScript, then my evaluation of it as utter rubbish would have been accurate. I was wrong. SOOOOOOOOOOOO WRONG! I first realised how wrong I was when I opened up the jQuery source to have a read. Put simply I had *no* idea what I was looking at. For a while I wondered if I was actually looking at JavaScript; the code was so different to what I was expecting that for a goodly period I considered jQuery to be some kind of strange black magic, written in a language I did not understand. I was half right. jQuery wasn't black magic. But it was written in a language I didn't understand; namely JavaScript. :-( Here beginneth the lessons... I started casting around looking for information about JavaScript. Before very long I discovered one Elijah Manor who had helpfully done a number of talks and blog posts directed at C# developers (which I was) about JavaScript. My man!

  • How good C# habits can encourage bad JavaScript habits part 1

For me this was all massively helpful. In my development life so far I had only ever dealt with strongly typed, compiled "classical" languages. I had little to no experience of functional, dynamic and loosely typed languages (essentially what JavaScript is). Elijah's work opened up my eyes to some of the massive differences that exist. He also pointed me in the direction of the (never boring) Doug Crockford, author of the best programming book I have ever purchased: JavaScript: The Good Parts. Who could not like a book about JavaScript which starts each chapter with a quote from Shakespeare and still comes in at only a 100 pages? It's also worth watching the man in person as he's a thoroughly engaging presence. There's loads of videos of him out there but this one is pretty good: Douglas Crockford: The JavaScript Programming Language. I don't want to waste your time by attempting to rehash what these guys have done already. I think it's always best to go to the source so I'd advise you to check them out for yourselves. That said, it's probably worth summarising some of the main points I took away from them (you can find better explanations of all of these through looking at their posts):

  1. JavaScript has objects but has no classes. Instead it has (what I still consider to be) the weirdest type of inheritance going: prototypical inheritance.
  2. JavaScript has the simplest and loveliest way of creating a new object out there; the "JavaScript Object Literal". Using this we can simply var myCar = { wheels: 4, colour: "blue" } and ladies and gents we have ourselves a car! (object)
  3. In JavaScript functions are first class objects. This means functions can be assigned to variables (as easily as you'd assign a string to a variable) and crucially you can pass them as parameters to a function and pass them back as a return type. Herein lies power!
  4. JavaScript has 6 possible values (false, null, undefined, empty strings, 0 and NaN) which it evaluates as false. These are known as the "false-y" values. It's a bit weird but on the plus side this can lead to some nicely terse code.
  5. To perform comparisons in JavaScript you should avoid == and != and instead use === and !==. Before I discovered this I had been using == and != and then regularly puzzling over some truly odd behaviour. Small though it may sound, this may be the most important discovery of the lot as it was this that led to me actually *trusting* the language. Prior to this I vaguely thought I was picking up on some kind of bug in the JavaScript language which I plain didn't understand. (After all, in any sane universe should this really evaluate to true?: 0 == "")
  6. Finally JavaScript has function scope rather than block scope. Interestingly it "hoists" variable declarations to the top of each function which can lead to some very surprising behaviour if you don't realise what is happening.

I now realise that JavaScript is a fantastic language because of its flexibility. It is also a deeply flawed language; in part due to its unreasonably forgiving nature (you haven't finished your line with a semi-colon; that's okay - I can see you meant to so I'll stick one in / you haven't declared your variable; not a problem, I won't tell you but I'll create a new variable, stick it in global scope and off we go, etc). It is without question the easiest language with which to create a proper dog's breakfast. To get the best out of JavaScript we need to understand the quirks of the language and we need good patterns. If you're interested in getting to grips with it I really advise you to check out Elijah and Doug's work - it really helped me.