What is my opinion about Knockout vs. Angular ?

My Angular.JS video is getting way more traction than I thought, with over 2K views and a lot of comments on YouTube and Facebook. One of the comments that came on Facebook today was:

What is your opinion about Knockout VS Angular what is the best !! and thanks for the Good tutorial Mr Mohamed

An interesting question. There is an easy answer that is both technically and politically correct, that goes like: There is no “best”, each has pros and cons. For which one to use, “it depends”.

But I bet this is not good enough for anyone interested in the question, so, I’ll expand a bit here…

A Personal Opinion

“A personal opinion you ask, a personal opinion you get, so, treat it like one, no more”

Mohamed Meligy

Angular.JS does more than binding the DOM to JS objects; it also handles routing, enforces code organization, connects to REST server APIs, etc. So, the more accurate comparison is against Durandal.JS, which uses Knockout for DOM binding, Sammy.JS for routing, Breeze.JS for REST data interaction (which can also be used with Angular), etc.

Knockout is old, and very mature. It was designed to be very easy to plug it into any jQuery plugin or jQuery UI widget. It was also designed to work with all browsers down to IE6. With Durandal.JS you also get the other parts that Knockout itself does not cover. If you are writing an app that depends on very complex “existing” jQuery components that you didn’t write yourself, it may be a better option.

Angular.JS is relatively new. It doesn't feel new when you see so many tutorials around and such an enthusiastic community around it, but it does feel so when you look at things like how the official UI components have been in complete refactor/reorganization mode for quite a while. Mind you, they still work nicely though.

Angular.JS does not care as much about legacy browsers. The lowest they support is IE8, and only with DOM/EcmaScript5/JSON shims; and when things don't work in IE, the whole thing fails with no particular error line to start from.

Having said that, “for me” Angular is the future. I'm not saying that KnockoutJS will die or whatever; it'd be stupid of me to think so. Angular.JS is very functional as it is now, and I used it to save us time in a current project (yes, even with IE support), and with time and a very passionate community (just like Knockout started), it's expected to get better.

Does this answer the question?

Maybe, and maybe not. I have done quite a bit with KnockoutJS, but not much Durandal.JS, hence this should be taken with a grain of salt. There are several comparisons on the web that go into more detail. I just wanted to write my personal take here so that I can refer to it later when people ask.

Of course, like many opinions, my opinion itself may change as I learn more or as both libraries evolve more.

So, yeah, I highly encourage you: go ahead and make your own conclusion. Needless to mention, those two libraries are not the only two in their category either ;)

Manually Compressing Any ASP.NET Page, PageMethod, MVC Action, HTTPHandler,..

Compressing A Single ASP.NET Response Manually

This post is about compressing your HTTP result without using IIS Dynamic Compression. I’ll save the story to the end and give you the code first:

using System;
using System.IO.Compression;
using System.Web;

namespace WebCompressionSample
{
    public static class ResponseCompressor
    {
        public static void Compress(HttpContext context)
        {
            // Quite often context will be something like `HttpContext.Current`,
            // which in some conditions may not be available.
            // This is likely due to mixing concerns in calling code,
            // so, it's up to you whether to handle it differently.
            if (context == null)
                return;

            // Among other things, this header tells the server
            // whether the client can decompress compressed responses,
            // and what compression format(s) it supports
            string acceptEncoding = context.Request.Headers["Accept-Encoding"];
            if (string.IsNullOrEmpty(acceptEncoding))
                return;

            // The two common compression formats on the web are GZip and Deflate
            if (acceptEncoding.IndexOf("gzip",
                StringComparison.OrdinalIgnoreCase) > -1)
            {
                // Read the response through a GZip compressor,
                // and replace the output with the compressed result
                context.Response.Filter = new GZipStream(
                    context.Response.Filter, CompressionMode.Compress);

                // Tell the client the output they got is compressed in GZip
                context.Response.AppendHeader("Content-Encoding", "gzip");
            }
            else if (acceptEncoding.IndexOf("deflate",
                StringComparison.OrdinalIgnoreCase) > -1)
            {
                // Read the response through a Deflate compressor,
                // and replace the output with the compressed result
                context.Response.Filter = new DeflateStream(
                    context.Response.Filter, CompressionMode.Compress);

                // Tell the client the output they got is compressed in Deflate
                context.Response.AppendHeader("Content-Encoding", "deflate");
            }
        }
    }
}

Well, this shows how to do the compression itself. Depending on how you do ASP.NET, you probably will call it differently.

In my case, I called it manually from an ASP.NET Webforms PageMethod (more on why below), but if you are using ASP.NET MVC for example, you probably want to wrap it in an ActionFilter and apply that to the actions whose output you want to compress. Let me know in the comments or on Twitter if you have a problem implementing it in a particular situation.
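For the ASP.NET MVC case, a minimal action filter wrapping the class above could look like the following sketch. The `CompressAttribute` name is my own choice here, and it assumes the `ResponseCompressor` class from the code above:

```csharp
using System.Web;
using System.Web.Mvc;

namespace WebCompressionSample
{
    // Apply as [Compress] on any action (or controller)
    // whose output you want compressed.
    public class CompressAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Reuse the same compressor; it checks Accept-Encoding itself
            // and does nothing when the client doesn't support compression.
            ResponseCompressor.Compress(HttpContext.Current);

            base.OnActionExecuting(filterContext);
        }
    }
}
```

You'd then decorate an action with `[Compress]` instead of calling the compressor inline.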

IIS Dynamic Compression

IIS 7+ has built-in dynamic compression support (compressing the output of server-side scripts like ASP.NET, PHP, etc.). It's not on by default, because compressing dynamic content means running the compression for every request (the server doesn't know what the server-side script will generate for each request; the point of using server-side programming is generating dynamic content!).

Static compression, on the other hand (compressing static files like stylesheets and scripts), is on by default, because once a static resource is compressed, the compressed version is cached and served for every future request of the same file (unless the file changes, of course).

General advice: I’d say if your server side scripts expect to return large text-based content (say, large data, even after paging, etc. like large reports or whatever), always turn dynamic compression on, at least for the pages that expect to return large data sets of text.

In many cases though, the largest files will be scripts (and possibly images), which are often already taken care of (for scripts, for example) by IIS static compression or ASP.NET bundling.
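For reference, turning both kinds of IIS compression on for a site can be done from the site's web.config (assuming the dynamic compression module is installed, which is an optional IIS feature):

```xml
<configuration>
  <system.webServer>
    <!-- doDynamicCompression requires the IIS Dynamic Content Compression feature -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>
```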

ASP.NET and Tools 2012.2 Update & Web Essentials Extension

For those who didn’t see the news flying everywhere, there is a new ASP.NET (mainly tooling) update announced on Scott Guthrie’s blog:

Release Notes:

If you are using the great Web Essentials Visual Studio extension, and you updated the extension yesterday and noticed in the Web Essentials changelog that several editors, like the LESS and CoffeeScript editors, were removed, this is because those editors moved into the official update.

This means if you want to use the recent update with Web Essentials (strongly recommended), you probably should also update Web Essentials to the latest version first before installing the update (Scott mentions that as well).

Creating Explicit Route To Website Homepage In ASP.NET MVC

Typically, you have this standard piece of routing configuration:

    routes.MapRoute(
        name: "Default",
        url: "{controller}/{action}/{id}",
        defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
    );

Which does the job of matching “/” to HomeController‘s Index action.

The reason it works is simple: the id parameter is obviously optional, and the action parameter, if not present, will be set to Index, so this routing will match “/home/index/” and, similarly, “/home/”. But the controller will also be set to HomeController if it's not present (the Controller suffix in the class name is a convention), so this matches “/home/” and “/”.

Using Explicit Route

Up to here, there is no point to this blog post. So, let's say that for whatever reason you are using other routing rules, because your desired routes don't follow this very convention, because you don't want to couple public URL structure to internal class names, or for another reason that makes your routes explicit. How do you set a route for the homepage or website root “~/” URL in this case?

One possible way is to keep this exact default route, but make sure it's at the very end of your route registrations, after all others have been registered, so it has the least priority in matching. The drawback is that it's still more generic than its purpose, technically allows using it for actions other than the one handling the homepage, and every request to the homepage will be tried against all other routes first (which is usually not a very slow thing, to be honest).

So, a nicer way is to be able to have an explicit homepage route. Maybe even one that can be put on the top of route registrations to make the request to our homepage the fastest ever (although again, it’s not a big deal or difference, but nice to have). Turns out the way is VERY easy, just set the URL to an empty string “”.

    routes.MapRoute(
        name: "Homepage",
        url: "",
        defaults: new { controller = "Home", action = "Index" }
    );

Yeah, that’s it (the “name” doesn’t matter BTW, and you can obviously use whatever controller/action too). Even if you are using the default routes, you can combine it with them, and put it before them too. Empty string will NOT match any URL that is not empty (you can argue all other requests are matched against this, but comparing a string against an empty one must be VERY quick, right?)
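Putting it together, a sketch of a route registration with the explicit homepage route placed before the default one (the class and method names follow the usual MVC template conventions):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Explicit homepage route first; an empty URL only matches "~/"
        routes.MapRoute(
            name: "Homepage",
            url: "",
            defaults: new { controller = "Home", action = "Index" });

        // The generic default route, tried after the homepage route
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}
```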

Note that this all is application specific, so, the root here is website root “~/” not server root, so, this should still work if your website is hosted under some virtual directory too.

Not that it matters a lot, but thought some of you would be interested :).

Have fun.


Grouping ASP.NET MVC Controllers And Views By Features

If you don't like your folders being named after some software pattern (that happens to be MVC), and instead of “Views” and “Controllers” folders (not many use a “Models” folder) would like to see folders like “Administration”, “Members”, “FeatureA”, “FeatureB”, etc. in your web application, it's not that hard.

Putting Controllers Next To Their Views, Literally

The easiest (likely not most convenient) way to do this is to move your controller class files to their corresponding view folders! Controllers are picked up by ASP.NET MVC by base type and class naming convention, not location or filename, so you can put them anywhere in the project and they’ll still be picked up.

This is how it works for the default ASP.NET MVC 4 “Internet Website” template:


The obvious problem now is that it's harder to find them. Depending on how you usually work, you can rename the files to make them show first (add a couple of underscores, for example), or maybe it doesn't make a difference at all if you locate files by CTRL+, (comma) or CTRL+SHIFT+T (if you use ReSharper).

I'm kind of with the 2nd option (leaving them as they are, and locating them using the keyboard), especially as you usually shouldn't have so many actions per controller, or so many partials in the same folder anyway, so even finding the controller in the Solution Explorer shouldn't be a big problem (compare the ease of the “Home” controller to the “Account” controller).

Replacing Views With Features

The first way still feels dirty, controllers in the “Views” folder? Isn’t “Views” part of the MVC pattern language also? Let’s rename that to “Features” (or whatever makes more sense to you). Then our application will look like:


But then we need to tell the RazorViewEngine to look for views in our new folder. There are several ways to do this; I'll go for the stupid one for simplicity. In your ASP.NET MVC 4 global.asax.cs file, add the following to Application_Start():

var razorViewEngine = ViewEngines.Engines.OfType<RazorViewEngine>().First();

razorViewEngine.ViewLocationFormats = razorViewEngine.ViewLocationFormats
    .Select(format => format.Replace("/Views/", "/Features/"))
    .ToArray();

razorViewEngine.PartialViewLocationFormats = razorViewEngine.PartialViewLocationFormats
    .Select(format => format.Replace("/Views/", "/Features/"))
    .ToArray();


The code is very straightforward. We just replace the “Views” part of the locations the engine looks in with “Features”.

There is another place we'll still need to modify: the “_ViewStart.cshtml” file has the default Razor layout file set with a full path. Change it to:

    Layout = "~/Features/Shared/_Layout.cshtml";

Now the website should work fine.

Starting With Features

If you were to just add another folder called “Features” instead, and under it add the various feature folders, each including the controller and view(s) used with it (which may make more sense; you could even leave things like “Shared” in the original “Views” folder and delete the “Controllers” one), these are the changes that would still be needed:

  • Manipulating the RazorViewEngine so that it looks for views in that features folder. Instead of replacing the existing entries, we insert new ones (copy the existing ones to a List<string>, insert the new values at position 0 so that they are looked up first, and return the result as an array to the view engine property).
  • Copying the “web.config” in the “Views” folder (not the one in the root of your web application) to your features folder
  • Copying the “_ViewStart.cshtml” file from the “Views” folder to your features folder, as this is what provides common view initialization, like setting the default layout file
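The first bullet could be sketched like this, again in Application_Start(). The exact location format strings here are assumptions following the standard Razor conventions ({1} is the controller name, {0} the view name):

```csharp
using System.Linq;
using System.Web.Mvc;

// Inside Application_Start():
var razorViewEngine = ViewEngines.Engines.OfType<RazorViewEngine>().First();

// Copy the existing formats, and insert ours at the front
// so they are looked up first
var viewLocations = razorViewEngine.ViewLocationFormats.ToList();
viewLocations.Insert(0, "~/Features/{1}/{0}.cshtml");
viewLocations.Insert(1, "~/Features/Shared/{0}.cshtml");
razorViewEngine.ViewLocationFormats = viewLocations.ToArray();

// Same treatment for partial views
var partialLocations = razorViewEngine.PartialViewLocationFormats.ToList();
partialLocations.Insert(0, "~/Features/{1}/{0}.cshtml");
partialLocations.Insert(1, "~/Features/Shared/{0}.cshtml");
razorViewEngine.PartialViewLocationFormats = partialLocations.ToArray();
```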

Or Just Use Areas!

If all you really needed was grouping features, and you don’t mind the Controllers, and Views folders under each feature (or maybe even like them), then you could just use ASP.NET MVC Areas feature and have all the application features split in corresponding areas.

However, I assume if you really wanted just that, you wouldn’t be looking for this post anyway :-).


Yes, I know, there are still more bloat folders in the ASP.NET MVC application. Some of them you may (or may not) use, like “Content” or “Scripts”; some of them maybe shouldn't be there, or should have been in a different project. But this is just to show some of what can be done using the existing hooks, especially as using them didn't turn out to be that hard.


Lowercase URLs in ASP.NET MVC, The Easy .NET 4.5 Way And Other NuGet Options

If you wonder why you should care about creating lower case URLs at all, or what this actually even means, skip to the appendix at the end.

Earlier, the easiest way I had found to create lowercase URLs (URL paths, not query strings) was to use the NuGet package LowercaseRoutesMVC, and modify the routing code to use the library's own MapRouteLowercase() extension method instead of the built-in MapRoute() extension.

For example, instead of:

    routes.MapRoute("Default", "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = UrlParameter.Optional });

You write:

    routes.MapRouteLowercase("Default", "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = UrlParameter.Optional });

The .NET 4.5 Way

.NET 4.5 introduced a new property to the RouteCollection class instances (the “routes” parameter in the code above is an instance of it).

The new property is LowercaseUrls. Remember that routing is not shipped as part of the ASP.NET MVC code, but as part of the .NET framework itself, so this property is available to you as long as your project uses .NET 4.5+, whether it's ASP.NET MVC 3 or ASP.NET MVC 4.

Usage is very simple; just set the property before you map your routes (setting it after the mappings were added works as well):

routes.LowercaseUrls = true;
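In context, that's one extra line in your route registration (a sketch using the usual MVC template conventions):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // .NET 4.5+: URLs generated from these routes come out lowercase
        routes.LowercaseUrls = true;

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}
```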

The Result

Whether you are using the third-party NuGet package or the standard .NET 4.5 property, after making the changes to the routes, this is what you achieve:

Lowercase URL Generation

All helpers used in ASP.NET MVC to generate URLs for actions registered in the URL route mappings will generate lowercase paths. For example, this Razor code:

@Html.ActionLink("Log On", "LogOn", "Account")

Will generate the following HTML:

<a href="/account/logon">Log On</a>

Lowercase URL Resolution

Of course, the generation wouldn't be complete without resolving the lowercased URL path back to the correct controller action. In the above example, the “/account/logon” URL will be resolved (assuming the default routing used in the first example in the post) to the LogOn action of the AccountController controller.

To be accurate though, we typically all get this part already. Routes in ASP.NET MVC are not case-sensitive by default, so /account/logon is the same as /Account/LogOn to ASP.NET MVC.

If you want to force redirection to the lowercase paths, for SEO or otherwise, you can (very easily) do that with IIS 7+ and the URL Rewrite module. I learned about it by trying, but here is an example of how to use it, and also an official video.
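As an illustration, here is the common form of a web.config rewrite rule that 301-redirects any URL containing uppercase letters to its lowercase form (treat it as a starting point rather than a drop-in rule; you'd typically also exclude static assets):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect to lowercase" stopProcessing="true">
        <!-- ignoreCase="false" so the pattern only matches actual uppercase letters -->
        <match url="[A-Z]" ignoreCase="false" />
        <action type="Redirect" url="{ToLower:{URL}}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```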

Using The Built-In Way Vs. The 3rd Party Options

The obvious difference is that the NuGet package requires you to change every route call explicitly, while the built-in way takes over all routes. If you have decided that any non-lowercase route is a developer mistake, then the built-in option may be the better way to minimize human error; if you need control, or need to exclude some routes from this, then maybe the NuGet package is more suitable.

Note that this specific package hasn't been updated since last November, but there are other similar packages on NuGet anyway, and it shouldn't be too hard to implement one yourself.

If you care about routes and URLs that much, you may also consider writing tests for them.


There is another NuGet package specific to ASP.NET MVC 4, which includes the experimental ASP.NET Web API as well. The package is LowercaseRoutesMVC4. If you are interested in the project source code and more, check out the LowercaseRoutesMVC project page. Again, note that this was last updated in November 2011.

You'd expect the built-in property to affect Web API as well, as it's part of .NET 4.5, not 4.0, and as mentioned before it surely works with ASP.NET MVC 4 as well as ASP.NET MVC 3. I'd personally use it by default in any future project.

Appendix: Why Should You Care?

Lowercase URLs have been the de-facto standard for a long time. Historically, when all URL path parts were one-to-one mapped to physical file paths (excluding hashes and query strings), URLs had to be case-sensitive, because Unix file paths are case-sensitive (and hence Linux and the like, unlike Windows). Search engines had to respect this as well, as there was no guarantee that /directory/filename.html was the same as /directory/Filename.html, /Directory/filename.html, /DIRECTORY/filename.HTML, etc.

So, in brief, it's better for SEO, and it's becoming the industry standard anyway (for those who care). As shown earlier, even Microsoft has made enforcing lowercase URLs, although not the default, a really easy thing to implement, regardless of the technology you use (the URL Rewrite module is enforcing lowercase URLs on this blog, for example, which uses PHP/WordPress).

By generating lowercase URLs, combined with the URL Rewrite module, you save the users extra redirection steps, unless they decide to write the URL in a non-lowercase manner themselves.


Creating Absolute Urls Of Controller Actions In ASP.NET MVC

An Example Of The Need To Use Absolute URLs

I have been doing some work around Twitter and ASP.NET MVC. The way Twitter authentication works is that I have a page that creates a Twitter URL and redirects the user to Twitter, the user accepts using the application associated with the website, and Twitter redirects the user to a callback URL, which completes processing of the user credentials. In order to set the callback URL dynamically (especially in development, when the callback is likely a localhost one), we need to send the absolute URL to Twitter.

Other examples might include having a “permanent URL” for some resource (product, blog post, etc..), or maybe a link to be used in emails or so. There can be many usages, so, let’s see how to do it!

How We Did It In Webforms

In webforms, the easiest way to do it was to use the Control.ResolveClientUrl() method.

Typically you pass it a URL relative to the current code file (.aspx, .master, or .ascx file), and it returns the corresponding absolute URL. Of course, when the file is a control or a master page file, we don't always want/have a path relative to this file. The workaround for this is passing a relative URL that starts with “~/”. As you probably know, “~/” represents the root of the website.


Assuming your website is running in “https://localhost:4444/my-application”, calling:

var contactUsUrl = Page.ResolveClientUrl("~/About/Contact-Us.aspx");

from any page or control will return “https://localhost:4444/my-application/About/Contact-Us.aspx”.

Fully Qualified Urls In ASP.NET MVC

Similar to webforms, where we used a current control as a starting point, we can use the controller (or view, if we really want to [hint: we don’t]), to access the current instance of “UrlHelper” class (the “Url” property of an ASP.NET MVC Controller), which gives us access to the routing system that comes with ASP.NET in general, and gives shortcut methods specific to ASP.NET MVC, like Url.Action().

This returns a relative URL though. To convert it to an absolute / fully-qualified URL, we use Request.Url.AbsoluteUri (Controller.Request is the current HttpRequestBase instance) to get the absolute URI information, and the “UriBuilder” class to create the URL.


It turns out you can call any of the UrlHelper methods and get an absolute URL directly if you call the overload that accepts a “protocol” value (also called “scheme”; that's “http”, “https”, etc.), even if the protocol is the same one used in the current request.

Original Example

Going with the same assumptions in the webforms example, replacing “contact-us.aspx” page with a controller “AboutController”, and an action “ContactUs” that has ActionName set to “Contact-Us”, adding the following code inside any ASP.NET MVC action:

var contactUsUrlBuilder =
    new UriBuilder(Request.Url.AbsoluteUri)
        {
            Path = Url.Action("Contact-Us", "About")
        };

var contactUsUri = contactUsUrlBuilder.Uri;
var contactUsUriString = contactUsUrlBuilder.ToString();
// or contactUsUrlBuilder.Uri.ToString()

can be used to get “https://localhost:4444/my-application/About/Contact-Us”.

Updated Example

We can get “contactUsUriString” in the previous example in a different way, by calling:

var contactUsUriString =
    Url.RouteUrl("" /* route name, add if needed */,
                 new // route values, add more if needed
                     {
                         action = "Contact-Us",
                         controller = "About"
                     },
                 Request.Url.Scheme);

Or alternatively even more compact:

var contactUsUriString =
    Url.Action("Contact-Us", "About",
               routeValues: null /* specify if needed */,
               protocol: Request.Url.Scheme);

Of course, we could change the action name and routing, etc., to maintain lowercase, or do it from IIS or so, but doing that would be beyond the point this blog post is concerned with.

Hope this was useful to you.


Should you choose a budget Windows VPS?

This is copied from one of my replies in WebHostingTalk. Thought it might be useful for GuruStop readers too.

The purpose of the reply was to provide a review of Burst.NET, my budget VPS host (with some references to a premium VPS, SoftSys Hosting), but the real value of the reply is in helping set realistic expectations when deciding to choose a "good" budget Windows VPS host.

The Review

Generally speaking, as a "current" Burst.NET customer, their equation is:
Budget Price = Good Service + Enough Support + Poor SLA

So, most of the time the server is working, and when it's working, it works really fast (for my chosen specs, 1.5GB RAM, Windows); the network is also really nice, especially their West Coast (LA) location.

Given that, sometimes the VPS is slow, sometimes too slow, and once it was so slow it was unusable.

When the server is just slow, the services hosted on it don't seem to be much affected; when the server is too slow or reaches the unusable state, I get between 5 minutes and 2 hours of downtime (the latter happened only once).

This sheds light on support and the SLA. Support is generally fine. You get a response in around an hour (some report less), and problems usually get completely solved in less than 2-3 hours.


I have had mainly a few issues in my first month:

  1. Something went wrong with installing the VPS on BurstNet's side. I had just signed up and was confused as hell about why the thing wasn't there, suspecting my form entries, etc. Solved by getting the VPS up for me.
    Support was really fast when I indicated this was an "outage" issue (a selection in the ticket; please don't use it often, so as to guarantee the level of support when you really need it).
  2. The server was too slow shortly after it was up. Support solved it "somehow" (likely like the next one).
  3. The server was so slow that it took ages to boot, and didn't even seem connected (I was checking it via VNC, as RDP was down as well). Solved by removing some abusive user on the server.
  4. Two random reboots on two separate days. I tolerated them, with a plan to contact support if they happen again.


I wouldn't tolerate such issues except from a budget provider. Knowing how fast the network is, and how little service I need (currently I only run a couple of blogs, around 40K page views per month), this service was fine for me, as what really mattered to me is high performance most of the time, not high availability all the time (if there's no downtime but the server is slow for all users, that's worse for me).

I also do not have many options about tolerating these issues or not if I choose Burst.NET. Their SLA clearly states they may happen. I discussed it twice with support, and they said: remember, we are a budget provider, so our SLA says that "issues" could happen because of other users on the same host server; those are expected to happen every now and then and are not part of the SLA. I was hoping that Xen virtualization would prevent this, but clearly, it doesn't.

(Note to blog readers: Xen is one of the best Linux host virtualization options for running Linux/Windows guests.)

But again, I read the SLA before joining (most people don't, which is very wrong), raised my concerns about it, calculated the costs, and was more practical than idealistic about my real needs and priorities, and then went with them with proper expectations. Although, to be honest, experiencing issues can hurt even when you expect them!


So, if I want to improve on this, I won't leave Burst.NET for another budget host (like ThrustVPS or whatever). I think Burst.NET is doing the best it can within the cost you pay for it, and any other budget host will be doing similarly.

If I want to improve on this, I'd instead work on my own budget and try to improve that, then go to a more premium-class host. I already have one VPS (that I don't own, but manage) on SoftSys Hosting, and I don't expect that one to even reboot randomly, and I know that if it does, I should at least get a note or find it on their announcements page. "Realistically", I can't ask this from any "budget" provider, not just Burst.NET.

(note to blog reader, I chose SoftSys Hosting to compare because they are the cheapest premium VPS)

Although, I'd really be happy if Burst.NET provided premium Windows VPS hosting, as they have both options for Linux, and as their "fast", reliable West Coast network is a real killer feature for me that I don't see "many" other hosts (even premium ones) providing.

Are you interested in these topics?

I have been doing a lot of research recently around various premium and budget class hosting providers, especially in the VPS area, although I have been tweeting about them instead of blogging.

If you think this is an interesting topic for you, please let me know, as then I might decide to write more about it.  Actually, I’m even considering creating a dedicated website for hosting matters, as the sites out there are all so full of advertising materials and "trick the customer" words (like highlighting first-month-only prices instead of real ones, and presenting the monthly value of annual payments instead of monthly-payments price, etc..). What do you think?

On Selenium, Or, My Choice of Automated UI Testing Frameworks

This morning I got a nice little email from a dear Egyptian friend, Ebeid Soliman (@ebeid_soliman) asking the following:

I know this may be something answered by google, but I trust your opinion.
What is the best free automated UI testing framework/tool you used ? or know ?

I actually already have a long draft on the subject showing the framework I use, and how to get basic stuff working on it, since this one is not yet complete, let me for now share my reply to him with you, as raw as possible …

(I have added some titles to make the long reply more readable)

The Reply


Choosing a framework

Look, I haven't tried many. Only WatiN and Selenium, and even WatiN I didn't dig into enough.

The people around me all seem to be using Selenium. This is not only the story though…

Selenium is meant to be cross-browser, while WatiN is more of an IE thingy. You should be able to just swap a line of code to change the browser; in reality, Selenium works best with Firefox. But this is usually not a problem, because usually you don't use these frameworks to confirm visual look (like screenshots and pixel-by-pixel comparison) or even cross-browser stuff. Usually you use them to ensure, in the long run, that the system "still" works the way it's supposed to as you develop features and introduce changes, to make sure those didn't break existing flows (for example, the presence of menu items, going to a product page and completing an order, signing up, etc.).


The State of Selenium

Selenium has two current versions. Version 1.x depends on JavaScript interop with the browser. It has a "server" kind of thing (called Remote Control, RC) that communicates with a Firefox window it opens; this window opens another window for running your site, and because that is a child window, the RC browser window controls it via JavaScript to perform operations (like clicking, or going to a certain URL) and queries (like checking for the existence or value of some HTML element).

This version is more solid and has so many features, but it is very coupled to Firefox, runs on Java, and is no longer in active development.

The more current version of Selenium is called Selenium WebDriver, or Selenium 2.0. This version is meant to be a more cross-browser thingy (I have had a few issues in minor tries with Chrome), and much faster (the official word is 4x in Firefox, although speed doesn't really matter usually, because you add explicit waits anyway to simulate user thinking and to wait for pages to load, etc.). It uses the browsers' native interfaces to run the tests instead of JavaScript. And you can download it as a NuGet package (not sure whether it still depends on Java).

The problem with this version is that it's still not as feature-rich as the previous one; you can easily run into limitations and things you can't do as straightforwardly as in the previous version. They release new versions very quickly (every few weeks, like 2 or 4, I can't remember), so it surely is going to rock, but it's not quite there yet.
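To give a feel of the WebDriver API, here is a tiny C# sketch (the URL and element IDs are made up for illustration, and it assumes the Selenium WebDriver NuGet package is installed):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class QuickTry
{
    static void Main()
    {
        // Spin up a real Firefox window controlled by native interfaces
        IWebDriver driver = new FirefoxDriver();
        try
        {
            // Drive the page like a user would
            driver.Navigate().GoToUrl("http://localhost:8080/login");
            driver.FindElement(By.Id("UserName")).SendKeys("demo");
            driver.FindElement(By.Id("Password")).SendKeys("demo-password");
            driver.FindElement(By.Id("LogOnButton")).Click();

            // Query the result, e.g. check the page title after logging in
            System.Console.WriteLine(driver.Title);
        }
        finally
        {
            driver.Quit();
        }
    }
}
```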


Real World Requirements

Both of them allow you to run one central server that communicates with client machines, if you want, for example, that server to run your tests in parallel on multiple test machines. At the current customer, we have this triggered from a CI server (Cruise), which uses the Selenium library to call the Selenium Grid (server), and then Selenium (1.3, I guess) connects to the client test machines to run our tests. Of course, running tests in parallel without them affecting each other is another story on its own that is hard to get right as well.

So, we have hundreds of tests (now over 1,000) running on Selenium 1.x with custom stuff (like, for example, an extension to allow using jQuery selectors to select elements, and custom code to overcome its JavaScript nature so we can reference the actual site window object instead of the wrong Selenium window itself), but I also know of other colleagues at another client getting a good experience with Selenium 2.0 WebDriver. They also found some nice stuff from the community, and they also wrote their own custom hacks to overcome some of its gotchas.



So, UI automation is not exactly nice or straightforward when you start doing real stuff (for small tries, you'll be super amazed; it feels like magic to see the browser windows spin up, click themselves, and go through the boring steps to re-run some scenario!). Generally speaking, the least evil is Selenium, because it has the biggest community support, hence google-ability if you may (finding solutions created by others more easily, and getting more stuff done). Although Selenium 2.0 is not as robust as Selenium 1.x, if you are going to use it for a new project, not using Selenium 2.0 would simply be adding technical debt to your test code base. So, go with Selenium 2.0 and see how the journey feels for you, starting by exploring the NuGet packages available for it, of course (just search for Selenium and read the package descriptions).

And I am always available if you hit something with it maybe I can help :)


More on Selenium

Since it was the main mention, you can learn more about Selenium from:


Hopefully I'll review my old draft and see what from it I can add here or turn into another "code-full" post. The best help you can give me to do this faster is trying to learn about it, and throwing me some questions that can later turn into blog posts here. There are a lot of posts on this blog that couldn't have existed without my friends throwing challenges and various queries into my inbox!

#MvcConf 2 – Call For Speakers


Assuming some of you have attended live, or watched the recordings of, the past MvcConf conference: it's a virtual conference concerned (as the name tells) with everything related to web MVC frameworks in .NET (ASP.NET MVC, FubuMVC, Spark, …).

Videos from the previous MvcConf event can be found at:

http://www.viddler.com/explore/mvcconf/videos/ and http://tekpub.com/conferences/mvcconf

MvcConf 2

They plan to have a second event after the great success of the first one, and they have started a call for speakers. See:


Quoting Details


Tuesday Feb 1st 8AM – 5PM CST




Check back 1/17

Call For Speakers

If you would like to speak at this year's conference, fill out the Speaker Proposal form.

An Awesome Conference

MvcConf is a virtual conference focused on one thing: writing awesome applications on top of the ASP.NET MVC framework. Your brain will explode from taking in so many hard-core technical sessions. Sounds fun, eh?

This is a community event and we want the best and brightest sharing what they know.

We intend to record each session and make them available online for viewing. We intend to make the videos available free of charge, depending on conference sponsorships.

Giving Back

Keeping this conference a community event is important. We are donating a portion of the proceeds from the event to the jQuery project.


The speaker proposal form can be found at: http://www.mvcconf.com/speakerproposal