Serializing A PagedList Using JSON.NET In ASP.NET MVC – Gotcha And Workaround

Recently I discovered that there’s no one standard way to do AJAX-driven server-side paging in ASP.NET MVC (in Web API, you can expose an IQueryable). For the case at hand, I decided to use PagedList for the server bit of the game.


The PagedList interface looks a bit like this (for demonstration only; the real code is a bit different, check its source code for the real stuff):
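A simplified sketch of the shape (member names are approximate; see the library’s source for the real definitions):

```csharp
using System.Collections.Generic;

// A rough sketch of what the PagedList interface exposes
public interface IPagedList<T> : IEnumerable<T>
{
    int PageCount { get; }
    int TotalItemCount { get; }
    int PageNumber { get; }
    int PageSize { get; }
    bool HasPreviousPage { get; }
    bool HasNextPage { get; }
    bool IsFirstPage { get; }
    bool IsLastPage { get; }

    // It's also indexable into the current page's items
    T this[int index] { get; }
}
```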

It provides nice properties for paging, exposes itself as an enumerable, and has an indexer. Apart from this snippet, the library also provides a ToPagedList() extension method that can be applied to any enumerable, populating the properties from it (by enumerating it, and by calling the Count() method).

We were also using JSON.NET for JavaScript serialization, which is pretty much the de facto standard nowadays.

The JSON.NET Serialization Problem

JSON.NET has a nice default: if you serialize a class that implements IEnumerable<T>, and you don’t have a special treatment rule for it (via what JSON.NET calls “converter” classes), the instance will be serialized as a JavaScript array whose contents are the enumerated objects. Makes a lot of sense, right?

Well, yes, except when you have a custom collection like PagedList and you want to treat it as an object that has several properties, not as an array. JSON.NET does provide a solution for this: all you need to do is apply the [JsonObject] attribute to your class.

Unless you don’t own the source code of this class.

In this case, you need to inherit from it. By doing so, I lose the nice ToPagedList() extension method (because it instantiates the PagedList class directly), but luckily it does nothing but call new PagedList() with the parameters we give it, so we don’t lose much.

Here’s what my implementation looks like:
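A sketch close to what I ended up with (constructor list trimmed; the [JsonObject] attribute is the piece doing the work):

```csharp
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json;
using PagedList;

// [JsonObject] forces JSON.NET to serialize this as an object
// with properties, rather than as a plain array
[JsonObject]
public class SerializablePagedList<T> : PagedList<T>
{
    public SerializablePagedList(IEnumerable<T> superset, int pageNumber, int pageSize)
        : base(superset, pageNumber, pageSize)
    {
    }

    public SerializablePagedList(IQueryable<T> superset, int pageNumber, int pageSize)
        : base(superset, pageNumber, pageSize)
    {
    }

    // Subset is a field on the base class, which JSON.NET skips by default,
    // so we expose it through a property
    public IEnumerable<T> Items
    {
        get { return Subset; }
    }
}
```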

Apart from having to copy the constructors to pass parameters to the base ones, have you noticed the extra Items property in there?

That’s because the Subset member it includes is actually a field, not a property, and JSON.NET won’t serialize that by default. I could have overridden this behaviour elsewhere, but since I’m already fiddling with this class, it made sense to just stick a property on it.

Bonus: A Bit More On Implementation

In my actual code, I added the Dynamic Linq NuGet package and assumed a single property for sorting (which was fair for the situations where I intend to use this), so I complemented the code above with another class that looks like this:
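Here’s a standalone sketch of the idea (the property and method names are my guesses at the shape; it assumes the SerializablePagedList class discussed in this post):

```csharp
using System.Linq;
using System.Linq.Dynamic; // the Dynamic Linq NuGet package
using PagedList;

// The controller fills in the paging and sorting parameters;
// the repository supplies the IQueryable
public class PagedListRequest<T>
{
    public int PageNumber { get; set; }
    public int PageSize { get; set; }
    public string SortProperty { get; set; }
    public bool SortDescending { get; set; }

    public PagedList<T> CreatePagedListFromQueryable(IQueryable<T> query)
    {
        // Dynamic Linq lets us order by a property name held in a string
        var ordered = query.OrderBy(
            SortProperty + (SortDescending ? " descending" : ""));

        return new SerializablePagedList<T>(ordered, PageNumber, PageSize);
    }
}
```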

This allows the controller to instantiate an instance of the SerializablePagedList class, pass it all the way to whatever repository method I have.

The repository method takes it as a parameter, works out what IQueryable it needs, and instead of passing that to the UI, calls CreatePagedListFromQueryable(), which returns an innocent-looking PagedList object (because SerializablePagedList inherits PagedList) that the repository can pass back to the controller, which can serialize it to JavaScript without a problem. The rest is all JavaScript code to work out how to create the paging parameters and how to use the returned paging hints.

Even more, now that I think about it, maybe I should change the return type to SerializablePagedList, to make the Items property visible to the developer (because otherwise they’d think it’s magic, and in coding, magic is BAD). I’ll leave this as an exercise for you :)

Final Words / Disclaimer

The motivation behind this post is that I found the problem of serializing PagedList using JSON.NET a challenge and I wanted to help others work it out faster than I did. Is this how I’d recommend doing paging? Well, I don’t know, but if it’s what you choose, I hope that I have saved you some time.

And more importantly, is it good enough to be the de facto standard I mentioned I was after at the beginning of the post? Not really. I think it’s not bad, but definitely not the best. I’d love to see less clever (read: hacky) and simpler solutions.

Manually Compressing Any ASP.NET Page, PageMethod, MVC Action, HTTPHandler,..

Compressing A Single ASP.NET Response Manually

This post is about compressing your HTTP result without using IIS Dynamic Compression. I’ll save the story to the end and give you the code first:
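Here’s a sketch of the technique (the class and method names are mine; the idea is the standard one of wrapping Response.Filter in a compression stream based on the Accept-Encoding request header):

```csharp
using System.IO.Compression;
using System.Web;

public static class ResponseCompressor
{
    // Call this before the response body is written
    public static void Compress(HttpContext context)
    {
        string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? "";
        HttpResponse response = context.Response;

        if (acceptEncoding.Contains("gzip"))
        {
            // Everything written to the response from now on gets gzipped
            response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
            response.AppendHeader("Content-Encoding", "gzip");
        }
        else if (acceptEncoding.Contains("deflate"))
        {
            response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
            response.AppendHeader("Content-Encoding", "deflate");
        }
        // If the client accepts neither, we simply leave the response alone
    }
}
```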

Well, this shows how to do the compression itself. Depending on how you do ASP.NET, you’ll probably call it differently.

In my case, I called it manually from an ASP.NET Webforms PageMethod (more on why below), but if you are using ASP.NET MVC for example, you probably want to wrap it in an ActionFilter and apply that to the action whose output you want to compress. Let me know in the comments or on Twitter if you have a problem implementing it in a particular situation.

IIS Dynamic Compression

IIS 7+ has built-in dynamic compression support (compressing the output of server-side scripts like ASP.NET, PHP, etc.). It’s off by default, because compressing dynamic content means running the compression for every request (the server can’t know in advance what the script will generate for each request; the point of using server-side programming is generating dynamic content!).

Static compression, on the other hand (compressing static files like styles and scripts), is on by default, because once a static resource is compressed, the compressed version is cached and served for every future request of the same file (unless the file changes, of course).

General advice: I’d say if your server-side scripts are expected to return large text-based content (say, large data sets even after paging, large reports, or the like), always turn dynamic compression on, at least for the pages that return those large text responses.

In many cases, though, scripts (and possibly images) will make up the majority of the large files, and those are often already taken care of (for scripts, for example) by IIS static compression or ASP.NET bundling.

My Quest To Find The Best NodeJS Http Client With HTTPS and Basic Auth

This may be a completely useless post, but it took me a bit to figure out some of the pieces until I got here, so, I thought maybe it’s worth sharing!

What’s this & how did I come by it?

Instapaper is a nice service that provides you with a “read it later” bookmarklet. You use this bookmarklet to add any web page to their log, and you can read that page later via their site, a mobile application (from them or a 3rd party), or via Kindle (or other book readers)!

Apart from their bookmarklet, they provide a simple API to add URLs programmatically. This is useful if you, for example, want to extract URLs from other services, say Twitter favourites. They also provide a Full API (an OAuth API) for doing everything their site and apps do, except some things in this so-called Full API are (or at least used to be) only for paid subscribers.


Instapaper’s simple API is quite interesting though: according to the docs, you can use plain HTTP and provide only the username (email), not even a password, to add a URL, and you can use POST or GET. That’s scary for anyone who thinks about security, but, maybe, OK for what the API does. They also provide the API over proper HTTPS with basic HTTP authentication (and if you want more, you can always use the Full API’s OAuth).

I wanted to use their API over HTTPS using POST and Basic Authentication. In my tests I was happy to try HTTP and/or GET and/or sending the username in clear text. I also thought I’d write this in Node because I thought it should be easy. Node people often serve REST/HTTP APIs, so they must be good at consuming them too (even though in reality most consumers are client-side JS, not Node), right? Well, maybe yes.

The Code

I can explain the challenges, how I overcame them, and then show the final code, but I have spoken too much already, let’s begin with the code.

var request = require('request');

var basicAuthenticationParams = {
    username: 'MY_EMAIL',
    password: 'MY_PASSWORD'
};

var requestParams = {
    url: '' // the URL you want Instapaper to save
};

// the first argument is Instapaper's "add" endpoint URL
request.post('', {
    form: requestParams,
    auth: basicAuthenticationParams
}, function (error, response, body) {
    // This is only returned for network errors, not server errors
    if (!!error) {
        console.log('Network Error:');
        throw error; // it's an actual JS Error object, not a string
    }

    var statusCode = !!response ?
        parseInt(response.statusCode, 10) || null
        : null;

    // I could have checked for 200,
    // but technically 200-299 are success codes,
    // and more importantly, Instapaper sends 201 for success
    if (!!statusCode && statusCode >= 200 && statusCode < 300) {
        console.log('Success!');
    }

    console.log(); // Extra line separator
    console.log('HTTP Status:');
    console.log(statusCode);
    console.log(); // Extra line separator

    // In Instapaper's case, by default,
    // they just send the status code in the body too
    console.log(body);
});
See? It was all very straightforward. Well, it wasn’t for me. I tried several NPM packages to get this effect, and none was going well. The “request” package was the one that did the trick. This is generally accepted in Node: the built-in API is very low level, and there’s usually some NPM package that gives you a nicer one.

None of the other packages I tried had any dependencies on other packages. This “request” package gave me the usual shock of seeing so many NPM packages downloaded and installed, but this is the philosophy of NPM: build small reusable pieces on top of other pieces.

Getting the request headers right (Oh my!)

While the packages were not working, alternating between HTTP status 403 (invalid credentials) and 400 (missing parameters), it was very hard to tell what I needed. Even attaching a proxy to get the output into Fiddler didn’t go well with, for example, the “easyHttp” package (there is a similarly-named NuGet package for .NET BTW, but it’s not related at all, I think).

I went to my favourite Chrome extension, Postman, which works nicely like Fiddler for the “compose request” part (actually, arguably, it’s even easier). The request parameters (I was using the username as another request parameter, not Basic Authentication, just to test) didn’t seem to work until I changed the default “form-data” to “x-www-form-urlencoded”.


That added the following header to the request: “Content-Type: application/x-www-form-urlencoded”. This kind of makes sense; this overly permissive API feels optimized for sending from web pages, where forms are a front-and-centre concept.

I tried to add the header manually when I was using the other packages, but that didn’t seem to do the trick. With “request”, as you can see, I didn’t even need to set it; I just used the “form” property of the options instead of the “body” or “json” properties.

This is why I’m posting this, to hopefully help someone googling get up and running quicker than me. This is not just Instapaper, this can be very helpful in so many situations.

For the rest of you (.NET / non NodeJS devs)…

If you are not a Node person but still want to have a fiddle: after you install Node (which includes the NPM package manager by default), save the code to a JS file, “some-file.js”, preferably in a new folder.

Assuming Node and NPM are in the PATH (the installer default), you can open a Cmd or PowerShell console in that folder, run npm install request and then node some-file.js (no quotes in either), and see the results in front of you.

My 1st YouTube Video: Angular.JS Directives Demo

So, my wife has been encouraging me to get started on this over the past 4 years or so. I was busy with offline groups at first, followed by traveling to work in different countries, and then later all sorts of new stuff I wanted to learn around .NET, software craft in general, and web development and trends. At last, I started, and uploaded my first YouTube video.

The Video

To make sure I actually got to the point where I had something to upload, I used the Minimum Viable Product approach. I looked for a simple topic that I still think you guys might be interested in, and I spent some time editing the video but didn’t wait until I was 100% happy with the quality (particularly of my own voice) before publishing. The idea is that if I continue to do this, I might be able to get more out soon, and practice will naturally increase the quality. Was that the right way to do it? Only your feedback can tell!

Simple Angular.JS Demo: Using HTML Directives Even Without JavaScript Controllers

Now It’s Up To You!

As you may expect, I’m really looking forward to all sorts of feedback on this video. Let me know whether you liked it, and what you’d like to see in future videos. If there is any question, problem, or idea that you’d be interested in seeing me talk about, please let me know.

Thanks a lot.

Easy Ways To Discover Feed URL of A Website / Page

All blogs and news websites provide some sort of aggregation feed, usually RSS or ATOM. This allows users to add the feed URL to their favourite aggregator and stay updated with future content as it comes. This post shows how to get a URL to subscribe to, and how to get multiple URLs if the site provides multiple formats.

A .NET library

One easy way to do this in .NET is using the Argotic Syndication Framework library; the code looks like this:
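A sketch of the idea (member names written from memory; check the Argotic documentation for the exact API, and the sample URL is a placeholder):

```csharp
using System;
using System.Linq;
using Argotic.Common;

class FeedDiscovery
{
    static void Main()
    {
        var endpoints = SyndicationDiscoveryUtility
            .LocateDiscoverableSyndicationEndpoints(new Uri("http://example.com/"))
            // The library picks up any related <link> in the HTML,
            // so filter down to actual syndication formats
            .Where(endpoint => endpoint.ContentFormat != SyndicationContentFormat.None);

        foreach (var endpoint in endpoints)
        {
            Console.WriteLine("{0}: {1}", endpoint.ContentFormat, endpoint.Source);
        }
    }
}
```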

Here is what you get when you run the code against


A few notes on this approach:

  • You must have noticed the Where check in the code; the library seems to capture any related link in the HTML, not just syndication links, which is why we needed to filter them explicitly
  • Quite often, when a main site has different branches, you get more than one feed link. For CNN, for example, you get different feeds for certain site languages; for you get one for the site itself and another for members of the service. Arguably, this is not always what you want when adding the site to a reader kind of application
  • As expected, this code is quite slow in debug mode; it takes about 1.5 seconds to run alone! (That’s with the build configuration, and hence web.config, still set for debug rather than “Release”.)


Google API

In its simplest form, syndication discovery is a matter of finding a link tag with a proper rel attribute (typically set to “alternate”) and a type attribute holding the feed’s MIME type. However, in real life, at least historically, there were many variations in the way discovery was implemented (read the next section for more).

One of those who managed to get the right URLs for different edge cases was Google Reader. Apart from Google Reader itself, whose closing was part of the reason I wrote this post, Google allows you to use their systems to get the right syndication feed URL of a given page by simply calling a public JSONP API.
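The call itself can be as small as a jQuery $.getJSON() against the Feed API’s lookup endpoint (a sketch; the endpoint shown is the one the Feed API documented at the time, and the helper function name is mine):

```javascript
// Builds the JSONP lookup URL for a page (helper name is mine)
function buildFeedLookupUrl(pageUrl) {
  return 'https://ajax.googleapis.com/ajax/services/feed/lookup' +
    '?v=1.0&q=' + encodeURIComponent(pageUrl) + '&callback=?';
}

// Usage from a browser page with jQuery loaded; the trailing "callback=?"
// makes $.getJSON() switch to a JSONP request:
// $.getJSON(buildFeedLookupUrl('http://example.com'), function (result) {
//   // result.responseData.url holds the discovered feed URL
//   console.log(result.responseData.url);
// });

console.log(buildFeedLookupUrl('http://example.com'));
```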

This is the structure of the data parameter returned by the previous call:


To learn more about this specific API, check:

To learn what’s special about JSONP requests and why jQuery needs to treat it differently, read the $.getJSON() documentation:


Background Of The Problem

Even though social media has made people depend on links shared on social sites (by their peers, or by the creators of the feed), adding a syndication feed to a website is a trend that has continued to grow in many product and subscription websites, especially since it’s easy to automate social media posts from the feeds afterwards.


As Google Reader will be retired in July, I thought for a minute about what it’d take to put a web-based reader together. This was before I learned about the existing awesome alternatives like Feedly and so many others.

Then I remembered an application I was working on in 2007; one feature we needed, and a colleague worked on, was getting RSS posts from the personal blogs of the site members. I remember seeing him doing all sorts of crazy Regular Expression matches on so many formats to get the URL. It turns out that, at least at that time, different blog providers used different ways to advertise the feed URL in the blog homepage markup. There were so many cases that it took my colleague several days to cover a large set of test cases from the providers we knew our users were using.

I wanted to see whether this problem was still a thing in 2013 and tried to see what options we had; hence this post. You know, just for fun :).

Hope some of you were interested in this too!

Grouping ASP.NET MVC Controllers And Views By Features

If you don’t like your folders to be named after some software pattern (that happens to be MVC), and would like, instead of “Views” and “Controllers” folders (not many use a “Models” folder), to see folders like “Administration”, “Members”, “FeatureA”, and “FeatureB” in your web application, it’s not that hard.

Putting Controllers Next To Their Views, Literally

The easiest (likely not most convenient) way to do this is to move your controller class files to their corresponding view folders! Controllers are picked up by ASP.NET MVC by base type and class naming convention, not location or filename, so you can put them anywhere in the project and they’ll still be picked up.

This is how it works for the default ASP.NET MVC 4 “Internet Website” template:


The obvious problem now is that it’s harder to find them. Depending on how you usually work, you can rename the file to make it show first (add a couple of underscores to it, for example), or maybe it makes no difference at all if you locate files by CTRL+, (comma) or CTRL+SHIFT+T (if you use ReSharper).

I’m kind of with the 2nd option (leaving them as they are, and locating them using the keyboard), especially as you usually shouldn’t have so many actions in your views, or so many partials in the same folder, anyway. So even finding the controller in the Solution Explorer shouldn’t be a big problem (compare the ease of finding the “Home” controller to the “Account” controller).

Replacing Views With Features

The first way still feels dirty; controllers in the “Views” folder? Isn’t “Views” part of the MVC pattern language too? Let’s rename that to “Features” (or whatever makes more sense to you). Then our application will look like:


But then we need to tell the RazorViewEngine to look for views in our new folder. There are several ways to do this; I’ll go for the stupid one for simplicity. In your ASP.NET MVC 4 Global.asax.cs file, add the following to Application_Start():
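Something along these lines (a sketch; I’m replacing the location formats in place, and the area format lists would be treated the same way if you use areas):

```csharp
// In Application_Start, in Global.asax.cs
var razorEngine = ViewEngines.Engines.OfType<RazorViewEngine>().Single();

// Swap "Views" for "Features" in every location the engine probes
razorEngine.ViewLocationFormats = razorEngine.ViewLocationFormats
    .Select(format => format.Replace("~/Views/", "~/Features/"))
    .ToArray();

razorEngine.PartialViewLocationFormats = razorEngine.PartialViewLocationFormats
    .Select(format => format.Replace("~/Views/", "~/Features/"))
    .ToArray();

razorEngine.MasterLocationFormats = razorEngine.MasterLocationFormats
    .Select(format => format.Replace("~/Views/", "~/Features/"))
    .ToArray();
```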


The code is very straightforward. We just replace the “Views” in what the engine looks for with “Features”.

There is another place we still need to modify: the “_ViewStart.cshtml” file has the default Razor layout file set with a full path. Change it to:
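Something like this (assuming the “Shared” folder, with the layout, moved under “Features” too):

```
@{
    Layout = "~/Features/Shared/_Layout.cshtml";
}
```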

Now the website should work fine.

Starting With Features

If you were instead to just add another folder called “Features” and, under it, add the various feature folders, each including its controller and view(s) (this may make more sense; maybe even leave things like “Shared” in the original “Views” folder and delete the “Controllers” one), these are the changes that would still be needed:

  • Manipulating the RazorViewEngine so that it looks for views in that features folder. Instead of replacing the existing entries, we insert new ones (copy the existing ones to a List<string>, insert the new values at position 0 so that they are looked up first, and return this as an array to the view engine property).
  • Copying the “web.config” in the “Views” folder (not the one in the root of your web application) to your features folder
  • Copying the “_ViewStart.cshtml” file from the “Views” folder to your features folder, as this is what provides common view initialization, like setting the default layout file
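The view-engine part of those changes could look like this sketch (inserting rather than replacing, so the default locations keep working as a fallback):

```csharp
// In Application_Start, in Global.asax.cs
var razorEngine = ViewEngines.Engines.OfType<RazorViewEngine>().Single();

// Copy the existing formats, put ours first, and hand back an array
var locations = razorEngine.ViewLocationFormats.ToList();
locations.Insert(0, "~/Features/{1}/{0}.cshtml"); // {1} = controller, {0} = view name
locations.Insert(1, "~/Features/Shared/{0}.cshtml");
razorEngine.ViewLocationFormats = locations.ToArray();
```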

Or Just Use Areas!


If all you really needed was grouping features, and you don’t mind the Controllers and Views folders under each feature (or maybe even like them), then you could just use the ASP.NET MVC Areas feature and have all the application features split into corresponding areas.

However, I assume if you really wanted just that, you wouldn’t be looking for this post anyway :-).


Yes, I know, there are still more bloat folders in the ASP.NET MVC application. Some of them you may (or may not) use, like “Content” or “Scripts”; some of them maybe shouldn’t be there, or should have been in a different project. But this is just to show some of what can be done using the existing hooks, especially as using them didn’t turn out to be that hard.


Mapping José Romaniello’s Review Domain Using FluentNHibernate

This article is the third in a series of articles not written by me, but by José F. Romaniello. He is a big NHibernate guy, and he created a sample domain to evaluate how close the latest Entity Framework 4.1 Code First bits are getting to NHibernate’s features.

Later, he chose to show how to do the same Code-First mapping using NHibernate and confORM, an NHibernate mapping library created by Fabio Maulo, a primary developer of NHibernate itself.


I asked him to do it also with FluentNHibernate, so he took the time and effort to create a nicely put-together Visual Studio solution. At some point he handed it to me, as I was familiar with FluentNHibernate in general, though not with automapping, which is what we wanted to use for this sample. So, I am now posting about this experiment.

This article is much later than it should have been. Apologies to those who have been waiting.

Convention Based Mapping, AKA, Automapping

My audience is slightly different from Jose’s, so I might need to explain this one. Skip ahead if not needed.

When you do Code-First (or Domain-First with POCOs, or your favourite name), normally you need to define the mapping between your classes and DB tables class by class. In each class you define which table(s) it maps to, which properties correspond to which columns, etc.

Your mapping library, be it the Entity Framework 4.1 Code-First bits, NHibernate XML/HBM mapping, confORM, FluentNHibernate, etc., can give you some help with that by having some defaults for mappings: for example, assuming the table name is the same as the class name, that properties map to columns with the same name, or that your class has a single-column PK called “Id” or “<ClassName>Id”, etc.

Of course, they might also allow you to modify those defaults to your liking, and to define exceptions to those defaults (conventions) for specific classes/columns. The purpose is still to need as few such customizations as possible.

This is what was first introduced in FluentNHibernate as Automapping, and it is also starting to become known as convention-based mapping as other libraries adopt it.

The Domain

Let me borrow the class diagram Jose had for his test/review domain:


The Libraries

All I needed to do was run a NuGet command that brings FluentNHibernate and all its dependencies:
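From the Package Manager Console, that’s a single command:

```powershell
Install-Package FluentNHibernate
```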


Remember, you don’t need to do it from the PowerShell console; right-clicking the project, choosing “Add Project Library Dependency”, and finding the FluentNHibernate library in the wizard (select “Online” on the left and type FluentNHibernate in the top-right text box) will simply do.

I did this just to show the dependencies more easily in the picture. (Note also that I already had this in my project, so I created a new project, “CleanFNhSample”, just to show you this.)

There is another dependency that I needed, not just because Jose used (and created) it, but also because it’s really convenient. NHibernate by default uses custom types for collections of type “set”, because those were not supported before .NET 4.0. Jose has a code-file NuGet package that makes it possible to use the built-in .NET 4.0 HashSet type instead of the custom type. It’s called “NHibernate.SetForNet4”:
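Again a single Package Manager Console command, if you prefer it to the dialog:

```powershell
Install-Package NHibernate.SetForNet4
```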


Like the other packages, I could have added this from the console; I’m just showing you both ways.

The Application Code

I’ll be showing snippets of the code here, and later giving you a GitHub URL so you can play with it as you wish.

The main test code Jose created, using the domain classes in the diagram above, is pretty simple. It creates a new NHibernate (NH) configuration (including class mappings to the database), uses the mappings to create the schema in the database (replacing anything that might exist; yes, this is not the only option), creates an NH session (roughly equivalent to an EF DataContext, if I over-simplify it), creates some records, and tries to save them to the DB.

Now, before we get into the “ConfigureNHibernate()” method, which is the most important method here, let’s look at some other classes…

Project Structure

This is how my project looks right now:


The only difference from Jose’s confORM review, and even his original FluentNHibernate work, is the “Mapping” folder. While the few confORM samples used to have different methods in the same file to do the mappings, FluentNHibernate people seem to prefer (or I think they do) having a different class per part of the mapping. Of course, you can still have those classes in the same file and no one would even complain!


So, there are 3 FluentNHibernate classes in there. We’ll see why they are needed after we look at the main configuration. Note that not all of this configuration is required; we were trying to meet the specific requirements that were met in the confORM demo.

Main Configuration
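A sketch of the shape of that method (the connection-string key is a placeholder, and some details, like wiring up the SetForNet4 collection factory, are left out; the real code is in the GitHub solution):

```csharp
using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

public static class NHibernateConfigurator
{
    public static ISessionFactory ConfigureNHibernate()
    {
        return Fluently.Configure()
            // Which database we talk to, and how to reach it
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(c => c.FromConnectionStringWithKey("Default")))
            // Automapping rules + collection conventions + per-class exceptions
            .Mappings(m => m.AutoMappings.Add(
                AutoMap.AssemblyOf<EntityBase>(new StoreConfiguration())
                    .Conventions.Add<CollectionConvention>()
                    .UseOverridesFromAssemblyOf<OrderOverride>()))
            .BuildSessionFactory();
    }
}
```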

Each part exists for a specific purpose that is mentioned in the comments above it.

I hope the comments are enough. We need to set some mapping rules in the StoreConfiguration class, apply some non-default standards for our collections in the CollectionConvention class, and define some exceptions for a field of the Order entity in the OrderOverride class.

I know you might be worried about the big method, and the fluent API that makes the whole thing one statement. From my experience, this can make debugging the part of the statement that fails a problem, but that’s not the case here, because NH configuration/mapping problems only show when you build the configuration or create the session factory anyway; it doesn’t fail on the individual steps. But don’t worry too much about that either: usually (but not always), when you navigate the InnerException (sometimes recursively through multiple inner exceptions), you get specific information about which part was wrong, and maybe what was wrong about it too.


FluentNHibernate Classes

The way you write conventions, overrides, and other kinds of FluentNHibernate customisations is by creating classes that implement certain interfaces or inherit from certain base classes. There are many interfaces provided, but you only need to implement the ones where you want defaults other than FluentNHibernate’s own. Also, a lot of what’s in these classes could have been added to the configuration method instead, but you don’t want it to grow even bigger, at least not for this explanation sample!


StoreConfiguration Class

In this class we do not implement an interface; instead, we derive from the default mappings class, which defines all the automapping defaults. This one was originally created by Jose, and I’m personally not sure whether the typical use is having such a class or doing it directly in the configuration.

What this class does is specify which classes represent DB entities and should be mapped automatically. It tells FNH to map all types in the EntityBase class’s namespace, except enums and except EntityBase itself.
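A sketch of what that looks like (the real class is in the GitHub code):

```csharp
using System;
using FluentNHibernate.Automapping;

public class StoreConfiguration : DefaultAutomappingConfiguration
{
    // Map every type in EntityBase's namespace,
    // except enums and except EntityBase itself
    public override bool ShouldMap(Type type)
    {
        return type.Namespace == typeof(EntityBase).Namespace
               && !type.IsEnum
               && type != typeof(EntityBase);
    }

    // Treat EntityBase as part of each child class,
    // rather than an entity with its own table
    public override bool IsConcreteBaseType(Type type)
    {
        return type == typeof(EntityBase);
    }
}
```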


If I understand correctly, the only parts we should have needed are the namespace check in “ShouldMap” and the “IsConcreteBaseType” override (which basically tells FNH not to create a separate table for EntityBase, with all other tables referencing it for their PK; instead, it should treat EntityBase as part of the child classes themselves, as if the “Id” property were implemented in each class rather than in the base class). I believe we shouldn’t have needed to exclude the enum explicitly, yet without the exclusion the configuration is invalid (rather than some milder effect like getting a table for the enum values).


CollectionConvention Class

As mentioned, conventions are applied by implementing I…Convention interfaces, and there are many of them, to override whatever defaults you might want. In this example, if we had wanted our collection convention to apply to only a subset of collections, we could have also implemented “ICollectionConventionAcceptance”. You can implement as many I<SomeCriteria>Convention and <SomeCriteria>ConventionAcceptance interfaces in the same class as you like.
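A sketch of the convention class (member spellings from memory; double-check against the FluentNHibernate docs):

```csharp
using FluentNHibernate.Conventions;
using FluentNHibernate.Conventions.Instances;

public class CollectionConvention : ICollectionConvention
{
    public void Apply(ICollectionInstance instance)
    {
        instance.Cascade.All(); // cascade all operations to collection items
        instance.AsSet();       // map every collection as an NHibernate "set"
    }
}
```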


Here we tell NHibernate to apply cascading to all operations (you can restrict it to updates or deletes or none; check the NH docs). We also tell FNH that all the collections we use will be of type “set”. This is different from what we had in the configuration: the configuration part tells NHibernate what actual type to use when we want our collections to be sets, which is why it’s not per-class or per-collection (note that conventions are like mappings for each collection, but done all at once). The convention here makes NHibernate treat each collection as a “set” and use the collection factory we provided in the configuration for dealing with sets.


Also note that I usually had to add extra code to check whether the collection is a collection of strings or ints or a similar non-entity type, and call instance.Name(“Value”), or else it wouldn’t work. Just while writing this article, I tried it again and found that I didn’t need to add it anymore (it works without it now).


OrderOverride Class

If you have ever worked with the normal ClassMap<TEntity> in FluentNHibernate, this one is going to be familiar. Implementing an interface instead of a base class, using an “Override” method instead of the constructor, and having to use the mapping parameter (or whatever you call it) instead of calling “Map” methods directly are the only differences.
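A sketch of the shape; “Total” is a hypothetical name for the read-only property on Order, and ReadOnly() is one plausible spelling of the idea (check the FNH docs for the exact access options):

```csharp
using FluentNHibernate.Automapping;
using FluentNHibernate.Automapping.Alterations;

public class OrderOverride : IAutoMappingOverride<Order>
{
    public void Override(AutoMapping<Order> mapping)
    {
        // NHibernate should read the value but never try to write it
        mapping.Map(x => x.Total).ReadOnly();
    }
}
```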

If you haven’t, then simply put, these override classes are places to add mappings for the specific classes and properties that have special cases, without creating conventions that apply to many other classes at the same time. Here we had a read-only property in the Order class, and we wanted to set its access (property, field, none, …) to read-only. Jose and I thought it might be a good example of overrides. This is also the place for things like a special SQL formula for one specific property.


NuGet And Resources

Now, as promised, you can get the code on GitHub, so you can clone, fork, or simply download it and have a look.

The Code URL is:

I’d highly recommend you not only browse the code online, but also use TortoiseGit or whatever client feels friendly to you (including the console) to bring the code locally and start playing with it.

However, if you insist on the old-school download-the-code style, no problem; here is the direct download link:

Sorry for the length of the post; I hope it wasn’t a show-stopper that made you just skip to the end.


For context, here are some links that were mentioned during the post:

Twitter OAuth, Persistent OAuth, TweetSharp: Presentation & Code Nuggets

This is a PowerPoint presentation (and an extraction of its contents) I made at the request of a couple of friends (@EmadAshi and @AmrEldib), to show how OAuth works with Twitter and how easy it is to cache OAuth credentials.

As I was doing related work for TweetToEmail, I felt a PowerPoint presentation would be even better than a blog post for this one, but here you get the two.

The Presentation

The Contents

Application Registration

  • A Twitter user creates a Twitter Application
    • If the application is web based, it needs to provide a URL. “Localhost” is not accepted as a domain for this URL
  • A Twitter Application gets two pieces of information
    • Consumer Key
    • Consumer Secret
  • A Twitter Application will use these in all coming requests.

Initializing The Process

  • User comes to the application and it decides to authenticate against Twitter
  • Application makes a request using Consumer Key and Secret to obtain “Oauth Request Token”, which consists of two parts
    • Token
    • Token Secret
  • Application builds the authentication URL, including the “Oauth Request Token” parameter, and optionally a “Call-back URL” (if different from the default URL from the first step)

User Authentication

  • The user is redirected to Twitter; the URL contains the “OAuth Request Token” to identify the application’s authentication session
  • Assuming the Twitter user is logged in and authorizes the application:
    • If the application is a desktop application, Twitter gives the user a numeric “Verifier” to manually type back into the application
    • If the application is a web application, the user is redirected back to the application call-back URL with a complex “Verifier” parameter in the URL

Obtaining the Access Token

  • The application makes a request to Twitter including the “OAuth Request Token” and the “Verifier”
  • It obtains an “Access Token”, which likewise consists of two parts:
    • Token
    • Token Secret
  • The application needs to send the Consumer Key and Secret and the Access Token in every future request that needs the Twitter user’s privileges

Caching Credentials

  • The application needs to go through the authorization process above at least once
  • The Access Token returned can be saved in a session/DB/whatever and re-used later
  • The application can later use the Access Token directly, along with the Consumer Key/Secret, to communicate with Twitter without going through any of the previous steps

Sample Code (TweetSharp v 2.0)

Request Token & Redirect
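A sketch of this step in an ASP.NET MVC action, assuming TweetSharp v2’s TwitterService API (the consumer key/secret values and action names are placeholders):

```csharp
public ActionResult SignInWithTwitter()
{
    var service = new TwitterService("CONSUMER_KEY", "CONSUMER_SECRET");

    // Obtain the OAuth Request Token (Token + Token Secret)
    OAuthRequestToken requestToken = service.GetRequestToken();

    // Build the authentication URL and redirect the user to Twitter
    return Redirect(service.GetAuthenticationUrl(requestToken).ToString());
}
```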


Getting Access Token
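A sketch of the call-back action, again assuming TweetSharp v2’s TwitterService API; the action’s parameter names match the query string parameters Twitter sends back:

```csharp
public ActionResult Callback(string oauth_token, string oauth_verifier)
{
    var service = new TwitterService("CONSUMER_KEY", "CONSUMER_SECRET");

    // Only the Token part of the request token matters here
    var requestToken = new OAuthRequestToken { Token = oauth_token };

    // Exchange the Request Token + Verifier for the Access Token
    OAuthAccessToken accessToken =
        service.GetAccessToken(requestToken, oauth_verifier);

    // Both parts are required later, so persist them (session/DB/whatever)
    Session["AccessToken"] = accessToken.Token;
    Session["AccessTokenSecret"] = accessToken.TokenSecret;

    return RedirectToAction("Index");
}
```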


Hints for Web Applications

  • The method GetAuthenticationUrl() has an overload that accepts a call-back URL for the user to be redirected to after obtaining the verifier
  • The important part of the RequestToken is the Token part, not the Secret
  • All parts of the AccessToken are important and required
  • When the user is redirected back from Twitter to your application, you get the following query string parameters:
    • oauth_token: the Token part of the Request Token
    • oauth_verifier: the Verifier required to obtain the Access Token later

Using Cached Access Token
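A sketch of reusing a saved Access Token, assuming TweetSharp v2; note that no request-token round trip is needed this time (savedToken and savedTokenSecret stand for whatever you persisted earlier):

```csharp
var service = new TwitterService("CONSUMER_KEY", "CONSUMER_SECRET");

// Re-authenticate directly with the cached Access Token parts
service.AuthenticateWith(savedToken, savedTokenSecret);

// The service can now act on behalf of the user, e.g.:
service.SendTweet("Hello from a cached OAuth token!");
```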


Related Links

Generic SaveOrUpdate Method To Persist New/Dirty Entities With Entity Framework 4

In NHibernate there is a Save(entityObject) method, which creates a new row in the database from the given entity object, and an Update(entityObject) method, which updates the row corresponding to the entity object with that object’s property values. It also has a SaveOrUpdate(entityObject) method, which checks whether the entity object corresponds to an existing row in the database, and chooses whether to call Save(…) or Update(…) based on that.

The way I usually build web applications across multiple tiers, when not using view models specifically, makes me encapsulate much of the code in a services layer that sometimes does not need to care whether a given entity is already persisted in the database or not. So, I wanted to have a similar method with Entity Framework as the ORM.

Of course, I have implemented the method a number of times, and the code evolved based on which version of Entity Framework I was coding against, as well as my knowledge of the framework internals. Actually, when you have worked with as many ORMs as I have, a new ORM or ORM version starts to sound like just a “What’s new in the manual?” thing.

The Method

You must have heard about the new Entity Framework 4 Feature CTP 4. I was playing with the example code from the Code First enhancements walkthrough, writing some tweaks, and in the process found myself implementing the function this way….
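For illustration, a minimal sketch of such an extension method could look like the following (this is an illustrative reconstruction, not necessarily the exact original; it simply treats an entity whose key properties still hold their default values as new, and assumes the entity type has a single entity set):

```csharp
using System;
using System.Data;
using System.Data.Objects;
using System.Linq;

public static class ObjectContextExtensions
{
    // Sketch: an entity whose key properties still hold their default
    // values is added as new; otherwise it is attached and marked as
    // modified. SaveChanges() is intentionally NOT called here.
    public static void SaveOrUpdate<TEntity>(this ObjectContext context, TEntity entity)
        where TEntity : class
    {
        var objectSet = context.CreateObjectSet<TEntity>();

        // Inspect the entity-key properties through EF metadata
        bool isNew = objectSet.EntitySet.ElementType.KeyMembers.All(key =>
        {
            object value = typeof(TEntity).GetProperty(key.Name).GetValue(entity, null);
            if (value == null)
                return true;
            Type type = value.GetType();
            return type.IsValueType && value.Equals(Activator.CreateInstance(type));
        });

        if (isNew)
        {
            objectSet.AddObject(entity);
        }
        else
        {
            objectSet.Attach(entity);
            context.ObjectStateManager.ChangeObjectState(entity, EntityState.Modified);
        }
    }
}
```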

The method is implemented as an extension method on the ObjectContext class, so you can call it on any context instance. Also, it is declared outside any namespace, so it’s available without having to add a using namespace … statement to find it.

Also, you’ll notice the method does NOT call context.SaveChanges(). This is on purpose, so that the calling code remains in control of when to submit the context changes, which makes the method similar in effect to other ObjectContext methods.

The Calling Code

A sample of calling code for this method would look like the below. This is not the code I used when testing it, but it is the easiest to understand, as most people already know the repository pattern. Also, in many places there are over-simplifications on purpose (no extra base classes, DI containers, etc.), solely to serve the clarity of the sample.

I’m assuming you have checked the walkthrough already, but if not, it’s enough to know that Book is a class that represents an entity, and BookCatalog is a class that inherits from ObjectContext.
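A minimal repository around this could be sketched as follows (Book and BookCatalog come from the walkthrough; the repository class itself is illustrative):

```csharp
public class BookRepository
{
    private readonly BookCatalog _context = new BookCatalog();

    // The repository does not care whether the book is new or existing
    public void Save(Book book)
    {
        _context.SaveOrUpdate(book); // insert or update, decided for us
        _context.SaveChanges();      // the caller controls when to submit
    }
}
```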

Conclusion / Disclaimer

This method is not special at all. I just noticed that I have not posted code into my blog for very long, and wanted to change this by creating something as small as this one, hopefully to be followed by more advanced stuff that I should be taking out from my internal helper/toolkit libraries. I have stuff related to Entity Framework, NHibernate, Spark, ASP.NET, and others, but those are to come at another time…

MSDN Code Gallery – New Code Sample Sharing Area from Microsoft

Microsoft has recently opened a new sub-site of MSDN, MSDN Code Gallery. Here’s their main statement:

Download and share sample applications, code snippets and other resources

MSDN Code Gallery is your destination for downloading sample applications and code snippets, as well as sharing your own resources.

Usually, people would go to community sites to share code samples, or create open-source projects on sites like CodePlex that serve only as sample bases. Others would use those, or the sample code available in the different MSDN dev centers from time to time, to download code samples.
Now, we have the place for those little snippets :).

Start Downloading Code Samples and Create Your Own Right Away!