(Solution) Windows 8.1 hangs on sleep/standby, can’t wake up

The Problem

When you send your physical computer running Windows 8.1 to sleep (stand-by), the screen goes black, but all the other power lights remain on, and you cannot wake it up by pressing any key or anything else; you have to hard-reset it by holding the power button until it shuts off.

Root Cause

The main problem, in my case at least, was a network driver that VirtualBox installs on Windows 8.1 (the host) to support “bridged” networking in virtual machines (guests). Windows hangs when trying to turn this driver off.

Note that it’s not the same device you see alongside the other network adapters in Device Manager. The one you see there is the “host-only network” adapter.

The Solution

I managed to solve the problem by uninstalling VirtualBox, then installing it again without bridged networking support. In the install wizard’s “Custom Setup” step, under “VirtualBox Application” -> “VirtualBox Networking”, I disabled “VirtualBox Bridged Networking” and continued the setup with everything else left at its default.

[Screenshot: VirtualBox install wizard, Custom Setup step]

Note that I tried removing bridged networking from the virtual machines that used it, and that didn’t solve it. I had to remove the feature itself, as explained above.

But I need bridged networking!

Ideally, most use cases for bridged networking can be covered by NAT networking instead. Use NAT for your virtual machines unless you really, really need bridging; if you do, you’ll have to choose between it and the ability to put the computer to sleep, at least until VirtualBox solves the problem.
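
If you just want to move an existing (powered-off) VM off bridged networking, VirtualBox’s VBoxManage CLI can do it. A minimal sketch; the VM name and adapter number below are placeholders for your own:

VBoxManage modifyvm "My VM" --nic1 nat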

Applies To: (Version information)

I experienced this problem with VirtualBox 4.2.18. Most likely it will be fixed in some future version.

Update

If you want to watch for progress on this issue, check out the report in the VirtualBox issue tracker:
https://www.virtualbox.org/ticket/12063

Maintaining your personal brand online with @TathamOddie – New Video

My new video featuring @TathamOddie on maintaining your personal brand both online and offline is now up…

Tatham Oddie is a well-known public figure in the Microsoft world, as he speaks at many .NET conferences around the world and is an active contributor to several high-profile open source projects like WebFormsMVP.

In this video, Mohamed Meligy interviews Tatham on online personal branding and how you can make the most of the people you meet in social media and in offline groups. Tatham shares his experience on how to make it easy for people to recognize you and communicate with you, for both social and business benefits.

http://youtu.be/6UBIRiEglsM

Manually Compressing Any ASP.NET Page, PageMethod, MVC Action, HTTPHandler,..

Compressing A Single ASP.NET Response Manually

This post is about compressing your HTTP response without using IIS Dynamic Compression. I’ll save the story for the end and give you the code first:

using System;
using System.IO.Compression;
using System.Web;
 
namespace WebCompressionSample
{
    public static class ResponseCompressor
    {
        public static void Compress(HttpContext context)
        {
            // Quite often context will be something like `HttpContext.Current`
            // which in some conditions may not be available.
            // This is likely due to mixing concerns in the calling code,
            // so it's up to you whether to handle it differently.
            if (context == null)
            {
                return;
            }
 
            // Among other things, this header tells the server 
            // whether the client can decompress compressed responses, 
            // and what compression format(s) it supports
            string acceptEncoding = context.Request.Headers["Accept-Encoding"];
 
            if (string.IsNullOrEmpty(acceptEncoding))
            {
                return;
            }
 
            // The two common compression formats in web are GZip and Deflate
 
            if (acceptEncoding.IndexOf("gzip",
                StringComparison.OrdinalIgnoreCase) > -1)
            {
                // Read the response through a GZip compressor,
                //    and replace the output with the compressed result
                context.Response.Filter = new GZipStream(
                        context.Response.Filter, CompressionMode.Compress);
                // Tell the client the output they got is compressed in GZip
                context.Response.AppendHeader("Content-Encoding", "gzip");
            }
            else if (acceptEncoding.IndexOf("deflate",
                StringComparison.OrdinalIgnoreCase) > -1)
            {
                // Read the response through a Deflate compressor,
                //    and replace the output with the compressed result
                context.Response.Filter = new DeflateStream(
                    context.Response.Filter, CompressionMode.Compress);
                // Tell the client the output they got is compressed in Deflate
                context.Response.AppendHeader("Content-Encoding", "deflate");
            }
        }
    }
}

Well, this shows how to do the compression itself. Depending on which flavour of ASP.NET you use, you will probably call it differently.

In my case, I called it manually from an ASP.NET WebForms PageMethod (more on why below), but if you are using ASP.NET MVC, for example, you probably want to wrap it in an ActionFilter and apply that to any action whose output you want to compress. Let me know in the comments or on Twitter if you have a problem implementing it in a particular situation.
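
For instance, here’s a minimal sketch of such a filter (the attribute name is my own invention; it simply delegates to the ResponseCompressor class above):

using System.Web;
using System.Web.Mvc;

namespace WebCompressionSample
{
    public class CompressResponseAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Hook up the compressing stream before the action writes its output
            ResponseCompressor.Compress(HttpContext.Current);
            base.OnActionExecuting(filterContext);
        }
    }
}

You would then decorate the relevant action (or controller) with [CompressResponse].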

IIS Dynamic Compression

IIS 7+ has built-in dynamic compression support (compressing the output of server-side scripts like ASP.NET, PHP, etc.). It’s off by default, because compressing dynamic content means running the compression for every single request (the server can’t cache the result, since it doesn’t know what the server-side script will generate for each request; the whole point of server-side programming is generating dynamic content!).

Static compression, on the other hand (compressing static files like styles and scripts), is on by default, because once a static resource is compressed, the compressed version is cached and served for every future request of the same file (unless the file changes, of course).

General advice: if your server-side scripts are expected to return large text-based content (say, large data sets, even after paging, or large reports), always turn dynamic compression on, at least for the pages that expect to return that much text.
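
For completeness, here’s roughly what turning it on looks like in web.config (a sketch; it assumes the IIS dynamic compression module is actually installed on the server):

<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>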

In many cases, though, the largest parts of a page will be scripts (and possibly images), which are often taken care of (for scripts, for example) by IIS static compression or ASP.NET bundling.

My Quest To Find The Best NodeJS Http Client With HTTPS and Basic Auth

This may be a completely useless post, but it took me a while to figure out some of the pieces until I got here, so I thought it might be worth sharing!

What’s this & how did I come by it?

Instapaper is a nice service that provides a “read it later” bookmarklet. You use this bookmarklet to add any web page to your reading list, and you can read that page later via their site, a mobile application (from them or a 3rd party), or on a Kindle (or another e-book reader)!

Apart from their bookmarklet, they provide a simple API to add URLs programmatically. This is useful if you, for example, want to extract URLs from other services, say Twitter favourites. They also provide a Full API (an OAuth API) for doing everything their site and apps do, except that some things in this so-called Full API are (or at least were meant to be) only for paid subscribers.

The API

Instapaper’s simple API is quite interesting, though. According to the docs, you can use plain HTTP and provide only a username (email), not even a password, to add a URL, and you can use POST or GET. That’s scary for anyone who thinks about security, but maybe OK for what the API does. They also provide the API over proper HTTPS with basic HTTP authentication (and if you want more, you can always use the Full API’s OAuth).

I wanted to use their API over HTTPS, using POST and Basic Authentication. In my tests, I was happy to try HTTP and/or GET and/or sending the username in clear text. I also thought I’d write this in Node, because I thought it should be easy. Node people often serve REST/HTTP APIs, so they must be good at consuming them too (even though in reality most consumers are client-side JS, not Node), right? Well, maybe.

The Code

I could explain the challenges, how I overcame them, and then show the final code, but I have spoken too much already, so let’s begin with the code.

var request = require('request');

var basicAuthenticationParams = {
    username: 'MY_EMAIL',
    password: 'MY_PASSWORD'
};

var requestParams = {
    url: 'https://www.gurustop.net'
};

request.post("https://www.instapaper.com/api/add", {
        form: requestParams,
        auth: basicAuthenticationParams
    },
    function (error, response, body) {
        // This is only set for network errors, not server errors
        if (!!error) {
            console.log("Network Error:");
            throw error; // it's an actual JS Error object, not a string
        }

        var statusCode = !!response ?
            parseInt(response.statusCode, 10) || null
            : null;

        // I could have checked for 200,
        // but technically 200-299 are success codes,
        // and more importantly, Instapaper sends 201 for success
        if (!!statusCode && statusCode >= 200 && statusCode < 300) {
            console.log("Successful!");
            console.log(); // Extra line separator
        }

        console.log("HTTP Status:");
        console.log(response.statusCode);
        console.log(); // Extra line separator

        // In Instapaper's case, by default,
        // they just send the status code in the body too
        console.log("Body:");
        console.log(body);
    });

NPM

See? It was all very straightforward. Well, it wasn’t for me. I tried several NPM packages to get this effect, and none of them went well. The “request” package was the one that did the trick. This is a generally accepted pattern in Node: the built-in API is very low level, and there is usually some NPM package that gives you a nicer one.

Unlike all the other packages I tried, which didn’t have any dependencies on other packages, this “request” package gave me the usual shock of seeing so many NPM packages downloaded and installed. But that is the philosophy of NPM: build small reusable pieces on top of other pieces.

Getting the request headers right (Oh my!)

While the packages were failing, alternating between HTTP 403 (invalid credentials) and 400 (missing parameters) status codes, it was very hard to tell what I was missing. Even attaching a proxy to get the output into Fiddler didn’t go well with, for example, the “easyHttp” package (there is a similarly-named NuGet package for .NET, BTW, but I don’t think it’s related at all).

I went to my favourite Chrome extension, Postman, which works nicely like Fiddler for the “compose request” part (actually, it’s arguably even easier). The request parameters (I was using the username as just another request parameter, not Basic Authentication, just to test) didn’t seem to work until I changed the default “form-data” to “x-www-form-urlencoded”.

[Screenshot: Postman, with “x-www-form-urlencoded” selected for the request body]

That added the following header to the request: “Content-Type: application/x-www-form-urlencoded”. This kind of makes sense; this overly permissive API feels optimized for sending from web pages, where forms are a front-and-centre concept.
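
So, reconstructed (this is my sketch of the wire format, not a captured request), the working call presumably boils down to something like:

POST /api/add HTTP/1.1
Host: www.instapaper.com
Authorization: Basic <base64 of email:password>
Content-Type: application/x-www-form-urlencoded

url=https%3A%2F%2Fwww.gurustop.net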

I tried to add that header when I was using the other packages, but it didn’t seem to do the trick. With this one, as you can see, I didn’t even need to set it; I just used the “form” property of the options instead of the “body” or “json” properties.

This is why I’m posting this: to hopefully help someone googling get up and running quicker than I did. This is not just about Instapaper; it can be very helpful in many situations.

For the rest of you (.NET / non NodeJS devs)…

If you are not a Node person but still want to have a fiddle: after you install Node (which includes the NPM package manager by default), save the code to a JS file, “some-file.js”, preferably in a new folder.

Assuming Node and NPM are in the PATH (the installer default), you can open a Cmd or PowerShell console in that folder, run npm install request and then node some-file.js (no quotes for either), and see the results in front of you.
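
And if you’d rather stay entirely in .NET, a rough HttpClient equivalent of the Node sample above might look like this (an untested sketch against the same endpoint, with the same placeholder credentials):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class InstapaperSample
{
    static void Main()
    {
        var client = new HttpClient();

        // Basic Authentication: base64-encoded "email:password"
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("MY_EMAIL:MY_PASSWORD"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);

        // FormUrlEncodedContent sets the Content-Type header to
        // application/x-www-form-urlencoded, like the "form" option in Node
        var content = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "url", "https://www.gurustop.net" }
        });

        var response = client.PostAsync(
            "https://www.instapaper.com/api/add", content).Result;

        Console.WriteLine("HTTP Status: {0}", (int)response.StatusCode);
        Console.WriteLine(response.Content.ReadAsStringAsync().Result);
    }
}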

My 2nd On YouTube: Chrome Website App Shortcuts

The first video I published on YouTube (on AngularJS directives and data-binding) seemed to be doing very well. This made me easily fall into the trap I had avoided before, which is worrying too much about what should follow. To get that worry off my back, I chose a simple topic targeting a different audience, recorded and edited it in one night, and just published it.

The Video

This video targets Google Chrome users. It shows a productivity tip that I rely on heavily in my daily PC usage. I have many application-like websites pinned to my taskbar, ranging from TweetDeck to Outlook 365 Web Access. In this video, I simply show how to create these icons. The video is only about 5 minutes long.

Load any website like an application using Google Chrome
http://youtu.be/q40qlJjPgIA

Going Forwards: Suggestions Please!

There are several things I need to work on to make these experiments more useful (and fun) for everybody. Mainly, I need to get used to talking to the mic, so that I don’t get the dry throat I don’t usually get even when facing many people at my offline live events. But I also need to find topics that YOU are interested in. I’ll try to stick with short videos for now, but please, if you have any idea for the next video, just let me know, and I promise to consider it seriously.

Thank you very much.

New Entity Framework Feature Request: Migrations: Allow Multiple Migration SQL Generator per Provider

I’ve just added this feature request to the Entity Framework UserVoice site, and I was wondering whether you’d like to up-vote / share / tweet it if you think it’s good to have.

Migrations: Allow Multiple Migration SQL Generator per Provider

http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/3812292-migrations-allow-multiple-migration-sql-generator

Inline, here’s what the feature request is about:

One way to allow the community to contribute to EF Migrations is to allow us to create more `MigrationOperation`s.

For example, people could then add things like full-text indexes, etc., things that are not in core EF Migrations and are just boring SQL statements that need to be put together.

Currently, to build a provider-agnostic `MigrationOperation`, you need to write the SQL generated for the operation in a class derived from `MigrationSqlGenerator`. If you want to add a `CreateFullTextCatalog` operation and support SQL Server, for example, you inherit `SqlServerMigrationSqlGenerator`, add a `Generate` method for your operation, and ensure all the other operations’ generators are still called from the base class.
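
To make the pattern concrete, here’s a minimal sketch of what that looks like (the operation name and shape are my own, and the EF6 details are simplified):

using System.Data.Entity.Migrations.Model;
using System.Data.Entity.SqlServer;

// A hypothetical custom operation
public class CreateFullTextCatalogOperation : MigrationOperation
{
    public CreateFullTextCatalogOperation(string name)
        : base(anonymousArguments: null)
    {
        Name = name;
    }

    public string Name { get; private set; }

    public override bool IsDestructiveChange
    {
        get { return false; }
    }
}

// The SQL Server generator for it; the base class keeps handling
// all the standard operations
public class CustomSqlServerMigrationSqlGenerator : SqlServerMigrationSqlGenerator
{
    protected override void Generate(MigrationOperation migrationOperation)
    {
        var operation = migrationOperation as CreateFullTextCatalogOperation;
        if (operation != null)
        {
            Statement(string.Format(
                "CREATE FULLTEXT CATALOG [{0}]", operation.Name));
        }
    }
}

You’d then register it in your migrations configuration with something like `SetSqlGenerator("System.Data.SqlClient", new CustomSqlServerMigrationSqlGenerator())`, which is exactly where the one-generator-per-provider limitation below bites.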

Not only is this ugly, but now the migrations configuration has to use your generator for the `SqlClient` provider, and the `SetSqlGenerator` and `GetSqlGenerator` methods in `DbMigrationsConfiguration` only deal with one generator per DB provider.

So, if someone else wrote their own `MigrationOperation` and a `SqlServerMigrationSqlGenerator` for it, I cannot use both in the same configuration.

This makes creating a generic `MigrationOperation` quite a bad idea. Maybe people will just forget about DB-agnostic code and build their operations deriving from `SqlOperation`, generating a provider-specific statement from each operation, but this means the entire `MigrationSqlGenerator` mechanism goes unused, right?

The simplest answer to this seems to be supporting multiple `MigrationSqlGenerator`s for the same provider in the migrations configuration, or else providing a different way to make building provider-agnostic operations possible.

 

Thanks.

 

Related Information

If you are interested in how I got to learn how this works: there was, of course, trying to create my own `MigrationOperation`, and reading the source code via ReSharper downloading symbols and doing the decompiling for me. However, this article is also a good source on writing EF6 migration operations:

EF6: Writing Your Own Code First Migration Operations

 

Enjoy.

NodeJS On Windows – Choosing Components To Install

In case you didn’t know, NodeJS v0.10.0 has recently been released. A number of interesting changes were mentioned in the blog post, mainly around making some core APIs easier to use, which required several changes in the codebase to support, and caused some interesting changes in performance characteristics.

I couldn’t wait, so I looked for the Windows x64 installer, downloaded it, and ran the usual Next .. Next .. installer, well, until I saw this installer step:

[Screenshot: Node.js installer, Custom Setup step]

Although I can’t think of a scenario where I’d deselect any of these, just the fact that I could clearly see and choose what the Node installer will do was great. I have a particular interest in how the NPM part gets installed, you know; this screen gets me pretty close to understanding that, and to making use of that understanding, which I may blog about later.

Great job, Node team :-).

Identify Your Weaknesses & Optimize For Them

It is very important to understand your weaknesses and optimise your methods around them.

For example, I have a very erratic memory. At any point in time, I’ll remember some things in deep detail, remember only certain characteristics (important or otherwise) of certain other things, vaguely recognize the existence of yet other things, and completely forget about the rest.

Which memory gets which degree of remembering doesn’t seem to be related to the nature of the things themselves (people’s faces/names, papers, told/witnessed situations, etc.), how important the things are or how relevant to the present, or even how old the memory is.

 

That’s why, every time I have made a record of what I want my future me to remember, and made sure this record is searchable (physically or digitally) in a way that does NOT require remembering a certain hint (because I tried that, and I ‘sometimes’ forget hints), the future me gets happy, finding things when searching that I am sometimes not even sure ever existed.

Yes, I just had one of these moments :)

P.S.

Of course, understanding and accounting for your weaknesses should not be a reason to stop trying to overcome them!

TweetDeck’s "Interactions" Column Is Brilliant

On Twitter’s own website, there’s now a “Connect” tab next to “Home” once you log in. In this tab, you see all your new followers, others retweeting you, and your Twitter mentions.

I’m usually interested in this kind of information, not just mentions, but any interest in what I have on my Twitter account. I used to set up email notifications for some of them, even though I don’t always read them promptly, as on some days it’s common to get a few new followers in the same day.


Recently I discovered that TweetDeck has this very feature. You can add an “Interactions” column, and it will show all your:

  • Mentions (as expected)
  • New Followers (ungrouped, unlike the twitter website)
  • Retweets
  • Favourites (of your tweets, by others)

This is the natural evolution of the “Replies” / “Mentions” column. I thought that other advanced Twitter users would be interested in that sort of thing as well.

By the way, this was tested in the TweetDeck Chrome application. I have set Chrome to open this app in its own window using its own icon (just right-click the app icon in any new tab and you’ll find the option). Since even the desktop version of TweetDeck is just a Chrome frame, I didn’t even bother installing it. It started out horribly after moving to being a web version, but I think it’s really nice again now, almost as good as the old native TweetDeck!

On Communicating Your Message More Than It’s On Telstra NextG

Background

For many years, the Australian mobile network provider Telstra had the exclusive right to use the 850 MHz frequency band for 3G. All other carriers were only allowed to use the 2100 MHz band (Optus also had 900 MHz in rural areas, although it didn’t have it in as many government-licensed towers).

The low-frequency band contributed a lot to Telstra’s mobile success. The 2G networks had long used it, making it easier to extend to 3G support, and Telstra already had more towers; a low frequency also crosses walls more easily, making it better for indoor coverage. It was definitely a killer feature.

The Challenge

How did Telstra advertise this band? Their target was conveying a message about a real feature that’s just awesome. Sounds easy? Well, it included a few challenges:

– Not all people are “educated” or “smart enough” to be good with networks and the like, or even with numbers in general. It’s easy for users to see this whole 2100-vs-850 thing as a minor detail, or even think 850 is equal to 900 and go to Optus.

– The advantage wasn’t exclusive forever. Hopefully, by the time the band was open to others, there would be better options; at least Telstra would have more towers (in reality, when this happened, Telstra was already starting its 4G network). They didn’t want other networks to be able to claim they were equal.

– Involving the user in technical terms can have bad consequences. Only a few phones supported 3G on 850 MHz, but nearly all phones supported 2G on 850 MHz. Even worse, the 3G technology is called WCDMA (or UMTS), while the 2G one is GSM (with GPRS data). A user could easily think that a phone supporting 850 MHz GSM is good for 3G on 850 MHz. Users are “too dumb” to always remember the difference.

– It was possible for Telstra to strike agreements with smaller providers (it happened with the “3” network, and they also resell their 2G network to smaller providers), and they didn’t want users to expect they could get the full experience by going to those other providers.

The End

What did Telstra do? The solution was very easy, and we all know it. Telstra simply made up a name for their 3G 850 MHz network: they called it the NextG network. It doesn’t really matter which name, because it’s just a product name; but having invented the name meant it became their exclusive trademark, which is even better! Now everybody knew they’d have to choose between the various 3G providers and Telstra’s own NextG.

The Takeaway

Users would even sometimes ask other providers when they were going to support NextG, and the best those providers could say was that they’d have “equivalent” networks, like Optus’s “YesG”, which sounded like a mere imitation (the original name sounded like a genuine standard rather than something company-specific, while “Yes” was a common part of Optus product names; more recently, Vodafone started supporting 3G on 850 MHz as well, while still working on 4G). Neither of them could legally say “we support NextG”.

The story is very interesting: even when you have a great message, the trick is in how you deliver it, and when it comes to communication and delivering your message, the most successful sales and marketing stories are true treasure troves of inspiration.

Final Note

For a long time, I have believed that I have several non-technical topics to discuss on this blog that may be interesting to the sort of audience GuruStop.NET gets, and only in very rare situations have I been able to post these thoughts. The inspiration for talking about this story was originally a group conversation on Facebook, but it was inspirational enough for me to write my thoughts down, and once they were written, there was no excuse not to share them with you.

I hope you like this sort of post. At least it’s better than not blogging at all (well, again, I hope it is; let me know if you have a different opinion).

P.S. I must also explain that I have never worked for or with Telstra. This is my own interpretation of the story, based on reading many forum and blog posts while trying to choose a carrier and buy a phone when I first arrived in Australia in Q4 2010.