Prabhu Kumar

a tech twaddler..

VSTS Incident Postmortem

without comments

Taylor Lafrinere from our VSTS team published two Root Cause Analyses which are a really good read. Sometimes we all hit that one issue that makes us pull our hair out :-) I still remember the caching bug we hit in my previous team, where cache reads/writes were messing up the validity check with dd/mm and mm/dd; maybe I should write about it someday. Here are the RCA links,

Preliminary postmortem

Complete postmortem

Written by Prabhu Kumar

March 5th, 2018 at 6:24 pm

Posted in Uncategorized

The case of curious characters

with 2 comments

We stumbled upon an interesting issue the other day at work. A was working on adding a search feature to a table/grid of cells consisting of multiple string and date fields. The implementation used a client-side search utility provided by our client SDK, which internally used tries for indexing and searching string tokens.

The feature was working well overall, but strangely the search was not working on date fields in IE (v11). It worked fine on other browsers. So if you searched for a string like ‘3/23’, it would work in Chrome and Firefox, but not in IE o_O

What is special about these date fields, we wondered, that makes this issue specific to IE? On closer look, we found that the trie wasn’t getting properly constructed in IE. We jumped into the client SDK code and looked around, but did not find anything suspicious. We also tried a bunch of other things, like changing the system date format and entering a date in a string field in another column and searching on it, but the results didn’t really provide any clues.

V then stepped in, looked at the part of the SDK where the trie was getting built, and found that it was somehow failing to add date fields to the index. On debugging further, V found that the date string contained characters we had not expected. It wasn’t a usual string; it had characters in it that were breaking the trie construction.

How were the date fields getting added to the index?

Our implementation was calling toLocaleDateString() on the Date object and passing the string off to the search utility to build the index. It turns out that toLocaleDateString() was returning different values in Chrome and IE. Here’s a small piece of code that demonstrates this,
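A sketch of such a repro (assuming an en-US locale; IE’s behavior is simulated here by injecting the marks explicitly, since it can’t be reproduced outside IE):

```javascript
// Sketch: what toLocaleDateString() effectively returns in each browser
// for April 4, 2017 (en-US locale assumed). In Chrome you get the plain
// date string; IE 11 injects U+200E (LEFT-TO-RIGHT MARK) before every token.
var chromeResult = "4/4/2017";
var ieResult = "\u200E4\u200E/\u200E4\u200E/\u200E2017";

console.log(chromeResult.length);       // 8
console.log(ieResult.length);           // 13 -- five invisible marks added
console.log(chromeResult === ieResult); // false, though both render as "4/4/2017"
```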

If you run this code in Chrome and IE, this is what you’ll see.




(Screenshots: console output showing the string length in Chrome and in IE v11.)


What? Chrome reports the length of the string as 8, which is what you’d expect. IE has brought its own characters to the party. It turns out IE adds Left-to-Right markers to the string. The value 0x200E is the Unicode code point for the Left-to-Right mark. This marker gets added before every token in the date string, adding 5 characters to the string length.

The answer on this Stack Overflow thread sums up the issue nicely,

Any of the output of toLocaleString, toLocaleDateString, or toLocaleTimeString are meant for human-readable display only

If the intent is anything other than to display to the user, then you should use one of these functions instead:

  • toISOString will give you an ISO8601/RFC3339 formatted timestamp
  • toGMTString or toUTCString will give you an RFC822/RFC1123 formatted timestamp
  • getTime will give you an integer Unix Timestamp with millisecond precision
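For example, for a fixed UTC instant (the date here is illustrative):

```javascript
// Machine-oriented alternatives to toLocaleDateString(): these return the
// same value in every browser, so they are safe to feed into an index.
var d = new Date(Date.UTC(2017, 3, 4, 12, 0, 0)); // April 4, 2017, 12:00 UTC

console.log(d.toISOString()); // "2017-04-04T12:00:00.000Z" (ISO 8601)
console.log(d.toUTCString()); // RFC 1123 style, e.g. "Tue, 04 Apr 2017 12:00:00 GMT"
console.log(d.getTime());     // 1491307200000 (Unix time in milliseconds)
```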

Written by Prabhu Kumar

April 4th, 2017 at 11:31 pm

Posted in Uncategorized

To patch or not to patch

with 2 comments

I was reading a post on designing REST APIs when I remembered an interesting discussion I’d had a while ago while working on a feature. Warning: this might be mostly rant.

The feature in discussion here allowed the publisher of an extension to reply to reviews left by users on the extension’s product page. The publisher could only create and edit a reply; deleting had to be done via a support email to our team, though REST APIs requiring admin permissions were available to delete a reply. As you can notice, these nicely fall into CRUD operations. Now, the reviews feature had already been implemented and modeled using REST. How would you model or design the new reply feature in terms of REST?

Do you consider the reply a separate resource, and model CRUD operations on the new resource? Do you expect someone to HTTP GET a reply on its own? Does a reply make sense without the context of the review? Or do you treat the reply like a sub-resource with its own CRUD operations?

Or are you the kind that thinks of replies as a property of reviews? A review either has a reply or it doesn’t. Creating a reply would be like updating the review, so you do it via a PATCH review call. Editing a reply is pretty much the same, so that too is done via PATCH. In this scheme the PATCH payload carries instructions for the server: if the payload says update-reply, you either create or edit the reply; if it says delete-reply, you delete the reply associated with that review. But is this scheme ugly?
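A hypothetical sketch of what such a payload could look like (the operation names, field names and route are illustrative, not the real Marketplace API):

```javascript
// Build the instruction-style PATCH payload described above.
// "update-reply" creates or edits the reply; "delete-reply" removes it.
function buildReviewPatch(action, replyText) {
  var payload = { operation: action };
  if (action === "update-reply") {
    payload.reply = { text: replyText };
  }
  return payload;
}

// Hypothetical call: PATCH /extensions/my-awesome-extension/reviews/{reviewId}
console.log(JSON.stringify(buildReviewPatch("update-reply", "Thanks, fixed!")));
// {"operation":"update-reply","reply":{"text":"Thanks, fixed!"}}
console.log(JSON.stringify(buildReviewPatch("delete-reply")));
// {"operation":"delete-reply"}
```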

If you think of replies as a property, a call to get reviews for a product returns all the reviews, and each review object has a reply object if one exists. Do you think getting replies should be optional and driven by a request param? Something like GET /extensions/my-awesome-extension/reviews?filteroption=includeReplies

We went with the latter approach. What are your thoughts on this?

Written by Prabhu Kumar

March 27th, 2017 at 11:22 pm

Posted in Uncategorized

Ratings and reviews on VS Marketplace!

without comments

We’ve enabled a rating and review system on VS Marketplace for VSTS and VS Code extensions. Until now, the download count of an extension served as a proxy for estimating its quality, but no more!

You can see a 5-star rating on the extension on the Marketplace homepage. Note that ratings and reviews were already available for Visual Studio extensions; this enables them for VSTS and VS Code extensions as well.

Hovering over the stars shows you the exact rating and the number of people who have rated this extension.


Clicking on the extension takes you to the details page, where we show the average rating of the extension and the number of ratings on the banner,


If you look carefully, you can see that the color of the stars on the banner changes between orange-ish and red-ish based on the background color on which they are rendered. This is done so that the stars have nice contrast and can be seen clearly against the background. Here’s an example: red stars on a light background and orange stars on a dark background,



You can click on the stars to scroll down to the details section,


The detailed section consists of, as you can probably guess, the details of the reviews. You need to be logged in to leave a review; you can use your Microsoft account or any other AAD-backed account for this. The detailed section shows the picture and the display name associated with your profile. You can easily change these by clicking on your name at the top and then editing your profile details,



The name and picture you set here will be used in the review details section, so you have the control to change them anytime.

Clicking on the ‘Write a review’ button brings up the review submit dialog, (who’s excited about pink buttons! :-)


You need to provide a rating; that’s mandatory. The submit button will be disabled until you select a rating. The review comment is optional and you can choose not to enter any text, though I recommend entering it, as it helps the developer get more details out of the review and figure out what you like or dislike about the extension.

After you provide a rating and review comment, click on ‘submit’ and your review will magically appear in the details section!


If you see a review that’s offensive or just plain spam, use the flag icon on the review to report it. We have three categories that show up currently,


You can select the most relevant option while reporting a review. Our team will go through the reported reviews and take appropriate action based on their content.

That’s it for now, stay tuned for more!

You can read more about this here,

We’d love to hear any feedback, so feel free to leave a comment or ping me on Twitter at @prabhuk

Written by Prabhu Kumar

March 23rd, 2016 at 7:26 pm

Visual Studio 2015 and .NET 4.6 Released (and more!)

without comments


Big release day! Microsoft today announced the release of Visual Studio 2015, Visual Studio 2013 Update 5, TFS 2013 Update 5 and .NET 4.6.

Check out the blog posts below for more details on what’s new in the release.

The Visual Studio Blog

Soma’s blog post

ScottGu’s Blog

.NET Blog – Announcing .NET 4.6

Note that one big name missing from the list is TFS 2015; it’s still in RC2 and will be RTM’ed very soon.

Take the tools out for a spin, and if you have any feedback or suggestions, send them across using Send-a-Smile, UserVoice or the Visual Studio Connect site.

Written by Prabhu Kumar

July 20th, 2015 at 9:37 pm

Using SonarQube IntelliJ plugin for Code Analysis

without comments

SonarQube provides a plugin for IntelliJ (and Eclipse as well) which is a great tool for performing dev-box code analysis before committing or checking in your changes. It gives developers a chance to check and make sure they aren’t introducing any new defects or technical debt in the code they have added or modified. Here’s how to set up the plugin and get going.

Install SonarQube IntelliJ Plugin

  • Launch IntelliJ and go to File -> Settings -> Plugins
  • Search for ‘sonarqube’ and install the plugin


Setting up SonarQube plugin

  • In IntelliJ go to File -> Settings -> Other Settings -> SonarQube
  • Add details about the sonar server here. The plugin will use this to download the quality profile/analyzers etc.
  • This plugin executes the analysis in preview mode where no data is pushed to the server.


Associate your IntelliJ project with Sonar project

  • Right click on the project in IntelliJ and select "Associate with SonarQube…"
  • Search for the sonar project and select it


Running the analysis

  • Make your code changes
  • Right click on the project and select Analyze -> Run Inspection by Name…


  • In the search box type "Sonarqube" and select "SonarQube Issue" from the result list
  • In the "Inspection Scope" dialog, select Custom Scope and set its value to Changed Files. This will ensure that the analysis is run on the files modified by you.



  • The plugin will run the preview analysis and display the results in the inspection tab. Here, the inspection shows issues in two files that were modified before the analysis.


Written by Prabhu Kumar

July 14th, 2015 at 2:42 pm

Branch Policies in Visual Studio Online

with 18 comments

Branch policies in VSO allow you to set certain rules on branches in your Visual Studio Online Git repos. They are more or less like the gated check-ins which TFS has had since forever. Visual Studio Online supports the policies below by default:

  • Changes must be submitted to a branch only via Pull Requests
  • A build must complete successfully before changes can be merged to the destination branch
  • Add certain reviewers if the pull request modifies files in certain paths in the repo

To learn more about the complete workflow involving pull requests on Visual Studio Online, check this excellent post

Setting up branch policies

To set up branch policies, log in to your Visual Studio Online account and navigate to the team project which has the Git repo you want to set the policy on. You will need administrator privileges on the team project to set up the policy. After you’ve navigated to your team project, click on the settings wheel icon in the top right corner; this will take you to the admin panel of your team project. Select the ‘version control’ tab, on the left rail select the branch you want to set the policy on (the master branch in my case), and click on the ‘branch policies’ tab. Refer to the figure below; the click points are highlighted in yellow.


Under ‘Automatically build pull requests’, select both check boxes. You will need to provide a build definition here, which VSO will queue every time a pull request is submitted or updated with a new commit. The second check box, ‘Block the merge’, is actually optional; if you want to allow the merge even on a build break, you can uncheck it, though I’m not sure why you’d want to do that.

The next section, ‘Code review requirements’, allows you to control how changes can be submitted to the master branch. The first check box, ‘Require code reviews using pull request’, ensures that any changes to master happen only via pull requests and no one is able to push their changes directly to master. ‘Allow users to approve their own changes’ allows you to add yourself as one of the reviewers and approve your own changes, which is nuts really :-)

The last option, ‘Add a new path’, enables you to add reviewers optionally depending on the files involved in the commits, for scenarios where you really want Dave C to look at the changes if anyone modifies files under \kernel\base. It supports wildcards as well.

After the policies are set, when someone tries to push their changes directly to master, they see this:



So what you need to do now is move your changes to a feature branch, push that branch to the server, and create a pull request to merge the changes to master. This workflow is explained in the link shared above.
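Sketched as commands in a scratch local repo (repo contents and the branch name are illustrative; the final push and the pull request happen against your VSO remote):

```shell
# Local sketch of the feature-branch workflow used with branch policies.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "hello" > app.txt
git add app.txt
git commit -qm "initial commit"

# Instead of committing directly to master, do the work on a feature branch:
git checkout -q -b fix-syntax-error
echo "fixed" >> app.txt
git commit -qam "fix the syntax error"

# Push the branch and open a pull request into master (requires a VSO remote):
#   git push -u origin fix-syntax-error
git rev-parse --abbrev-ref HEAD
```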

Now let’s say the feature branch was pushed and a pull request was created, but the change has a silly syntax error. You will see the branch policies show up in the right rail, and a build will be queued for verification.



Since the pull request had a syntax error, the build will fail and attempting to merge the changes to master will be blocked.



The next step is to fix the build failure, add a commit to the pull request and make sure you have at least one approval from reviewers. As soon as you push your local branch to server, a new build will be queued automatically and the status updated.


To enhance this even further, you can improve your build definition by adding, say, a unit test build step and a code analysis build step, to ensure that all unit tests pass before the pull request can be accepted and merged into master.

Written by Prabhu Kumar

July 13th, 2015 at 8:53 pm

Setting up an on premise build agent with Visual Studio Online

with 3 comments

In this post we’ll look at how to configure an on-premise build agent to work with your Visual Studio Online account. If you haven’t given Visual Studio Online a try yet, I suggest you head over, sign in using your Microsoft account, create a free VSO account and take it for a spin! If you’re new to VSO, these Channel 9 videos should help get you up to speed.

Create a team project and add sample code

For our purposes here, we’ll create a new team project, add a simple Windows Forms application to it, and set up the build agent to build this project. After you log into VSO, select the ‘New’ link under the ‘Recent projects and teams’ section to create a new team project. I selected the Agile process template and Git as the underlying version control, but these aren’t necessary for our discussion here; you can try out other combinations as well.


Navigate to the newly created team project and select ‘Open with Visual Studio’. This should launch Visual Studio and might ask you for credentials to connect to VSO. Clone the repository and add a new solution to the repo, commit the changes and push them to the server. The project should now show up in the VSO portal.



Configuring the build agent

For the demo here, I’ll be using my trusty Lenovo ThinkPad as the build agent. Log into your VSO account and select the gear icon at the top right of the screen to go into your account settings. Click on ‘Control Panel’ and select ‘Agent Pools’.

Click on ‘Download agent’ to download the VSO agent binaries and the scripts to configure the agent. Save it to your favorite location on disk and extract it (e.g. to c:\agent). You might need to unblock the zip file before extracting it: just right-click the zip file, select Properties and check the ‘Unblock’ box. The zip file contains the binaries for the build agent and also a PowerShell script, ConfigureAgent.ps1, which will help you set up the machine as a build agent.


Open a PowerShell prompt, navigate to c:\agent and run ConfigureAgent.ps1. This will launch a command prompt window and ask you for details about the agent; here’s what to select:

Name for this agent: leave as default or give a fancy name

URL for the Team Foundation Server: your VSO account URL

Configure this agent against which agent pool?: default

Work folder path: leave as default

Install the agent as Windows Service: Y


Your build agent is now set up, and if you go to the account settings and refresh the agent pool, you should see the new agent listed under the default pool.


And that’s it! The build agent is now ready, we just need to create a build definition which will use this agent to build our project.

Note: The last step in the build agent setup, installing it as a Windows service, is optional. But I couldn’t get the project to build when the agent wasn’t installed as a Windows service. I will investigate this and update later.

Create a build definition

Go to the Build tab of your team project and click on + to create a new build definition. You can start from an empty build definition; add the ‘MSBuild’ task to the definition, browse and select the solution you want to build.


Under the ‘general’ tab, choose the default queue. Remember this is what we used while creating the build agent.


Give the build definition a name and save it. Select ‘Queue build…’ after this to trigger a build. Within a few seconds you should see the build running. If you want to publish the build artifacts to a file share or any other location, you can add another task, ‘Publish Build Artifacts’, to the build definition and set it up accordingly.


Written by Prabhu Kumar

July 7th, 2015 at 1:36 am

Posted in Development

Tagged with

Creating a simple java web app using IntelliJ IDEA and setting up remote debugging

with 2 comments

I had to get this setup up and running at work, and thought it’d be a good idea to jot it down here. The first step is to install the IntelliJ IDE. I installed the Ultimate edition, which has a free 30-day trial, but the steps below should work well with the free Community edition too. We’ll be hosting the app on a Tomcat server (running on a remote machine), so go ahead and install that as well; I installed version 8 using the Windows service installer. And of course, since you’re developing a Java app, make sure you have the JDK installed.

Launch IDEA and create a new project; we’ll call it SimpleJavaWebApp. Select Java Enterprise and Web Application. Make sure the project SDK is set correctly and the application server is set to the version of Tomcat you installed.


Let’s add a Java servlet to the project. Right click on the src folder in project explorer and select New –> Servlet, give the servlet a name and add it to the project.


Open the servlet file and copy the code below into the doGet() function,
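A hypothetical sketch of that code, with the servlet plumbing stubbed out so the rendering logic runs standalone (the class name and heading text are assumptions; in the real servlet the PrintWriter comes from response.getWriter() inside doGet(HttpServletRequest, HttpServletResponse)):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class SimpleServletSketch {
    // Renders the HTML the servlet's doGet() would write to the response.
    static String renderResponse() {
        StringWriter buffer = new StringWriter();
        // try-with-resources (Java 7+) -- this construct is what triggers the
        // "-source 1.6" error mentioned below if the language level is too old.
        try (PrintWriter out = new PrintWriter(buffer)) {
            out.println("<html><body>");
            out.println("<h1>Hello from SimpleJavaWebApp!</h1>");
            out.println("</body></html>");
        }
        return buffer.toString();
    }

    public static void main(String[] args) {
        System.out.print(renderResponse());
    }
}
```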

If you are seeing an error which says “java: try with resources is not supported in -source 1.6”, go to the project properties by right-clicking the project and selecting Open Module Settings, select Project on the left rail and change the Project Language Level to 8.

Let’s modify index.jsp to add an entry point to our servlet,
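A minimal hypothetical index.jsp for this (the name NewServlet is an assumption and must match the servlet you created):

```jsp
<html>
  <body>
    <h2>SimpleJavaWebApp</h2>
    <%-- The href must match the url-pattern registered in web.xml --%>
    <a href="NewServlet">Invoke the servlet</a>
  </body>
</html>
```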

Modify the web.xml file and put the servlet configuration below in it; the URL pattern is case sensitive, so make sure it matches your servlet name exactly,
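A hypothetical version of that configuration, where NewServlet stands in for your servlet’s actual name and fully qualified class:

```xml
<!-- Servlet declaration and mapping; the url-pattern is case sensitive -->
<servlet>
    <servlet-name>NewServlet</servlet-name>
    <servlet-class>NewServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>NewServlet</servlet-name>
    <url-pattern>/NewServlet</url-pattern>
</servlet-mapping>
```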

Go to Build –> Rebuild Project and make sure the project builds fine. Let’s now package our application in the WAR format (Web application ARchive) and deploy it on a machine running Tomcat.

Right click on the project and select Open Module Settings, click on Artifacts on the left rail and select + to add a new artifact type. Click on Web Application : Archive and select the project name.


Now when you build the project you will find file SimpleJavaWebApp_war.war generated under \SimpleJavaWebApp\out\artifacts\SimpleJavaWebApp_war folder.

Let’s deploy our app now. Go to the machine where you installed Tomcat (it could be the same machine too), and copy the WAR file above into the webapps folder under the Tomcat installation directory. For me the path is “C:\Program Files (x86)\Apache Software Foundation\Tomcat 8.0\webapps”. To make sure your app is working as expected, navigate to http://localhost:8080/SimpleJavaWebApp_war/ and check that the web page loads correctly.


So the bulk of the work is done. We’ve created a simple Java web app, added a Java servlet to it, deployed the application on Tomcat and made sure that the servlet code is invoked correctly. We’ll now look at how to remotely debug this app. This is useful in cases where you have the application running on a server, and your code and source enlistment are on a different machine.

To get remote debugging working, we need to instruct Tomcat to start the JVM in a “debug” mode and then attach to the JVM from IDEA.

Open the Tomcat server properties, go to the Java tab and add the entry below under Java Options (make sure you add it on a new line),
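Assuming the standard JDWP setup (and port 1043, which the rest of the post uses), the entry is the usual debug-agent option:

```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1043
```

With server=y the JVM listens for a debugger on that socket, and suspend=n lets Tomcat start normally without waiting for the debugger to attach.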



Restart the server and check that you can access SimpleJavaWebApp from a remote machine. I set up the server and deployed the WAR file on a different machine and navigated to the URL below to check,


We now need to create a debug configuration in IDEA to connect to this machine. Go to Run –> Edit Configurations…, click on the + icon and add Tomcat Server –> Remote configuration. Make sure you specify the host IP address correctly. You can also modify the ‘Open Browser’ option so that the Java app launches when you start debugging.


Switch to ‘Startup/Connection’ tab and set the TCP port to the one you used while setting up Tomcat, 1043 in this case,


Save the debug configuration and set a breakpoint in the doGet() function in the servlet file. Now start debugging. You should see the web browser launch, and when you click on the link to invoke the servlet, your breakpoint should be hit.


In case you see an error in IDEA which says ‘unable to connect: connection refused’, you might need a firewall exception for incoming connections on port 1043 (and 8080 too). So go to Windows Firewall settings and create an inbound rule on TCP port 1043 to allow incoming connections, and that should fix the problem.

Written by Prabhu Kumar

March 29th, 2015 at 4:48 pm

Hello VS!

without comments

After having worked for AppEx (a.k.a Bing Apps or MSN Apps) for over 3 years, it’s time for me to move on. We shipped the MSN Sports app on pretty much every platform over the last few years, and it was fun all along! Met some great folks in the team, made some amazing friends and got to learn a lot from some of the smartest out there. The next chapter begins at Visual Studio Online, with an exciting new project! It’s going to be hardcore tech and I’m really looking forward to all the fun :-)

Also, this blog could use some updates once in a while ;-)

Written by Prabhu Kumar

March 22nd, 2015 at 10:16 pm

Posted in Uncategorized