Wednesday, November 23, 2011

Slide deck from my Newcastle SharePoint User Group presentation


Friday, September 9, 2011

@Pluralsight has some fantastic online training material

When I went to Tech Ed 2011, I got a free month's access to @PluralSight developer training. From what I have watched so far, I would say this is some of the best online material I have seen. It has been incredibly useful in improving my knowledge of jQuery this week, and has also acted as a reminder of how to do development in a SharePoint 2007 environment.

I will definitely be subscribing when my free month is up.

Sunday, August 21, 2011

How I built the CSS Mega Menu #MindManager Macro

I previously posted on how I am using MindManager to generate a CSS3 based Mega Menu.

Trying to find information on creating macros in MindManager seems to be a little difficult, so I am going to explain how I did it. It is not really very complex, but if you want to try it yourself, this will set you on the path.

I am using MindManager 9 from Mindjet. If you don’t have it already, you can download the trial from here.

The first thing to do when you open MindManager is to go to File > Options and turn on the Developer ribbon tab.


From the Developer tab in the ribbon, we can then open the Macro Editor


MindManager uses SAX Basic for its macros, so if you are a C# developer like me, you’ll have to dust off your basic programming skills.

Let’s review what we want this macro to do. I want a mind map with three levels (plus the central topic) which will represent all of the navigation items in my Mega Menu.


Our goal is to generate an HTML page which looks like this.


So first, let’s look at what the HTML code needs to look like. We need to include the mega menu style sheet menu.css. I got this from here:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="stylesheet" href="menu.css" type="text/css" media="screen" />
<title>Mega Drop Down Menu</title>

I create <li> and <div> tags to encapsulate the level 1 topics, and wrap the third level items in an unordered list. Here is a part of the markup:

<ul id="menu">
  <li><a href="#" class="drop">One</a>
    <div class="dropdown_5columns">
      <div class="col_1">
        <h3>One One</h3>
        <ul>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
        </ul>
      </div>
    </div>
  </li>
  <li><a href="#" class="drop">Two</a>
    <div class="dropdown_5columns">
      <div class="col_1">
        <h3>Two One</h3>
        <ul>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
        </ul>
      </div>
      <div class="col_1">
        <h3>Two Two</h3>
        <ul>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
          <li><a href="#">Sub Menu</a></li>
        </ul>
      </div>
    </div>
  </li>
</ul>

I am going to hard-code the location of my output folder. I am sure you could parameterise this on the map, but I am being lazy here. The location is going to be c:\projects\Ozippy\ and the output file is going to be MegaMenu.html.

I am not going to explain all of the Basic syntax, so if you aren’t familiar with it, you’ll need to refer to the SAX Basic documentation.

The main method is going to set up the output file and name it the same as the central topic in the map. Note that we use this syntax to get the Central Topic:

'Get the central topic of the current map as the starting point.
Set rootTopic = ActiveDocument.CentralTopic


I have two methods which write the header and footer of the HTML file.



Now I iterate through the level two and level three topics writing out the HTML as I go.

Note that I get all of the subtopics using this syntax:

For Each levelTwoTopic In levelOneTopic.SubTopics
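The macro itself is in SAX Basic, but the generation logic is simple enough to sketch in any language. Here is the same idea expressed in Python purely for illustration (this is not the actual macro; the data structure stands in for the mind map topics):

```python
import io

# Hypothetical sketch of the macro's generation logic: each level-one
# topic becomes a drop-down, each level-two topic a column heading,
# and each level-three topic a list item.
def write_menu(menu, out):
    out.write('<ul id="menu">\n')
    for level_one, columns in menu:
        out.write('<li><a href="#" class="drop">%s</a>\n' % level_one)
        out.write('<div class="dropdown_5columns">\n')
        for level_two, items in columns:
            out.write('<div class="col_1">\n<h3>%s</h3>\n<ul>\n' % level_two)
            for level_three in items:
                out.write('<li><a href="#">%s</a></li>\n' % level_three)
            out.write('</ul>\n</div>\n')
        out.write('</div>\n</li>\n')
    out.write('</ul>\n')

# The example map: "One" has one column of 7 items, "Two" has two columns
menu = [
    ("One", [("One One", ["Sub Menu"] * 7)]),
    ("Two", [("Two One", ["Sub Menu"] * 4),
             ("Two Two", ["Sub Menu"] * 6)]),
]

buf = io.StringIO()
write_menu(menu, buf)
html = buf.getvalue()
```

The nested loops mirror the For Each iteration over SubTopics in the macro.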


And that is it. Run your macro and generate your mega menu html file. As long as you have your menu.css in the same folder as the HTML file, you should be able to launch it and see the results.

I should note that the CSS3 style sheet allows you to format the menu in many ways. I have just selected the simple five column option. If that doesn’t suit, you can adapt it accordingly.

Friday, August 19, 2011

Using MindManager to generate a CSS3 based Mega Menu

Over the years I have come to recognise how really important Information Architecture is to the success of Intranet and Internet sites. For quite some time now I have been a big fan of using the Mega Menu approach to navigation. Jakob Nielsen acknowledges that Mega Menus can work well if designed correctly.

Many organisations use this type of navigation now. See some examples here:

One of the biggest challenges is getting people to agree on what the menu should contain and how it should be structured. I love MindManager from Mindjet and use this tool all the time when doing workshops or even just designing things myself. It is a great way for people to visualise how the menu structure should be created and makes it easy to change as we discuss things. However, there is nothing like actually visualising the mega menu in a browser and being able to interact with it. So I have written a macro for MindManager which will take the navigation structure and generate an HTML page which displays it as a Mega Menu.

I have used this tutorial as the basis for the mega menu which does everything with CSS3. No need to use jQuery or anything else at this point.

So, I can create a MindMap that looks like this:


Run the macro and generate an HTML file which looks like this:


So as I am working with the people making the decisions, I can regenerate the menu whenever I need so that we can see how it looks.

When I have it right I can then integrate it into my site and refine the presentation and style to suit.

Wednesday, August 3, 2011

Getting MSMQ configured for K2 on a domain controller

I was installing K2 on a domain controller development environment and the instructions for getting MSMQ configured didn't quite work. K2 kept complaining that MSMQ was not configured correctly.

This post explains how to configure MSMQ on a domain controller:

Thursday, July 7, 2011

Why is #SharePoint 2010 still sending Added Colleague notifications?

I was looking into an issue as to why SharePoint was sending the ‘added you as a colleague’ notification even though the SPS-EmailOptin user profile property was disabled. This property controls whether the user receives notifications when people add them as a colleague.

My investigation turned up a couple of interesting things:

  • The code that sends the email is NOT a timer job. It is sent inline when you press the OK button to add the colleague. (The code is in the Microsoft.Office.Server.UserProfiles namespace)
  • If the property is disabled, the email will always be sent irrespective of whether the user profile has a value for that property.

So, I suggest you don’t disable the property unless you always want this email to be sent.

Wednesday, June 29, 2011

Would you like to save users 1,000+ hours per year?


If this grabbed your attention, then in keeping with the theme ‘Time and Energy matters’, this post explains why it is so important to performance test and optimise your SharePoint environment.

How about getting the load time for the home page of your intranet from an average 5 seconds to less than 1 second?


Having worked with SharePoint for many years, I have done a number of performance testing engagements. With my most recent engagements, I thought it might be a good idea to share some of my thoughts and experiences, with the hope that it will help others to improve the performance and reliability of their solutions to benefit the users of the solutions.

Why would you be concerned about performance testing?

Performance testing has many aspects to consider, as you’ll see, but what happens if you don’t performance test your solutions at all?

First and foremost, I would suggest that if the users of the solution have a poor user experience relating to speed and availability, it will have a number of outcomes:

  • Poor perception of your solution (and maybe you)
  • Frustrated users
  • Potentially lower uptake and sustained usage
  • Loss of energy and time for users and your business


Let’s look at a very simplistic hypothetical scenario.

If I have 5000 users and on average each user opens the Intranet home page once per day to look for some information and it takes 5 seconds to load the home page, then the total energy expended is 25,000 seconds or 6.94 hours per day. (just for 1 page load!)

So if I can cut that page load time down to 1 second, the total is 5,000 seconds, or 1.39 hours per day.

So that equates to a saving of 111 hours every four weeks for a single page load!
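The arithmetic above can be reproduced in a few lines:

```python
users = 5000
loads_per_day = 1  # one home page load per user per day

def hours_per_day(page_load_seconds):
    # Total time all users spend waiting on this one page load, per day
    return users * loads_per_day * page_load_seconds / 3600.0

slow = hours_per_day(5)      # 5-second load: about 6.94 hours per day
fast = hours_per_day(1)      # 1-second load: about 1.39 hours per day
saving = (slow - fast) * 20  # about 111 hours over 20 working days
```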

Clearly you would be aiming for far more than one page view per person per day, so you can see that there might be very significant savings to be made by investing some time and energy into making sure that the site performs as well as possible.

Of course it is not all about time saving. I suggest we should be aiming to ensure that the user perception of performance is very good. The happier the users are with the system, the more chance there is they will use it and get benefit from it, and consequently the more value your organisation will receive from its investment.

Technical issues

I have seen numerous technical issues uncovered through performance and load testing including:

  • 3rd party or custom web parts with memory leaks which cause frequent IIS app pool re-cycles
  • Infrastructure issues such as load balancer failures and server failures.
  • Badly performing web parts under load
  • Memory leaks in code
  • Server configuration issues


The path to better performance



These are the tools I use in a performance testing exercise. Everyone seems to have their favourites, so you might substitute some of them with an alternative.

  • Firefox: used with various plugins
  • Fiddler2: examining HTTP traffic to/from your workstation
  • Y-Slow: I use this in particular to look at the page payload with and without a primed cache
  • SharePoint Developer Dashboard: very useful for seeing how long SharePoint is taking to load a page and what it is doing under the covers
  • Selenium: used for executing UI tests; this can drive browsers through C# and other languages
  • Internet Explorer 8 or 9: the JavaScript debugger can be useful, although you can use Firefox for this
  • Visual Studio 2010 Ultimate: this edition provides the web testing tools for load testing

I would suggest that we break down the path to better performance into two sections:

1. Individual page load time: looking at the payload of the page, caching options, custom code and database queries.
2. Performance and reliability under stress: what happens to the page load time, resource utilisation and reliability when the farm is under load?


Individual Page Load

I recently went through two different performance testing exercises for new Intranets. In both cases, the target audience was more than 5000 people, geographically dispersed.

I undertook performance tests in a variety of ways using the tools mentioned above. I saw very inconsistent results, with the page load time ranging from 2 seconds to 50 seconds. Clearly something was not right with our farm.

Page payload and caching

The first thing I did was to look at the payload of the page. What is being loaded, how long is it taking and what is being cached.

First, I used Y-Slow to determine the size of the page with and without a primed cache. This shows what will be loaded the first time by the browser and what will be loaded on subsequent visits after the cache is primed. I did notice that the primed cache page load size was bigger than I would have expected.

I used Fiddler to examine this further. The actual content of the page was not really a problem; I didn’t see any really large images that might cause an issue.

Two things I did notice though were:

  • Whenever I pressed F5 on the browser, I saw a lot of HTTP 304 responses from the server. This is normal behaviour, but is quite different from when I just click a link on a page. Basically the browser is confirming with the server that the objects have not changed since the last time they were cached.
  • I noticed that some images that I would have expected to be cached were being served on every load and I was seeing an HTTP 200 message. When I examined these requests further, the items were being served from SharePoint libraries and for some reason had a cache expiry set in the past. I never worked out why SharePoint was doing this. However, when I enabled blob caching on the SharePoint server, this problem went away and those items were no longer being assigned the cache expiry headers. (Subsequently, this issue has re-appeared for some images so I am looking into it further.)
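The ‘cache expiry set in the past’ symptom is easy to check for once you have the response headers in Fiddler. A quick sketch using Python’s standard library (the header value below is made up for illustration):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def is_already_expired(expires_header, now=None):
    """Return True if an HTTP Expires header is in the past,
    which forces the browser to re-request the item every time."""
    now = now or datetime.now(timezone.utc)
    try:
        expires = parsedate_to_datetime(expires_header)
    except (TypeError, ValueError):
        return False  # missing or unparseable header: can't say it's expired
    return expires <= now

# A 2011 date is long past, so this item would never be served from cache
print(is_already_expired("Wed, 01 Jun 2011 00:00:00 GMT"))  # True
```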

Then I thought, perhaps I should look at the page output cache in SharePoint to try to further reduce the page load time. This allows the SharePoint server to cache the page and not reassemble it for every page load, thus cutting down on the server resources required.

One gotcha with this is if you have personalised content on the page. On the home page for one organisation, I did have personalised data from their Newsfeed. Once I enabled the page output caching, this stopped working, so I had to switch it off again. The results of using page output cache would be more prominent in load testing than individual page loads.

I also used the Developer Dashboard to see what is happening server side during the page load and where the most work is being performed.

Asynchronous loading of data

Something to consider is that one of the most important things is the user’s perception of the page load time. If you have a lot of processing on a page which will block it from loading immediately, consider whether you can load some of the content asynchronously. In other words, let the page load and then populate some web parts and controls asynchronously, so that the user sees the page very quickly and only has to wait if they need the data that is loaded asynchronously.

Some examples of this might be calls to other systems for data or calls for data that is initially hidden by a tab or some other graphical element.

Some SharePoint 2010 web parts have an option to automatically load data asynchronously without any coding.

Fiddler can be very useful for helping to identify where a page load is being blocked by some call for data.


So let’s say that we have got our page load to a pretty good level. Let’s also assume that we are on a very fast link to the data centre. What happens for users that are on the other side of the world? This could probably turn into a long discussion about web acceleration and lots of other technology options. For this post, let’s just focus on the implications.

When I was testing a SharePoint solution hosted in Australia from client workstations in South America and the UK, it really drove home to me what the impact of the number of requests and latency is. For those locations, the latency was 300ms or more. That means there is a 0.3 second delay for EVERY request/response. If your page requires 100 requests because of all of the items on it, it might take a minimum of 30 seconds to load.
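The latency arithmetic is worth spelling out:

```python
latency_ms = 300        # round-trip latency from South America / UK to Australia
requests_on_page = 100  # items the browser must fetch for the page

# Even ignoring download time, and pessimistically assuming the requests
# are issued one after another, latency alone sets a floor on load time.
# (In practice browsers issue several requests in parallel, but latency
# still dominates on high-latency links.)
minimum_load_seconds = latency_ms * requests_on_page / 1000
```

This is why cutting the number of requests (sprites, combined scripts) matters far more for remote users than raw bandwidth does.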

I started looking at a product called Aptimize which dynamically optimises pages with CSS sprites, reorganises JavaScript, compression etc. on the fly. It looks good, but I have not tried it in a production scenario yet. It does have some interesting tools and does provide a lot of insight though.

TIP: While doing some performance analysis from those international locations, I noticed that when using multi-lingual variations, you hit the root URL for the site, are then redirected to variationroot.aspx and then get redirected to the home URL for the language you are configured for. These redirects were taking up to 5 seconds to occur before the client even started to render the page. Therefore, if possible, set the default page URL in the browser at the locations to the full address such as http://sharepoint/en/pages/default.aspx rather than just http://sharepoint.

Why were we experiencing such inconsistent performance?

So although we now had the files caching as expected, we still had very inconsistent performance. I started to isolate individual web front end servers to rule out the load balancer. This was done simply by changing the hosts file on the client to point to a specific server rather than going through the load balancer. What I discovered was that one server worked quite well and consistently, and the three other servers did not.

With no errors being reported in any logs, we decided that perhaps it was the network. So we ran some file transfer tests between servers and monitored the throughput and performance. What we discovered was that the file transfers would ‘pause’ and restart sporadically.

It turned out that the servers had dual NICs with ‘teaming’ software installed. This was causing network issues on the servers and once we disabled the teaming, performance became consistent. I can only assume this is some sort of issue in the teaming software.

Another thing to consider is whether the network adapters are set to auto sense the speed. It is better to set them to a specific speed to ensure that you are getting maximum throughput.

Custom caching

So now we had consistent page load times of about 2.5 seconds for the home page. Not bad by my reckoning. However, given that this home page will be loaded every time a user opens a browser, I wanted to see what else we could do to improve the performance even further.

We looked at a number of the web parts on the home page and considered how often the data would need to change. Many of them would not change that frequently, but we didn’t want to only rely on the page output cache which typically refreshes every 3 minutes by default.

So we subclassed a number of web parts including the content queries and cached the information in the HttpRuntime.Cache.

Doing this allowed us to get the primed cache load time down to approximately 0.6 seconds!
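HttpRuntime.Cache is .NET-specific, but the underlying pattern, caching expensive query results with an absolute expiry, is general. Here is a minimal Python sketch of the idea (class and key names are illustrative, not from the actual solution):

```python
import time

class SimpleCache:
    """Toy equivalent of caching web part data with an absolute expiration."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict and force a refresh
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def clear(self):
        self._store.clear()  # the "reset cache right now" operation

cache = SimpleCache()
cache.set("news_items", ["item1", "item2"], ttl_seconds=300)
```

The web part checks the cache first and only runs its query on a miss, which is what took the page load from 2.5 seconds down towards 0.6.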

This does of course introduce an issue: what happens when I must update the cached information right now?

The first answer we came up with was to intercept a query string of resetCache=1 in a control on a page and then remove the objects from the HttpRuntime cache. This worked well until we realised that in our medium farm, we have multiple WFEs and the current solution would only reset the cache on a single server.

So we needed a solution which would allow us to bypass the load balancer and go to every WFE and reset the cache.

The way we solved this was to write an Application page which would use .Net to get a list of all of the web servers in the farm and then call the resetCache on every server. This works well.
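The idea can be sketched as follows: address each web front end by machine name to bypass the load balancer, and hit the reset URL on each one. The server names and page path here are hypothetical; the real implementation was an application page using the SharePoint API to enumerate the farm's web servers.

```python
from urllib.request import urlopen

def build_reset_urls(servers, page="/Pages/Default.aspx"):
    """One reset URL per web front end, addressed by machine name
    so that the request does not go through the load balancer."""
    return ["http://%s%s?resetCache=1" % (server, page) for server in servers]

def reset_all_wfes(servers):
    for url in build_reset_urls(servers):
        urlopen(url)  # each WFE removes its cached objects on seeing resetCache=1

# Hypothetical three-WFE farm
urls = build_reset_urls(["wfe1", "wfe2", "wfe3"])
```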

Performance and reliability under stress

So now we have the individual page load working the way we want, it is time to load test the farm to make sure that the day we launch it, it won’t fail or have such bad performance that people will be disappointed with the site.

There are lots of posts and information about load testing sites out there, so I won’t try to re-write those. I will simply provide my perspective and experiences.

We used Visual Studio 2010 Ultimate which includes the web testing facilities to perform the load tests. You can download a trial edition if you want to give it a try.

I think some people get a bit confused about some terminology and what they actually want to test with a load testing exercise.

Let me suggest that initially with our test we want to put our infrastructure under as much stress as possible to identify any weak points.

Visual Studio 2010 Ultimate allows a maximum of 250 virtual users. Often the workstation you run it on may not have enough capacity to simulate that many users. You might find the CPU, RAM or network adapter simply can’t pump out enough requests to stress the environment adequately. So you might consider running multiple agents or simply run VS2010 on multiple machines at the same time.

You need to make sure that you record the performance counters for all of the servers in your infrastructure to understand where bottlenecks might occur. You should also watch for application pool re-cycles, which usually show up as server memory utilisation rising until the app pool recycles, at which point it drops suddenly. This is often caused by a memory leak in a web part or some other code.

I suggest testing for at least 10 to 15 minutes at a time. Don’t use think time when you configure your load test. What we want to understand is what the max requests per second are that our farm can deal with and still respond within an acceptable time frame.

Bill Baer’s post on RPS is quite useful. I used it to estimate how many requests per second our farm needed to handle to deal with the expected peak load for 5000 users. The estimate worked out to about 80 RPS in our case.
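A back-of-the-envelope version of that estimate looks something like this. The numbers are illustrative assumptions chosen to land near our ~80 RPS figure, not Bill Baer's exact method:

```python
users = 5000
active_fraction = 1.0          # assume everyone hits the site at the launch-day peak
seconds_between_requests = 60  # assume each active user clicks roughly once a minute

# Peak requests per second the farm must sustain under these assumptions
peak_rps = users * active_fraction / seconds_between_requests  # ~83 RPS
```

Comparing this target against the RPS-vs-response-time threshold found in load testing tells you whether you have head room.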

Another consideration is to use multiple users during your test. You can link a csv of user names and passwords to the test. This can also be important, because page and caching behaviour may be different depending on the permissions the users have to the site.

So in our first test when we hammered the servers as hard as possible we broke the load balancer. This was a good result. Much better to do it now than on launch day!

Once that was resolved, we tweaked our load tests and then started to build up the number of ‘users’ to work out where the threshold of RPS vs. page load time was. It worked out in this case that it was about 100 RPS with a 3 second page load. This means that we should be able to cope with the expected peak load and have some additional head room.

We also reviewed the performance of our SQL environment. Two changes we made to the SQL configuration were:

  • Create one TEMPDB data file per processor core
  • Increase the RAM to 48GB

After doing this, we re-ran the tests and saw another increase in performance.

Client Testing (including Javascript)

It is not necessarily obvious that Visual Studio does NOT execute JavaScript during a web/load test; it is just sending and receiving HTTP requests.

If you want to do some UI testing where JavaScript does execute, one way of doing this is to use Selenium. This is something I have only tried recently, but it works well. There are a number of ways you can use it. I started with the Firefox plug-in, where you can record and play back a scenario. Then I moved to using the .Net API to drive Firefox/IE/Chrome. It is definitely something I will continue to look into in the future.

I heard Chris O’Brien talking about Hammerhead on the SharePoint Pod Show which also allows you to measure page load times including running JavaScript.


Every project and scenario is different, so your experience and process might differ from mine, but my message to you is that it is really important to consider testing and optimising your SharePoint solution. Ensure that users’ perception of the solution’s performance does not get in the way of adoption; respect your users’ time and energy and don’t waste it unnecessarily.

Useful links:

SharePoint Pod Show episodes

Friday, June 24, 2011

Interesting perspective on using Data Visualisation

I enjoyed this video on Data visualisation. It gives me some ideas for what we might do with SharePoint.

David McCandless: The beauty of data visualization

Wednesday, June 22, 2011

Mindjet have re-launched their DevZone


Having been a long time user and fan of MindManager from Mindjet, it is great to see that they have re-launched the DevZone and support for developing add-ons for MindManager.

Tuesday, June 21, 2011

SharePoint 2010 SEO Analysis with the IIS SEO Toolkit | Tristan Watkins on IT Infrastructure

This seems to be a very useful tool for crawling sites and getting insights into broken links and other issues. Just tried it against the dev site in my VM, and when it had created 1GB of data I decided to stop. I’ll try it again on a smaller site.

SharePoint 2010 SEO Analysis with the IIS SEO Toolkit

by Tristan Watkins

The IIS.NET Search Engine Optimization (SEO) Toolkit provides a powerful analysis tool that can generate reports for web editors and can automatically generate sitemaps and robots.txt files as well. These reports not only provide insight into page rank improvements but also help content editors identify missing/duplicate content and find broken links. This post provides an overview of how the tools can be used by content editors or web managers who do not have access to the server infrastructure, and what you can expect to see when running an SEO Analysis against an out of the box SharePoint 2010 Publishing site.


“The small things matter”

A colleague of mine pointed me at this video from Nigel Marsh: a short and entertaining presentation which describes nicely, I think, some of the challenges we face and the opportunities we have to control our own lives at work and at play. My ex-boss, David Shein, always used to quote Jack Welch, who said, ‘Control your own destiny or someone else will’.

Nigel Marsh is the bestselling author of "Fat, Forty and Fired" and "Overworked and Underlaid" and the Regional Group CEO of Young and Rubicam Brands for Australia & New Zealand. Finding the balance between work and life is an ongoing battle.

Monday, June 20, 2011

Do we make the most of our time for things that matter?

Ric Elias talks about his near death experience. Time and energy matter...

Chief Detail Officer

I really enjoy the presentations that Rory Sutherland does on TED. He has some very insightful comments and is funny too.

In this video, the thing that stands out to me is the idea that if you don’t have a big budget and big projects, there are lots of small things that we should be looking at to have a big impact. He suggests a ‘Chief Detail Officer’ to be responsible for looking at these things.

I also love his analogy of Tiger Woods with consulting.
Rory Sutherland: Sweat the small stuff

Wednesday, May 18, 2011

Learnings about upgrading elements with SharePoint 2010

Doing the right thing and deploying all of my project elements as features and solutions has led to some interesting behaviours when trying to change certain items such as:

  • Master Pages
  • Page Layouts
  • Content Types
  • CSS
  • Javascript

I experienced behaviours such as changes to Master Pages and page layouts not appearing on my site, or content types not updating in lists even though I had updated the site level content type.

The following summarises some of my learning while looking into why these things were happening.

Let’s start with some basic concepts


Ghosting and unghosting is now referred to as Customizing and Uncustomizing.

Basically it means that if I deploy an item via a solution onto the SharePoint file system, I then create an entry in a list such as the Master page Gallery which references that item on the file system. So it acts like a pointer.

When I deploy an update to the file on the file system, it is automatically picked up by all of the entries that point to it. This is what ‘Ghosted’ or ‘Uncustomized’ means.

As soon as you edit the file through the SharePoint UI or SharePoint Designer, SharePoint breaks the pointer to the file on the file system and copies the file to the database. Now the file is ‘Unghosted’ or ‘Customized’.

If there are changes to the file on the file system, these will not be reflected for the entries that are ‘Unghosted’.


When we deploy files to the Layouts mapped folder, these are then available to any site in SharePoint through a virtual URL of http://<server>/<site>/_layouts/<folder>/<file>. Files in the layouts folder don’t have a reference in a list or library so they can’t be ghosted.

Site Columns and Content Types

When we deploy site columns and content types, usually they will be in the root of the site collection. As the Site Content Types gallery puts it:

“Use this page to create and manage content types declared on this site and all parent sites. Content types visible on this page are available for use on this site and its subsites.”

When you use the content type in a list, it creates a child content type of the one in the root.

This means that if you update the parent content type, you may want to “push down” those changes to the child content types. So for example, you might add a new site column to a content type and want it to appear in the lists using that content type.

Making Changes

So the question then becomes, what happens when I want to make changes to my elements after they were initially deployed and perhaps activated by a feature?

Master pages and Page Layouts.

Let’s say that we have a Solution and a Feature, scoped at the site collection level, to deploy our master page and page layout.

The solution is deployed and the feature activated and the list items created in the Master Page gallery pointing to the files on the file system. So these are uncustomized or ghosted.

I can make changes to the master page or page layout and execute an Update-SPSolution to deploy the changes to the file system. Do not expect a change to the modified date in the master page gallery as the list items themselves have not changed. However, the changes should immediately be reflected on the site.

What can go wrong?

Once a page layout is in use you cannot delete it from the master page gallery. So if you want to change it, you can do so by the above method, but if you deactivate the feature and try to retract the solution, the page layout will not be removed.

One thing I have seen a lot, is a page layout not updating as expected through a solution upgrade. It seems that SharePoint thinks that the page layout has been customized even though you may not think that it has. It also may not appear to be customized in SharePoint Designer.

One of the easiest ways of making sure that any file is not customized is to use Gary Lapointe’s stsadm commands. This post by Simon Doy describes this nicely.

These commands allow you to view which files are customized and then re-ghost them.


More about Solution Upgrade

Microsoft describes the two ways to upgrade a Farm Solution in this post: ‘Replacement’, where a solution is retracted and re-deployed, and ‘Update’.

The article states that the Replacement method “must be used if the new version of the solution differs from the installed version in any of the following ways.

  • The new version removes a Feature that was in the old version or adds a Feature that was not in the old version.

  • The new version changes the ID of a Feature.

  • The new version changes the scope of a Feature.

  • The new version has a changed version of a Feature Receiver.

  • The new version adds a new elements.xml file, removes an elements.xml file, or changes the contents of an existing elements.xml file.

  • The new version adds a new Property element to a Feature.xml file, removes a Property element from a Feature.xml file or changes the value of a Property element in a Feature.xml file.”

So, once you package your changes, you should be able to do one of the following:

Issue the Update-SPSolution PowerShell command to deploy it.

For example:

Update-SPSolution -Identity contoso_solution.wsp -LiteralPath c:\contoso_solution_v2.wsp -GACDeployment

Retract and redeploy your solution package. For example:

Uninstall-SPSolution -Identity <solution name>.wsp

Remove-SPSolution -Identity <solution name>.wsp

Add-SPSolution -LiteralPath <path>\<solution name>.wsp

Install-SPSolution -Identity <solution name>.wsp -GACDeployment

What can go wrong?

While I was investigating this, I tried creating a Feature Receiver to run an UpgradeAction. It took me a little while to work out why it wasn’t firing. The reason was that I needed to use the Replacement method, because I had added a Feature Receiver that didn’t previously exist. I assume this equates to “The new version has a changed version of a Feature Receiver”.

Once I retracted and redeployed, my Feature Receiver worked when performing a feature upgrade.

Also, be aware that if you retract and remove a solution, its files are removed from the file system and your site may break until you redeploy it.

Feature Upgrade

There are quite a lot of good articles and videos on Feature Upgrades with SharePoint 2010. One of the most comprehensive is this series from Chris O’Brien. Chris has also written a solution to allow you to upgrade features via the UI in Central Admin or through Powershell.

I also discovered that you can upgrade a feature directly from PowerShell without Chris’ solution:

$site = Get-SPSite http://sharepoint/sites/ce2
$web = $site.RootWeb

$enabledFeature = $web.Features | Where-Object { $_.DefinitionId -eq "b6f54487-b222-4536-8d4b-3bef94283ddb" }
$enabledFeature.Upgrade($false)
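You can also check which feature instances actually need upgrading before calling Upgrade. A minimal sketch, assuming the same site URL and feature definition ID as above:

```powershell
$site = Get-SPSite http://sharepoint/sites/ce2
# Returns the activated instances of this feature whose definition version
# is newer than the activated version, i.e. those that need upgrading
$site.QueryFeatures([Guid]"b6f54487-b222-4536-8d4b-3bef94283ddb", $true)
```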


What can go wrong?

I found a few issues as I was working through this. One was described above, where my new Feature Receiver and FeatureUpgrading method would not fire.

Another was to do with sequencing. I declaratively created new site columns and deployed a new elements.xml file.

The first time I ran the feature upgrade, it failed because I had the CustomUpgradeAction declared before the ApplyElementManifests element, while the CustomUpgradeAction referenced the site columns declared in that elements.xml. Obvious I suppose, but worth noting that you need to sequence the UpgradeActions correctly.

<?xml version="1.0" encoding="utf-8" ?>
<Feature xmlns="http://schemas.microsoft.com/sharepoint/" Version="">
    <UpgradeActions>
        <VersionRange BeginVersion="" EndVersion="">
            <ApplyElementManifests>
                <ElementManifest Location="UpgradeTest\Elements.xml"/>
            </ApplyElementManifests>
            <CustomUpgradeAction Name="AnnouncementsUpdate">
                <Parameters>
                    <Parameter Name="Test">Whatever</Parameter>
                </Parameters>
            </CustomUpgradeAction>
        </VersionRange>
    </UpgradeActions>
</Feature>

Upgrading Site Columns and Content Types

This was a bit perplexing for a while. The feature upgrade allows us to add additional site columns to a content type and push the changes down to child content types and lists. We can do this either declaratively, using the AddContentTypeField option, or through code.

For example:

public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters)
{
    switch (upgradeActionName)
    {
        case "AnnouncementsUpdate":
            UpgradeAnnouncements(properties.Feature.Parent as SPWeb);
            break;
    }
}

private void UpgradeAnnouncements(SPWeb parentWeb)
{
    try
    {
        SPContentType announcements = parentWeb.ContentTypes["TestAnnouncement"];
        announcements.FieldLinks.Add(new SPFieldLink(parentWeb.AvailableFields["UpgradeTest"]));
        announcements.FieldLinks.Add(new SPFieldLink(parentWeb.AvailableFields["UpgradeTest2"]));
        announcements.FieldLinks.Add(new SPFieldLink(parentWeb.AvailableFields["UpgradeTest3"]));
        announcements.Update(true); // true = push the change down to child content types and lists
    }
    catch (Exception ex)
    {
        // Log the exception; an unhandled exception here will fail the feature upgrade
    }
}

What can go wrong?

The most common problem seems to be pushing changes down to existing lists and child content types.

Specifying the ‘PushDown’ option does not always seem to work, which can be quite frustrating. I found that the easiest way to overcome this was, again, one of Gary Lapointe’s stsadm commands, which forces the changes to propagate:

stsadm -o gl-propagatecontenttype -url "http://sharepoint/sites/ce2" -contenttype "Announcement"

I am sure there are other scenarios and element types not covered here, but hopefully this will help some people overcome some of the frustrations of upgrading elements of their solutions.


Wednesday, April 13, 2011

Creating views in a list definition using SharePoint Designer 2010

I have been creating some SharePoint 2010 list definitions and list instances in Visual Studio 2010 recently.

I have taken the approach of creating my content types and site columns through the SharePoint UI. Then, using the excellent CKSDev tools, I import the content types and site columns into my Visual Studio project and reorganize them into folders to keep things tidy.

Then I create a new list definition and list instance from the Add New Item menu in Visual Studio.

After creating these, it is important to remember to edit the elements file of both, changing the descriptive information and the Type to a matching unique number, usually greater than 10000.
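As an illustration (the list names and the 10001 value here are hypothetical), the Type attribute in the list definition’s elements.xml and the TemplateType in the list instance’s elements.xml need to match:

```xml
<!-- List definition elements.xml -->
<ListTemplate Name="MyTasks" Type="10001" BaseType="0"
              DisplayName="My Tasks" Description="Custom task list"
              OnQuickLaunch="TRUE" SecurityBits="11" />

<!-- List instance elements.xml -->
<ListInstance Title="My Tasks" TemplateType="10001"
              Url="Lists/MyTasks" OnQuickLaunch="TRUE" />
```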

Now I can deploy my list definition, content type, site columns and list instance through a feature in Visual Studio. Visual Studio is kind enough to recognize and resolve conflicts by deleting the old instance of the list.

So my solution deploys and when the feature is activated, the list instance is created.

What I wanted to do next was declaratively create some views in the List Definition schema. The XML can be a bit confusing and tedious to write by hand.

So I looked for an alternative: I opened my list in SharePoint Designer 2010 and configured the views the way I wanted. Then I copied and pasted the View XML schema from SharePoint Designer into the schema XML of my list definition.


However, the view would still not deploy correctly.

I found that I had to make a number of changes to the XML to make it work.

These included:

  • Remove the Name property
  • Remove the Level property
  • Add the SetupPath property
  • Add the WebPartZoneID property
  • Change the BaseViewID property to the next sequential number for the views in the list
  • Change the URL property to remove the path
  • Add the XslLink element
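To illustrate, a View element in the list definition’s schema.xml might end up looking something like this after those changes (the DisplayName, BaseViewID, Url and field names are examples only):

```xml
<View BaseViewID="2" Type="HTML" DisplayName="By Status"
      SetupPath="pages\viewpage.aspx" WebPartZoneID="Main" Url="ByStatus.aspx">
  <XslLink Default="TRUE">main.xsl</XslLink>
  <Query>
    <OrderBy><FieldRef Name="Status" /></OrderBy>
  </Query>
  <ViewFields>
    <FieldRef Name="Title" />
    <FieldRef Name="Status" />
  </ViewFields>
  <RowLimit Paged="TRUE">30</RowLimit>
</View>
```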


Maybe it will end up being easier to handcraft the XML, but if you want to build your views in SPD and then bring them into Visual Studio, this might help.

If I have missed something or you have a better way of doing this, I’d love to hear about it.

UPDATE: I realised that I can get this information from the Schema tab of the SharePoint Manager instead of SharePoint Designer.


Wednesday, March 30, 2011

Deselecting items when importing a WSP into Visual Studio 2010

What a useful tip from Geoff Varosky. – Thanks Geoff!

First, you will want to select the first one you see in the window by clicking on it, both the checkbox, and the item itself so it is highlighted as shown… going to show you a neat little trick!


Now, scroll ALL the way to the bottom of the list, and select that, using your shift + click combination, and PRESTO! Everything has been unselected, except for the last item, which is just a simple click.

Importing Lists and Content Types into Visual Studio 2010 from Site Templates for Packaging in SharePoint 2010 Solutions « Geoff Varosky's Blog

Tuesday, March 15, 2011

CAuthZcontext and slow search

I have been working on a SharePoint 2010 farm deployment and came across a problem with the performance of SharePoint search. I was finding that it was consistently taking more than 15 seconds to return search results. As part of the troubleshooting process, I turned on verbose logging in SharePoint and identified that the 15-second gap was between these two entries:

0x0138  12:38:40.33 init CAuthZcontext
0x0138  12:38:55.41 finished init CauthZContext

There was no reference to this anywhere on Google or Bing. I could not find any documentation.

I went to the extent of rebuilding the farm twice, on different servers with different SQL instances. The farm was as simple as you can get: a Team Site collection, a search center, and about 5 documents in the document library. The problem persisted on both farms.

Thanks to @weshackett for suggesting I try enabling Claims for the web application hosting the document library. I did this, and after resolving the subsequent Access Denied error by resetting the site collection administrator and recrawling the content, performance went back to what I would expect.
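For reference, switching an existing web application to claims authentication can be done from PowerShell. A minimal sketch (the URL is an example; converting the authentication mode of a production web application should be tested carefully first):

```powershell
$wa = Get-SPWebApplication http://sharepoint
$wa.UseClaimsAuthentication = $true
$wa.Update()
# After converting, re-add the site collection administrator (the old
# identity no longer matches the claims identity) and run a full crawl
```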

It is still a bit of a mystery to me at present as to why Search was trying to use Claims authorisation for security trimming the results. If I find out, I will post the reason here.

Friday, February 25, 2011

Configuring the Cross Farm Topology Load Balancing service

We have been configuring a dedicated services farm with SharePoint 2010. To start with, we were under the impression that SharePoint 2010 ‘took care of’ load balancing farm services. This is almost true, with the exception of the Topology service, which you need to load balance through traditional means.

There are a number of good articles you can Google about setting this up.

Once we had an SSL certificate for the host name we had selected, we published a service and tried to connect to it from our other farm. The ULS logs showed the error “The root of the certificate chain is not a trusted root authority”. This article explained the process of importing the certificate chain into SharePoint so it would be trusted.

This resolved the issue and we were then able to connect correctly.

Sunday, February 13, 2011

The joy of BCS

I have spent many hours over the past week trying to work out why my BDC Model that I created in Visual Studio would not render in an External list.

I received the lovely:

Unable to display this Web Part. To troubleshoot the problem, open this Web page in a Microsoft SharePoint Foundation-compatible HTML editor such as Microsoft SharePoint Designer. If the problem persists, contact your Web server administrator.
Correlation ID:…..

message in the web part. The only problem was that SharePoint was not actually logging the error to the ULS log, which posed an interesting challenge.

I tried a default “Hello World” BDC Model, and that worked fine.

So I thought there must be something wrong with my model, but wasn’t sure what.

Then I came across this very handy tool on CodePlex from Phill Duffy of Lightning Tools.

It allowed me to test the finder and specific finder methods of my BDC model, and it turned up the first issue: the identifier can’t be of type Int64 (thanks to the following article).

I changed the identifier to Int32 and the model worked in the BCS Tester Man, and I thought that was going to be it.

But, to my disappointment, the web part error persisted, and I tore out what little hair I had left. Trying to work out how to find the problem, I finally decided it was time to lay out some hard-earned cash for .NET Reflector Professional. I disassembled Microsoft.SharePoint and told Visual Studio to break on exceptions. After a few minutes, I identified the cause of the problem:

{"There is an error in the TypeDescriptors of Parameters on Method with Name 'ReadList' on Entity (External Content Type) with Name xxxxx in Namespace … The TypeDescriptors incompletely define where the Identifiers of Entity 'xxxxx' are to be read. That Entity expects exactly '1' Identifiers, but only '0' TypeDescriptors were found from which to read them."}

So there was one error in my BDC model: I had not selected the identifier option in the ReadList (finder) method.
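In the BDC model XML, that corresponds to the return TypeDescriptor for the ID carrying an IdentifierName attribute, something like this minimal sketch (the Name and IdentifierName values are examples):

```xml
<TypeDescriptor Name="Id" TypeName="System.Int32" IdentifierName="Id" />
```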

So it is now working and I can move on (and get some sleep). Once again, .NET Reflector Professional has saved the day.

Saturday, February 12, 2011

Publishing Silverlight Applications in SharePoint 2010 - Visual Studio SharePoint Development Blog - Site Home - MSDN Blogs

I just started writing a Silverlight application as a Sandbox solution with Visual Studio 2010 and SharePoint 2010.

I found this post very useful for configuring the projects in Visual Studio 2010. In particular, I hadn’t realised that we could set ‘Project Output References’ to get Visual Studio to automatically put the XAP file into the Module project.

In this blog post I’ll walk you through how easy it is to publish Silverlight applications to SharePoint 2010 by using the Visual Studio 2010 tools for SharePoint development.

Publishing Silverlight Applications in SharePoint 2010 - Visual Studio SharePoint Development Blog - Site Home - MSDN Blogs

Friday, January 14, 2011

Using Authdiag to solve IIS authentication config problems

I had a strange issue on a production SharePoint 2007 farm today where one WFE server ‘suddenly’ stopped serving pages for a number of the web applications. The second WFE server still worked fine.

Interestingly, the apps that were configured for Kerberos still worked okay, but the ones configured for NTLM did not.

There were no errors in any of the server logs (ULS, events, IIS).

If I ran Fiddler, it showed the usual NTLM challenges and 401 responses.

I installed an extremely useful Microsoft tool called AuthDiag on the server to check the IIS configuration.

It reported that NTLM requires HTTP keep-alives. We looked at the IIS configuration and, sure enough, the checkbox was deselected. We re-enabled HTTP keep-alives and the web sites worked again.

I have no idea how it got into that state, but it was great to get it back online.

AuthDiag is a very useful tool. (This is the second time it has led me to a solution like this.)