Friday, October 4, 2013

Search UI matters!

I’ve recently been tasked with optimizing a search result page for a customer, as they have a “feeling” it’s not working out. For the record, I did not create the existing layout; if I had, I would not be writing this post :)

The lesson to be learned is: keep your UI clean, remove the clutter, and stick to science.

I learned long ago that UX is very subjective and often emotionally loaded, and I do not want to force my views onto others without some backing. First off, I got hold of the web server logs and wrote a small utility to parse out all the search queries going back a couple of months. By looking at the query strings I could easily pinpoint where in the UI people were clicking, giving me documented metrics on which parts of the current UI work, and which don’t.

Note: There is no out-of-the-box way of measuring where a user clicks in SP2010 or SP2013 search, so you have to figure out a way to do this yourself, either by adding script and custom logging to the page, or by parsing the IIS logs as I did.
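As a minimal sketch of the log-parsing approach: the snippet below assumes IIS logs in the default W3C format and a results page that carries the keyword query in a `k` parameter and refinements in an `r` parameter (check the `#Fields:` header of your own logs for the actual field positions, and your own result page URLs for the parameter names).

```python
from collections import Counter
from urllib.parse import parse_qs

def parse_search_queries(log_lines, uri_stem_idx=4, uri_query_idx=5):
    """Extract (keywords, refiners) pairs from W3C-format IIS log lines.

    The field indexes assume a layout of
    "date time s-ip cs-method cs-uri-stem cs-uri-query ..."; adjust them
    to match the #Fields: header in your own logs.
    """
    hits = []
    for line in log_lines:
        if line.startswith("#"):          # skip W3C directive/header lines
            continue
        fields = line.split(" ")
        if len(fields) <= uri_query_idx:
            continue
        stem, query = fields[uri_stem_idx], fields[uri_query_idx]
        if not stem.lower().endswith("results.aspx"):
            continue                      # not a hit on the result page
        params = parse_qs(query)
        hits.append((params.get("k", [""])[0], params.get("r", [""])[0]))
    return hits

def refiner_usage(hits):
    """Count how often each refiner name occurs across the parsed queries."""
    counts = Counter()
    for _keywords, refiners in hits:
        if refiners:
            counts[refiners.split("=")[0]] += 1   # name before the first "="
    return counts
```

From the `(keywords, refiners)` pairs you can derive the same kind of numbers as below: share of refined queries, share of paging queries, and so on.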

The screenshot below shows what the result page looks like today, where the yellow area is what is visible on a typical laptop screen (it’s actually even less, as I’ve cut away the SharePoint menus). Studies over the years have shown that information below “the fold” is hardly interacted with, and we have tons of information outside the viewable screen estate on this page.


The numbers on the screen illustrate the following:

  1. Entity navigator (All Sites, People, Messages) - This navigator resides beside the search box by default, but has been moved to the top left. It typically selects the information silo at a high level.
  2. Long list of refiners
  3. Best bets
  4. Search results
  5. Related searches
  6. People matches
  7. Message matches (social)

Here are the metrics broken down by usage:

  • 6% of all queries came from clicking on related queries, an indication that people are at least getting hints to improve their queries
  • 2% of all queries were “Page 2” queries which is interesting, as people are actually moving on to find what they are looking for
  • 0.3% of the queries used the Document Type refiner, which was the most-used refiner. Not a whole lot! And I know from previous research that people want this one; many people use it, or at least say they would like to.

This more or less means that NONE of the refiners are being used, and that the messages list in the right-hand corner is noise.

15% of all queries were people searches, but I had no way of knowing if they came from the front page of the intranet or by people clicking any of the “see more people” links in 1 or 6.

With my metrics in hand I have created a proposed solution, shown below, which I hope to put into production and then measure after a couple of months to see how it works out.


  1. I have moved the information silo (tabs) back up below the search box as it’s more obvious to the user, the intention being to increase the usage. I have also added search hit numbers to each tab to give a visual indication of the amount of information you can expect to find.
  2. Most of the refiners have been removed, but Site and Date have been kept. They now have a more prominent place, moved higher up, and both are within the screen real estate, “above the fold”.
  3. In my experience, people often navigate by the type of document they are looking for. Here I have moved the Document Type navigator above the search results and represented it with icons for the most-used file formats: Word, Excel, PowerPoint, and Acrobat Reader. Visually it is a lot easier to spot, using well-known icons to toggle formats on and off.
  4. I have removed the social feed results, which are covered by a tab in 1), making the right column a lot cleaner.

We’re also working on some other concepts which we hope will improve findability:

  • filter away draft versions of documents, which are currently indexed
  • filter by documents “close to me”
  • changing the ranking algorithm based on user context

To sum it up: keep your result page clean and tidy, and use navigation concepts well known to the user, such as tabs (Google and Bing can’t be wrong, right?). Also, measure what works and what doesn’t. This lets you make informed, data-driven decisions when changing your UI, instead of relying on committees, hormones, and individual taste.


  1. Why index draft documents? How are you defining "Close to me"?

    1. As for draft documents, that was the initial governance policy: any user who can read a library (which is just about everyone) should have access to minor versions. This ensures that search mimics what people have access to. So far so good.

      But the idea was that people should publish major versions when a document is ready for consumption by others; of course, this very seldom happens, and search is riddled with too many results, many of them version 0.x. So now the customer wants a function for users to opt out of the version 0.x documents in order to see only "good" documents, and hopefully increase awareness of publishing final documents.

      It's all a bit backwards, I know. The ideal solution would be to not index minor versions, and to communicate _clearly_ that if you want your docs to be found, you must publish them according to proper document practice. If not, they could/should just use major versions.

      As for "close to me", we're thinking documents you have authored, documents authored by people in the same department, and possibly documents authored by people you have marked as colleagues. We're running FS4SP here and will weigh the cost/benefit of each filter. There's also the issue of not creating queries that are too long, as they will fail.
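The "close to me" idea can be sketched as a query-filter builder. This is a minimal sketch assuming a KQL-style `author` property matched on display names (an FS4SP installation may instead want FQL syntax); the cap on the number of terms reflects the query-length limits mentioned above.

```python
def close_to_me_filter(me, colleagues, max_terms=20):
    """Build a KQL-style author filter for "close to me" results.

    The property name "author" and matching on display names are
    assumptions; the term cap keeps the generated query short enough
    not to hit engine limits on query length.
    """
    names = [me] + list(colleagues)[:max_terms - 1]
    return "(" + " OR ".join(f'author:"{n}"' for n in names) + ")"
```

The resulting string would be AND-ed onto the user's query when the filter is toggled on.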

  2. Hi - This is very helpful for a similar overhaul of the search user interface that I am doing. I am curious about how your solution proposal went. Did you end up going with the solution you outlined above or did it get modified - and was there any follow-up analysis of the logs to see if it made a difference? Thanks again.

    1. We did put it live with no user complaints :) I was off the project afterwards, so I have not done any measurements of refiner usage etc. The life of a consultant, I guess.

  3. Thank you. One other option we are considering for improving the user interface is developing a fully custom interface using the REST API (SP2013). We are thinking this could allow for greater flexibility in the user experience as well as potential personalization of search, for example using XRANK to increase the ranking of certain content based on that user's search history or preferences. Is that an approach you considered, or would (or would not) consider? Wondering if there are positive or negative effects that we've not considered, as I am relatively new to working with SharePoint 2013 Search.

    1. One "benefit" of using the search web parts in sync mode is that the search data is part of the actual page, so you save one round trip before displaying the data. But for a search page where you start from a blank page anyway, a custom REST-based solution might very well be worth it, as it makes it easier to do XRANK etc.

      That said, I have a solution with Elio Struyf at which shows how you can manipulate the search web parts with logic like this.

      Also, if you do a custom page, you should implement calls back to RecordPageClick in order to feed usage data back into relevance ranking. So... fully custom, or "patched" OOB... not always an easy choice :)
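For the custom REST route, the query URL with an XRANK boost can be composed before it is sent. Below is a minimal sketch; the boost terms and the `cb` (constant boost) value are illustrative assumptions about how per-user preferences might be expressed, not a recommendation for specific values.

```python
from urllib.parse import quote

def build_rest_search_url(site_url, keywords, boost_terms, cb=100):
    """Compose a SharePoint 2013 search REST URL whose query text uses
    XRANK to boost hits matching the given terms.

    boost_terms is a list of KQL conditions (e.g. 'filetype:xlsx');
    derive them from whatever user context or history you decide to track.
    """
    query = keywords
    for term in boost_terms:
        query = f"{query} XRANK(cb={cb}) {term}"
    return f"{site_url}/_api/search/query?querytext='{quote(query)}'"
```

A GET on the resulting URL (with the usual authentication and `Accept` headers) returns the ranked results.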

    2. Thanks. Like Roland O., I'm having trouble finding a way to call RecordPageClick. Fiddler hasn't helped much so far, as the call seems to be buried deep somewhere, maybe in the load of the iframe when hovering. Do you have any advice for determining how to do this?

    3. Hi,
      I've done this using CSOM for a test, and it works great. The key is to pass in the id coming from the query you just did, along with the other information, so it's all hooked up correctly. There's a CSOM sample at
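As a rough illustration of the data being correlated, the click information could be assembled like this. Every field name below is a hypothetical placeholder, not the actual RecordPageClick parameter list; check the real names against the CSOM sample you are working from. The essential point from the thread is passing along the id returned by the original query so the click event is hooked up to it.

```python
def build_click_info(query_id, search_terms, clicked_url, rank, page=1):
    """Assemble the data to send back with a page-click call.

    All field names are hypothetical placeholders; what matters is that
    the id of the query the click belongs to travels with the click data.
    """
    return {
        "queryId": query_id,            # id returned by the original query
        "searchTerms": search_terms,    # the query text the user issued
        "clickedUrl": clicked_url,      # the result the user clicked
        "clickedResultRank": rank,      # its position in the result list
        "pageNumber": page,             # which result page it was on
    }
```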