Friday, December 20, 2013

Why Microsoft should not turn off DMA on firewire in lock screen mode

There is a tool out there called Inception which, via a vulnerability in the way FireWire works, will let anyone log in as any user on your machine without a password on Windows (XP, Vista, 7, 8). You can read more about the vulnerability on the Inception site.

Pro Tip: If you’re not using FireWire on your Windows laptop, remove the drivers! If you don’t, most machines will hotplug a FireWire device, and you have lost.

So why shouldn’t Microsoft do as Apple did with OS X for this issue? Because then I wouldn’t have won fame and liquid rewards.

The story goes: Once upon a time Mikael was hired by a consultancy to help out with a project. Next to his desk stood a laptop called HackMe, which invited employees of the company to hack in, retrieve a snippet of text from a file on the desktop, send this to the security manager, and claim fame.


Mikael was told the machine had been left alone for a long time, all employees having given up on this hard challenge long ago.

Never one to give up an opportunity to shine, let alone fame, glory, wine and champagne, Mikael decided to give it a go. The next day he brought with him an old laptop, a FireWire cable, and Ubuntu on a USB stick with Inception. Mikael hooked up the gear, went to brew a cup of coffee, retrieved the password and won it all :)


(Me on the left, security manager on the right – who was pleased someone hacked it, but not that it required an external SharePoint consultant ;))

Thursday, December 19, 2013

Using Excel REST to display charts (if you don’t have an Enterprise license)

If you have the Enterprise version of SharePoint you may use the Excel Web Access web part to display charts on your pages. If you don’t have an Enterprise license, but have Office Web Apps installed, you can use Excel REST instead. This should work for SharePoint 2010, SharePoint 2013 and SharePoint Online.

It all started with a customer who had a SharePoint Standard license but wanted charts on their pages, and I’m very clear with customers that I am not coding charts in ASP.NET or JavaScript. Most of the time they can create much better looking charts in Excel.

My research led me to create a web part quite similar to the Excel Web Access web part, but a bit improved. My next thought was to build it as a SharePoint App, but that project is now on ice and cannot be completed due to the lack of APIs in the App model.

Long story short, I’ve taken the code I have and made it into a sandboxed solution, located at . It’s tested with 2013 on-premises and SPO, but it should be easy to compile for 2010 if you want.


Wednesday, December 11, 2013

Issues with connecting to servers on the VPN network with Cisco AnyConnect v3.1 and Windows 8

I had no problem connecting to the VPN host with the newer versions of the AnyConnect Secure Mobility Client, but traffic refused to route over the established VPN.

The solution was simple, yet hard to find.

Click the cogwheel to open settings.


Under preferences, uncheck “Block connections to untrusted servers”.


With this change I was able to access servers on the VPN network.

Duplicate Trimming in SharePoint 2013 is causing confusion

[Update - Verified with July 2014 CU]
You can now turn off security trimming via the Query Builder on your search web parts.

  • Edit web part
  • Click "Change query"
  • Click the "Settings" tab
  • Toggle "Don't remove duplicates"

[Original Post]
Duplicate trimming as a search function is a good idea. The intent is to reduce noise by discarding duplicate or near-identical items in a search result. The issue with SharePoint 2013 is that the trimming is implemented too coarsely, and a lot of good results are hidden from the user. Also, turning off duplicate trimming is not a simple edit-web-part task, as the option is hidden in a JSON property on the web part.

My recommendation at the moment is to turn off duplicate trimming, and if users complain about real duplicates being shown, tell them to clean up the data. Most of the time you really don’t want duplicates of items/documents stored anyway.

I’ll dig into and explain more about how duplicate trimming is performed in SharePoint 2013 in a later post.

If you are on-premises you may use the same procedure as I used in Make sure your People Search is fuzzified, where using PowerShell, you modify the internal JSON property. Using the same script change line 13 to read:

$dataProvider.TrimDuplicates = $false
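The web part stores its query configuration as a JSON string, so the whole trick boils down to a parse/modify/serialize round-trip. A minimal JavaScript sketch of the same idea (the sample JSON here is invented and far smaller than a real DataProviderJSON payload):

```javascript
// A trimmed-down stand-in for the web part's DataProviderJSON property
var dataProviderJson = '{"QueryTemplate":"{searchboxquery}","TrimDuplicates":true}';

// Parse the JSON string into an object, flip the flag, and serialize it back
var dataProvider = JSON.parse(dataProviderJson);
dataProvider.TrimDuplicates = false;
var updatedJson = JSON.stringify(dataProvider);
// updatedJson now contains "TrimDuplicates":false, with all other properties intact
```

The PowerShell script below does exactly this round-trip with ConvertFrom-Json/ConvertTo-Json before writing the property back to the web part.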

Inspired by Chris O’Brien’s post on using CSOM with PowerShell, I have modified my code to use PowerShell and CSOM. By changing the credentials line, you may use the code against both SharePoint on-premises and SharePoint Online.

# Author: Mikael Svenson - @mikaelsvenson
# Company: Puzzlepart
# Date: December, 2013
# Reference:

# replace these details (also consider using Get-Credential to enter the password securely as the script runs)
$username = ""
$password = "password"
$url = ""
# the path to the SharePoint Client dll's
$dllPath = "D:\SP2013-dll\ISAPI\"
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
Add-Type -Path "$($dllPath)Microsoft.SharePoint.Client.dll"
Add-Type -Path "$($dllPath)Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "$($dllPath)Microsoft.SharePoint.Client.Publishing.dll"
Add-Type -Path "$($dllPath)Microsoft.SharePoint.Client.Taxonomy.dll"

# connect/authenticate to SharePoint and get a ClientContext object
$clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($url)

# for on-premises, use NetworkCredential instead of SharePointOnlineCredentials
#$credentials = New-Object System.Net.NetworkCredential($username, $securePassword)
$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePassword)
$clientContext.Credentials = $credentials
if (!$clientContext.ServerObjectIsNull.Value) {
    Write-Host "Connected to SharePoint site: '$url'" -ForegroundColor Green
}

$web = $clientContext.Web
# get the guid of the default Pages library to cater for localization
$clientContext.Load($web.AllProperties)
$clientContext.ExecuteQuery()
$pagesGuid = $web.AllProperties.FieldValues["__PagesListId"]
$list = $web.Lists.GetById($pagesGuid)
# get the localized server relative url of the Pages library
$clientContext.Load($list.RootFolder)
$clientContext.ExecuteQuery()
$url = $list.RootFolder.ServerRelativeUrl

$page = $web.GetFileByServerRelativeUrl($url + "/results.aspx")

try {
    Write-Host "Checking out page" -ForegroundColor Green
    $page.CheckOut()
    $clientContext.ExecuteQuery()
}
catch { Write-Host "Page already checked out" -ForegroundColor Yellow }

$wpm = $page.GetLimitedWebPartManager([Microsoft.SharePoint.Client.WebParts.PersonalizationScope]::Shared)
$clientContext.Load($wpm.WebParts)
$clientContext.ExecuteQuery()
for ($i = 0; $i -lt $wpm.WebParts.Count; $i++) {
    $item = $wpm.WebParts.Item($i)
    $clientContext.Load($item.WebPart)
    $clientContext.Load($item.WebPart.Properties)
    $clientContext.ExecuteQuery()
    if ($item.WebPart.Title -eq "Search Results") {
        Write-Host "Found result web part" -ForegroundColor Green
        Write-Host "Turning off trimming of duplicates" -ForegroundColor Green
        # read the JSON properties and convert to an object
        $dataProvider = ConvertFrom-Json $item.WebPart.Properties["DataProviderJSON"]
        $dataProvider.TrimDuplicates = $false
        # convert the object back to a JSON string and save the web part
        $item.WebPart.Properties["DataProviderJSON"] = ConvertTo-Json $dataProvider -Compress
        $item.SaveWebPartChanges()
        $clientContext.ExecuteQuery()
    }
}
Write-Host "Checking in and publishing page" -ForegroundColor Green
$page.CheckIn("Modified Search Core Results web part", [Microsoft.SharePoint.Client.CheckinType]::MajorCheckIn)
$page.Publish("Modified Search Core Results web part")
$clientContext.ExecuteQuery()

Friday, December 6, 2013

Stockholm, Las Vegas and Barcelona–Here I come!

I’m not one to follow the full SharePoint speaker circuit, but 2014 is looking to be a fun, exciting and hectic year!

First off, I’ll be speaking at the very first SharePoint Saturday in Stockholm together with legends like Wictor Wilén and Christian Buckley. My session is titled Rock your Office 365 Search with 13 easy tune-ups, and my aim is to share some tips and tricks which will improve search beyond what’s there out of the box. This will be a trial run of the session before the European SharePoint Conference in May.

My next stop is SharePoint Conference 2014 in Las Vegas, March 3-6. I’m really proud to be invited back, and this will be the second time I speak at SPC. I’ll be co-presenting Managing Search relevance in SharePoint 2013 with David Hollembaek, who works with MSC in Munich, Germany, and we’re aiming for a crash course in relevance tuning, showing what’s possible.

The last stop is the European SharePoint Conference in Barcelona, May 5-8. Here I will revisit my session from SPS Stockholm, with whatever tweaks I’ve picked up along the way.

Hope to see a lot of old and new faces at these events!

Tuesday, December 3, 2013

How to fix search schema import with query rules which use dictionary lookups

Note: This post proposes a solution which will leave your farm in an unsupported state, and it should only be applied under guidance approved by Microsoft Support. A bug has been filed for the issue, and I expect a fix will be provided in a later CU. An alternative fix is to re-create your Search Service Application and get a fresh database.

I’m doing a search project where we have a bunch of query rules defined. Some are promoted results and some change what is displayed and the sort order. For some of the rules we use trigger terms from a term store, which works just fine.

The solution is created on a dev farm and then the search configuration from the search site is exported and moved to the production farm. So far so good.

Importing the search configuration in production works just fine, but when you try to access the query rule page (http://intranet/sites/search/_layouts/15/listqueryrules.aspx?level=sitecol) you get the following error:


Sunday, November 17, 2013

Help out #Haiyan victims in the Philippines and get free SharePoint consultancy

Dux Raymond Sy has started an initiative where, if you donate USD 99, you get one hour of expert SharePoint consultancy from one of many SharePoint MVPs (me included).

Head over to for more details and the complete list of people you may select from.

Thursday, November 14, 2013

Issue with Promoted Results in SharePoint 2013 and using REST for search

I was creating some promoted results using query rules this morning, and noticed that they worked fine in the search UI, but I got a 500 internal server error when using REST.

Turns out there is a bug in both on-premises 2013 and SharePoint Online which will make your REST query fail if you omit to enter a URL for your promoted result.


It’s no big issue as I was planning to add a URL later anyways, but good to know until the issue is fixed.

Tuesday, November 12, 2013

Issue with Display Templates not updating once saved

I’m doing Display Template work at the moment and suddenly my changes to the templates were not reflected once I refreshed the page in the test environment.

Turns out the blob cache had been turned on on the servers, which also caches .js files, but clearing the cache folder did not seem to work. What did work was going to the Object cache settings page (_layouts/15/objectcachesettings.aspx) of the Search Center’s site collection and flushing the cache there.


Monday, November 11, 2013

Install Display Templates using PowerShell (or copy files using WebDAV in SharePoint)

I’m in the process of writing a Search Center configuration script for SharePoint in PowerShell, and in the process I wanted to copy my display templates as well.

I could have used the SharePoint API to upload the files into the _catalogs/masterpage/Display Templates folder, but this felt like too much work.

My next thought was to copy the files using WebDAV. I started my search on the interwebs and found some posts on PowerShell and WebDAV, but nothing really clear and simple. Then I had an epiphany: maybe I can just copy the files? And you can!

$dest = '\\\_catalogs\masterpage\Display Templates\'
Copy-Item LocalPath -Destination $dest -Recurse

It’s a matter of formatting the URL in a server UNC notation and it just works.
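To make the mapping concrete, here is a small JavaScript helper that rewrites a site URL into the UNC notation Copy-Item expects. This is pure string formatting, not a SharePoint API, and the host name is a made-up example:

```javascript
// Rewrite an http(s) site URL into server UNC notation for use with Copy-Item
function toUncPath(siteUrl, folder) {
  var hostAndPath = siteUrl.replace(/^https?:\/\//, ""); // strip the scheme
  // flip forward slashes to backslashes and prefix with \\
  return "\\\\" + hostAndPath.replace(/\//g, "\\") + "\\" + folder.replace(/\//g, "\\");
}

toUncPath("http://intranet/sites/search", "_catalogs/masterpage/Display Templates");
// → \\intranet\sites\search\_catalogs\masterpage\Display Templates
```

For https sites you may also need the `@SSL` suffix on the server name, so treat this as a sketch of the idea rather than a complete converter.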

What this doesn’t do is ensure the copied items are approved and published as major versions in SharePoint. To do this you either have to use the SharePoint API from PowerShell, or for example navigate to the library, filter on Approval Status = Draft, and publish each one.

Personally I change the settings on the _catalogs/masterpage library to not require approval or check-in/check-out, and only have major versions. If doing this you have to make sure you have control over the library as any saved change will go live right away – this means using a test environment or test site collection for the search page first is highly recommended.

Friday, November 8, 2013

Returning more than 50 results in a search result (ResultScriptWebPart) is a bit tricky

I’ve thought about breaking the 50 item limitation at times, and have received questions about whether it’s possible. And yes, it’s doable… but I won’t vouch for the robustness of my current solution :-)

First off, the technical reason why it blocks at 50 is this piece of code found in Microsoft.Office.Server.Search.WebControls.ResultScriptWebPart:

public void set_ResultsPerPage(int value)
{
    try
    {
        DataVerification.ThrowIfOutOfRange(value, "ResultsPerPage", 1, 50, true);
        this.resultsPerPage = value;
    }
    catch (Exception exception)
    {
        ULS.SendTraceTag(0x15a2e3, ULSCat.msoulscat_SEARCH_Query, ULSTraceLevel.Unexpected, "Setting webpart property failed: {0}", new object[] { exception });
        base.Messages.Add(new ControlMessage(null, MessageLevel.Error, -1, exception.GetType().ToString(), exception.Message, "", exception.StackTrace, ULS.CorrelationGetVisibleID().ToString(), false, true, true));
        throw exception;
    }
}

It’s a hardcoded limit. The logical reason behind this limit could be either user experience or performance, as it takes more resources to load more items.

If you go the custom web part approach, inheriting from the ResultScriptWebPart and using, for example, reflection to set the private property for ResultsPerPage does not work, as the property itself is set at page load time and will then bomb at the limit verification. You could probably disassemble the whole web part and create a new one without the limit, and you would be golden.

In this post I’ll opt for a different approach: I’ll do it client side… which has one drawback: it won’t kick in on the first load of the result page, as the results are then embedded in the page itself and not loaded via ajax. This is however solved by kicking off an ajax call automatically after the page has loaded, with the effect that the result list will expand fairly quickly after the initial rendering of the page.

On your result pages, add a script editor web part with the JavaScript at the end of this post, and you should be able to get more than 50 results. The default max limit for search is 500, which may be changed on the search service application for an on-premises farm:

$ssa = Get-SPEnterpriseSearchServiceApplication
$ssa.MaxRowLimit = 1000



String.prototype.endsWith = function(suffix) {
    return this.indexOf(suffix, this.length - suffix.length) !== -1;
};

Sys.Application.add_load( function() {
    // Get a collection of all divs
    var collection = document.getElementsByTagName("div");
    var length = collection.length;
    var resultsPerPage = 200;
    // Iterate all divs to find the control element for search
    for( var i=0; i<length; i++ ) {
        if(collection[i].id.endsWith("_csr") // element name ends with _csr
            && collection[i].control != null // element has a control object
            && collection[i].control.set_resultsPerPage != null) { // control object supports the resultsPerPage property
            // override the web part setting
            collection[i].control.set_resultsPerPage(resultsPerPage);
        }
    }
    // wait until the search box is in the DOM
    var checkExist = setInterval(function() {
       if ($get("searchImg") != null ) {
          clearInterval(checkExist);
          // Reload the results using ajax with the new limit
          $get("searchImg").click();
       }
    }, 100); // check every 100ms
});

Wednesday, October 30, 2013

Limiting Search Results to a particular Site

To the end-user in SharePoint, the content structure is built up by Site Collections and Sites. A site collection, from the name, is a collection of sites, including one top-level site, and any number of sub-sites below.

See for more information on site collections and sites.

In most cases knowing this bit of information keeps you afloat, but it doesn’t take long before you try to do something not documented on, and you start seeing the word web used every now and then. You might know all of this already, but for reference to the unknowing, here are the relationships between the end-user terms and the MSDN documentation:

  • Site Collection = SPSite
  • Site = SPWeb

Being a developer I’m used to this and can quickly jump between the red and blue pill worlds, knowing when a Site means a site collection and when it means a single site (web).

And this is where the worlds collide this time in terms of Query Rules.

Tuesday, October 22, 2013

Adding freshness boost to SharePoint Online

Every once in a while I get questions about customizing the rank model in SharePoint 2013. While it has never been easy to tune a rank model, it was somewhat easier with FAST ESP or FAST Search for SharePoint, as the rank values were not normalized and it was possible to interpret how the rank was calculated. This is why I created the FS4SP Query Logger, quite useful when dealing with FS4SP.

An often useful part of ranking is freshness, meaning newer items are pushed somewhat up compared to older items.

But that was then, and this is now. The rank models in SharePoint 2013 have changed a lot, and the resulting rank numbers are now normalized with more than 10 decimals. The numbers in the model itself are not easy to understand, and I briefly wrote about this in Rank models 2013 – Main differences from 2010. I still don’t have a Ph.D. in math, and I will leave that to the clever people at FAST/MS who created these new models with neural networks and all. It’s possible to write simpler static models with 2013, but I have decided to stay away from that as long as I can, and do rank tuning using the XRANK operator instead. If I ever get a case where changing the model is the only way, I’ll sign up for a rank tuning course over at the powers that be – a benefit of living in Oslo :)

Wednesday, October 9, 2013

Make sure your People Search is fuzzified

Topics covered in this post
  • Fuzzy name matching on people search
  • Setting default language of a search result web part
  • Using the &ql URL parameter to set query language
  • Using Reflector to figure out how it all works
If your users’ primary language setting in SharePoint is a minority language, this post is for you. If your primary language is one of the languages in the list further down, keep on reading as well to broaden your horizon.

Finding people is one of the most used search features in SharePoint, and spelling names is inherently hard as people choose just about all possible ways to spell their name.

As an example: my name is Mikael Svenson, where it’s more common to spell Mikael with ch instead of k (Michael), and Svenson is most commonly spelled with two s’s in the middle (Svensson). This means a search for “Michael Svensson” should also match “Mikael Svenson”. This is where fuzzy name matching comes in.
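SharePoint’s own fuzzy matcher is a black box, but the underlying idea can be illustrated with a classic edit distance: the fewer single-character edits between two names, the more likely they refer to the same person. A quick JavaScript illustration (not SharePoint code, just the concept):

```javascript
// Classic Levenshtein edit distance: the number of single-character
// insertions, deletions and substitutions needed to turn a into b.
function editDistance(a, b) {
  var d = [];
  for (var i = 0; i <= a.length; i++) d[i] = [i];
  for (var j = 0; j <= b.length; j++) d[0][j] = j;
  for (i = 1; i <= a.length; i++) {
    for (j = 1; j <= b.length; j++) {
      var cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(d[i - 1][j] + 1,          // deletion
                         d[i][j - 1] + 1,          // insertion
                         d[i - 1][j - 1] + cost);  // substitution
    }
  }
  return d[a.length][b.length];
}

editDistance("mikael svenson", "michael svensson");
// → 3 (k→c, insert h, insert s) — a near-miss, so fuzzy matching should catch it
```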

Creating a site collection in 2010 mode in SharePoint 2013

Today I got phoned up by a friend who asked how on earth he could programmatically create a site collection in 2010 mode in SharePoint 2013. Why he wanted to do this I’m not sure, and it’s not the point of this post :)

It’s actually quite easy if you look at the documentation for some of the SPSiteCollection.Add() overloads which take an int as a compatibilityLevel parameter. Passing in 14 for this parameter will ensure you get the 2010 look and feel. For reference, the SPSite.SelfServiceCreateSite() method has similar overloads.

SPWebApplication webApp = new SPSite("http://host").WebApplication;
SPSiteCollection siteCollections = webApp.Sites;
uint lcid = 1033;
int compatLevel = 14;
string webTemplate = "STS#0";
SPSite newSiteCollection = siteCollections.Add("sites/test", "Title", "Description", lcid, compatLevel, webTemplate,
 "DOMAIN\\User", "Name", "Email_Address", "DOMAIN\\User", "Name", "Email_Address");

If you want to do it in PowerShell you can use the following command:

New-SPSite -Url http://host/sites/test -OwnerAlias "DOMAIN\User" -Template "STS#0" -CompatibilityLevel 14

And via Central Admin:


Friday, October 4, 2013

Migrating content from SharePoint 2010 on-premises to Office365–a non-working approach

Disclaimer: This post describes a failed attempt to streamline the migration of an on-premises 2010 site to a SharePoint On-line (2013) site without the help of 3rd party tools.

When you have some spare time it’s good to dog food what you preach. And we preach a lot to customers about options of migrating from SharePoint 2010 to SharePoint 2013. There are no silver bullet approaches to this as far as I know (and feel free to prove me wrong), so I thought I’d try what I call the Lazy Consultant approach to this.

This approach involves saving a site as a template with all the content included, then “fixing” the wsp file, and uploading it into SPO where you create a new site based on the uploaded template.

Search UI matters!

I’ve recently been tasked with optimizing a search result page for a customer, as they have a “feeling” it’s not working out. For the record, I did not create the existing layout; if I had, I would not be writing this post :)

The lesson to be learned is: Keep your UI clean, remove the clutter, and stick to science

I learned long ago that UX is very subjective and often emotionally loaded, and I do not want to force my views onto others without some backing. First off, I got hold of the web server logs and wrote a small utility to parse out all the search queries going back a couple of months. By looking at the query strings I could easily pinpoint where in the UI people were clicking, giving me documented metrics on which parts of the current UI work, and which ones don’t.

Note: There is no out-of-the-box way of measuring where a user clicks in SP2010 or SP2013 search, so you have to figure out a way to do this yourself, by either adding script and custom logging to the page, or parsing the IIS logs as I did.

Thursday, September 26, 2013

Offline URL Decoder - or GUID Decoder :)

I recently read a post by Jasper Oosterveld about how to decode a URL encoded string using the on-line page
I’ve used that a lot myself, but an almost as easy way is to use your browser. In IE you may type

in the address bar and it will decode it on the fly. Of course there is a quirk as some versions of IE strip the javascript part if you paste it.


Same works in Chrome if you first open a new blank tab, and then enter the same command. If you re-use an existing page it fails.

Or in your favorite browser, hit F12 which should open developer toolbars, choose the javascript console and execute the command there. Below is a screenshot from IE11.


Then all you’re left with deciding is whether it’s still easier to open that on-line page :)
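Under the covers all of these tricks boil down to the same built-in JavaScript function, so in any console it’s a one-liner:

```javascript
// decodeURIComponent reverses URL (percent) encoding, including UTF-8 sequences
var decoded = decodeURIComponent("Shared%20Documents/My%20File%20%C3%A6%C3%B8%C3%A5.docx");
// → "Shared Documents/My File æøå.docx"
```

The reverse direction is encodeURIComponent, handy when you need to build the encoded string yourself.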

Monday, September 23, 2013

Creating crawled properties with SharePoint 2010 and FAST

When you do a lot of work with FS4SP you sort of forget that trivial tasks may not be so trivial for those beyond your own nose. Recently, at a customer, another developer was adding some new <meta> tags to a SharePoint page, and he was trying to script the crawled properties (cp) in order to move them between environments.

Yes, you may always kick off new crawls to get the cp’s created and then map them manually, but this is often not doable, as you don’t have access to Central Admin or the servers once you move out of your test environment. Hence, you send over deployment scripts.

The issue this time was figuring out which crawled property category the new crawled properties would appear in, and, once you have the name, what its GUID is, which you need in order to script it, as the cmdlets don’t take the name (the reason being that a category may have more than one GUID associated behind the scenes… go figure!).

To help out, I figured I’d list the GUID’s needed when creating crawled properties for the most used categories:

Category: SharePoint
GUID: 00130329-0000-0130-c000-000000131346
Notes: All columns in SharePoint will have a crawled property created by default in this category, with the ows_ prefix.

Category: Web
GUID: d1b5d3f0-c0b3-11cf-9a92-00a0c908dbf1
Notes: All pages of type HTML will have their <meta> tags added as crawled properties in this category. It does not matter which crawler was used: SharePoint, Web or File. The type of the value is always text (variant type 31).

Category: Business Data – BCS
GUID: 2edeba9a-0fa8-4020-8a8b-30c3cdf34ccd
Notes: Columns from BCS connections appear in this category.

The easiest way to get the right GUID is to kick off a crawl and then inspect the properties of one of the auto-generated crawled properties. You may also run the following PowerShell command, which may yield more than one GUID and might not help that much:

$cat = Get-FASTSearchMetadataCategory "Web"


You can learn how to work with crawled and managed properties via PowerShell over at TechNet -

Monday, September 2, 2013

Appending query terms in SharePoint 2013/O365 without adding them to the search box

In SharePoint 2010 you can append query terms to a search query by either setting a property on the Core Results web part, or by using the a= query parameter.
Neither of these possibilities exists in SharePoint 2013, so you have to use a workaround. If you read my post Limiting search results by exact time in SharePoint 2013–and how to do FQL you might have picked up on how you can use the refiners to pass in arbitrary FQL. This means the refiner parameter is not only useful for refinements; you may use it to append parts to your query as well.

Monday, August 26, 2013

Accessing rank models in SharePoint 2010

I recently blogged about rank models in SharePoint 2013, and how you can use PowerShell to dump the existing rank model XML.
The cmdlets to do so are more or less the same as for SharePoint 2010, with one important difference: the RankingModelXML attribute is empty in 2010 (in most cases – more on that below).

$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchRankingModel -SearchApplication $ssa | select RankingModelXML


Thursday, August 22, 2013

Defining custom intervals for search refiners in SharePoint 2013

Refiners are a good way to narrow down your results, and often you may find yourself refining on ranges of data. A typical example from e-commerce is to limit on price ranges, such as:

  • 0-50
  • 50-100
  • 100-500
  • 500-above

Out of the box SharePoint will divide the results into four boxes, with intervals calculated by an internal distribution formula. If we look at the file size refiner in SharePoint 2013, it will show something like this by default:


Wednesday, August 21, 2013

How to move your DropBox users over to SharePoint

In a recent sales meeting for a new SharePoint 2013 project I was posed with the challenge of how to cater to a large sales force currently using DropBox as a means for document collaboration.

Microsoft’s weapon is of course SkyDrive Pro, which is the new and improved Groove, or SharePoint Workspace (which I love dearly). This tool (weighing in at 260MB, compared to DropBox’s 17MB) allows you to sync SharePoint document libraries to a folder on your local drive.

To the savvy SharePoint user the benefits of using SharePoint as backing storage are obvious: versioning, simultaneous collaboration, added metadata, advanced security settings, the option for workflows, etc. A user coming from DropBox and a folder structure, or a file server for that matter, probably couldn’t care less.

Thursday, July 18, 2013

Limiting search results by exact time in SharePoint 2013–and how to do FQL

[Update 2020-01-27]
KQL in SharePoint Online and SharePoint 2019 now supports the time portion as well. For SP2016 and SP2013 you still need to use FQL to accomplish this.

[Original post]
Often you might find the need to limit search results between two dates, or even more exactly, between two exact times. The issue is that property queries written in KQL disregard the time portion of your query, limiting you to full-day results only (see this old post for an explanation).

FAST Search for SharePoint allowed queries with the time portion in SharePoint 2010 by using FQL, and fortunately you can do this in 2013 and Office 365 as well.

Doing FQL in 2013 in theory involves setting a parameter, either on your REST queries or on the KeywordQuery object, telling it your query is written in FQL and not KQL. The reality is a bit more complex, involving a custom result source, and to tell you the truth, I have yet to get FQL to work using REST this way.

How then? you might ask. Use the RefinementFilters property instead! This property actually uses FQL, which is what we need, and is our “Get out of jail free” card. And although KQL has gained more of the FQL operators in 2013, FQL still has some tricks up its sleeve. Worth taking a look at for sure.

To limit on dates you will use the FQL range operator. The default behavior is to use “greater than” and “equal and less than” if not specified in the query.

The granularity of date/time queries goes down to 7 decimals on the seconds, as long as the crawler has been able to set this level of granularity during indexing.
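The refinement filter itself is just a string handed to RefinementFilters. A JavaScript sketch of building one (the write managed property and the timestamps are examples; the range()/datetime() syntax follows the FQL reference, with the default bounds behavior described above):

```javascript
// Build an FQL range() filter limiting a date property to an exact time window.
// By default the lower bound is "greater than" and the upper is "equal and less than".
function dateRangeFilter(property, fromIso, toIso) {
  return property + ':range(datetime("' + fromIso + '"),datetime("' + toIso + '"))';
}

dateRangeFilter("write", "2013-07-01T08:00:00", "2013-07-01T17:30:00");
// → write:range(datetime("2013-07-01T08:00:00"),datetime("2013-07-01T17:30:00"))
```

Hand the resulting string to the RefinementFilters property (or the refinementfilters REST parameter) alongside your normal KQL query.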

Wednesday, July 17, 2013

How to copy files between sites using JavaScript REST in Office365 / SharePoint 2013

I’m currently playing with a POC for an App, and wanted to try to do the App as a SharePoint hosted one, only using JavaScript and REST.

The starting point was to call _vti_bin/ExcelRest.aspx on the host web from my app web, but this end-point supports neither CORS nor JSONP, so it can’t be used directly. My next thought was: OK, let’s copy the file from the host web over to my app web, then call Excel REST locally. Easier said than done!

While the final solution seems easy enough, the research and trial and error took me about 3 days. I’m now sharing this with you so you can spend your valuable time increasing the international GDP instead.

Note: If you want to copy files between two libraries on the same level, you can use the copyTo method: http://server/site/_api/web/folders/GetByUrl('/site/srclib')/Files/getbyurl('madcow.xlsx')/copyTo(strNewUrl='/site/targetlib/madcow.xlsx',bOverWrite=true)


Copy a file from a document library in one site to a document library in a different site using JavaScript and REST.
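A rough sketch of the moving parts as I understand them: read the raw file from the source web via the GetFileByServerRelativeUrl endpoint’s /$value, then write it into the target web with Files/Add. The helpers below only build the two REST URLs; the actual transfer also needs the cross-domain request executor and a form digest, which I’m leaving out here:

```javascript
// URL to download the raw bytes of a file from the source web
function fileDownloadUrl(webUrl, serverRelativeFileUrl) {
  return webUrl + "/_api/web/GetFileByServerRelativeUrl('" + serverRelativeFileUrl + "')/$value";
}

// URL to upload (add) the file into a folder on the target web
function fileUploadUrl(webUrl, serverRelativeFolderUrl, fileName) {
  return webUrl + "/_api/web/GetFolderByServerRelativeUrl('" + serverRelativeFolderUrl + "')" +
         "/Files/Add(url='" + fileName + "',overwrite=true)";
}
```

GET the first URL as binary, then POST the body to the second with an X-RequestDigest header.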

Friday, July 12, 2013

Enabling multi-select refiners in SharePoint 2013/Office365

I recently received a question about how to do multi-select refiners in SharePoint 2013, as a comment on my post Limiting Search Results in SharePoint.

Fortunately this is very easy, which it was not in SharePoint 2010.

  1. On your search page, edit the Refinement web part
  2. Click the “Choose Refiners” button
  3. Select your refiner, e.g. FileType
  4. Choose the “Multi-value Refinement Item” display template
  5. Click OK and Save the web part changes


Your Result type refiner will now look like the image below where you can check off each value you want to refine on.


You should note that when using this template, in the case of file types it will show the extension and not the application name, which the “Refinement Item” display template does. But that’s fixable if you edit your display templates :) and left as an exercise for you.

Wednesday, July 10, 2013

Working with refiners in CSOM–SharePoint 2013

Topics covered in this post:
  • CSOM Search Queries
  • Working with refiners in your queries
  • ModifiedBy and EditorOWSUSER managed properties
I’m playing around with CSOM search in an App these days and find that I am spending a lot more time than usual writing my code. The reason is two-fold: one, no IntelliSense, and two, not much documentation and samples out there.

Also, the quirks of setting properties in JSOM take some learning, as you have to know whether a property is a collection or not. When writing C# code, spotting collections is much more intuitive.
I’ll be using Microsoft.SharePoint.Client.Search.Query.KeywordQuery for my samples.
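Refinement filters themselves are just KQL-style strings of the form property:operator("value"). The helper below builds such a token, and the commented lines sketch where it would plug into a JSOM KeywordQuery; treat the exact set_refiners/get_refinementFilters calls as my reading of the API rather than verified documentation.

```javascript
// Build a KQL refinement filter token, e.g. 'FileType:equals("docx")'.
function refinementFilter(property, operator, value) {
    return property + ":" + operator + "(\"" + value + "\")";
}

// Where it would plug into a JSOM KeywordQuery (sketch, verify in your environment):
//
// var query = new Microsoft.SharePoint.Client.Search.Query.KeywordQuery(ctx);
// query.set_queryText("sharepoint");
// query.set_refiners("FileType");  // ask for refiner data back with the results
// query.get_refinementFilters().add(refinementFilter("FileType", "equals", "docx"));
```

Note that get_refinementFilters() returns a collection, so you add() to it rather than assign, which is exactly the kind of quirk mentioned above.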

Wednesday, June 26, 2013

Yet Another SharePoint 2013 in Azure Post

Topics covered in this post:
  • 3 server Azure VM setup for SharePoint 2013
  • Shrinking Azure vhd blobs
  • Turning a DC into a server core install
I’ve long been thinking about provisioning a SharePoint 2013 dev farm in Windows Azure, and with the new MSDN pricing model this has become more attractive. You can now also do a VM shutdown instead of de-provisioning the full VM if you want to save some $$ by not running the VM 24/7.

In my case I have a Visual Studio Ultimate with MSDN subscription which gives me $150 of free spending per month, so we’ll see how that looks after a month’s use.

Thursday, June 20, 2013

Extending the existing search box in SharePoint 2013 with search as you type functionality

Back in February Murad Sæter blogged about using jQuery autocomplete to get real search results from SharePoint 2013 while you are entering the query.


Note   Executing live searches like this may cause load on your search server. You should monitor the load or restrict your queries if you see a load issue. You can also increase the delay time from the default 100ms to reduce the load.

Inspired by this, on a recent project I wanted to see if I could add this functionality to the existing search box in the search center.

The search box has three settings for search-as-you-type suggestions: suggestions based on what everyone searches, recorded over time (or added manually); suggestions matching people names; and personal favorites – searches you perform often.

For my search scenario I was creating a page to find collaboration and project sites. Sort of a search based navigation to locate sites on the intranet. In my particular case I did not have any needs for person name results, and decided to re-use the person name container for real searches instead.

The effect is that while you are entering your query, real searches are executed against the Title field and limiting to sites only (contentclass:STS_Web).

The search suggestions functionality in SharePoint 2013 is provided by ajaxtoolkit.js. To extend it, I had to override the _update function of AjaxControlToolkit.AutoCompleteBehavior.prototype, which is responsible for the search-as-you-type behavior of the search box.

I also chose to use jQuery ajax and the search REST api in order to get my search results.

A simple way of adding the functionality is to drop a Script Editor web part on the page and paste in the code from this post. The code has some inline comments to describe what is going on.


<script type="text/javascript">
function Override() {
    // Keep a copy of the original function
    AjaxControlToolkit.AutoCompleteBehavior.prototype._update2 = AjaxControlToolkit.AutoCompleteBehavior.prototype._update;

    // Register searchOnSuccess on the same prototype object in order for the
    // _update2 function to keep the context with all variables
    AjaxControlToolkit.AutoCompleteBehavior.prototype.searchOnSuccess = function (data, prefixText, completionItems, cacheResults) {
        var results = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;

        var names = [];
        for (var i = 0; i < results.length; i++) {
            var title = results[i].Cells.results[3].Value;
            // Add highlighting of the search term
            var idx = title.toLowerCase().indexOf(prefixText.toLowerCase());
            if (idx >= 0) {
                var hhtitle = title.substr(0, idx) + "<b>" + title.substr(idx, prefixText.length) + "</b>" + title.substr(idx + prefixText.length);
                names.push(hhtitle);
            } else {
                names.push(title); // fallback if indexOf fails
            }
            /* href = results[i].Cells.results[6].Value; */
        }
        // Put our results in the people name container as we're not using it on our page,
        // then call the original update function which renders out the results
        this._update2(prefixText, names, cacheResults);
    };

    // Register an overload update function which executes a real search
    AjaxControlToolkit.AutoCompleteBehavior.prototype._update = function (prefixText, completionItems, cacheResults) {
        var context = this;
        // Get the top 5 results searching only on the title field. Other parameters can be added as well.
        // The query term is what the user has entered, with a wildcard appended.
        $.ajax({
            url: _spPageContextInfo.webAbsoluteUrl + "/_api/search/query?rowlimit=5&querytext='title:\"" + prefixText + "*\"'",
            method: "GET",
            headers: {
                "accept": "application/json;odata=verbose;charset=utf-8"
            },
            success: function (data) { context.searchOnSuccess(data, prefixText, completionItems, cacheResults); },
            error: onError
        });
    };

    function onError(err) {
        // Log failed queries for debugging
        if (window.console) console.log(err);
    }
}

// Replace the url with your own jQuery path
RegisterSod("jquery.min.js", "/_layouts/15/mAdcOW.SearchSuggestions/js/jquery-1.9.0.min.js");
LoadSodByKey("jquery.min.js", function () {
    $(function () {
        // Make sure ajaxtoolkit is loaded before registering the functions
        ExecuteOrDelayUntilScriptLoaded(Override, 'ajaxtoolkit.js');
    });
});
</script>

Tuesday, June 18, 2013

Highs, lows & random rants on SharePoint 2013

Here’s the presentation Harald Fianbakken and I gave at the Norwegian SharePoint Community meeting June 17th, 2013.

Basically a talk where we ranted on our experiences with 2013, highlighting the issues we ran into, and what we really love.

Here’s a small narrative per slide to give it all some more context.

Slide 6-8: The quality management system is an external system which stores its data as static HTML files. Alongside these files are corresponding image files when the artifact is a process diagram. The files are imported with a custom application which parses the <meta> tags of the HTML files and stores the parsed data as publishing pages, with custom columns to hold the metadata inside SharePoint. The page library has several content types, and the right one is chosen on import depending on the parsed data. The images are stored in an image library.

The HTML files also contain image maps, which we parse, storing the coordinates in order to re-create the navigation experience later inside SharePoint.

Some of the metadata are stored as Managed Metadata, and a taxonomy is built upon import.

Slide 9: We auto-generate Content Types and lists based on reflection over annotated POCOs. Puzzlepart also has a framework which allows the use of LINQ queries against the CTs/lists instead of CAML. Sort of like SPMetal, but you start with the POCO, not the other way around.

Slide 11-13: The solution has a part which is an “Improvement Potential” log. When someone reports an improvement a workflow is started. We encountered multiple issues when developing this, mainly on the test server. The root cause was most of the time missing user profiles and missing e-mail addresses.

Slide 14: There is a bug in Workflows as of now which comes into play if you add custom task outcomes to the task form. Meaning, you want more than just Approve/Reject. If you add more, then the outcome is always “Approve”. You can however retrieve the real value in your workflow, and then act on it as a workaround. This took us one week to figure out (see the blogs referenced at the end slide for more info).

Slide 15: If you don’t click “Publish” before exporting a workflow, you will get an error upon feature activation. Also, you have to retract/redeploy your workflow if you re-export, as the WSPs get new IDs. This is an issue if you develop the workflow on one server and want to move it to another.

Slide 18-19: Excel REST is a good way to get graphs/named entities out of Excel files without having an Enterprise license.

Slide 20-21: See for more information.

Slide 23-24: By using the SP2013 social api, we can follow or bookmark publishing pages in SharePoint, allowing users to have favorites.

Slide 26-34: We use custom display templates, result sources, query rules and display blocks in order to create a sort of 360-view for processes. Searching for a process will show related improvements and related documents to a specific process.

The issues we encountered were related to pulling back managed properties for certain content types, and getting the display templates to trigger correctly. It’s all a bit buggy at the moment, but I’ve been told fixes are coming in later CUs.

What seems to work the best is to create managed properties on the SSA, stay away from the auto-generated ones, and have one result block per query rule, even though the rules are the same.

Slide 36-40: By using custom properties on terms, we created a configuration option for display forms. The taxonomy states which fields should be shown in our dynamic form based on values you choose, and which fields should show in new/edit mode. Basically we used the terms store as a way to configure an input form. The other choice would have been to create multiple forms to cover all scenarios. But the requirements kept changing, and this gave us flexibility and ease of configuration.

Slide 42-45: Be sure to keep people who know HTML and know SharePoint HTML at hand when implementing a custom design. It saves time!

Thursday, June 6, 2013

Add a “Clear Filters” link to your search page in SharePoint 2013

Using refiners or filters on a search page is an easy way to narrow down on the content you are looking for. If you need to start from scratch you can just click the search button again, but this may not be the best UI.

I’ve worked on many projects where they also want a “Clear filters” link.

As all calls on the 2013 search page are done using AJAX, the real query is added to the hash part of the URL, the part coming after the # character. If you want to clear all filters and the search terms, you can add what designers call a dummy link with a # to the page, and that will reset your query.

<a href="#">Clear filters</a>

A simple solution is to add a Content Editor Web Part above the refiner panel with the above markup.

Tuesday, June 4, 2013

Meticulous workaround for getting Cisco AnyConnect to work with Windows 8

At times I need to use Cisco AnyConnect VPN to get into remote systems, but unfortunately AnyConnect does not work very well with Windows 8. My first solution was to install Windows XP as a virtual machine to get a lightweight solution.

Windows XP with AnyConnect works just fine and I can browse from within my VM.

My next issue was that I wanted to use SharePoint Designer 2013. This cannot be installed on Windows XP. You need Windows 7 or newer. Bummer :(

Instead of installing a Win7 VM I went for this solution.

  1. VMWare machine running Windows XP with freeSSHd installed
  2. Connect to remote system from XP using AnyConnect
  3. On native client (Windows 8) use putty to ssh into the VM with a socks proxy on port 8080
  4. Configure IE to use localhost:8080 as a SOCKS proxy
  5. Use SharePoint designer in my native Windows 8 which will use IE proxy settings

Communication goes like this:

SharePoint Designer (native) –> ssh proxy to XP (native) –> Cisco Any Connect (in VM) –>Virtual network adapter back via host machine –> Remote system

Cumbersome but it works.. and maybe, just maybe AnyConnect will work perfect on Windows 8 one day.

Monday, June 3, 2013

Enabling PDF previews in your libraries

[Edit: I totally missed, so this post is almost a dupe - sorry about that]

You installed the March CU for Office Web Apps 2013 (WAC) which supports rendering PDF files, and followed Wictor Wilén’s post on how to enable PDF previews for your search results (and perhaps his post on installing the CU as well).

Then you read the comments and saw that others like yourself want to get these PDF previews for document libraries and not only on the search result page.
Want to know how? Read on!
Enabling PDF preview in a SharePoint Library

Sunday, June 2, 2013

How to: Change the default sort of a Search Result Web Part

When using the Content Search web part you can modify the query in the query builder. By default it lists recently changed items but often you want to change the query using advanced mode.


When clicking advanced mode, the default sorting is set to Rank, and you can click the sorting tab, choose LastModifiedTime in descending order, and you end up with the latest items with the filter you decide to add.

So far so good. Say you have either developed display templates for your search page, or want to do content by search in Office 365, and you want to use the Search Results web part instead which features more or less the same capabilities as the Content Search web part.

The procedure is almost the same. Drop a Search Results web part on the page and enter the query builder. In my case I want to limit the results to one particular document library. I remove the {searchboxquery} token and enter:

(contentclass:STS_ListItem OR IsDocument:True) path:

Further clicking “Test query” lists all results sorted by rank. I click the sorting tab, choose LastModifiedTime and the test results show as expected, sorted by the newest items on top.


Next I click OK, save the web part settings, save the page, and… the results are still sorted by rank!

There is however a workaround. Instead of changing the sorting using the query builder, you have to change the UI sorting options for the web part.

When editing the web part, expand the Settings section. Check the “Show sort dropdown” option and replace the content with only one value, the one sorting my newest items on top. After you have modified the JSON value, click “Apply” to save the web part changes. Next uncheck the option again, and save the web part changes. The sorting now sticks, but you’re not showing the sort dropdown.


Don’t know if this is a bug or by intention, but it sure bugged me for a couple of hours.

Sunday, May 5, 2013

My slides from SharePoint Saturday Belgium

Here's my slides from my session "Enrich People Search with OData in SharePoint 2013 and search as a governance tool" held at SharePoint Saturday in Brussels, April 27th, 2013.

The session is sort of dev oriented, but not really :)

Thursday, May 2, 2013

Viewing Excel files larger than 10mb using Office Web Apps in SharePoint 2013

I’m currently working on some code using the Excel REST API against SharePoint 2013. If you’re not familiar with it, the endpoint can be found at http://server/_vti_bin/ExcelRest.aspx.

If you try to browse the model of a spreadsheet you will hit the same limit as file preview does: you cannot preview a spreadsheet larger than 10MB. The question is, how do you change this?

Several posts talk about setting this via Excel Services in Central Admin, but that is not how to do it. You can perfectly well use the REST API and preview files using OWA without Excel Services installed. Instead, you change it on the OWA server using PowerShell. To increase the limit from 10MB to 50MB, issue the command below followed by an iisreset and you’re good to go!

Set-OfficeWebAppsFarm -ExcelWorkbookSizeMax 50

Monday, April 29, 2013

Taxonomy Filter Web Part for SharePoint

I just published my Codeplex project for a taxonomy filter web part. The web part works on XsltListViews and provides a hierarchical term picker to filter against taxonomy columns.

It’s currently a farm solution for SharePoint 2010, but a version for 2013 (with 365 support) is in the plans. The project can be found at:

Hope it’s useful!

Monday, April 22, 2013

Limiting search results in SharePoint 2013 (aka scopes in 2010)

A typical scenario in SharePoint is to limit the results on a search page to specific content. In SharePoint 2010 this was achieved by creating Search Scopes, and then adding include and exclude rules to the scope. Then you could configure the Core Results Web Part to display results from that scope only.

SharePoint 2013 however introduces the concept of Result Sources instead of scopes. As a sample I will limit results to show items from one particular site.

You can create Result Sources on three different levels: Search Service Application, Site collection and Site. This means you can define Result Sources in a very granular way, depending on where you want to use them. But bear in mind that having configurations spread all across your sites increases complexity of maintenance. In this sample I will create my Result Source at the SSA level.

Thursday, April 18, 2013

Rank models in 2013–Main differences from 2010

Disclaimer: I’m by no means a math expert and my statements below might not be 100% accurate, but I try my best. Also, be careful when tuning the rank profile as changing numbers can have a big effect on your ranking.

With the new FAST search core, ranking has changed quite a lot from 2010. Newly published content on MSDN explains a bit more how rank is calculated and how you can change it.
As the O14 rank model is available in SharePoint 2013 (O15), I will try to outline some of the major differences you can expect to see regarding how results are ranked/sorted by default.
You can pull out the rank model xml yourself from both models using PowerShell.

$ssa = Get-SPEnterpriseSearchServiceApplication
$owner = Get-SPenterpriseSearchOwner -Level ssa

$o15 = Get-SPEnterpriseSearchRankingModel -SearchApplication $ssa -Owner $owner -Identity 8f6fd0bc-06f9-43cf-bbab-08c377e083f4
$o15.RankingModelXML > o15.xml

$o14 = Get-SPEnterpriseSearchRankingModel -SearchApplication $ssa -Owner $owner -Identity 9399df62-f089-4033-bdc5-a7ea22936e8e
$o14.RankingModelXML > o14.xml

Then it’s all a matter of comparing the models.

SharePoint Saturday Oslo–June 1st 2013

The year 2013 is well upon us, and the powers that be have decided to organize Oslo and Norway’s first SharePoint Saturday on June 1st.

We’re all super-psyched and hope to put on a varied and packed day with interesting sessions for everyone attending.

And it’s all FREE! :)

Interested in attending or speaking? Head over to

Wednesday, April 17, 2013

How to enable page previews in SharePoint 2013 for content not on the Search Center host domain

The solution

  • Edit Item_WebPage_HoverPanel.html
  • Add <WebPartPages:AllowFraming runat="server" /> to your master page
Beware: This solution opens up for click-jacking, but should not be a real threat in an intranet scenario.

The journey

Ok, you start off your new SharePoint 2013 and try to be smart regarding web applications, site collections and domain names.
The full 2013 solution has one intranet and collaboration part and one part for a QMS system. Early on you agree to use the following domain structure which should make sense to the end-user.
Everyone is happy and after a month or so the issue of search comes up. To align with the existing structure you go with:

Wednesday, March 27, 2013

Following or favorite pages in SharePoint 2013 using JavaScript

Recently I was tasked with a project requirement that the user should be able to add pages to a favorite list. To me this sounds much like using the new following feature of SharePoint 2013 which allows you to follow documents, sites, people and tags. And the difference between a document and a page in SharePoint is more semantic than technical. Also, we don’t want the user to navigate to the pages library in order to follow the page, but do it with a link/icon on the page itself. That said, the hover panel “Follow” link does not appear by default for Pages libraries either :)

Instead of creating a custom list to hold a users favorites we decided to use the built in social following functionality using JavaScript CSOM.

So, how do you go about following a page? There are at least a couple of ways. One, write your own custom CSOM script, or two, tap into what SharePoint uses when you click “Follow” on a document or a site.
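For the first option, the REST flavor of the social API is the easiest to sketch. Following a page means POSTing a SocialActorInfo payload to /_api/social.following/follow with ActorType 1 (document, which also covers publishing pages). The buildFollowRequest helper name is mine; treat the payload shape as a sketch to verify against your environment:

```javascript
// Build the request needed to follow a page via the SP2013 REST social API.
// POST request.body as JSON (odata=verbose) to request.url.
function buildFollowRequest(webUrl, pageUrl) {
    return {
        url: webUrl + "/_api/social.following/follow",
        body: {
            actor: {
                __metadata: { type: "SP.Social.SocialActorInfo" },
                ActorType: 1,    // 0 = user, 1 = document, 2 = site, 3 = tag
                ContentUri: pageUrl,
                Id: null
            }
        }
    };
}

var followReq = buildFollowRequest("http://server/site",
                                   "http://server/site/Pages/madcow.aspx");
```

Unfollowing is the same payload POSTed to /_api/social.following/stopfollowing.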

Tuesday, March 26, 2013

An easy way to accomplish Home navigation links regardless of the path of your site collection

Most of my team mates deploy project solutions to http://project.server.local/ which means all links used in the solution can be prefixed with “/” and everything is related to the root path and works just fine.

I’m different and use http://server.local/sites/project. And of course all links going to root for me will go to http://server.local instead of http://server.local/sites/project.

The solution, at least for global nav in master pages etc. is to make use of the variable _spPageContextInfo, blogged by numerous people. This object holds several properties about the current page/item you are viewing (see _spPageContextInfo is your new best friend by Sahil Malik for a complete property listing).

The code below is a sample of a hyperlink on an image to the front page of the current site collection.
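A minimal sketch of such a link, using the siteServerRelativeUrl property (the stubbed _spPageContextInfo and the image path are only there to make the snippet self-contained):

```javascript
// _spPageContextInfo is provided by SharePoint on every page; stub it here so
// the snippet runs standalone.
var _spPageContextInfo = _spPageContextInfo || { siteServerRelativeUrl: "/sites/project" };

function homeLink(imageSrc) {
    // siteServerRelativeUrl is "/" for root site collections and
    // "/sites/project" for path-based ones, so the link always resolves correctly
    return '<a href="' + _spPageContextInfo.siteServerRelativeUrl + '">' +
           '<img src="' + imageSrc + '" alt="Home" /></a>';
}
```

Drop the returned markup into your master page’s global navigation and the home link works no matter where the site collection lives.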

Saturday, March 9, 2013

Search Query Suggestions for anonymous users in SharePoint 2013–and with security trimming

Query suggestions have improved in SharePoint 2013 and in addition to showing popular queries it can also show people matching your query or popular queries you have executed yourself in the past. But for anonymous users this is not the case.


Out of the box, query suggestions will not work for anonymous users as Waldek Mastykarz wrote about back in December 2012. Waldek actually contacted me before writing his post and as I commented on his post, this is due to some hardcoding in the internal code logic in Microsoft.Office.Server.Search.Query.Query

if (SearchCommon.IsUserAnonymous)
    return new QuerySuggestionResults(new QuerySuggestionQuery[0], new PersonalResultSuggestion[0], new string[0]);

Tuesday, February 5, 2013

Ipad 2 stuck in landscape mode after update to iOS6.1

[Edit - New Solution]
The issue came back, but I searched the web some more and fixed it. Seems it was a hardware issue with a stuck gyro. I set the iPad upright on a table with the screen showing upside down, lifted it, and banged it semi-hard against the table, and voila, the screen rotated back to normal.

[Original Post]
Yes, I have an iPad, and it works just fine. Basically it’s my son’s home entertainment system.
Not sure if the iPad got stuck in landscape mode and wouldn’t rotate the screen right after the update or later… but solving it was “easy” once I figured out how.
There are a lot of posts on how to do this by resetting the settings and hard booting, but none of them solved it exactly for me.
This is how I did it (works on my machine ™)
  1. Set the side switch to act as a lock screen orientation button
  2. Lock the screen orientation and make sure the pad lock symbol is showing
  3. Hold the iPad in portrait mode
  4. Do a hard reset pressing the home button and power switch at the same time for 10 seconds, release both when the apple appears and ignore any turn off slider screen
  5. The iPad should now reboot in portrait mode
  6. Toggle off the side switch, and the padlock icon should disappear
  7. Twirl your iPad and the screen orientation should again work

Importing terms with Ampersand (&) to Managed Metadata

When working on populating the term store in SharePoint 2013 with some data from a CSV file via PowerShell I encountered a weird issue with ampersands.

Importing works just fine, but when running the script the second time where it should not create nodes already existing, it still tried to re-add all terms with an ampersand in them.

The post SharePoint 2010 Managed Metadata Import Ampersand by David Winchurch explains that ampersands get Unicode-encoded during import, so comparing with the non-encoded version will fail.

Fortunately there is a helper method TaxonomyItem.NormalizeName which will trim consecutive spaces to one as well as encode ampersands, and it works equally well in 2013 as in 2010.
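For illustration, here is roughly what that normalization does, re-implemented in JavaScript: runs of spaces collapse to one, and the regular ampersand (U+0026) is swapped for the full-width ampersand (U+FF06) that the term store uses internally. Treat this as a sketch of the behavior, not a replacement for calling NormalizeName server-side.

```javascript
// Approximate TaxonomyItem.NormalizeName: collapse consecutive spaces and
// replace the standard ampersand with the full-width one used by the term store.
function normalizeTermName(name) {
    return name.replace(/ +/g, " ").replace(/&/g, "\uFF06");
}
```

Comparing normalizeTermName of your CSV values against the term names returned from the term store then matches as expected, and the script no longer re-adds existing terms.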

You gotta love all the special quirks in SharePoint Open-mouthed smile

Monday, February 4, 2013

Issue with creating a copy of seattle.master in 2013 using SharePoint Designer

I’m working on a project and wanted to prototype my master page by copying seattle.master to a custom one.

I fired up SPD, located the Master Pages folder, selected seattle.master, right-clicked, chose copy and then paste. This gave me a file named seattle_copy(1).master. The next step is to rename it, and that’s when you get a nice error message:
“Server error: This file may not be moved, deleted, renamed, or otherwise edited”.
This message is typically presented when you have files attached to the master page, which I don’t, as I just made a copy. So I can’t rename it and I can’t delete it. The old 2010 trick of moving it to a folder doesn’t work either.

Wednesday, January 30, 2013

Continuous crawl - What is it, and what is it not

There's a lot of confusion around the new "Continuous crawl" mode in SharePoint 2013. It took me a while to decipher what it was myself, and reading the documentation on TechNet is not too helpful.

Let's break it down!

Continuous crawl is

  • only for SharePoint content
  • running non-blocking incremental crawls at 15 minute intervals (can be changed using PowerShell)
Continuous crawl is not
  • event based push indexing
If you run scheduled incremental/full crawls as in 2010, then each crawl is blocking. This means that if a crawl run takes longer than the interval set, then the next crawl will have to wait until the running one finishes.

When you enable continuous crawl, a new incremental crawl will start regardless of any running crawls (it will still obey crawler impact rules).

The best example to illustrate the advantage of continuous crawls is if you start a full crawl of lots and lots of content which takes weeks to complete. During those weeks, all new content changes will be backed up until the running crawl completes. Using continuous crawl mode, it will still take weeks to process all the initial content, but any change happening during indexing will be picked up by new incremental crawls.

Result: New content is made searchable very fast regardless of other long crawls!

An excellent in-depth writeup on the topic can be found at the SharePoint IT Pro Blog, and is worth the read.

Wednesday, January 9, 2013

How to get thumbs to work with FS4SP when using Claims security in SharePoint

As stated in the Microsoft support articles KB2554903 and KB2641517, document thumbnails with FS4SP and Claims based Authentication are not supported. There are also numerous threads about this on the Microsoft FS4SP forum.

I recently experienced this myself in a project and decided to fix it, because the fix is not really that hard. The issue is that when your browser calls http://server/_vti_bin/WACProxy.ashx, it receives a 401 error because the proxy does not handle claims.

A WSP for this solution can be downloaded from Codeplex. Note that the WSP will overwrite the existing WACProxy.ashx file, so you might want to create a copy of it first.

What I did was create a wrapper which calls the WACProxy running under elevated privileges instead, and switched out the existing WACProxy.ashx file with one pointing to my assembly.

using System.Web;
using Microsoft.SharePoint;

namespace mAdcOW.SharePoint
{
    public class WACProxy : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Run the out-of-the-box WACProxy with elevated privileges to get
            // around the claims 401 error
            SPSecurity.RunWithElevatedPrivileges(delegate
            {
                var proxy = new Microsoft.Office.Server.Search.Extended.Query.Internal.UI.WACProxy();
                proxy.ProcessRequest(context);
            });
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }
}
You might think it’s very bad running this in an elevated security context, and that it might create a security hole. But it won’t. What the WACProxy does is send back script which points to, for example, http://server/library/_layouts/MobilePageHandler.ashx. This call is then executed by your browser using your logged-in credentials. This means if you don’t have access to the document, you can’t generate a thumbnail for it either.

So we are merely running the call to generate the proper thumbnail URL in an elevated context to get around the claims error.

If you do not want to overwrite or replace the current WACProxy.ashx file, I have a web part you can drop on the search page which will redirect calls from WACProxy.ashx to your own YourWACProxy.ashx file instead. I will commit this to the Codeplex project at a later time.

Friday, January 4, 2013

Searching All items in Outlook 2013

I updated to Office 2013 not long ago, and when I search for old items they do not appear; instead, at the bottom of the search result it displays:

Showing recent results...


First I thought this was some setting with the indexing in Windows 8, but Outlook was selected as a source. Then I stumbled upon the support article “Only a subset of your Exchange mailbox items are synchronized in Outlook 2013” which solved my issue. The setting also applies to Calendar, Contacts, Tasks, Journal and Notes items.

By default, Outlook 2013 using Cached Exchange Mode only synchronizes the past 12 months. Changing this to “All” fixed the issue, and I can now search all my items offline, quickly.


The above support article lists the steps on how to change how much to cache locally.