Leveraging SharePoint's OOB Taxonomy Picker

Thursday, June 30, 2016

In this post I'll go over how to instantiate the OOB SharePoint Taxonomy Picker using JavaScript.
Sometimes jslink just can't do the job, and we need to build a completely custom form with JavaScript and the REST APIs.  It's not too bad, but Managed Metadata fields are definitely a pain to handle.

I'm not a huge fan of the OOB Taxonomy Picker UI.  If you have the time and resources, it's worth considering rolling your own taxonomy control.  But for efficiency or consistency's sake, we can reuse the OOB picker with just a little bit of code.

Credit for the core of this code goes to this blog post.  There are a few other posts out on the interwebs with similar code.  However, I could not get any of them to work consistently.  Many didn't load dependencies properly, or relied on resources that might not be loaded, depending on your SharePoint version and the type of page you're on.

So, I've bundled everything into an easy to consume wrapper, and made it as robust as possible by explicitly loading every required resource.  This has been tested on an OOB Publishing Page in both O365 and SP 2013 on prem.  It requires jQuery to be loaded.

Helper Functions

First, drop the code below anywhere on the page.  It looks hairy, but is actually fairly straightforward.

When called, it inserts a DIV onto the page to hold the taxonomy control.  It also adds a hidden textbox to hold the GUIDs of the selected terms.

Next, it uses JSOM to retrieve the ID of the current site's default Term Store.  Finally, it wires up some properties on the textbox, and registers an event which will cause SharePoint to transform the textbox into a taxonomy picker when the page loads.

var taxonomyPickerHelper = taxonomyPickerHelper || {
  initPicker: function (containerId, termSetId) {
    // Create empty picker template and hidden input field
    var pickerContainerId = containerId + '_picker';
    var pickerInputId = containerId + '_input';

    var html = '<input name="' + pickerInputId + '" type="hidden" id="';
    html += pickerInputId + '" />';
    html += '<div id="' + pickerContainerId;
    html += '" class="ms-taxonomy ms-taxonomy-height ms-taxonomy-width"></div>';

    jQuery('#' + containerId).html(html);

    // Get Termstore ID and init control
    taxonomyPickerHelper.getTermStoreId().then(function (sspId) {
      taxonomyPickerHelper.initPickerControl(sspId, termSetId, 
        pickerContainerId, pickerInputId);
    });
  },

  getSelectedValue: function (containerId) {
    return jQuery('#' + containerId + '_input input').val();
  },

  getTermStoreId: function () {
    var deferred = jQuery.Deferred();

    var context = new SP.ClientContext.get_current();
    var session = SP.Taxonomy.TaxonomySession.getTaxonomySession(context);
    var termStore = session.getDefaultSiteCollectionTermStore();

    context.load(session);
    context.load(termStore);

    context.executeQueryAsync(
      function () {
        var sspId = termStore.get_id().toString();
        deferred.resolve(sspId);
      },
      function () {
        deferred.reject("Unable to access Managed Metadata Service");
      }
    );

    return deferred.promise();
  },

  initPickerControl: function (sspId, termSetId, 
                               pickerContainerId, pickerInputId) {
    var tagUI = document.getElementById(pickerContainerId);
    if (tagUI) {
      tagUI['InputFieldId'] = pickerInputId;
      tagUI['SspId'] = sspId;
      tagUI['TermSetId'] = termSetId;
      tagUI['AnchorId'] = '00000000-0000-0000-0000-000000000000';
      tagUI['IsMulti'] = true;
      tagUI['AllowFillIn'] = false;
      tagUI['IsSpanTermSets'] = false;
      tagUI['IsSpanTermStores'] = false;
      tagUI['IsIgnoreFormatting'] = false;
      tagUI['IsIncludeDeprecated'] = false;
      tagUI['IsIncludeUnavailable'] = false;
      tagUI['IsIncludeTermSetName'] = false;
      tagUI['IsAddTerms'] = false;
      tagUI['IsIncludePathData'] = false;
      tagUI['IsUseCommaAsDelimiter'] = false;
      tagUI['Disable'] = false;
      tagUI['ExcludeKeyword'] = false;
      tagUI['JavascriptOnValidation'] = "";
      tagUI['DisplayPickerButton'] = true;
      tagUI['Lcid'] = 1033;
      tagUI['FieldName'] = '';
      tagUI['FieldId'] = '00000000-0000-0000-0000-000000000000';
      tagUI['WebServiceUrl'] = _spPageContextInfo.webServerRelativeUrl + '\u002f_vti_bin\u002fTaxonomyInternalService.json';

      SP.SOD.executeFunc('ScriptForWebTaggingUI.js', 
        'Microsoft.SharePoint.Taxonomy.ScriptForWebTaggingUI.taggingLoad',
        function () {
          Microsoft.SharePoint.Taxonomy.ScriptForWebTaggingUI.resetEventsRegistered();
        }
      );

      SP.SOD.executeFunc('ScriptForWebTaggingUI.js', 
        'Microsoft.SharePoint.Taxonomy.ScriptForWebTaggingUI.onLoad',
        function () {
          Microsoft.SharePoint.Taxonomy.ScriptForWebTaggingUI.onLoad(pickerContainerId);
        });
    }
  }
};

CSS

Next, include this CSS file to style the taxonomy picker:

<link rel="stylesheet" type="text/css" 
      href="_layouts/15/1033/styles/WebTaggingUI.css" />

Container DIV

Add a DIV to hold the taxonomy picker:

<div id="my-taxonomypicker">
</div>

Load Dependencies and Initialize

Finally, add the JavaScript below to initialize the picker.  This bad boy makes sure all the necessary dependencies are loaded, then initializes the picker with taxonomyPickerHelper.initPicker.  It takes two arguments: the ID of the container to place the picker in, and the GUID of the term set to load.

Note: You must plug in your term set's GUID where <TERM SET GUID> appears in the snippet.

Note 2: ms.rte.js is a required dependency on O365, but does not exist in 2013 on prem.  For O365, be sure to uncomment the lines marked "UNCOMMENT THIS FOR O365".

jQuery(document).ready(function () {
  SP.SOD.loadMultiple(['sp.js'], function () {
    SP.SOD.registerSod('sp.taxonomy.js', 
        SP.Utilities.Utility.getLayoutsPageUrl('sp.taxonomy.js'));
    SP.SOD.registerSod('scriptforwebtaggingui.js', 
        SP.Utilities.Utility.getLayoutsPageUrl('scriptforwebtaggingui.js'));
    SP.SOD.registerSod('sp.ui.rte.js', 
        SP.Utilities.Utility.getLayoutsPageUrl('sp.ui.rte.js'));
    SP.SOD.registerSod('scriptresources.resx', 
        SP.Utilities.Utility.getLayoutsPageUrl('ScriptResx.ashx?culture=en-us&name=ScriptResources'));

    // UNCOMMENT THIS FOR O365
    // SP.SOD.registerSod('ms.rte.js', 
    //     SP.Utilities.Utility.getLayoutsPageUrl('ms.rte.js'));

    // UNCOMMENT THIS FOR O365
    // SP.SOD.loadMultiple(['ms.rte.js'], function () {
      SP.SOD.loadMultiple(['sp.taxonomy.js', 'sp.ui.rte.js', 
                           'scriptresources.resx'], function () {

        taxonomyPickerHelper.initPicker('my-taxonomypicker', '<TERM SET GUID>');

      });
    // });
  });
});

Wrap Up

To get the selected terms, call:

taxonomyPickerHelper.getSelectedValue('my-taxonomypicker')
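The raw value is the picker's serialized term data.  In my experience the hidden input stores terms as label|GUID pairs separated by semicolons, though I haven't seen that format officially documented.  Here's a small sketch to split it into usable objects (parseSelectedTerms is just an illustrative helper, not part of the wrapper above):

```javascript
// Sketch: split the picker's raw value into { label, guid } objects.
// Assumes the hidden input holds "Label|GUID;Label|GUID" -- verify
// against your own environment before relying on it.
function parseSelectedTerms(rawValue) {
  if (!rawValue) {
    return [];
  }
  return rawValue
    .split(';')
    .filter(function (pair) { return pair.length > 0; })
    .map(function (pair) {
      var parts = pair.split('|');
      return { label: parts[0], guid: parts[1] };
    });
}
```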

The full sample code is available for download here.


Search Display Templates Made Easy w/ Knockout

Thursday, June 30, 2016

I first started using Knockout JS in display templates for a solution that required complex interactive search results.  But I found that Knockout made working with display templates SO much easier in general, it's now my go-to solution for building any custom template!

The inline JavaScript / HTML mashup syntax that display templates use turns the simplest requirements into a giant, unreadable mess of <!--#_ _#--> and _#= =#_ tags.

Incorporating Knockout enables us to:

  • Write clean markup with easy-to-read databinding
  • Open the door to more complex interactive solutions

Why Knockout JS, specifically?  Mostly because it's what I'm most familiar with.  I'm sure other frameworks like ReactJS or AngularJS could be used to similar benefit.  I like Knockout because it's trivially simple to set up and get started with.  For simple templates, the data-binding feature is all we'll need.  But having Knockout also opens the door to dependency-tracking, two-way data binding, event handling, custom components, and other cool stuff down the road.

Include KnockoutJS

First, we need to include the KnockoutJS framework on the site.  It is a single, standalone JavaScript file which you can download from http://knockoutjs.com.

Upload it somewhere on SharePoint (e.g. the Master Page Gallery or Site Assets), then add a reference to it in the Master Page:

<!--MS:<SharePoint:ScriptLink language="javascript" name="~SiteCollection/_catalogs/masterpage/js/knockout-3.4.0.js" OnDemand="false" runat="server" Localizable="false">-->
<!--ME:</SharePoint:ScriptLink>-->

Display Template Markup

Full code example is at the end of this article.  Here I'll break down the template and explain what each part does:

Pre-Render JavaScript

This block, written in the special inline JavaScript/HTML syntax, runs before the markup is rendered.  Here, we simply save a reference to the current search result item and generate a unique ID.

Markup


This is where the HTML markup and Knockout data-bindings for the current item are defined.  This markup will be emitted to the search results page.

Note where we plug the unique container ID generated above into the markup with _#= containerId =#_.  This is the only place where we need to use the _#= =#_ syntax, as the ID has to be inserted before applying the Knockout data-bindings.

The rest is arbitrary HTML and Knockout bindings!  Go crazy!  For those unfamiliar with Knockout, $root refers to the root data model object that the template is bound to.  Here, we'll bind the template to the search result item, ctx.CurrentItem.  So, any property available on the item can be used in the template.  Remember that only Managed Properties that have been called out in ManagedPropertyMapping (in the template header) are available.

Post Render - Apply Data-bindings


Finally, we register a Post Render callback method which executes after the template markup has been emitted.

We tell Knockout to apply the data-bindings by calling ko.applyBindings.  We pass it the search result item to map from ($root in the markup bindings), as well as the HTML element containing the template to bind to.
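To demystify what's happening: the text binding simply copies a model property into the bound element.  Here's a stripped-down, framework-free sketch of that idea (illustrative only -- this is not Knockout's actual implementation; applyTextBindings and its arguments are made up for this example):

```javascript
// Toy version of Knockout's "text" binding: for each binding of the
// form "text: $root.SomeProperty", copy that property off the model.
// Returns a map of element id -> rendered text.
function applyTextBindings(model, bindings) {
  var rendered = {};
  Object.keys(bindings).forEach(function (elementId) {
    // e.g. bindings['titleSpan'] === 'Title', mirroring "text: $root.Title"
    var propertyName = bindings[elementId];
    rendered[elementId] = String(model[propertyName]);
  });
  return rendered;
}
```

Knockout does this (and much more) for us, walking the real DOM and keeping it in sync with the model.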


<html xmlns:mso="urn:schemas-microsoft-com:office:office" xmlns:msdt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882">
<head>
  <title>My Custom Search Result Item</title>

  <!--[if gte mso 9]><xml>
  <mso:CustomDocumentProperties>
  <mso:TemplateHidden msdt:dt="string">0</mso:TemplateHidden>
  <mso:ManagedPropertyMapping msdt:dt="string">'Title','Path','Author'</mso:ManagedPropertyMapping>
  <mso:MasterPageDescription msdt:dt="string"></mso:MasterPageDescription>
  <mso:ContentTypeId msdt:dt="string">0x0101002039C03B61C64EC4A04F5361F385106603</mso:ContentTypeId>
  <mso:TargetControlType msdt:dt="string">;#SearchResults;#</mso:TargetControlType>
  <mso:HtmlDesignAssociated msdt:dt="string">1</mso:HtmlDesignAssociated>
  </mso:CustomDocumentProperties>
  </xml><![endif]-->
</head>
<body>
  <div id="item">
    <!--#_
    var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId());
    var containerId = "MyCustomItem_" + encodedId;
    var currentItem = ctx.CurrentItem;
    _#-->

    <div id="_#=containerId=#_">
      <div>
        <span><b>Title:</b></span>
        <span data-bind="text: $root.Title"></span>
      </div>
      <div>
        <span><b>Path:</b></span>
        <span data-bind="text: $root.Path"></span>
      </div>
    </div>

    <!--#_
    AddPostRenderCallback(ctx, function()
    {
      ko.applyBindings(currentItem, document.getElementById(containerId));
    });
    _#-->
  </div>
</body>
</html>


SharePoint Search Box for Explore Experience

Thursday, June 30, 2016

This post will focus on some issues that make SharePoint's Search Box webpart clumsy when used in an Explore / Discovery scenario, and how to address them:

  • Retaining active refiners between searches
  • Clearing the search keyword
  • Building a custom client-side Search Box

Empty Keyword

By default, when you hit Enter on an empty Search Box, nothing happens.  Why would anyone search for the empty string anyway?  But this actually comes up a lot in the Explore experience, because users are likely to start with the refiners.  Consider this flow:

  1. I land on the Explore page
  2. I browse by refining on Location = Seattle and Subject = Expenses
  3. I search for the keyword "reports", but don't find anything of interest
  4. Now, I want to go back to #2 and browse the rest of Seattle/Expense items

Without the ability to search for the empty string (or a "clear" button), the only way to do #4 would be to reload the page and re-apply the Seattle/Expense filters.

Luckily, the Search Box webpart supports empty string search-- it's just disabled by default.  It's controlled by the AllowEmptySearch property, which AFAIK cannot be managed from the UI.  To set this property, export your Search Box webpart and edit the .webpart file in a text editor:
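The exact markup varies a bit by SharePoint version, but in the exported .webpart file the entry should look something like this (the surrounding elements come from your own export):

```xml
<property name="AllowEmptySearch" type="bool">True</property>
```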

Import it back to the page, and you are good to go!

Preserving Selected Refiners

A bigger problem is that when you search for a new keyword, all refiners are reset.  Consider this use case:

  1. Filter on Location = Seattle, Subject = Expenses
  2. Try a bunch of different keywords within those categories

Since the Search Box clears existing refinements on submit, I'd have to reapply Seattle/Expenses refiners after every new keyword.  So, this is not usable at all.

We can partially solve this by setting the MaintainQueryState property to True (False by default).  Again, this is not available from the UI.  You have to export and re-import the webpart:
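As with AllowEmptySearch, find the property in the exported .webpart file and flip the value (again, exact markup may vary slightly by version):

```xml
<property name="MaintainQueryState" type="bool">True</property>
```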

This retains selected refiners between searches, but introduces a new problem.  It maintains ALL query state, including paging, which gives unexpected results in this specific scenario:

  1. Search for keyword A
  2. Go to next page of results.  The query state is now on page 2.
  3. Search for keyword B
  4. If keyword B has only one page of results, you will see NO results because the query state is still on page 2.  Worse, the paging controls aren't available because it thinks there are no results at all!

What we really want is to retain selected refiners, but start back at page 1.  I cannot figure out any way to achieve this with the OOB Search Box webpart (I'd love to hear if anyone else has).  Which brings me to the more interesting part of this article...

Custom Javascript Search Box

The goal is to have a textbox that does the following on every submit:

  • Search for entered keyword
  • Retain selected refiners
  • Start at page 1
  • Accepts empty keyword (clear keyword)

This can easily be achieved with just Javascript and a Content Editor Webpart.  This custom JS approach also opens the door to other custom behaviors.  Go crazy.  The sky's the limit!

To be clear, the OOB Search Box comes with other goodies like query suggestions and scope selection.  However, those often are not needed on an Explore page.  I feel it's more important to perfect the basic filter/search use cases. 

You can drop the following markup into a Content Editor Webpart, which does the following:

  1. Create a text box (line 1)
  2. Attach an event handler (keyup) for the Enter key (line 5)
  3. Get a reference to the search DataProvider on the page (lines 8-9)
  4. Set query state's keyword to value from textbox, and paging to first page (lines 13-14)
  5. Trigger a query (line 16)
  6. If there's a Search Results or Content Search webpart on the page, it should automatically pick up the new results.

<input id="search-box" type="text" placeholder="Search..." />

<script type="text/javascript">
  jQuery(document).ready(function () {
    jQuery('#search-box').keyup(function (event) {
      if (event.keyCode == 13) { // Enter key
        if (Srch.ScriptApplicationManager) {
          var appManager = Srch.ScriptApplicationManager.get_current();
          var provider = appManager.queryGroups['Default'].dataProvider;

          var keyword = jQuery('#search-box').val();

          provider.get_currentQueryState().s = 0; // page = 0
          provider.get_currentQueryState().k = keyword; // new keyword

          provider.issueQuery(); // execute search
        }

        return false;
      }
    });
  });
</script>


SharePoint Search vs Explore Experience

Tuesday, June 28, 2016

I stare deep into the empty search box, and the cursor blinks back at me-- a mocking, existential question mark.  What the heck do I type in? Does it matter? Does anything matter? I iterate pseudo-random combinations of keywords, hoping to discover that one serendipitous, magical arrangement. Despair sets in.

The SharePoint Search experience (search results page) is often used as a one-size-fits-all solution, even when users really need Discovery.  Or both.  There are small, yet very important differences between the two.  More and more, I'm building portals with a separate search-based Explore page, alongside the usual Search Results via the global search box.

In the (in)famous words of Rumsfeld, "there are known unknowns... but there are also unknown unknowns". It's the difference between searching for a particular book on Amazon, or just exploring Kindle Books -> Sci Fi. From personal experience, at least 80% of the books on my Kindle came from the latter!

Search helps users quickly get to stuff, when they already have a good idea what they're looking for. The flow is simple: Enter a keyword to search -> Refine down the results -> Profit.

Often though, I have no idea what's out there.  I don't even know where to start.  Discovery, therefore, has to be a loosely guided process. Users likely start by exploring predefined buckets of information using the search refiners, rather than entering a search term right away. When the search box is used, they are likely trying many different keywords to get a sense of what's out there, rather than zeroing in on a particular item.

There are obviously many, many different ways to build a good Discovery experience. It doesn't have to be search-based at all.  But it makes a lot of sense to leverage the power of Search if your items are neatly groomed and tagged with appropriate metadata.

In the next couple posts, I'll focus on some enhancements to make the OOB search results page better suited for exploration.


Peruvian Fiends

Sunday, February 08, 2015

I spent two weeks exploring the areas around Cusco and Machu Picchu in Peru earlier this year.  It was an awesome experience.  As a city dweller, it was humbling and inspiring to witness for myself such sheer scale.  I could not truly grok the weight of the mountains before I stood beneath their shadow, nor the vastness of the earth without looking across its infinite plains.

One memory from the trip, however, sticks out like a longsword in the kidney.  Perhaps writing about it will exorcise my demons.

PSA: DO NOT, UNDER ANY CIRCUMSTANCES, VISIT MACHU PICCHU WEARING SHORTS

I was surprised to see many people at Machu Picchu wearing wide brimmed hats, long sleeve shirts and pants, under the mercilessly blazing sun. Not me, I thought. Hats are for tools...

Unlike the areas around Cusco, Machu Picchu is a rainforest zone.  As such, it's home to sandflies, which at first glance resemble miniature house flies.

Do not be deceived. THEY ARE OF THE DEVIL. They will rape and pillage your body, and leave behind nothing but a shriveled, pain-filled husk.

The bite itself is hardly noticeable, and initially leaves a small red spot, often with a drop of blood in the center.  I counted about 40 bites, but surprisingly, the first two days were not unbearably itchy.

The next two days were a time now known as The Scourging.

Each bite expanded to about an inch across and turned blue, like big bruises.  My legs had become instruments of torture.  Two big lumps that existed for no purpose other than to be the source of terrible, terrible pain.

Beware the flies.

The rest of the trip was amazing.



Search Driven Design in SharePoint 2013

Sunday, February 08, 2015

Search has seen huge improvements in stability and ease of use in SharePoint 2013.  Along with the new Search-based content webparts, it's a complete game changer for content authoring and presentation.

This article is more of an elevator pitch than an in-depth examination of search driven architecture.  If you've been building search based solutions since the SP13 CTP came out, this isn't for you.  I, however, was until very recently a firm believer in the Old Ways.

The idea is simple.  Data is stored in lists and libraries.  We need to retrieve that data and render it.  Before, I'd get the data directly from said lists using the List APIs, be it through server-side, REST, or JSOM.  Now, we can get it from the Search index.  This simple change has vast implications.

Flexibility

Since we're getting our data from the Search index, the location of the actual content no longer matters.  It could be on the same site, on a different site, spread across multiple sites, or even a different farm.  If it's being crawled and indexed, we can serve it up.

This gives us vastly more freedom in terms of site architecture.  For instance, a common design is to create a separate "authoring" private site for content authors.  Then, index its content and present it on a public publishing site.

Extensibility

Search driven design also offers the maximum separation between data and presentation.  SharePoint 2013 comes with a new Content Search Web Part (CSWP), which renders dynamic search results in a way that allows us to completely customize the presentation.  I have yet to write a custom webpart from scratch since I started using this.

The CSWP has two primary moving parts: a search query and a display template.  The search query filter drives the data--what's being retrieved from the search index.

The display template drives the presentation--how to render that data.  All presentation logic is contained in the display template, which is uploaded to the site.  No server side code involved!

So, we can move the source content around without breaking the presentation.  Or conversely, completely change how our content is rendered, simply by switching display templates.

Performance

Search returns data faster than the List APIs.  I'm not stating this as scientific fact.  I haven't done any measurements.  Obviously, it's possible to make the opposite true by putting your search index on a laptop.  But, this is my general observation, having worked with many portals in many different environments.  And it makes sense, as search is designed and optimized to serve up content as quickly as possible.

The Other Shoe

Having said all that, it's not all rainbows and unicorns.  First, there's a delay between when source content is updated and when it gets indexed and reflected in the presentation.  Search would not be a good fit for time-critical scenarios.

Essentially, we're adding a layer between the data and presentation.  All sorts of things could go wrong in between.  Maybe a search Managed Property gets mis-configured.  Maybe someone fat-fingers the crawl schedules.  There's that teeeeny, tiny niggle of worry that maybe your users aren't seeing what you'd think.

Secondly, not everything in SharePoint can be indexed.  Specifically, webparts.  Search would be a bad idea if a lot of your source content lives in Content Editor or Summary Links webparts.  This should never be the case if you build a search driven solution to begin with.  Alas, users are capricious creatures.

All in all, I find the benefits far outweigh the risks in most scenarios.  Try it!


Call Server-side Code from Javascript

Sunday, February 08, 2015

Server-side code and postbacks already seem like remnants of a bygone era.  The next generation of SharePoint developers will likely know no more about them than about punched tape computers.  Yet, there are still scenarios where you just can't avoid server-side operations.

I recently needed a way to asynchronously execute a block of server-side code from client-side Javascript.  It turns out there is an ICallbackEventHandler interface for precisely that purpose!

There's a detailed MSDN article on it, which I did not find very easy to digest.  In this post I'll try to boil it down to the essentials by going over a SharePoint example.

Background

I was building a visual webpart to let users update their Quick Links and other User Profile properties.  I stubbed out my methods, whipped up some JSOM to grab the properties, and threw in a snazzy jQuery drag-and-drop control.

Then, I tried to implement the Save button.  Disaster struck.  User profiles are read-only from all client-side APIs.

The Big Picture

The flow of execution from client to server and back: the client-side save() calls ExecuteServerSide(arg), which fires RaiseCallbackEvent(eventArg) on the server; the server then runs GetCallbackResult(), whose return value is passed back to the client-side ServerSideDone(arg) handler.  I'll go over each piece in detail, but this is how they all fit together.

Client Side

We'll start from the Javascript side, as that is where the user action begins.  ICallbackEventHandler lets us asynchronously fire off a server side method, to which we can pass a single string parameter.

Let's say my Quick Links are stored in <div>s on the page:

<div class="link" data-title="Google" data-url="http://www.google.com" />
<div class="link" data-title="Bing" data-url="http://www.bing.com" />
<div class="link" data-title="MSDN" data-url="http://www.msdn.com" />

We can push this information into a JSON object, serialize it into a string, and pass it server-side to be saved.

Invoke server-side code


At this point, ExecuteServerSide, the method that will invoke our server-side code, isn't defined yet.  We will wire it up later from the code-behind.

function save() {
    var links = [];

    // Create JSON object with link data
    $('.link').each(function () {
        var title = $(this).data('title');
        var url = $(this).data('url');

        links.push({ 'Title': title, 'Url': url });
    });

    // Serialize the object to a string
    var strLinks = JSON.stringify(links);

    // Invoke server-side code.  Will wire up later.
    ExecuteServerSide(strLinks);
}

Completion Handler


Next, let's add the method that will be called when the server-side operation completes.  The server can return a single string back to the page.

function ServerSideDone(arg, context) {
    alert('Message from server: ' + arg);
}

Server Side

Begin by adding the ICallbackEventHandler interface to the webpart's code-behind.  This interface has two methods that need to be implemented: RaiseCallbackEvent and GetCallbackResult.

public partial class QuickLinksEditor : WebPart, ICallbackEventHandler

RaiseCallbackEvent


RaiseCallbackEvent( string eventArg ) is the entry point.  It's what's called by ExecuteServerSide in line #16 of the Javascript above.  eventArg is the string passed from the client side (e.g. the serialized link data).  In most scenarios, you would save this string in a class variable and use it later to perform the server-side operation.

We could parse out the links manually, but it's cleaner to make a simple data model class:

[DataContract]
public class QuickLink
{
    [DataMember]
    public string Title;

    [DataMember]
    public string Url;
}

Then we can take advantage of DataContractJsonSerializer to convert the JSON string to a List of QuickLink objects:

public void RaiseCallbackEvent(string eventArg)
{
    // Instantiate deserializer
    DataContractJsonSerializer serializer =
       new DataContractJsonSerializer(typeof(List<QuickLink>));

    // Deserialize
    MemoryStream stream =
       new MemoryStream(System.Text.ASCIIEncoding.ASCII.GetBytes(eventArg));

    // Save input data to class variable for later processing
    this.Links = (List<QuickLink>)serializer.ReadObject(stream);
}

GetCallbackResult


This method is where we execute the server-side operation.

It returns a string, which is how information gets passed back to the client-side.  That return value is the ServerSideDone method's arg parameter in line #18 in the Javascript code above.

public string GetCallbackResult()
{
    try
    {
        // Get User Profile Manager and update Quick Links
        // Full code at bottom of article
    } 
    catch(Exception ex)
    {
        return ex.Message;
    }

    return "Success!";
}

Wire Up

Finally, we wire up the two client-side functions, ExecuteServerSide and ServerSideDone.  That is done from Page_Load in the code-behind:

protected void Page_Load(object sender, EventArgs e)
{
    ClientScriptManager scriptMgr = Page.ClientScript;

    // Completion handler
    String callbackRef = scriptMgr.GetCallbackEventReference(this, 
        "arg", "ServerSideDone", "");

    // Invoke server-side call
    String callbackScript = 
        "function ExecuteServerSide(arg, context) {" + 
        callbackRef + 
        "; }";

    // Register callback
    scriptMgr.RegisterClientScriptBlock(this.GetType(), 
       "ExecuteServerSide", callbackScript, true);
}

Again, note that line #7 here defines the method signature in line #18 of the Javascript.  And line #11 here defines the method called in line #16 of the Javascript.

That's it!

Complete Code-behind


using System;
using System.ComponentModel;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Collections.Generic;
using System.IO;
using System.Text;

using Microsoft.Office.Server.UserProfiles;
using Microsoft.Office.Server;

using Microsoft.SharePoint;

namespace SharePointificate.QuickLinksEditor
{
    [DataContract]
    public class QuickLink
    {
        [DataMember]
        public string Title;

        [DataMember]
        public string Url;
    }

    [ToolboxItemAttribute(false)]
    public partial class LegacyQuickLinks : WebPart, ICallbackEventHandler
    {
        private List<QuickLink> Links;

        public LegacyQuickLinks()
        {
        }

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            InitializeControl();
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            ClientScriptManager scriptMgr = Page.ClientScript;

            String callbackRef = scriptMgr.GetCallbackEventReference(this, 
                "arg", "ServerSideDone", "");

            String callbackScript = 
                "function ExecuteServerSide(arg, context) {" + 
                callbackRef + 
                "; }";

            scriptMgr.RegisterClientScriptBlock(this.GetType(), 
                "ExecuteServerSide", callbackScript, true);
        }

        public string GetCallbackResult()
        {
            try
            {
                SPServiceContext serviceContext = 
                    SPServiceContext.GetContext(SPContext.Current.Site);

                UserProfileManager userProfileManager = 
                    new UserProfileManager(serviceContext);

                UserProfile currentUser = 
                    userProfileManager.GetUserProfile(true);

                QuickLinkManager quickLinkManager = currentUser.QuickLinks;

                quickLinkManager.DeleteAll();

                foreach (QuickLink link in this.Links)
                {
                    quickLinkManager.Create(link.Title, link.Url, 
                        QuickLinkGroupType.General, null, Privacy.Public);
                }
            } 
            catch(Exception ex)
            {
                return ex.Message;
            }

            return "Success!";
        }

        public void RaiseCallbackEvent(string eventArgument)
        {
            DataContractJsonSerializer serializer = 
                new DataContractJsonSerializer(typeof(List<QuickLink>));

            MemoryStream stream = 
                new MemoryStream(ASCIIEncoding.ASCII.GetBytes(eventArgument));

            this.Links = (List<QuickLink>)serializer.ReadObject(stream);
        }
    }
}


Bulk Load List Items with Powershell

Sunday, February 08, 2015

I constantly tear down and rebuild sites in my dev environment, and re-populating lists with test data is a royal pain.  One day, as CEO, I'll have "people" to take care of this sort of thing.  Alas, I have but my own wits to rely on for now.

The most concise and flexible solution I could find is using Powershell to load the data from a CSV file.  I usually create the list data in Excel, and Save As a .CSV file.  It's a huge time saver.  The first line of the file specifies the display names of the list fields to populate.

For example, this CSV data is for a Links list.  Note the URL field in all caps; the display names are case-sensitive.

Title,URL,Description
SharePointificate,http://sharepointificate.blogspot.com,A very cool blog
Funny Cat Videos,http://funnycatvideos.net,Just click it.
Angry Birds,https://www.angrybirds.com,U mad?

There's an auto-magic Import-CSV Powershell cmdlet that loads a CSV file into a list of objects.  The properties of each object correspond to the display names defined in the file's header line.

The links CSV above would produce a list of 3 objects.  Each object will have a .Title, .URL, and .Description property.  The following code outputs "A very cool blog":

$data = Import-CSV links.csv
Write-Host $data[0].Description

We can then iterate over each object's properties with .psobject.properties, and map their Name / Value to a new SPListItem.

The sample script below only handles field types where we can directly set the text value.  Some complex types like Taxonomy or Multichoice will require special logic.  To handle those cases, check $list.Fields[$csvField.Name].Type.  Refer to this Technet article for code samples for setting every type of SharePoint field.

param(
    [string]$WebUrl = $(Read-Host -prompt "Web Url"),
    [string]$ListName = $(Read-Host -prompt "List Name"),
    [string]$CsvPath = $(Read-Host -prompt "Path to CSV file")
)


# Load SP snapin if needed

$snapin = Get-PSSnapin | Where-Object {$_.Name -eq 'Microsoft.SharePoint.Powershell'}
if ($snapin -eq $null) 
{
    Add-PSSnapin "Microsoft.SharePoint.Powershell"
}

$web = Get-SPWeb $WebUrl
$list = $web.Lists[$ListName]; 

# Load CSV into object

$csvItems = Import-Csv $CsvPath

# Iterate over items to import

foreach($csvItem in $csvItems)
{
    # Create new list item
    $newItem = $list.items.add();
 
    # Iterate over fields to import
    foreach($csvField in $csvItem.psobject.properties)
    {
        if(![string]::IsNullOrEmpty($csvField.Value))
        {
            # Set field value
            $newItem[$csvField.Name] = $csvField.Value;
        }
    }

    $newItem.Update();
}