
Author: Manas Bhardwaj

Task services has failed with an unknown exception

While installing SharePoint 2010 on a clean Windows 2008 R2 box, I received the following error:

Task services has failed with an unknown exception
Exception: System.InvalidOperationException: The specified value for the LocStringId parameter is outside the bounds of this enum.


	at Microsoft.SharePoint.Portal.WebControls.StringResourceManager.ConvertLocStringIdToStringFast(LocStringId lsid)
	at Microsoft.SharePoint.Portal.WebControls.StringResourceManager.GetString(LocStringId lsid)
	at Microsoft.Office.Server.ApplicationRegistry.SharedService.ApplicationRegistryServiceInstance.get_TypeName()
	at Microsoft.SharePoint.PostSetupConfiguration.ServicesTask.InstallServiceInstanceInConfigDB(Boolean provisionTheServiceInstanceToo, String serviceInstanceRegistryKeyName, Object sharepointServiceObject)
	at Microsoft.SharePoint.PostSetupConfiguration.ServicesTask.InstallServiceInstances(Boolean provisionTheServiceInstancesToo, String serviceRegistryKeyName, Object sharepointServiceObject)
	at Microsoft.SharePoint.PostSetupConfiguration.ServicesTask.InstallServices(Boolean provisionTheServicesToo)
	at Microsoft.SharePoint.PostSetupConfiguration.ServicesTask.Run()
	at Microsoft.SharePoint.PostSetupConfiguration.TaskThread.ExecuteTask()

Even after multiple Windows restarts and SharePoint 2010 repair procedures, nothing really helped. What eventually worked was an unexpected trick posted on one of the MSDN forums here.

So, what you need to do is REMOVE the registry keys at the following locations and then re-run the configuration wizard.


[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\14.0\WSS\Services\_Microsoft.Office.Server.ApplicationRegistry.SharedService.ApplicationRegistryService] 

And also,


[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\14.0\WSS\ServiceProxies\Microsoft.Office.Server.ApplicationRegistry.SharedService.ApplicationRegistryServiceProxy]

Happy SharePoint 🙂

Wish you a very happy new year 2015

Time flies, and before you know it, it’s time to start working on your new year resolutions.

I still have a few of them from last year on my to-do list, so there is not much new to add in that area this year.

Again, I wish you a world of happiness now and throughout the seasons of the coming year 2015.

Happy New Year

As usual, Jetpack released the yearly report for this blog. And to be honest, it’s been a great year, especially for my blog. I got to spend some time writing on a range of topics, from technology to management.

Here are some excerpts from the report:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 32,000 times in 2014. If it were a concert at Sydney Opera House, it would take about 12 sold-out performances for that many people to see it.

Views_2014

That’s 148 countries in all! Most visitors came from The United States. India & U.K. were not far behind.

Views_Country_2014

Make or Buy Decision for Software Solutions

(Padillo, et al., 1999) recognised that the make-or-buy decision problem, also known as “sourcing”, “outsourcing”, or “subcontracting”, is among the most pervasive issues confronting modern organizations. (Cáñez, 2000) further added that companies have finite resources and cannot always afford to have all manufacturing technologies in-house. Other authors have also acknowledged that the heightened importance of the make-or-buy decision process results from environmental pressures and the intensifying of global competition.

(Stojanović, et al., 2011) argued that at its core there is a very simple logic: comparing the costs of in-house production and vendor purchasing for different numbers of units, there is an equilibrium quantity Q* separating the cost-effective “make” and “buy” solutions.

Figure 2: Basic economic “make or buy” decision-making model (Männel, 1976)
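To make the Q* idea concrete, here is a minimal sketch (not from the cited authors; the cost figures and the linear cost model are illustrative assumptions): with a fixed set-up cost F and a variable unit cost v for making, versus a unit price p for buying, the two total-cost lines intersect at Q* = F / (p − v).

```python
def break_even_quantity(fixed_make_cost, unit_make_cost, unit_buy_price):
    """Return Q* where total 'make' cost equals total 'buy' cost.

    make(Q) = fixed_make_cost + unit_make_cost * Q
    buy(Q)  = unit_buy_price * Q
    Setting the two equal gives Q* = fixed / (buy - make).
    """
    if unit_buy_price <= unit_make_cost:
        raise ValueError("buying never breaks even against making")
    return fixed_make_cost / (unit_buy_price - unit_make_cost)

# Hypothetical figures: 100,000 fixed tooling cost, 5/unit to make, 25/unit to buy.
q_star = break_even_quantity(100_000, 5, 25)
print(q_star)  # 5000.0
```

Below Q* the fixed cost dominates and buying is cheaper; above Q* the lower variable cost of making wins, which is exactly the equilibrium shown in the basic decision-making model.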

On the topic of make-or-buy decisions, (Probert, 1997) proposed a four-stage methodology that may be implemented by practising managers to resolve this issue. The various stages in his methodology are:

  • Initial business appraisal. This phase addresses issues related to the direction of the business and the customer preferences.
  • Internal/external analysis. This is the heart of the review. Details of the company’s internal performance as well as the competitors’ performance are collected.
  • Generate/evaluate options. Having the information from phases one and two, make-in and buy-out options are analysed.
  • Choose optimal strategy. Considering the different options generated in phase three, the optimal strategy is chosen.

(McIvor & Humphreys, 2000) recorded the key problems encountered by companies in their efforts to formulate an effective make or buy decision.

  • No formal method for evaluating the decision.
  • Inaccurate costing systems.
  • The competitive implications of the decision.

Many theories are used in the literature to further explore make-or-buy decisions. The list includes, but is not limited to, transaction cost theory, network theory, competency theory, the resource-based view, and total cost of ownership. These topics are explored further in the subsequent chapters.

Transaction Cost Theory

Transaction cost economics (TCE) has a long history, since what we generally speak of as ‘transaction costs’ have been present in economic discourse for centuries. The history of TCE is rich in metaphors describing the idea of transaction costs, but the one with the most profound impact on its later development was the notion of frictions.

(Coase, 1937), in his classic 1937 paper “The Nature of the Firm”, was the first to bring the concept of transaction costs to bear on the study of firm and market organization. (Coase, 1937) asked: if production could be carried on without any organisation at all, why is there any organisation? The answer was that firms exist because they reduce transaction costs, such as search and information costs, bargaining costs, the cost of keeping trade secrets, and policing and enforcement costs.

(Walker & Weber, 1984) argued that the effect of transaction costs on make-or-buy decisions was substantially overshadowed by comparative production costs. (Walker & Weber, 1984) further added that the extent to which market competition affects make-or-buy decisions may reflect the ability of the component purchasing manager to indicate how low competition leads to contracting difficulties. The purchasing manager may not have considered, however, the causal relationship connecting market competition and production costs.

On the other hand, (Barney, 1991) points out that transaction cost economics tends not to pay a due amount of attention to the capabilities of a company’s potential partners when deciding which economic exchanges to include within a firm’s boundary and which to outsource.

Resource Based View

(Atkinson, et al., 2012) recognized that the resource-based view of the firm has its roots in the organizational economics literature, where theories of profit and competition associated with the writings of various authors focus on the internal resources of the firm as the major determinant of competitive success. Central to the understanding of the resource-based view of the firm are the definitions of resources, competitive advantage and sustained competitive advantage.

(Penrose, 1980) suggests that there is a logical limit to how big a firm can get and how many things an individual manager can concern himself with. This perspective is rooted in the understanding that a single manager—or cohesive, functioning management team—can only attend to so many details and issues at any one moment in time, and that a team necessarily only has a finite number of competencies at its disposal.

According to (Barney, 1991), the concept of resources includes all assets, capabilities, organizational processes, firm attributes, information, knowledge, etc. controlled by a firm that enable the firm to conceive of and implement strategies that improve its efficiency and effectiveness.

(Atkinson, et al., 2012) further recommended that in the resource-based view of the firm, these resources are the sources of competitive advantage. Barney describes a competitive advantage as occurring when a firm is implementing a value creating strategy not simultaneously being implemented by any current or potential competitors. According to the resource-based view of the firm, competitive advantage can occur only in situations of firm resource heterogeneity and firm resource immobility, and these assumptions serve to differentiate the resource-based model from the traditional strategic management model.

(Barney, 1991) advocates that, for a firm resource to have the potential of generating competitive advantage, it must be:

  • valuable, in the sense that it exploits opportunities and/or neutralises threats in a firm’s environment;
  • rare among a firm’s current and potential competition;
  • imperfectly imitable (either through unique historical conditions, causal ambiguity, or social complexity); and
  • without strategically equivalent substitutes

(Akio, 2005) asserts that the resource-based view suggests that the resources possessed by a firm are the primary determinants of its performance, and these may contribute to a sustainable competitive advantage of the firm.

Core Competencies

(Prahalad & Hamel, 1990) expressed their view of core competency as the collective learning of an organization, involving the coordination of diverse production skills and the integration of multiple streams of technologies. It includes communication, involvement, and a deep commitment to working across organizational boundaries, such as improving cross-functional teams within an organization to address boundaries and overcome them. (Prahalad & Hamel, 1990) also stressed that core competency does not diminish with use. Unlike physical assets, which deteriorate over time, core competencies are enhanced as they are applied and shared.

(Fine & Whitney, 1996) argued that the main skills companies should retain transcend those directly involving product or process, and are in fact the skills that support the very process of choosing which skills to retain.

(Quinn & Hilmer, 1994) suggest that effective core competencies are:

  • Skill or knowledge sets, not products or functions.
  • Flexible, long-term platforms — capable of adaptation or evolution.
  • Limited in number.
  • Unique sources of leverage in the value chain.
  • Areas where the company can dominate.

Strategic Choices

There are many definitions of strategy given by various authors. (Johnson, et al., 2008) considered strategy the long-term direction of an organisation. According to them, strategy is the direction and scope of an organisation over the long term, which achieves advantage in a changing environment through its configuration of resources and competences, with the aim of fulfilling stakeholder expectations. (Porter, 1990), on the other hand, emphasised the competitive factor and described strategy as the search for a favorable competitive position in an industry, the fundamental arena in which competition occurs. Competitive strategy aims to establish a profitable and sustainable position against the forces that determine industry competition.

From the various definitions of strategy, it can be summarised that the strategy of an organisation should contain the following elements: long term, resources, competitiveness, profitability and position.

The three business strategies (Porter, 1980) propounded (cost leadership, differentiation and focus) specify the basic approaches that can be implemented in a competitive environment. According to Porter, it is impossible to succeed if a firm does not commit to one of these three strategies, or if it tries to implement two of them simultaneously. Porter defines this situation as being stuck in the middle.

Figure 3: Michael Porter’s generic strategies

How to Set Check-in Policies for all Projects in Team Foundation Server using PowerShell?

Team Foundation Server 2013 provides administrators the ability to add check-in policies to the Source Control settings. These check-in policies require that, while checking code in to Team Foundation Server version control (either TFS Version Control or Git), the user performs certain extra actions. These actions vary from adding comments to describe the check-in, to linking the check-in to one or more work items defined in the product backlog, to making sure that your build successfully passes all the unit tests defined in the project.

The check-in policies are defined at the team project level. However, when you have a team project collection with hundreds of team projects, you want a way to standardize the check-in policies across all of them.

Unfortunately, Team Foundation Server does not support this functionality out of the box. But luckily, you can use the Team Foundation Server SDK to implement it programmatically. The PowerShell script below shows how to set check-in policies for all team projects in Team Foundation Server.

The script makes use of the SetCheckinPolicies method on the team project object. What it does is basically retrieve all the available projects in the team project collection and loop through them to set the check-in policies on each project individually.

You would notice that the script makes use of two policies:

  • Work Item
  • Check for Comments

However, you can extend or change this to select other installed policies on your workstation. You will notice that the script makes use of the InstalledPolicyTypes property on the workstation. It basically reads the registered policy assemblies from the registry under the path:

 HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\12.0\TeamFoundation\SourceControl\Checkin Policies

TFS Policy


[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Client")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Common")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.VersionControl.Client")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.VersionControl.Controls")


function AddPolicyOnProject($project){
	$policies = @()

	# Wrap each policy definition in a PolicyEnvelope before applying it
	$policies += AddToPolicyEnvelope $workItemPolicy
	$policies += AddToPolicyEnvelope $checkForComments

	$project.SetCheckinPolicies($policies)

	Write-Host "Adding Policies to" $project.Name
}

function AddToPolicyEnvelope($policy){
	# Match the policy definition with its installed policy type by name
	$policyType = $installedPolicyTypes | where {$_.Name -eq $policy.Type}
	return New-Object -TypeName Microsoft.TeamFoundation.VersionControl.Client.PolicyEnvelope -ArgumentList @($policy, $policyType)
}

$serverName = "http://manasbhardwaj.net/tfs/Projects"

$tfs = [Microsoft.TeamFoundation.Client.TeamFoundationServerFactory]::GetServer($serverName)
$versionControlType = [Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer]
$versionControlServer = $tfs.GetService($versionControlType)
$projects = $versionControlServer.GetAllTeamProjects($true)

$installedPolicyTypes = [Microsoft.TeamFoundation.VersionControl.Client.Workstation]::Current.InstalledPolicyTypes

$workItemPolicy = New-Object -TypeName Microsoft.TeamFoundation.VersionControl.Controls.WorkItemPolicy
$checkForComments = New-Object -TypeName CheckForCommentsPolicy.CheckForComments

$projects | foreach { AddPolicyOnProject $_ }


Shrink all databases on SQL Server

This post is intended as a hack for developers to release some disk space. Please do not use it on actual production environments.

Usually, the database transaction logs also take up a lot of space.

What I also usually do is set the recovery model of all local databases on dev machines to Simple instead of Full. That way, the growth of the log files is also reduced in the future.

Next, to free up the space (or shrink all databases), you can use the query below. It basically sets the recovery model to Simple for all databases and then shrinks them.


CREATE TABLE #DataBases (ID INT IDENTITY, Name NVARCHAR(100))

INSERT #DataBases
SELECT NAME FROM sys.databases WHERE NAME NOT IN ('master','model','msdb','tempdb')

DECLARE @Count INT = 1
DECLARE @NrOfDBs INT = 0

SELECT @NrOfDBs = COUNT(0) FROM #DataBases

DECLARE @DBName NVARCHAR(100), @SQL NVARCHAR(MAX)

WHILE (@Count <= @NrOfDBs) -- <= so the last database is not skipped
BEGIN
     SELECT @DBName = Name FROM #DataBases WHERE ID = @Count

     SELECT @SQL = 'ALTER DATABASE [' + @DBName + '] SET RECOVERY SIMPLE'

     PRINT(@SQL)
     EXEC(@SQL)

     --Shrink Database
     DBCC SHRINKDATABASE (@DBName , 0)
     
     SET @Count = @Count + 1
END

DROP TABLE #DataBases

A PowerShell alternative to SharePoint 2013 AppRegNew.aspx

If you are reading this post then chances are that you are already aware of the SharePoint 2013 App Model and especially the SharePoint 2013 Provider Hosted Apps.

A provider-hosted app for SharePoint consists of both an app for SharePoint that is deployed directly to a SharePoint 2013 site and a separately deployed web application. If the provider-hosted web application is an ASP.NET web application, you can use the Office Developer Tools for Visual Studio 2013 to create both components of a provider-hosted app for SharePoint.

Packaging and publishing a SharePoint 2013 provider-hosted app can be a lengthy process. MSDN has a detailed article here explaining the process and steps required to publish a SharePoint provider-hosted App.

The standard process of registering an app on a SharePoint 2013 environment is to use the appregnew.aspx page (http://sitecollection/_layouts/15/appregnew.aspx) and generate a Client Id, which can then be used for communication between SharePoint and the provider-hosted app to establish high trust.

appregnew.aspx

However, this approach of generating the Client Id can be cumbersome when you have different environments (a DTAP street) within your organization. I did not want to burden my application administrators with the complexity of generating a Client Id per environment during installation and changing it in several places (the .app package, web.config, etc.).

Not only that, it also adds a lot of confusion when multiple developers are working together and all of them have to generate their own Client Ids for their respective machines. Checked-in web.config files and app packages in source control can really create unnecessary confusion.

A similar blog about this process is posted here.

However, PowerShell can come to your rescue: generate the Client Id once and use the same Client Id to register your SharePoint app in the different environments. Yes, not only across the DTAP street, but also across the various developer machines working together on a product.

The small PowerShell snippet below can automate the whole process of registering an App with a given Client Id and after that installing it in SharePoint.

For those interested in the details, it makes use of the Register-SPAppPrincipal command. This command lets an on-premises or SharePoint Online administrator register an app principal, which means you can also use it for Office 365.


$clientID = "74599670-eb74-4348-9e7a-f9dc07c576a2"
$appFile = "C:\Temp\MyApp.app"
$siteCollection = "http://manasbhardwaj.net"
$appName = "My App"
 
$web = Get-SPWeb -Identity $siteCollection
 
$realm = Get-SPAuthenticationRealm -ServiceContext $web.Site;
$appIdentifier = $clientID  + '@' + $realm;
 
#Register the App with given ClientId
Register-SPAppPrincipal -DisplayName $appName -NameIdentifier $appIdentifier -Site $web | Out-Null
 
$app = Import-SPAppPackage -Path $appFile -Site $siteCollection -Source ObjectModel -Confirm:$false 	
 
#Install the App
Install-SPApp -Web $siteCollection -Identity $app	| Out-Null

Download Script

How to create custom SharePoint 2013 list using PowerShell?

SharePoint provides a method called AddFieldAsXml, which creates a field based on the specified schema. Nothing fancy, right?

However, this can be very handy when you want to create your own custom lists in SharePoint programmatically. I will use PowerShell as an example to demonstrate how you can create SharePoint lists from plain, simple XML definitions.

To start with, I created an XML template from which I want to create my custom list using PowerShell.


<?xml version="1.0" encoding="utf-8"?>
<!-- Field names below are illustrative; adjust them to your own list design. -->
<Template>
	<Field Type="Text" Name="Company" DisplayName="Company" Required="TRUE" />
	<Field Type="Date" Name="Founded" DisplayName="Founded" Required="FALSE" />
	<Field Type="Number" Name="Revenue" DisplayName="Revenue" Required="FALSE" />
	<Field Type="Choice" Name="Employees" DisplayName="Employees" Required="FALSE">
		<Default>0 - 10000</Default>
		<CHOICES>
			<CHOICE>0 - 10000</CHOICE>
			<CHOICE>10000 - 50000</CHOICE>
			<CHOICE>50000 - 100000</CHOICE>
			<CHOICE>100000 or more</CHOICE>
		</CHOICES>
	</Field>
</Template>
In this example, I have used Text, Number, Date and Choice as the Field Types. But this could be anything which is supported by SharePoint or even your own content types. Check the documentation on MSDN for Field Element.

The next step is to read this XML file, parse it, and use the AddFieldAsXml method to create the fields in the list. The PowerShell snippet below does the trick. Straight and simple, isn’t it?


Add-PSSnapin Microsoft.SharePoint.PowerShell 

function CreateList($siteCollectionUrl, $listName, $templateFile){
	
	$spWeb = Get-SPWeb -Identity $siteCollectionUrl 
	$spTemplate = $spWeb.ListTemplates["Custom List"] 
	$spListCollection = $spWeb.Lists 
	$spListCollection.Add($listName, $listName, $spTemplate) 
	$path = $spWeb.url.trim() 
	$spList = $spWeb.GetList("$path/Lists/$listName")
	$templateXml = [xml](get-content $templateFile)
	foreach ($node in $templateXml.Template.Field) {
	
		$spList.Fields.AddFieldAsXml($node.OuterXml, $true,[Microsoft.SharePoint.SPAddFieldOptions]::AddFieldToDefaultView)
	}
	$spList.Update()
}


$siteCollectionUrl = "http://manas.com"
$listName = "New Custom List"
$templateFile = "template.xml"

CreateList $siteCollectionUrl $listName $templateFile

And here is the result: just by configuring an XML file, you can create lists using PowerShell.

SharePoint Create List

Download Example

Happy Coding!

How to pass Professional Scrum Master (PSM I) Certification?

So, finally, I am a Professional Scrum Master (PSM I). An attempt at the Professional Scrum Master certification conducted by Scrum.org was kind of overdue. I have been using Agile (and Scrum) in my projects in various capacities for many years now. And to be honest, the simplicity of the framework and the empirical process behind Scrum fascinated me into pursuing the subject further.

Scrum Master Badge

I registered for the exam last week. The process is relatively simple. Register on Scrum.org and pay $100 for the PSM I exam. They send you a password to access the exam within a business day, and you have a period of 14 days to take it. I used about 7 of those days, just to make sure I didn’t waste my $100, as the score you need is fairly high (85%). That means that out of a set of 80 questions, you need to get at least 68 right.
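The pass mark and pacing are simple arithmetic; here is a throwaway Python check (the numbers are the ones quoted in this post), using integer ceiling division to avoid floating-point surprises:

```python
TOTAL_QUESTIONS = 80
PASS_PERCENT = 85
EXAM_MINUTES = 60

# Integer ceiling division: (a * b + 99) // 100 rounds up to the next whole
# question, without float rounding issues (0.85 * 80 is 68.000...01 as a float).
minimum_correct = (PASS_PERCENT * TOTAL_QUESTIONS + 99) // 100
seconds_per_question = EXAM_MINUTES * 60 // TOTAL_QUESTIONS

print(minimum_correct)       # 68
print(seconds_per_question)  # 45
```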

PSM

I did not do excellently and just made sure to pass the exam near the border, with 69 correct answers, i.e. a score of 86.25%. Well, not really bad for a first attempt. After the exam, you do not get a report of the questions you answered incorrectly, but you do get a consolidated report on the areas with your scores. Mine looked something like this. It at least helps me focus on specific areas of Scrum.

Scrum Score

Tips and tricks to pass Professional Scrum Master Certification

Let’s be honest, there is no ready-made formula for success in any field, and the same applies here when you attempt to get the Professional Scrum Master certificate. In fact, the certificate’s value drops to nothing if you have not learned anything during the process.

PSM-I-Exam-TopBanner-Text

Nevertheless, here are my two cents based on my own experience.

  1. Make sure you go through the official Scrum Guide written by Ken Schwaber & Jeff Sutherland thoroughly. The guide is very concise, but covers the essence of Scrum.
  2. I would suggest you go through the Scrum Open Assessments (both Scrum Master and Scrum Developer) multiple times, until you score 95% or more several times in a row. The open assessment has a pool of approximately 40 questions, out of which 30 are presented in an assessment. The assessment will give you an idea of the kind of questions asked in the actual assessment. Additionally, you will find some of the questions from the open assessment repeated in the actual assessment. This gives you surplus time and confidence during the examination. Personally, I found 10-15 questions repeated in the actual exam.
  3. Don’t look around on the internet for dumps of questions. You are not going to find any. And even if you did, what’s the point of taking the exam and getting the credentials? You might as well create a Photoshopped version of the certificate to boss around with.
  4. PSM I Simulated Exams from Management Plaza is a good tool to evaluate your preparation and readiness for the PSM I Exam. The simulator not only contains a set of 250 practice questions but also explains each and every answer.
  5. Everyone has their own preference in books; I went through ‘A Guide to the SCRUM BODY OF KNOWLEDGE (SBOK™ GUIDE), 2013 Edition’. It’s to the point and gives you a decent read before the exam.
  6. The Scrum Master Training Manual by Frank Turley and Nader K. Rad is another resource that complements the original Scrum Guide. It has a lot of practical examples which are important from the exam point of view. One thing I liked in the manual is the explanation of burn-down charts, which is not explicit in the Scrum Guide. And of course, it’s FREE!
  7. Go through the discussions on the Scrum Forum. You will find a lot of people discussing their experiences and queries there, and the folks are willing to help if you have questions. A great place to hang around for Scrum enthusiasts.
  8. During the exam, don’t try to Google (or Bing) around for the answers. First, you won’t find any. Second, there is no guarantee that the answer is right. Third, you would be wasting your time. Keep in mind that you need to complete 80 questions in 60 minutes. That gives you 45 seconds per question. Yes, you need to be fast.
  9. And yes, make sure you have an isolated place where you can concentrate while taking the examination. As the examination is online, you need a good, consistent internet connection in place. Have a glass (or two) of water with you. You will feel thirsty during the exam. Psychological? Not sure!

Good luck to those of you who are preparing for and attempting the PSM!

And while you are here, you can read my earlier related posts on Scrum and Agile.

A beginner’s guide to Scrum

The curious case of Scrum Master’s role

An introduction to Agile Methodology

A beginner’s guide to various Software development methodologies

PSM Certification Guide

How to Pass Professional Scrum Product Owner (PSPO I) Certification?

Error during configuration of Scheduled Backups for Team Foundation Server

Another situation which should be easy and straightforward, until you realise that it does not work.

I wanted to set up my own Team Foundation Server Express (yes, it’s a light, free version with certain limitations, but a good way to start for your small team or organisation). The installation process is quite straightforward and clear.

However, when I wanted to configure scheduled backups of my TFS database, I kept receiving the errors below.

TF401009: The TFS service account NT AUTHORITY\LOCAL SERVICE cannot access network shares. Change to an account that can access the backup path.

TF400997: SQL Server service is running as NT AUTHORITY\NetworkService. Please change this account to an account that can be granted permission on the backup path.


I had given permissions to both accounts on the network share. Actually, it was a shared folder on the same machine, as I had installed TFS in single-server mode.

To add a bit of background, my Team Foundation Service was running under Local Service and SQL Server was running under Network Service. This is why these two accounts come into the picture.

Looking at the logs, it became clearer that for scheduled backups, TFS expects proper (preferably dedicated service) accounts to be used to run SQL Server and the Team Foundation Service. The following error messages can be seen in the log.

Verify that account ‘NT AUTHORITY\NetworkService’ is not local service or local system.
Node returned: Error
TF400997: SQL Server service is running as NT AUTHORITY\NetworkService. Please change this account to an account that can be granted permission on the backup path.

And a similar one for TFS Service as well.

Resolution:

The resolution is to change the Service account for both SQL Server (MSSQLSERVER) and TFSJobAgent (Visual Studio Team Foundation Background Job Agent) to another account.


However, this seems to be checked only during configuration. If you change the service log-on users back to Network Service and Local Service after configuration, the backup procedure still runs without any hassle.

I am not sure of the details of why this is prevented during configuration. If you are aware of the reason, please share!