AutoSPSourceBuilder heads to the PowerShell Gallery!


File under “why didn’t I do this years ago??”

You can now easily install AutoSPSourceBuilder (my PowerShell-based utility for downloading SharePoint updates and integrating them into the installation media) from the PowerShell Gallery.

TL;DR:

Install-Script -Name AutoSPSourceBuilder

No more need to browse to the GitHub repo, download the zip, extract it, etc. The simple one-liner above (on any modern Windows machine, which includes PowerShellGet by default) will automatically download and install AutoSPSourceBuilder.ps1 to your default Scripts directory and make it available to run directly in any PowerShell session you launch.

What's more, the AutoSPSourceBuilder.xml update inventory file, which is updated on a (roughly) monthly basis and was previously bundled with the script, is now automatically downloaded at script run-time by default, to ensure you have the latest set of SharePoint updates to choose from. If, however, you want to use your own XML inventory for any reason, you can opt to skip the XML download and use a local copy of the inventory file by including the new -UseExistingLocalXML switch parameter.
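For example, here's a rough sketch of a run that skips the XML download; the paths and CU name below are just placeholders, so substitute your own:

# Example only - uses a local copy of the inventory XML instead of downloading the latest one
AutoSPSourceBuilder.ps1 -SourceLocation "E:\" -Destination "C:\SP\2016\SharePoint" -CumulativeUpdate "October 2018" -UseExistingLocalXML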

Now that I finally realized just how ridiculously easy it is to publish a script to the Gallery, you can expect to see some more of my stuff make its way there in the near future.

Hopefully this latest batch of changes makes it easier to keep the AutoSPSourceBuilder SharePoint update management tool… updated!

Cheers
Brian

AutoSPInstaller and AutoSPSourceBuilder now work with SharePoint Server 2019


Just in time for the release of the SharePoint Server 2019 Public Preview, I’ve made probably the most significant updates to two of my open-source offerings in a while… AutoSPInstaller will now install and create a SharePoint 2019 farm (using the Public Preview bits), and AutoSPSourceBuilder will download and integrate the SharePoint 2019 prerequisites – so you can install SharePoint 2019 while offline, without an Internet connection on the target server(s).

Also, I’ve finally converted the AutoSPInstaller functions file to a PowerShell module (!) This should improve the ability to run individual functions, as I plan to make some or all of the functions more easily executable on their own, without depending on or referencing an XML input file.

Check them out, and let me know what you think!

Cheers

PFE Ramblings: SharePoint 2019, SharePointDSC and AutoSPInstaller


With the recent announcement and release of the Public Preview of SharePoint Server 2019, my fellow PFE Nik Charlebois and I thought we'd record a quick chat sharing our thoughts on the state of things around SharePoint and two of the most popular automated installation approaches, SharePointDSC and AutoSPInstaller.

Head on over to Nik’s blog for the full run-down, and a recording of our chat as well.

Cheers

Using AutoSPSourceBuilder To Build a .NET Framework 4.6 Compatible SharePoint 2013 Installation Source


Several months back, some folks started reporting an error when attempting to install SharePoint 2013 on certain Windows servers – specifically, ones that already had the .NET Framework 4.6 installed. The error simply states: "Setup is unable to proceed due to the following error(s): This product requires Microsoft .Net Framework 4.5" – well, 4.6 is surely newer, shinier and better than 4.5, so what gives?

The issue was initially identified as an incompatibility between the SharePoint 2013 installer and .NET 4.6, with viable but potentially cumbersome workarounds (messing around in the registry, uninstalling .NET 4.6 before re-attempting the SharePoint installation, etc.). Microsoft eventually released a permanent fix, which involves some manual steps to replace a setup-related DLL. All good, right?

Well, if you're lazy (erm, I mean efficient) like me, you might feel we can do one better. Since the fix basically involves downloading a file, extracting it, then replacing a file that's part of the SharePoint 2013 installation media, it fits in nicely with one of my open-source projects that performs many of the same steps for SharePoint prerequisites and updates – AutoSPSourceBuilder. So I set about incorporating the KB3087184 fix as one of SharePoint 2013's "prerequisites" – that is, it really should be considered a prerequisite if you want to install 2013 on a server that already has all Windows Updates (including .NET 4.6.x) applied to it.

Here’s how to proceed with automatically integrating the KB3087184 fix yourself, if you find yourself in that situation:

  1. Download & extract the latest AutoSPSourceBuilder (after reading a little bit about it to get acquainted, if you aren't already)
  2. Run the script as usual (a complete sample command follows this list), making sure to specify:
    1. -SourceLocation <path to your SharePoint installation source/DVD/mounted ISO>
    2. -Destination <path where the assembled installation source should be saved>
    3. -GetPrerequisites:$true (this is important, as we now consider KB3087184 an SP2013 prerequisite)
    4. -CumulativeUpdate <CU name, e.g. "October 2016">
    5. <other optional parameters, e.g. -Languages, as needed>
  3. Check the output folder that appears, especially the _SLIPSTREAMED.txt file, to confirm that everything you were expecting has been incorporated
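Just as a sketch, here's what a complete run might look like; the paths are examples, and the CU name should be whichever update you actually want to slipstream:

# Example only - substitute your own source location, destination path and CU name
AutoSPSourceBuilder.ps1 -SourceLocation "E:\" -Destination "C:\SP\2013\SharePoint" -GetPrerequisites:$true -CumulativeUpdate "October 2016"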

The AutoSPSourceBuilder PowerShell script will automatically detect whether the fix is required, download the fix, rename the existing svrsetup.dll, then extract the updated svrsetup.dll to the correct location. Once the script completes, you should have a .NET 4.6-compatible SharePoint 2013 source, with your choice of cumulative update, language packs, etc., ready to be installed by something like SharePointDSC, AutoSPInstaller or (gasp) a manual process.

It’s always worth mentioning that my open-source projects aren’t officially supported by Microsoft, but you can reach out to me directly if you have any specific issues or questions.

Cheers
Brian

Getting More Storage Bang For Your Azure IaaS Buck


As someone who's been running local Virtual Machines on a beefed-up laptop for several years now, I've found the strategy becoming more cumbersome and challenging as the demands of my main platform of choice (SharePoint) increase with regard to recommended specs. In fact, spec requirements are the main factor that has personally kept me from making the leap to a lighter, less-powerful Surface or similar device – devices which simply don't yet have the horsepower (mainly in the RAM department) to let me build/run/test a local SharePoint 201x farm. So at the moment, my laptop (with 2 SSDs, 32 GB RAM and a Core i7) is proving pretty difficult to part ways with – it actually out-specs many of my customers' SharePoint VMs. Having said that, with the push to Azure and the ubiquitous cloud model in general, I'm slowly working my way towards the ultimate goal of eventually doing away with the ol' boat anchor of a laptop and running all my stuff in the cloud.

Which brings us to this post regarding Azure Infrastructure as a Service (IaaS) – specifically, how to control costs (or in my case, MSDN credit consumption) while getting decent performance. In my experience, the biggest single factor in an Azure IaaS virtual machine's performance is storage. And, being a SharePoint guy, I'll gear this information around what makes sense for hosting SharePoint VMs in Azure. Admittedly, much of this information is already available out there; my aim is simply to assemble and share what's proven helpful for me so far as a relative newcomer in the Azure space. What follows is therefore a combination of discoveries, strategies and scripts I've found useful in my ongoing transition to Azure IaaS.

Azure File Storage

Generally available since September 2015, the Azure File Storage offering (in Azure Resource Manager mode) allows you to create shared folders that can effectively be seen by not only your Azure virtual machines but really any endpoint on the Internet with the proper credentials. It has SMB 3.0 support, and (in my case, on an MSDN account at least) supports up to 5 TB (5120 GB) quota per share – plenty of room for your stuff. My own use case for Azure File Storage is to host a cloud-based copy of the SharePoint 2010/2013/2016 binaries, for easy and (somewhat) local proximity to my Azure VMs. Keep in mind this is persistent storage – the files I have placed on my Azure File Storage share will stick around as long as the parent storage account exists – completely independently of the state or existence of my Azure VMs. Another nice thing is I can download software ISOs (e.g. SQL, SharePoint) directly from one of my Azure VMs with lightning speed straight to my Azure File Storage share, then extract the binaries from there.
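Mapping the share from one of my VMs (or really any machine that can reach it) is a quick one-liner; the storage account name, share name and key below are all placeholders:

# Placeholders only - substitute your own storage account name, share name and storage account key
$storageKey = "<paste your storage account key here>"
net use Z: \\mystorageacct.file.core.windows.net\myshare $storageKey /user:AZURE\mystorageacct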

More for the SharePoint-minded folks: by running AutoSPSourceBuilder on one of my Azure VMs, I can save myself a bunch of time and bandwidth by downloading the prerequisites and CU(s) for SharePoint straight to my Azure File Storage share – giving me everything I need software-wise to install SharePoint in Azure IaaS.

And when I do need to upload additional stuff to my Azure share, a utility called AzCopy comes in very handy for moving files to (and from) different types of Azure storage, not just file shares. I like to think of it as Robocopy for Azure; in fact, many of the command-line switches will be familiar to anyone who's used Robocopy (and what seasoned Windows admin hasn't?), plus AzCopy definitely supercharges your file transfer operations (it'll easily max out your available bandwidth!) You can also use the web interface in the new Azure portal to manage content on your Azure File Storage shares, if you prefer.
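Here's a sketch of what an upload looks like using the classic Windows AzCopy syntax (the account, share, path and key are placeholders; the newer cross-platform azcopy v10 uses a different command style):

# Recursively (/S) copy a local folder of SharePoint binaries up to an Azure File Storage share - placeholders throughout
AzCopy /Source:"C:\SP" /Dest:"https://mystorageacct.file.core.windows.net/myshare/SP" /DestKey:"<storage-account-key>" /S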

Gotchas

Azure File Storage shares do seem to currently have some quirks. Although most file types run just fine from the UNC path (e.g. \\myfilestorage.file.core.windows.net\share), I’ve found that certain file types (e.g. MSI) seem to give obscure errors when trying to invoke them, for example:

MSI Error

However, the same file runs just fine if you copy it locally to the VM first. BTW if anyone out there has any clue as to why this happens, I’d love to hear it…

Another thing to watch out for (as it may not be immediately obvious) is that files stored in Azure File Storage can't be ACLed – in other words, the shares don't support NTFS security because, by all indications, they aren't stored on an NTFS file system. So if you were thinking of using Azure File Storage to host classic file shares in the cloud, you might be better off hosting those shares on an Azure Windows-based VM instead, in order to take full advantage of NTFS security.

Also, for some reason AzCopy doesn't seem to retain the original modified times on uploaded files; I haven't found a way to work around this, so for now it remains a minor pain.

SQL Server Storage

Ask any SQL DBA what keeps them up at night and you’ll often get an answer containing “IOPS”. Short for Input/output Operations Per Second, IOPS is one way to measure a particular storage subsystem’s speed (it’s more complicated than that, but for the purposes of this post we’ll just keep it at this level) and is an important factor in overall SQL Server performance. The next thing your sleepless SQL DBA might mutter is something relating to “TempDB”. And sure enough, as SharePoint admins we’ve also been lectured about the importance of a well-performing TempDB on our SQL back end instance. The challenge is, running a SharePoint farm in Azure on a budget (or in my case, on CDN $190/mo worth of MSDN credits) doesn’t leave you with a really great default storage performance experience on most Azure VM sizes – the stated limit on standard data disks in many lower-end VM sizes is 500 IOPS – not exactly whiplash-inducing. So how can we improve on this?

TempDB on Temporary Storage

The first option is to leverage a VM size that has SSD (Solid State Disk) available for temporary storage – for example, “D3_V2 Standard”:

D3_V2-Standard

Looks promising! We’re offered 200 GB of premium local SSD storage… but careful, this is actually only temporary storage, meaning that we stand to irretrievably lose anything and everything on this volume should the VM be stopped/de-provisioned (which, on limited Azure credits, we are quite likely to do). So storing our SQL user database MDF and LDF (data and log) files here is probably a Bad Idea. But, remember our afore-mentioned, IOPS-loving TempDB? By its very nature, the files associated with TempDB aren’t (or don’t need to be) persistent; they can be re-created every time SQL Server starts up. And given the performance gains that SSD storage gives us, our temporary D: volume is looking really promising.

However (and there’s always a catch), getting TempDB to make its, er, permanent home on the temporary D: volume is not entirely straightforward. First, ideally we would want to specify the location of TempDB during installation of SQL Server itself. SQL Server 2016 in fact now gives us more “best practice” oriented options for configuring TempDB, including multiple files and autogrowth, right out of the gate:

TempDB

Cool right? We just specify the path for our TempDB files as somewhere on the D: drive, and we’re good to go… Eeeeexcept when the VM stops and re-starts. Then, SQL starts complaining that it can’t access the location of TempDB, and fails to start. What did we do wrong? Well it turns out that while the SQL setup process is able to create new directories (like the one on D: above), SQL Server itself can’t create them, and expects a given path to database files to already exist – and due to the temporary nature of our D: drive, the path above was blown away when the VM got de-provisioned. OK fine, you say, let’s re-do this but instead of specifying a subdirectory for the TempDB files, we’ll just specify the root of D: – but again, not so fast. The problem now is that the SQL Server installer wants to set specific permissions on the location of data and log files, and because we’ve specified the root of D:, it’s unable to, and this time the SQL setup process itself craps out.

The solution to all this is to do a switch – allow the SQL setup process to initially create our TempDB files at some arbitrary subfolder on D:, but then once SQL is installed, we can move the TempDB files to the root of D: (using simple SQL scripts that can be easily found on the Interwebz). The last gotcha is that we’ll need to ensure that the account running SQL server is a local admin. Why? Because the first rule of our temporary D: SSD volume is that it can get blown away whenever the VM is re-started/re-provisioned – and that means NTFS permissions on the D: drive are set to defaults as well. Aaaand by default, non-administrators can’t write to D:. So, we resolve this last unfortunate situation by simply creating and assigning a special SQL server account and configuring SQL to run under that account (making sure it’s a local Admin). Now we’re finally set to use the temporary D: SSD volume as the location of our TempDB files and enjoy the improved performance therewith.
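For reference, here's a minimal sketch of the "switch" itself; it assumes a default instance (MSSQLSERVER) and the default tempdev/templog logical file names, so if you created multiple TempDB data files, list them via sys.master_files and move those too:

# A rough sketch - assumes a default instance and default logical file names; adjust for extra tempdb data files
Import-Module SqlServer   # or the older SQLPS module, whichever is installed
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\templog.ldf');
"@
# The new paths only take effect after SQL Server restarts (TempDB is re-created at startup)
Restart-Service -Name "MSSQLSERVER" -Force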

Storage Space for Data/Logs

Another option for increasing storage performance on an Azure IaaS-hosted SQL server is to throw all available disk at it. By this I mean (for example) that in the case of our "D3_V2 Standard" VM size above, we are allowed up to 8 data disks attached to the VM, each with a maximum of 500 IOPS. Some quick math will reveal that, although still not shattering any performance records, if we can somehow team these disks together we should see a noticeable performance increase over using only a single data drive. Again, this isn't really new information; in fact, even the official Azure tooltip on the new portal recommends "striping across multiple disks" among other things for improved IOPS (see under VM > Settings > Disks > Attach new disk, Host caching):

Azure_Disk_Recommendation

This can be achieved by either using the classic software-based striping/RAID tools found in Windows Disk Management, or (my preference) using the more recent Storage Spaces / Pools functionality introduced in Windows Server 2012. By creating a storage pool from all our available disks (perhaps in a striped or mirrored model) we can take advantage of multiple simultaneous I/O operations using multiple disks (in the case of our “D3_V2 Standard” machine above, we get 8 data disks.) Personally I just use all available disks in a simple stripe set (no parity or mirror) because I’m not storing critical data – your needs may vary. For SQL Server purposes, we can use our newly-provisioned pool of disks as the location of both our system and user data/log files during setup, and subsequently-created databases will automatically and conveniently get created there. As far as cost goes, from what I understand the additional cost incurred by using multiple disks is actually pretty minimal, since we are charged for actual data stored vs. size and number of disks.

One Storage Pool, or Multiple?

Now, do we want to use all our available Azure data disks in a single large storage pool, or should we create multiple pools out of subsets of disks? To use an oft-quoted consulting phrase, "it depends". You will make certain best practice analyzers happier by, for example, splitting your data and log files onto separate disk volumes. But the real-world performance gains are debatable – sure, you theoretically avoid disk resource contention by putting your data files on 4 disks and your log files on another 4 disks, but at the same time you've reduced the total IOPS available to each by roughly half. I myself prefer to give my data and log files all available IOPS in a single storage pool / disk / volume (keep in mind that this is all being done on a budget, for non-Production workloads anyhow). In a production scenario, while many of the concepts mentioned in this post would still apply, you likely wouldn't have the same constraints and could afford bigger, better and more distributed storage options.

Sample Code Time!

Y'all know I loves me some PowerShell, especially when it comes to having to do something repeatedly (say, for every target server in a SharePoint farm.) So here's a function that will create a storage pool with simple striping using all available disks, create a virtual disk with a single partition, then format the volume as NTFS using a 64 KB cluster size:

function New-StoragePoolAndVirtualDiskFromAvailableDisks ($storagePoolName, $driveLetter)
{
    if ($null -eq $driveLetter) {$driveLetter = "S"}
    $driveLetter = $driveLetter.TrimEnd(":")
    [UInt32]$allocationUnitSize = 64KB
    if ($null -eq $storagePoolName) {$storagePoolName = "StoragePool"}
    # Create the storage pool from all poolable physical disks, if it doesn't exist yet
    if ($null -eq (Get-StoragePool -FriendlyName $storagePoolName -ErrorAction SilentlyContinue))
    {
        Write-Output " - Creating new storage pool `"$storagePoolName`"..."
        New-StoragePool -FriendlyName $storagePoolName -PhysicalDisks (Get-PhysicalDisk -CanPool:$true) -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName
    }
    else
    {
        Write-Output " - Storage pool `"$storagePoolName`" already exists - proceeding..."
    }
    # Create the virtual disk (simple/striped, fixed provisioning) if it doesn't exist yet
    if ($null -eq (Get-VirtualDisk -FriendlyName $storagePoolName -ErrorAction SilentlyContinue))
    {
        Write-Output " - Creating new disk in $storagePoolName..."
        $disk = New-VirtualDisk -StoragePoolFriendlyName $storagePoolName -FriendlyName $storagePoolName -UseMaximumSize -ProvisioningType Fixed -AutoWriteCacheSize -AutoNumberOfColumns -ResiliencySettingName Simple
        Write-Output " - Initializing disk..."
        $disk | Initialize-Disk -PartitionStyle GPT
    }
    else
    {
        Write-Output " - Virtual disk already exists - proceeding..."
        $disk = Get-VirtualDisk -FriendlyName $storagePoolName
    }
    # Create a single partition using the requested drive letter, if it doesn't exist yet
    if ($null -eq (Get-Partition -DriveLetter $driveLetter -ErrorAction SilentlyContinue))
    {
        Write-Output " - Creating new partition at $($driveLetter):..."
        $partition = New-Partition -DiskId $disk.UniqueId -UseMaximumSize -DriveLetter $driveLetter
        Write-Output " - Waiting 5 seconds..."
        Start-Sleep -Seconds 5
    }
    else
    {
        Write-Output " - Partition $($driveLetter): already exists - proceeding..."
        $partition = Get-Partition -DriveLetter $driveLetter
    }
    # Format the volume as NTFS with a 64 KB allocation unit size, if it isn't already formatted
    if ((Get-Volume -DriveLetter $partition.DriveLetter -ErrorAction SilentlyContinue).FileSystem -ne "NTFS")
    {
        Write-Output " - Formatting volume $($partition.DriveLetter):..."
        $partition | Format-Volume -FileSystem NTFS -NewFileSystemLabel $storagePoolName -AllocationUnitSize $allocationUnitSize -Confirm:$false
    }
    else
    {
        Write-Output " - Volume $($partition.DriveLetter): is already formatted."
    }
    Write-Output " - Done."
}

Note that the function above doesn't actually have any Azure dependencies, so you can pretty much use it on any Windows Server 2012 (or later) machine with available disks.
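For example (the pool name and drive letter below are arbitrary):

# Pool every available data disk into a simple (striped) "SQLData" pool, presented as an NTFS volume on S:
New-StoragePoolAndVirtualDiskFromAvailableDisks -storagePoolName "SQLData" -driveLetter "S"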

SharePoint Storage

So what the heck does all of this have to do with SharePoint (you know, that thing that I usually specialize in)? Well, in addition to improving SharePoint’s performance (SQL being the brains and brawn behind SharePoint and all), we can use what we’ve learned above to create a SharePoint server’s recommended logging / data / search index volume. Now the storage performance requirements aren’t quite as high for this volume as for those hosting SQL databases, but it’s recommended that we put stuff like this on a non-system volume anyhow. So, although we may not have a need for a fast(er), large storage pool of disks, in many cases we’re entitled to those disks by virtue of the Azure VM size we’ve chosen, so why not use ’em.

Speaking of SharePoint VM sizes, here’s another sample script that you can use to get VMs suitable for running SharePoint in a particular Azure region (location). It takes the Location Name as input (hint, use Get-AzureRmLocation | Select-Object Location to list available locations) and assumes certain values (for RAM, # of CPUs, # of data disks) but feel free to play around with the numbers to get at the right VM size for your purposes.

function Get-AllAzureRmVmSizesForSharePoint ($locationName)
{
    # Sign in to Azure (will prompt for credentials)
    Add-AzureRmAccount
    if ($null -eq $locationName) {$locationName = "CanadaCentral"} # Toronto
    # Baseline specs for a SharePoint VM; tweak these to suit your own sizing requirements
    $minDiskCount = 8
    $minMemory = 8192
    $maxMemory = 16384
    $maxCores = 8
    Write-Output " - VM sizes suitable for SharePoint (minimum $minDiskCount data disks, between $minMemory and $maxMemory MB RAM, and $maxCores CPU cores or less) in location `"$locationName`":"
    Get-AzureRmVMSize -Location $locationName | Where-Object {$_.MaxDataDiskCount -ge $minDiskCount -and $_.MemoryInMB -le $maxMemory -and $_.MemoryInMB -ge $minMemory -and $_.NumberOfCores -le $maxCores}
}

This particular function of course assumes you have a recent version of the Azure cmdlets installed.
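Calling it is straightforward, for example:

# List SharePoint-friendly VM sizes available in the Canada Central region
Get-AllAzureRmVmSizesForSharePoint -locationName "CanadaCentral"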

Hope you found this somewhat random post useful. It’s basically a summary of the last few months of my experimenting with Azure as an aspiring expert on the platform, and really only scratches the surface of Azure options with regard to storage, VM sizing and automation.

Cheers
Brian

 

The First SharePoint 2016 Post-RTM Update Has Been Released


SharePoint 2016 is seeing its first post-go-live update – KB2920721 – available for download here: https://www.microsoft.com/en-us/download/details.aspx?id=51701. You may notice that it doesn’t follow the typical CU naming convention of ubersrv* – that’s because it’s not a cumulative update, but rather just a specific update that contains a single patch sts-x-none.msp.

Things to know about this patch:

  • It does not increment the farm's configuration database version. Your previously-RTM SP2016 farm (as shown in Central Administration) remains at build 16.0.4327.1000 after installing this patch, so you'll need to check somewhere else – like Control Panel > Programs and Features > Installed Updates – to confirm it's installed.
  • As with most other SharePoint updates, you must run the SharePoint Products Configuration Wizard after installing the package itself in order to fully commit the patch installation.
  • Also as with most other SharePoint updates, you should be able to extract the .msp patch file to <DriveLetter>:\<SharePointBinaryLocation>\updates (a process called slipstreaming) and use this source when building a new farm from scratch, in order to automatically patch the new farm as it's being built (see the sketch after this list).
  • The KB article describing what changes are included in the patch is available at http://support.microsoft.com/kb/2920721.
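If you do want to slipstream it, the extraction is a one-liner; the package file name below is an example (yours may differ slightly), and the destination assumes your SharePoint 2016 binaries live under C:\SP\2016\SharePoint:

# Extract the single sts-x-none.msp from the update package straight into the slipstream Updates folder
# (file name and paths are examples - adjust to match your download and installation source)
C:\Downloads\sts2016-kb2920721-fullfile-x64-glb.exe /extract:C:\SP\2016\SharePoint\Updates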

Cheers
Brian

Installing SharePoint 2016 Release Candidate Directly (i.e. Without Manual Patching)


When SharePoint 2016's Release Candidate was announced, you may have wondered why (and at the same time been a little sad that) there was no monolithic ISO or executable made available that would allow you to install straight to RC without first having to install the previous public release (Beta 2). Well, it turns out there's a fairly simple way to accomplish a direct-to-RC installation, and it uses a tried & true methodology – slipstreaming!

Here are the high-level steps to go from zero to RC (in your non-Production environments, right?):

  1. Download the SharePoint 2016 Beta 2 bits if you don’t already have them.
  2. (optional) Download any Beta 2 language packs you might require.
  3. Extract/copy the Beta 2 bits to a suitable local or network folder (e.g. C:\SP\2016\SharePoint).
  4. (optional) Extract the language pack(s) to a folder relative to the path above (e.g. C:\SP\2016\LanguagePacks\xx-xx (where xx-xx represents a culture ID, such as de-de)).
  5. Download the SharePoint 2016 RC patch – and don’t forget to select/include the prerequisite installer patch that matches the language of SharePoint 2016 you’re installing.
  6. Download the RC language pack that matches the language of SharePoint 2016 you’re installing (in my case, English). You need this in order to update the built-in language pack.
  7. (optional) Download the corresponding patch for any other language packs you downloaded/extracted previously.
  8. Extract the RC patch to (using the convention above) C:\SP\2016\SharePoint\Updates:

SP2016 Slipstreamed Updates

Note that the wssmui.msp shown above is actually the English language pack patch, which you would have obtained from Step 6 above.

9. Extract the prerequisite installer patch files (downloaded as part of step 5) to C:\SP\2016\SharePoint, overwriting any existing files:

SP2016 Slipstreamed Prerequisiteinstaller

10. (optional) Extract respective RC language patch files to C:\SP\2016\LanguagePacks\xx-xx\Updates:

SP2016 Slipstreamed LanguagePack

Careful! All the language pack RC patch files are called wssmui.msp regardless of language, with no quick way to tell them apart. I therefore recommend you extract/copy them one at a time – but again this step only applies if you’re actually installing packs for different languages.

11. Now install SharePoint as you normally would. Patches placed in the correct locations will be automatically picked up and applied during the installation. Note that by this point, the process should look familiar if you’ve ever done slipstreaming in previous versions of SharePoint.
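If you're doing a manual (non-scripted) install, kicking things off from the slipstreamed source looks roughly like this (paths follow the folder convention above):

# Install the prerequisites first (a reboot or two may be required), then run setup;
# any patches placed under \Updates are picked up automatically during setup
C:\SP\2016\SharePoint\prerequisiteinstaller.exe
C:\SP\2016\SharePoint\setup.exe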

12. Once the installation is complete, verify that the patches were successfully applied in <CentralAdminUrl>/_admin/PatchStatus.aspx. You should see entries for “Update for Microsoft Office 2016 (KB2345678) (wssmui_<cultureID>.msp 16.0.4336.1000)” under each language pack (if applicable):

SP2016 Slipstreamed RC Language Pack

And also “Update for Microsoft Office 2016 (KB2345678) (sts.msp 16.0.4336.1000)” under SharePoint 2016 Preview itself:

SP2016 Slipstreamed Updates

Oh, and the contents of that “readme.txt” file shown in the screen caps above? “Any patches placed in this folder will be applied during initial install.” As though the product was, you know, designed for this 🙂

Cheers
Brian

Using AutoSPInstaller to Run Specific Configuration Changes


-UPDATED May 2019-

While AutoSPInstaller (my open-source project for installing SharePoint 2010-2019) is designed so it can be run and re-run as often as required to complete or tweak the installation and initial configuration of a SharePoint farm, there admittedly are times when executing the entire scripted process might seem like overkill.

For example, you might want to provision a service application that you had accidentally left set to "false" the first time around. Or, you might want to rewire which servers in your farm are running the Distributed Cache service (maybe to create a dedicated cache cluster). Alternatively, maybe several months (and changes) have passed since your farm was built, and your level of confidence that something hasn't diverged from your original XML configuration (to the point of conflicting with it) isn't rock-solid.

Luckily, since the included file AutoSPInstallerModule.psm1 is, as the filename suggests, now an actual module with a collection of PowerShell functions, you can actually isolate and run these chunks of script code individually. The advantages are twofold: First, you can continue to leverage the consistent and automated approach that helped get your farm built quickly in the first place. Second, you can completely bypass all the redundant steps in the process (such as checking for and creating web apps, adding managed accounts, etc.) and can be assured that only the net-new changes you need will be executed.

To do this, you’ll obviously need the AutoSPInstaller script files themselves, as well as the AutoSPInstallerInput*.xml file you used to originally build the farm (with your new modifications included of course). For the steps below, you’ll want to be logged in as the SharePoint installer account (you did use a dedicated account to install SharePoint, right?)

First, we want to grab the full path to your XML, so we can easily paste it below. A quick shortcut to do this is to shift-right-click the XML file itself and select Copy as path:


Now, launch a SharePoint Management Shell (as Administrator), and enter the following in order to assign the content of our input file to an XML object:

[xml]$xmlinput = (Get-Content "<path to your XML file which you can just paste here>") -replace "localhost", $env:COMPUTERNAME

Note that you can simply paste the path to your XML in the designated space above (by the way, the line above was basically pulled straight from AutoSPInstallerMain.ps1).

Now that our entire XML input file is loaded and available as $xmlinput, we can use it to pass parameters to many of the functions found in AutoSPInstallerModule.psm1. First however we’ll need to make those functions available to us in this console – this is accomplished by simply importing the module. Here we have another one-liner, and if we use the same technique to copy the path to our AutoSPInstallerModule.psm1 as we did above, we can just type something like the following:

Import-Module -Name "C:\SP\Automation\AutoSPInstallerModule.psm1" -Verbose

(TIP: including the -Verbose switch above will output all the available functions for easy reference)

Finally, we’re ready to call nearly any of the functions in AutoSPInstaller (in fact we can use familiar tab-based autocomplete to get their names, too) since they’re loaded in memory for the current PowerShell console.

Let’s say for example we want to provision Business Connectivity Services on this particular server (the one we’re logged on to, that is). We would simply enter:

CreateBusinessDataConnectivityServiceApp $xmlinput

At this point, the BCS service app should get provisioned based on the details in our XML input file:


Note, if nothing happens, it’s likely because you forgot to change the XML Provision attribute from “false” to either “true” or the name of your target server.

That’s really about all there is to it. Hopefully this helps folks who are leery of running the entire monolithic AutoSPInstaller process just to make small changes to their existing farms.

(Oh, I realize the current layout & structure of AutoSPInstaller still may not be optimal – converting the functions file to a proper PowerShell module was a first step, and further restructuring is in the queue of future enhancements!)

Update: the entire post above was updated to reflect the fact that the AutoSPInstaller functions file has been converted to a PowerShell module!