Getting More Storage Bang For Your Azure IaaS Buck


As someone who’s been running local Virtual Machines on a beefed-up laptop for several years now, I’ve found the strategy becoming more cumbersome and challenging as the demands of my main platform of choice (SharePoint) increase with regard to recommended specs. In fact, spec requirements are the main factor that have personally kept me from making the leap to a lighter, less-powerful Surface or similar device – devices which simply don’t yet have the horsepower (mainly in the RAM department) to allow me to build/run/test a local SharePoint 201x farm. So at the moment, my laptop (with 2 SSDs, 32 GB RAM, Core i7) is proving pretty difficult to part ways with. It actually out-specs many of my customers’ SharePoint VMs… Having said that, with the push to Azure and the ubiquitous cloud model in general, I’m slowly working my way towards the ultimate goal of eventually doing away with the ol’ boat anchor of a laptop and running all my stuff in the cloud.

Which brings us to this post regarding Azure Infrastructure as a Service (IaaS) – specifically, how to control costs (or in my case, MSDN credit consumption) while getting decent performance. In my experience, the biggest single factor in an Azure IaaS virtual machine’s performance is storage. And, being a SharePoint guy, I’ll try to gear this information around what makes sense for hosting SharePoint VMs in Azure. Admittedly, much of this information is already available out there; my aim is simply to assemble and share what’s proven helpful for me so far as a relative newcomer in the Azure space. What follows is therefore a combination of discoveries, strategies and scripts I’ve found useful in my ongoing transition to Azure IaaS.

Azure File Storage

Generally available since September 2015, the Azure File Storage offering (in Azure Resource Manager mode) allows you to create shared folders that can be seen not only by your Azure virtual machines but by really any endpoint on the Internet with the proper credentials. It has SMB 3.0 support, and (in my case, on an MSDN account at least) supports a quota of up to 5 TB (5,120 GB) per share – plenty of room for your stuff. My own use case for Azure File Storage is to host a cloud-based copy of the SharePoint 2010/2013/2016 binaries, for easy access and (relatively) close network proximity to my Azure VMs. Keep in mind this is persistent storage – the files I have placed on my Azure File Storage share will stick around as long as the parent storage account exists, completely independently of the state or existence of my Azure VMs. Another nice thing is that I can download software ISOs (e.g. SQL, SharePoint) from one of my Azure VMs with lightning speed straight to my Azure File Storage share, then extract the binaries from there.
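
In case it’s useful, here’s a minimal sketch of mapping such a share from any SMB 3.0-capable endpoint (assuming outbound TCP port 445 isn’t blocked). The account and share names below are just examples matching the UNC path used later in this post; the username is the storage account name and the password is one of the account’s access keys:

# Map the Azure File Storage share as a drive letter; example names only
net use Z: \\myfilestorage.file.core.windows.net\share /user:AZURE\myfilestorage <your storage account key>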

More for the SharePoint-minded folks: By running AutoSPSourceBuilder on one of my Azure VMs, I can save myself a bunch of time and bandwidth by downloading the prerequisites and CU(s) for SharePoint straight to my Azure File Storage share – giving me everything I need software-wise to install SharePoint in Azure IaaS.

And when I do need to upload additional stuff to my Azure share, a utility called AZCopy comes in very handy for moving files to (and from) different types of Azure storage, not just file shares. I like to think of it as Robocopy for Azure; in fact, many of the command-line switches will be familiar to anyone who’s used Robocopy (and what seasoned Windows admin hasn’t?), plus AZCopy definitely supercharges your file transfer operations (it’ll easily max out your available bandwidth!). You can also use the web interface in the new Azure portal to manage content on your Azure File Storage shares, if you prefer.
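
For reference, here’s a minimal sketch of pushing a local folder up to the share using the classic AZCopy command-line syntax. The install path shown is AZCopy’s usual default at the time of writing, the account/share names are the same examples used elsewhere in this post, and the key placeholder is yours to fill in:

# Recursively (/S) upload a local folder to a subfolder of the Azure File Storage share
& "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe" /Source:"C:\SP" /Dest:"https://myfilestorage.file.core.windows.net/share/SP" /DestKey:"<your storage account key>" /S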

Gotchas

Azure File Storage shares do currently seem to have some quirks. Although most file types run just fine from the UNC path (e.g. \\myfilestorage.file.core.windows.net\share), I’ve found that certain file types (e.g. MSI) give obscure errors when you try to invoke them directly, for example:

MSI Error

However, the same file runs just fine if you copy it locally to the VM first. BTW if anyone out there has any clue as to why this happens, I’d love to hear it…

Another thing to watch out for (as it may not be immediately obvious) is that files stored in Azure File Storage can’t be ACLed – in other words, the shares don’t support NTFS security, because, by all indications, they aren’t stored on an NTFS file system. So if you were thinking of using Azure File Storage to host classic file shares in the cloud, you might be better off hosting those shares on an Azure Windows-based VM instead in order to take full advantage of NTFS security.

Also, AZCopy doesn’t seem to retain the original modified times on uploaded files; I haven’t found a way to work around this, so for now it remains a minor pain.

SQL Server Storage

Ask any SQL DBA what keeps them up at night and you’ll often get an answer containing “IOPS”. Short for Input/output Operations Per Second, IOPS is one way to measure a particular storage subsystem’s speed (it’s more complicated than that, but for the purposes of this post we’ll just keep it at this level) and is an important factor in overall SQL Server performance. The next thing your sleepless SQL DBA might mutter is something relating to “TempDB”. And sure enough, as SharePoint admins we’ve also been lectured about the importance of a well-performing TempDB on our SQL back end instance. The challenge is, running a SharePoint farm in Azure on a budget (or in my case, on CDN $190/mo worth of MSDN credits) doesn’t leave you with a really great default storage performance experience on most Azure VM sizes – the stated limit on standard data disks in many lower-end VM sizes is 500 IOPS – not exactly whiplash-inducing. So how can we improve on this?

TempDB on Temporary Storage

The first option is to leverage a VM size that has SSD (Solid State Disk) available for temporary storage – for example, “D3_V2 Standard”:

D3_V2-Standard

Looks promising! We’re offered 200 GB of premium local SSD storage… but careful, this is actually only temporary storage, meaning we stand to irretrievably lose anything and everything on this volume should the VM be stopped/de-provisioned (which, on limited Azure credits, we’re quite likely to do). So storing our SQL user database MDF and LDF (data and log) files here is probably a Bad Idea. But remember our aforementioned, IOPS-loving TempDB? By its very nature, the files associated with TempDB aren’t (or don’t need to be) persistent; they can be re-created every time SQL Server starts up. And given the performance gains SSD storage offers, our temporary D: volume is looking really promising.

However (and there’s always a catch), getting TempDB to make its, er, permanent home on the temporary D: volume is not entirely straightforward. First, ideally we would want to specify the location of TempDB during installation of SQL Server itself. SQL Server 2016 in fact now gives us more “best practice” oriented options for configuring TempDB, including multiple files and autogrowth, right out of the gate:

TempDB

Cool right? We just specify the path for our TempDB files as somewhere on the D: drive, and we’re good to go… Eeeeexcept when the VM stops and re-starts. Then, SQL starts complaining that it can’t access the location of TempDB, and fails to start. What did we do wrong? Well it turns out that while the SQL setup process is able to create new directories (like the one on D: above), SQL Server itself can’t create them, and expects a given path to database files to already exist – and due to the temporary nature of our D: drive, the path above was blown away when the VM got de-provisioned. OK fine, you say, let’s re-do this but instead of specifying a subdirectory for the TempDB files, we’ll just specify the root of D: – but again, not so fast. The problem now is that the SQL Server installer wants to set specific permissions on the location of data and log files, and because we’ve specified the root of D:, it’s unable to, and this time the SQL setup process itself craps out.

The solution to all this is to do a switch – allow the SQL setup process to initially create our TempDB files in some arbitrary subfolder on D:, then once SQL is installed, move the TempDB files to the root of D: (using simple SQL scripts that can easily be found on the Interwebz). The last gotcha is that we’ll need to ensure the account running SQL Server is a local admin. Why? Because the first rule of our temporary D: SSD volume is that it can get blown away whenever the VM is re-started/re-provisioned – and that means NTFS permissions on the D: drive are reset to defaults as well. Aaaand by default, non-administrators can’t write to the root of D:. So, we resolve this last unfortunate situation by creating a dedicated SQL Server service account, configuring SQL to run under that account, and making sure it’s a local Administrator. Now we’re finally set to use the temporary D: SSD volume as the location of our TempDB files and enjoy the improved performance that comes with it.
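
For reference, here’s a minimal sketch of that switch, run after SQL setup completes. It assumes a default local instance and that a SQL Server PowerShell module providing Invoke-Sqlcmd is available; the logical file names are the SQL Server defaults, and any additional data files the SQL 2016 installer created (typically temp2, temp3, etc.) get relocated the same way:

# Relocate TempDB to the root of the temporary D: volume; takes effect when the SQL service restarts
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\templog.ldf');
-- Repeat MODIFY FILE for any additional TempDB data files (e.g. temp2, temp3, ...)
"@
# Restart SQL Server so TempDB gets re-created at its new location
Restart-Service -Name "MSSQLSERVER" -Force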

Storage Space for Data/Logs

Another option for increasing storage performance on an Azure IaaS-hosted SQL server is to throw all available disks at it. By this I mean (for example) in the case of our “D3_V2 Standard” VM size above, we are allowed up to 8 data disks attached to the VM, each with a maximum of 500 IOPS. Some quick math (8 x 500 = 4,000 IOPS in aggregate) reveals that, although still not shattering any performance records, if we can somehow team these disks together we should see a noticeable performance increase over a single data disk. Again, this isn’t really new information; in fact, even the official Azure tooltip on the new portal recommends “striping across multiple disks” among other things for improved IOPS (see under VM > Settings > Disks > Attach new disk, Host caching):

Azure_Disk_Recommendation

This can be achieved by either using the classic software-based striping/RAID tools found in Windows Disk Management, or (my preference) using the more recent Storage Spaces / Pools functionality introduced in Windows Server 2012. By creating a storage pool from all our available disks (perhaps in a striped or mirrored model) we can take advantage of multiple simultaneous I/O operations using multiple disks (in the case of our “D3_V2 Standard” machine above, we get 8 data disks.) Personally I just use all available disks in a simple stripe set (no parity or mirror) because I’m not storing critical data – your needs may vary. For SQL Server purposes, we can use our newly-provisioned pool of disks as the location of both our system and user data/log files during setup, and subsequently-created databases will automatically and conveniently get created there. As far as cost goes, from what I understand the additional cost incurred by using multiple disks is actually pretty minimal, since we are charged for actual data stored vs. size and number of disks.

One Storage Pool, or Multiple?

Now, do we want to use all our available Azure data disks in a single large storage pool, or should we create multiple pools out of subsets of disks? To use an oft-quoted consulting phrase, “it depends”. You will make certain best-practice assessments happier by, for example, splitting your data and log files onto separate disk volumes. But the real-world performance gains are debatable – sure, you theoretically avoid disk resource contention by putting your data files on 4 disks and your log files on another 4 disks, but at the same time you’ve reduced the total IOPS available to each by roughly half. I myself prefer to give my data and log files all available IOPS in a single storage pool / disk / volume (keep in mind that this is all being done on a budget, for non-Production workloads anyhow). In a production scenario, while many of the concepts mentioned in this post would still apply, you likely wouldn’t have the same constraints and could afford bigger, better and more distributed storage options.

Sample Code Time!

Y’all know I loves me some PowerShell, especially when it comes to having to do something repeatedly (say, for every target server in a SharePoint farm.) So here’s a function that will create a storage pool with simple striping using all available disks, create a virtual disk with a single partition, then format the volume as NTFS using a 64 KB cluster size:

function New-StoragePoolAndVirtualDiskFromAvailableDisks ($storagePoolName, $driveLetter)
{
    if ($null -eq $driveLetter) {$driveLetter = "S"}
    $driveLetter = $driveLetter.TrimEnd(":")
    [UInt32]$allocationUnitSize = 64KB
    if ($null -eq $storagePoolName) {$storagePoolName = "StoragePool"}
    # Create storage pool from all poolable physical disks if it doesn't exist yet
    if ($null -eq (Get-StoragePool -FriendlyName $storagePoolName -ErrorAction SilentlyContinue))
    {
        Write-Output " - Creating new storage pool `"$storagePoolName`"..."
        New-StoragePool -FriendlyName $storagePoolName -PhysicalDisks (Get-PhysicalDisk -CanPool:$true) -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName
    }
    else
    {
        Write-Output " - Storage pool `"$storagePoolName`" already exists - proceeding..."
    }
    # Create virtual disk (simple stripe, fixed provisioning, maximum size) if it doesn't exist yet
    if ($null -eq (Get-VirtualDisk -FriendlyName $storagePoolName -ErrorAction SilentlyContinue))
    {
        Write-Output " - Creating new disk in $storagePoolName..."
        $disk = New-VirtualDisk -StoragePoolFriendlyName $storagePoolName -FriendlyName $storagePoolName -UseMaximumSize -ProvisioningType Fixed -AutoWriteCacheSize -AutoNumberOfColumns -ResiliencySettingName Simple
        Write-Output " - Initializing disk..."
        $disk | Initialize-Disk -PartitionStyle GPT
    }
    else
    {
        Write-Output " - Virtual disk already exists - proceeding..."
        $disk = Get-VirtualDisk -FriendlyName $storagePoolName
    }
    # Create the partition at the requested drive letter if it doesn't exist yet
    if ($null -eq (Get-Partition -DriveLetter $driveLetter -ErrorAction SilentlyContinue))
    {
        Write-Output " - Creating new partition at $($driveLetter):..."
        $partition = New-Partition -DiskId $disk.UniqueId -UseMaximumSize -DriveLetter $driveLetter
        Write-Output " - Waiting 5 seconds..."
        Start-Sleep -Seconds 5
    }
    else
    {
        Write-Output " - Partition $($driveLetter): already exists - proceeding..."
        $partition = Get-Partition -DriveLetter $driveLetter
    }
    # Format the volume as NTFS with a 64 KB allocation unit size, unless it's already NTFS
    if ((Get-Volume -DriveLetter $partition.DriveLetter -ErrorAction SilentlyContinue).FileSystem -ne "NTFS")
    {
        Write-Output " - Formatting volume $($partition.DriveLetter):..."
        $partition | Format-Volume -FileSystem NTFS -NewFileSystemLabel $storagePoolName -AllocationUnitSize $allocationUnitSize -Confirm:$false
    }
    else
    {
        Write-Output " - Volume $($partition.DriveLetter): is already formatted."
    }
    Write-Output " - Done."
}

Note that the script above doesn’t actually have any Azure dependencies, so you can use it on pretty much any Windows Server 2012 (or newer) VM with available disks.
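
For example, calling it with no parameters creates a pool named “StoragePool” exposed as S:, or you can pass your own values (the names below are just examples):

# Pool all available data disks into a simple stripe set and expose it as S:
New-StoragePoolAndVirtualDiskFromAvailableDisks -storagePoolName "SQLDataPool" -driveLetter "S"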

SharePoint Storage

So what the heck does all of this have to do with SharePoint (you know, that thing that I usually specialize in)? Well, in addition to improving SharePoint’s performance (SQL being the brains and brawn behind SharePoint and all), we can use what we’ve learned above to create a SharePoint server’s recommended logging / data / search index volume. Now the storage performance requirements aren’t quite as high for this volume as for those hosting SQL databases, but it’s recommended that we put stuff like this on a non-system volume anyhow. So, although we may not have a need for a fast(er), large storage pool of disks, in many cases we’re entitled to those disks by virtue of the Azure VM size we’ve chosen, so why not use ’em.

Speaking of SharePoint VM sizes, here’s another sample script you can use to list VM sizes suitable for running SharePoint in a particular Azure region (location). It takes the location name as input (hint: use Get-AzureRmLocation | Select-Object Location to list available locations) and assumes certain values (for RAM, # of CPUs, # of data disks), but feel free to play around with the numbers to get at the right VM size for your purposes.

function Get-AllAzureRmVmSizesForSharePoint ($locationName)
{
    Add-AzureRmAccount
    if ($null -eq $locationName) {$locationName = "CanadaCentral"} # Toronto
    $minDiskCount = 8
    $minMemory = 8192
    $maxMemory = 16384
    $maxCores = 8
    Write-Output " - VM sizes suitable for SharePoint (minimum $minDiskCount data disks, between $minMemory and $maxMemory MB RAM, and $maxCores CPU cores or less) in location `"$locationName`":"
    Get-AzureRmVMSize -Location $locationName | Where-Object {$_.MaxDataDiskCount -ge $minDiskCount -and $_.MemoryInMB -le $maxMemory -and $_.MemoryInMB -ge $minMemory -and $_.NumberOfCores -le $maxCores}
}

This particular function of course assumes you have a recent version of the Azure cmdlets installed.
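
Typical usage would look something like this:

# List available locations, then query one of them for SharePoint-suitable VM sizes
Get-AzureRmLocation | Select-Object Location
Get-AllAzureRmVmSizesForSharePoint -locationName "CanadaCentral"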

Hope you found this somewhat random post useful. It’s basically a summary of the last few months of my experimenting with Azure as an aspiring expert on the platform, and really only scratches the surface of Azure options with regard to storage, VM sizing and automation.

Cheers
Brian

 

Installing SharePoint 2016 Release Candidate Directly (i.e. Without Manual Patching)


When SharePoint 2016’s Release Candidate was announced, you may have wondered why (and at the same time been a little sad that) there was no monolithic ISO or executable made available that would let you install straight to RC without first having to install the previous public release (Beta 2). Well, it turns out there’s a fairly simple way to accomplish a direct-to-RC installation, and it uses a tried & true methodology – slipstreaming!

Here are the high-level steps to go from zero to RC (in your non-Production environments, right?):

  1. Download the SharePoint 2016 Beta 2 bits if you don’t already have them.
  2. (optional) Download any Beta 2 language packs you might require.
  3. Extract/copy the Beta 2 bits to a suitable local or network folder (e.g. C:\SP\2016\SharePoint).
  4. (optional) Extract the language pack(s) to a folder relative to the path above (e.g. C:\SP\2016\LanguagePacks\xx-xx (where xx-xx represents a culture ID, such as de-de)).
  5. Download the SharePoint 2016 RC patch – and don’t forget to select/include the prerequisite installer patch that matches the language of SharePoint 2016 you’re installing.
  6. Download the RC language pack that matches the language of SharePoint 2016 you’re installing (in my case, English). You need this in order to update the built-in language pack.
  7. (optional) Download the corresponding patch for any other language packs you downloaded/extracted previously.
  8. Extract the RC patch to (using the convention above) C:\SP\2016\SharePoint\Updates:

SP2016 Slipstreamed Updates

Note that the wssmui.msp shown above is actually the English language pack patch, which you would have obtained in step 6 above.
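
If you prefer to script the extraction, SharePoint update packages support an /extract: switch; here’s a minimal sketch (the downloaded file names below are hypothetical – substitute whatever the actual RC packages are called):

# Unpack the RC patch and the English language pack patch into the slipstream Updates folder
Start-Process -FilePath "C:\SP\2016\Downloads\sts2016-rc-x64.exe" -ArgumentList "/extract:C:\SP\2016\SharePoint\Updates" -Wait
Start-Process -FilePath "C:\SP\2016\Downloads\wssmui2016-rc-en-us-x64.exe" -ArgumentList "/extract:C:\SP\2016\SharePoint\Updates" -Wait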

9. Extract the prerequisite installer patch files (downloaded as part of step 5) to C:\SP\2016\SharePoint, overwriting any existing files:

SP2016 Slipstreamed Prerequisiteinstaller

10. (optional) Extract respective RC language patch files to C:\SP\2016\LanguagePacks\xx-xx\Updates:

SP2016 Slipstreamed LangugePack

Careful! All the language pack RC patch files are called wssmui.msp regardless of language, with no quick way to tell them apart. I therefore recommend you extract/copy them one at a time – but again this step only applies if you’re actually installing packs for different languages.

11. Now install SharePoint as you normally would. Patches placed in the correct locations will be automatically picked up and applied during the installation. Note that by this point, the process should look familiar if you’ve ever done slipstreaming in previous versions of SharePoint.

12. Once the installation is complete, verify that the patches were successfully applied in <CentralAdminUrl>/_admin/PatchStatus.aspx. You should see entries for “Update for Microsoft Office 2016 (KB2345678) (wssmui_<cultureID>.msp 16.0.4336.1000)” under each language pack (if applicable):

SP2016 Slipstreamed RC Language Pack

And also “Update for Microsoft Office 2016 (KB2345678) (sts.msp 16.0.4336.1000)” under SharePoint 2016 Preview itself:

SP2016 Slipstreamed Updates

Oh, and the contents of that “readme.txt” file shown in the screen caps above? “Any patches placed in this folder will be applied during initial install.” As though the product was, you know, designed for this 🙂

Cheers
Brian

Pre-Populating SharePoint Farm Details for ULSViewer


The new ULSViewer for SharePoint introduces the capability to monitor all the ULS logs in your SharePoint farm at once, in real time. While this is a fantastic enhancement to an already near-perfect piece of software, I found one tiny little pain point with it. When configuring ULSViewer to monitor an entire farm, you need to manually specify all the servers in your farm as well as the common ULS log path.

image

As a seasoned (crusty) SharePoint IT pro, I thought to myself, “hey ULSViewer, you seem smart… figure it out yourself!” After all, this isn’t top-secret information, it’s all right there within the farm configuration. And being the type of person who hates doing anything manually (especially more than once), I wanted some sort of automated fix.

So I set about writing a fairly simple PowerShell script to query the farm for all its SharePoint servers, plus the diagnostic (ULS) logging path. These pieces of information are available via two SharePoint PowerShell cmdlets: Get-SPFarm and Get-SPDiagnosticConfig. The rest was just reading from – and, if necessary, adding to – ULSViewer’s Settings.XML file.
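
The farm query portion boils down to something like this minimal sketch (run from the SharePoint Management Shell, or after loading the snap-in):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# All servers actually running SharePoint (a role of "Invalid" indicates e.g. the SQL server)
$farmServers = (Get-SPFarm).Servers | Where-Object {$_.Role -ne "Invalid"} | Select-Object -ExpandProperty Name
# The farm-wide diagnostic (ULS) log location (may contain an environment variable such as %CommonProgramFiles%)
$ulsLogPath = (Get-SPDiagnosticConfig).LogLocation
Write-Output "Servers: $($farmServers -join ', ')"
Write-Output "ULS log path: $ulsLogPath"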

Head on over to the TechNet Gallery to grab the PowerShell script for yourself… heck it could even save you seconds of your precious time!

What I Do.


Over the past few weeks and months I’ve been contemplating putting together a post that describes what I do – partly as a reference to others (to answer that oft-asked question), partly to inventory my own activities and interests, and in no small part to get myself thinking about what lies ahead. I also expect this will be something that gets updated periodically and thus won’t be a snapshot of a particular time in my career (I hope, anyhow).

Why here? Well I thought about making this my LinkedIn summary but I’m not sure if it’s really the appropriate place. Maybe parts of it will make their way in there though. So here goes…

Current / Typical Activities

On any given day you might find me building SharePoint farms (either for testing purposes or for customers of my employer, Navantis). In fact, I build a lot of farms. Or perhaps, more appropriately, AutoSPInstaller does. That’s the PowerShell-based scripted process (which I created over 4 years ago and still maintain regularly) that hundreds of folks across the world have downloaded and used to help build their own SharePoint farms – kinda cool.

But before you build a SharePoint farm, you need to plan, size and architect it. So I also spend a fair amount of time meeting with customers to discuss their SharePoint infrastructure requirements – with a business angle as much as possible. It’s all well and good for a client to say “oh, we want everything turned on” as far as features and service apps go. But that’s akin to saying you want a 12-bedroom house for your family of 4, “in case we ever grow”. You might be able to afford that large house, but can you afford to heat, cool & maintain it? Similarly, now that you’ve turned on every single available SharePoint service (Lotus Notes connector, anyone?), those things consume memory and resources – and if nobody’s using them (yet, if ever), they’re costing you in terms of server resources.

If all goes well, the outcome of these planning and requirements gathering activities is a good solid SharePoint architecture design, which then feeds very nicely into the afore-mentioned scripted installation procedure.

It’s not all net-new builds of course. I find myself doing SharePoint health checks more and more these days, which quite often transition into either remediation mini-projects or full-blown farm re-builds, depending on the results of the health checks. I expect these types of engagements to increase in frequency as all those on-premises SharePoint 2007, 2010 and 2013 environments out there age and, as is unfortunately typical, don’t get the TLC they deserve.

With regard to SharePoint/Windows operations and management, well I also create and maintain some scripts for that, too. Mind you, I certainly wouldn’t say I do proper software development or coding; however, having worked with and supported developers for most of my career in IT, I have a good understanding of the development lifecycle and the peculiarities and challenges faced.

Otherwise, I do a fair bit of estimation and project management ‘lite’: How long will this particular SharePoint deployment take, based on number of farms, servers, service apps being provisioned, etc.? What’s the anticipated project velocity, given the level of knowledge the customer’s own resources have, their change management processes, overall policies, and other intangible environmental factors? Finally, what are the assumptions we’re making, and the associated risks if our assumptions are wrong or not met? Experience has led me to a fairly robust personal process for gathering all of this information, which I refine with each engagement. You might even catch me fussing about in Microsoft Project to collect a lot of this information, too.

What I’m passionate about

I love automating mundane tasks, templating, and all-around maximizing re-use. I also have a knack for (and get energized from) solving tough technical issues, digging under the covers, taking a step back (or several steps back) when required and methodically walking through an issue – and realizing there’s no such thing as dumb questions along the way.

Applying best practices – a frequently-used term that’s actually tough to nail down (e.g. as defined by whom? And for whom?), though luckily within communities like SharePoint there appears to be consensus on most so-called best practices, blessed by those with ample field experience and backed by evidence. I like to count my own experiences among those, which gives me that additional confidence and personal satisfaction.

New technology! I’m an early adopter (to a fault sometimes). Phones, tablets, laptops, servers, I love reading about news & advancements on all of these. However, as much as I’d like to, say, install that Windows 9 preview on my work laptop, experience (age?) has taught me that sometimes it’s best to hold off. But then again, here comes technology to the rescue – I can run it in a local virtual machine 🙂 which I do frequently with a lot of new tech. I consider it almost a duty to test out new SharePoint updates for example – at least so far as how they install and integrate.

What I want to do more of

Knowledge transfer (fancy consultant phrase for teaching)… Standing up in front of individuals and audiences and presenting my perspective on how things should be installed, should run and be maintained/operated, etc. This could involve travel to different cities for brief stints (conferences, yay), although admittedly that’s tough at the current point in my life with 4 young kids (including 3 under age 5). One of my regrets in life is that I didn’t catch the speaking bug earlier in my career – would have loved to travel to conferences around the world speaking on the topics I’m passionate about. Hopefully as things get easier on the home front this dream will become more of a reality.

Teamwork is unfortunately something that seems to be getting more and more rare in my current role. Seems I’m a bit of a lone wolf – most projects don’t have the budget to support more than one of me! While there may be several developers, business analysts, etc. I tend to be the only infrastructure architect. So I actually (gasp!) look forward to meetings where I can interface with the other team members and share my thoughts and advice.

What I want to do less of

Documentation for documentation’s sake. Yes, I get it, it’s a big part of a consultant’s job. But there are two problems as I see it. First, I consider myself really slow at writing documentation because I’m a perfectionist; I’m constantly correcting and re-correcting my writing to the point where a paragraph seems to take forever to put together. Second, so much of the documentation we consultants write seems to end up in a black hole, or filed off in some file share (document library) somewhere where it quickly grows old and obsolete before anyone really reads it. To me, the time spent writing these pages upon pages could be better spent teaching, scripting (yay), actually doing, or, worst case, writing concise, quick-reference material that’s both easy to read and easy to keep current.

In fact, much of the motivation behind AutoSPInstaller was to replace the long series of screen caps & instructions found in typical SharePoint “build books” with something that would not only help someone rebuild a SharePoint environment, but would do so in the most automated and error-free way possible. I get a great sense of satisfaction telling customers “There will be NO build book… <dramatic pause> – you will get something better.”

Extreme multi-tasking is another thorn in my side. Sure, I like to be busy – way more than I like having idle time. Being busy on a handful of tasks that all feel like they’re moving forward is one thing, but juggling so many tasks that much of the effort goes into just switching contexts doesn’t amount to a lot of productivity, and leaves me feeling bewildered, like I haven’t accomplished anything.

Work environment

Delivery-based work – in other words, being given the autonomy as a seasoned IT pro to make the call about where I can best be productive given the type of work I’m currently engaged in – be it a coffee shop, library, home (though not likely with 2-4 kids in the house at any given moment!), the client site, or maybe even the office (!). I want my (prospective) employer to say (implicitly or explicitly) “we don’t care where you work, as long as you get it done”. Mind you, if the “it” is something like “gather requirements” then obviously that won’t work nearly as well from the comfort of my own basement as it would in the same room as the customer. The point is, *I* can make (and have consistently made) the right decision about the where, and luckily these days it seems more and more employers are realizing the direct benefit of granting their staff the same degree of flexibility.

Further to my earlier claim as a lone wolf of sorts, I enjoy work environments with a healthy social component. Anyone who’s subject to my social media posts will attest to the fact that I love to (over)share my various exploits and experiences, but nothing beats human interaction – be it a pub night (SharePint!), sporting event, or casual lunchtime conversation. Work environments that promote this type of collaboration are tops in my books.

Summary

This concludes a brief bit of insight into the life of a consultant, architect & IT pro, currently working in the SharePoint space. Again, this post is really more of a self-inventory – a fulfillment of a promise to reflect on and record my professional activities, attitude and interests. Hope you found it useful, or at least mildly interesting…!

What’s In Store For AutoSPInstaller v.Next


Well it’s been a little while since the last AutoSPInstaller release (and the last blog post, to be honest) but let me assure you it’s been all work and (slightly less) play! The last few months have seen a pretty intense development crunch for the automated SharePoint 2010 install/config script, and I just can’t seem to figure out when to call it quits and stop scope-creeping myself. Anyhow I think it’s time I came up for air to let everyone know what I’ve been up to.

More Of The Same Goodness, Tweaked

While there are some notable new features (see below), one of the goals of a new release should obviously be to resolve some outstanding issues. So I managed to fit in a bunch of fixes, and here are some of them in no particular order…

First, the CreateEnterpriseSearchServiceApp function never seemed to be able to handle more than one query server and successfully create the query topology (for reasons that turned out to be pretty odd). Expect a fix for that in v3.

Also, the ValidatePassphrase function will now actually check farm passphrases against all criteria (previously, the requirement for an 8-character minimum was missing).

A nice treat for me was stumbling upon http://support.microsoft.com/kb/2581903. Why? Because it explains a nagging issue I’d been having lately with PrerequisiteInstaller.exe – namely, that it would almost always crap out on KB976462. Well it turns out the fix is simple – as per the article, I re-jigged the InstallPrerequisites function to install the .NET Framework prior to running PrerequisiteInstaller.exe. Done and done.

Finally, and maybe worth mentioning, is the small but interesting addition of a timer for the SharePoint and Office Web Apps binary installation functions. This is nice when you’re trying to get an idea of how long each install takes (e.g. for comparing the speed of various servers & platforms).

Sure there are other tweaks and fixes here and there, but let’s get to the new stuff…

Run Once, Install Many

A much pondered (if not requested) feature of AutoSPInstaller was the ability to install and configure your entire farm from a single, central server. Well ponder no more; I’ve had a good deal of success in finally remote-enabling the SharePoint scripted process. Lots of hoops to jump through for this one, including leveraging the ever-useful PsExec.exe to, uh, remotely open the door to PowerShell remoting in the first place. I expect this feature will go through a LOT of iterations since it seems there are a ton of things that can cause remoting to go wonky, never mind trying to do a full SharePoint 2010 install over a remote session!

So far I’ve had repeated luck building 2- and 3-server farms – can’t wait to try it on larger target farms with decently-powered hardware though. Oh and one more thing – the machine on which you trigger the install doesn’t even have to be one of the farm servers…

Simultaneous Installs

Hand-in-hand with the new remote functionality is the promise of parallel installations. Some of the faithful have asked, “Hey, why can’t we have multiple binary installs going at once, since these can take a long time, especially when installing n-server SharePoint 2010 farms?” Following a suggestion that was made on the AutoSPInstaller discussion list, I’ve implemented the ability to pause after the binaries have completed installing. That way, you can safely kick off the script on as many servers as you’d like at the same time, then return to each server one at a time to press a key and configure/join the farm.

Further, if remote installs have been specified, the script will kick off simultaneous remote sessions to each server in the farm and perform the binary install portion of the script. For now, each session will wait for input (key press) before proceeding with the farm config/join, but the ultimate goal is to go fully automated and have each session somehow detect when the farm is ‘ready to be joined’.

DB Control Freaks, Rejoice

Another oft-requested piece of functionality is the ability to spread your SharePoint 2010 databases out to more than just one SQL server. This is certainly a nice-to-have for large farms where (for example) you’d like your Search databases to have dedicated hardware. Or, maybe you need to put a particular content database on an extra-secure and isolated SQL cluster instance.

The next version of AutoSPInstaller will include the ability to specify a different SQL server (and SQL alias!) for each web app, and nearly every service app you can think of. The semi-exception is Search, which does allow for a different SQL instance to be specified, but currently won’t automatically create your alias for you (though you can simply create one manually, in advance).

Even if you don’t plan on using distributed SQL servers now, but are thinking you might need to segregate your DB back-end duties in the future, you can take advantage of this new feature by creating different SQL aliases (pointing to the same SQL instance, for now). The aliases can then fairly easily be re-pointed to different SQL instances later. Cheap insurance for growing farms that aren’t quite ready to spring for all that new SQL server hardware on day one.
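
Creating such an alias manually amounts to a couple of registry values. Here’s a minimal sketch, equivalent to what you’d do in cliconfg.exe (the alias and instance names below are just examples, and DBMSSOCN simply indicates the TCP/IP network library):

$aliasName = "SPFarmSQL"                       # the name your farm will connect to
$sqlInstance = "SQLServer01\SharePoint,1433"   # the real server\instance (and port) behind the alias
# Write the alias for both 64-bit and 32-bit clients
foreach ($regPath in @("HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo",
                       "HKLM:\SOFTWARE\WOW6432Node\Microsoft\MSSQLServer\Client\ConnectTo"))
{
    if (-not (Test-Path $regPath)) {New-Item -Path $regPath -Force | Out-Null}
    # Re-pointing the alias later is just a matter of updating this one value
    Set-ItemProperty -Path $regPath -Name $aliasName -Value "DBMSSOCN,$sqlInstance"
}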

Choose Your Input

Last and probably least, the under-appreciated AutoSPInstallerLaunch.bat will support an arbitrarily-named input XML file passed to it as an argument. So, if you’re like me and have amassed a decent collection of AutoSPInstallerInput-<string>.XML files, you’ll appreciate the ability to tell the script exactly which XML input file you’d like to use at that particular moment (and not just one auto-detected based on server name, domain etc. – though that’s still supported.)

Aaaaannnndd a nice little feature I discovered (maybe a little late to this party) is that you can actually drag an input file onto the AutoSPInstallerLaunch.bat:

DragOntoAutoSPInstallerLaunch

That way, it gets passed to the batch file as an argument without having to type it all out in a command window – a pretty decent time-saving tip!

Coming… when?

Aha, see the note earlier in this post about scope-creep 😉 Well if I can lock things down in the coming days/weeks, I hope to check in some code that you can download and try out on your own. Something I’d consider beta I guess, although there are really two streams going on:

  • The core traditional functionality (one server at-a-time, script launched on each server manually) which is actually pretty stable and has benefitted from the fixes and features listed above
  • The new bleeding-edge remote/parallel install stuff (which can be completely bypassed by setting the appropriate input file parameters to false).

Both will of course be included in the next source code check-in, so you can decide then how lucky you feel 🙂 You can always subscribe to updates to be notified of that imminent update!

Cheers
Brian