Today I received an invitation to preview PowerShell in Azure Cloud Shell (PSCloudShell). Azure Cloud Shell is a feature added to Azure earlier this year that allows us to run a Bash console inside a web browser to manage our Azure resources. Microsoft has now added PowerShell to the mix as well, and I’m already having a blast poking around to see how it works and imagining all the possibilities.

When I first launched the PSCloudShell interface, I noticed it took quite a while to get up and running. You’re required to set up a storage account to hold the artifacts needed to run PowerShell, such as modules and scripts. Once the shell launches for the first time, you’ll see a familiar prompt with the location set to “Azure.”


If you’re building PowerShell modules and want to get your feet wet with development concepts like unit testing and release pipelines, check out my PlasterTemplates module on GitHub. This template scaffolds a new module project that supports a basic release pipeline model with the following features:

  • Editing in Visual Studio Code
  • Unit testing with Pester
  • Markdown documentation with PlatyPS
  • Source control in GitHub
  • Continuous integration in AppVeyor
  • Module versioning
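As a quick sketch of getting started (the template folder path below is an assumption — point it at wherever you clone the repository):

```powershell
# Install Plaster from the PowerShell Gallery, then scaffold a new module
# project from the template. The '.\PlasterTemplates\Module' path is an
# assumption -- substitute the path where you cloned the template repo.
Install-Module -Name Plaster -Scope CurrentUser
Invoke-Plaster -TemplatePath .\PlasterTemplates\Module `
               -DestinationPath .\MyNewModule
```

Invoke-Plaster will prompt for the template’s parameters (module name, author, and so on) and lay down the project structure for you.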

This Wednesday Microsoft announced the release of PowerShell 6.0 Alpha 17. One new feature in particular intrigued me: the capability to connect to custom remoting configurations. This opens up the possibility of connecting to an Exchange endpoint, including Office 365. I just had to give it a try to see if and how it works. My test setup is a Hyper-V VM with Ubuntu 16.04 installed, with PowerShell installed directly from the GitHub releases page.

Here’s how you can create an implicit remoting connection to Exchange Online:
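A sketch of that connection, using the public Exchange Online endpoint (credentials are prompted for interactively; this endpoint requires Basic authentication, which must be specified explicitly):

```powershell
# Prompt for Office 365 admin credentials
$credential = Get-Credential

# Open a session against the Exchange Online remoting endpoint
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $credential -Authentication Basic -AllowRedirection

# Import the remote cmdlets into the local session (implicit remoting)
Import-PSSession $session

# The Exchange cmdlets are now available locally, e.g.:
Get-Mailbox -ResultSize 10
```

Once the session is imported, the Exchange cmdlets behave as if they were installed locally, even though every call runs against the remote endpoint.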


Have you heard of this new thing called Pester? Seriously though, Pester is all over the place in the PowerShell world right now, and is now included in Windows 10 out of the box. Pester was created in 2011 by Scott Muc to satisfy his need for a unit testing framework in PowerShell, and since 2012 it has been lovingly developed by Dave Wyatt, Jakub Jares, and others. Currently in version 4.0.2, Pester is responsible for teaching developer practices to us lowly PowerShell scripters. One of the noticeable trends I’ve seen lately is using Pester to test things it was not designed for; infrastructure testing is the new buzzword, and toward that end some Microsoft folks have offered up the Operation Validation Framework, a thin wrapper around Pester that helps organize infrastructure test code.
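For anyone who hasn’t seen it yet, a minimal Pester test looks like this (the function under test is a made-up example; Pester 4 assertion syntax shown):

```powershell
# A trivial function under test
function Get-Greeting {
    param([string]$Name)
    "Hello, $Name!"
}

# Pester's DSL: Describe groups tests, It defines one test case,
# and Should asserts on the pipeline input.
Describe 'Get-Greeting' {
    It 'greets the caller by name' {
        Get-Greeting -Name 'World' | Should -Be 'Hello, World!'
    }
}
```

Save it as `Get-Greeting.Tests.ps1` and run `Invoke-Pester` in that folder to execute it.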


DSLs seem to be all the rage in the PowerShell world right now. Pester has quickly become a staple in many PowerShell developers’ toolbelts, and new DSLs seem to crop up on a regular basis. Simplex, PScribo, and even DSC are all examples of DSLs written in, or for use in, PowerShell. I’m not a big fan of DSLs, and I’ll explain why.

DSL stands for Domain-Specific Language: a programming language designed to tackle a specific problem domain. Most common programming languages, like C#, Java, and even PowerShell, are general purpose and can be applied to a wide variety of problems. A DSL, in contrast, has a narrow focus. A common reason for using a DSL is that it can employ natural language closer to the idiom of the problem domain itself. Pester, for instance, expresses test conditions using natural-language elements like “It” and “Should” to mirror the way we think about units of code.
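To illustrate why these read so naturally: a PowerShell DSL “keyword” is usually just a function that takes a name and a script block. A toy example (all names here are invented for illustration):

```powershell
# A toy DSL: 'Server' and 'Role' read like declarations, but they are
# ordinary PowerShell functions. The script block passed to Server is
# simply invoked, and the Role calls inside it record data.
$script:Inventory = @{}

function Server {
    param([string]$Name, [scriptblock]$Body)
    $script:Inventory[$Name] = @()
    $script:CurrentServer = $Name
    & $Body
}

function Role {
    param([string]$RoleName)
    $script:Inventory[$script:CurrentServer] += $RoleName
}

# Usage reads like a declaration rather than imperative code
Server 'web01' {
    Role 'IIS'
    Role 'ASP.NET'
}

$script:Inventory['web01']   # -> IIS, ASP.NET
```

Under the hood there is no magic: parameter binding and script blocks do all the work, which is exactly how Pester, DSC, and friends are built.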


During a recent talk I gave at the Cincinnati PowerShell User Group, I briefly demonstrated the technique I use to create advanced functions that accept credentials. I’ve been using this approach for a while and thought it would be great to show it off so others can take advantage of it. Of course, like most demos, it failed miserably. Here’s why:

PowerShell 4.0 - The Way We Were

In PowerShell 2.0 and above we could specify that a function parameter should accept objects of type System.Management.Automation.PSCredential, or, starting in v3.0, use the [PSCredential] type accelerator:
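A minimal sketch of such a function (the function name and body are illustrative only):

```powershell
# An advanced function whose -Credential parameter accepts a PSCredential.
# The [Credential()] transformation attribute also lets callers pass a
# bare user name, which PowerShell converts via a Get-Credential prompt.
function Connect-MyService {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]
        [System.Management.Automation.Credential()]
        $Credential
    )
    # Real code would use $Credential.GetNetworkCredential() here
    "Connecting as $($Credential.UserName)"
}
```

Callers can then pass a stored credential object (`-Credential $cred`) or just a user name and be prompted for the password.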


This will be a quick post to detail the steps I took to resolve an issue in Exchange Online where we had a very specific use case for mailbox compliance.

We have a type of user that needs a mailbox only for a certain period of time; once that time has passed, our policy requires that access to the mailbox be removed. However, other Office 365 services, such as OneDrive for Business and the Office Pro Plus subscription, need to be retained. Our compliance policy also dictates that the mailbox data be retained for an extended period, and we use Litigation Hold to achieve this retention.

When a mailbox is on Litigation Hold and the corresponding user is deleted, the mailbox is converted to “Inactive” and all its data is retained. The guidance Microsoft provides for this centers on employees leaving your organization, which is why the trigger for converting to an inactive mailbox is the deletion of the user.
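In cmdlet terms, the sequence looks roughly like this (the user principal name is a placeholder; Set-Mailbox runs in an Exchange Online session, and Remove-MsolUser requires the MSOnline module):

```powershell
# Place the mailbox on Litigation Hold before deleting the account, so
# that deletion converts it to an inactive mailbox and retains its data
Set-Mailbox -Identity 'jdoe@contoso.com' -LitigationHoldEnabled $true

# Deleting the user then triggers the conversion to an inactive mailbox
Remove-MsolUser -UserPrincipalName 'jdoe@contoso.com'
```

The catch for our use case is the second step: deleting the user also removes OneDrive for Business and the Office Pro Plus subscription, which we need to keep.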


This will be a short blog post that serves as a warning to any folks out there thinking of employing Office 365 Preservation Policies in their organization.

We recently decided to employ the new Office 365 Preservation Policies, and we’ve become aware of a glaring issue that affects end users. When a Preservation Policy is applied to a SharePoint site, one unexpected effect is that folders cannot be deleted if they contain any other items. The workaround should be to delete the items within the folder first and then, once it’s empty, delete the folder.

The real problem arises when a folder contains a OneNote notebook. SharePoint sees the notebook as a folder that contains files and will not allow the user to delete it. Users in our organization essentially can’t perform common folder cleanup tasks because of an administrative feature that, according to its description, should only affect files after they are deleted. This is a big problem.


I ran into a problem recently when running a TeamCity build process for a PowerShell module that I have published to the PowerShell Gallery. The build task continually returned errors and I couldn’t quite figure out what was going on. I decided to run Publish-Module locally to troubleshoot the issue and was surprised to see this error returned:

C:\Publish.ps1 : Failed to publish module 'O365ServiceCommunications': 'Failed to process request. 'A nuget package's Tags
property extracted from the PowerShell manifest may not contain spaces: 'Office 365'.'.
The remote server returned an error: (500) Internal Server Error..
    + CategoryInfo          : InvalidOperation: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : FailedToPublishTheModule,Publish.ps1
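Once deciphered, the message points at the module manifest: NuGet treats the Tags property as a space-delimited list, so a multi-word tag like 'Office 365' is rejected. One fix is to replace it with single-word tags in the manifest’s PSData section (the tag values below are illustrative):

```powershell
# Module manifest (.psd1) fragment -- tag values are illustrative
PrivateData = @{
    PSData = @{
        # Tags = @('Office 365')        # rejected: tag contains a space
        Tags  = @('Office365', 'O365')  # single-word tags publish cleanly
    }
}
```

After removing the space, Publish-Module succeeds and the tags remain searchable on the PowerShell Gallery.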

Series: Part 1 Part 2 Part 3 Part 4

Full Automation


So far in this blog series I’ve covered the basics of user licensing in Office 365 with PowerShell by demonstrating how to add and modify license SKUs. This is very useful for scripting the service entitlements for new users, but not all users are static, and in many cases you’ll need to manage licenses as users move between positions and licensing needs change. This isn’t as easy as it sounds (or as easy as it should be), and it comes with a couple of obstacles.

Problem 1 - DisabledPlans

The biggest drawback to configuring user licenses via PowerShell lies in the design of the New-MsolLicenseOptions cmdlet: its -DisabledPlans parameter is inherently the wrong approach to license automation. For example, let’s say we’ve set up a script that licenses users for the EnterprisePack and adds SharePoint to the disabled plans. In its original state, this would have enabled Exchange, Skype for Business, and Yammer. However, last year Microsoft added a new service plan to the license: Sway. As soon as Sway became available as an assignable plan in your tenant, it was assigned to your users, because it had never been explicitly added to the list of disabled plans.
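Expressed in code, the fragile pattern looks like this (the tenant name and plan identifiers are placeholders; these cmdlets come from the MSOnline module):

```powershell
# License a user for the EnterprisePack with SharePoint disabled.
# Anything NOT listed in -DisabledPlans is enabled -- so any plan
# Microsoft adds to the SKU later (e.g. Sway) silently becomes enabled.
$options = New-MsolLicenseOptions -AccountSkuId 'contoso:ENTERPRISEPACK' `
    -DisabledPlans 'SHAREPOINTENTERPRISE', 'SHAREPOINTWAC'

Set-MsolUserLicense -UserPrincipalName 'jdoe@contoso.com' `
    -AddLicenses 'contoso:ENTERPRISEPACK' -LicenseOptions $options
```

Because the cmdlet models what to *exclude* rather than what to *include*, the script’s behavior changes whenever Microsoft changes the SKU, without anyone touching the code.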