If you’re building PowerShell modules and want to get your feet wet with development concepts like unit testing and release pipelines, check out my PlasterTemplates module on GitHub. This template will scaffold a new module project that supports a basic release pipeline model with the following features:

  • Editing in Visual Studio Code
  • Unit testing with Pester
  • Markdown documentation with PlatyPS
  • Source control in GitHub
  • Continuous integration in AppVeyor
  • Module versioning

Plaster

Plaster is an open source project from the PowerShell Team that helps you create new PowerShell projects from templates. It can be used directly from the command line, and it also drives new project creation in the PowerShell extension for Visual Studio Code. To learn about creating your own Plaster templates, read the tutorials linked in the GitHub readme. You’ll have to futz with the XML manifest a bit to get it working the way you want, but overall I found it relatively easy to get a new template up and running.

The PlasterTemplates module currently includes just one template, AppveyorCI. This is what we’ll be talking about in the rest of this article.

Plaster With VSCode

Before you get started, make sure you’ve followed the instructions on the module’s GitHub page to install it in your PSModulePath, and that you’ve installed Visual Studio Code with the PowerShell extension.

In Visual Studio Code, press F1 to launch the command palette, and then type in “Plaster.” Select the command “PowerShell: Create new project from Plaster template.” Wait a moment, and then select “Load additional templates from installed modules.” Wait a few more seconds while Code scans your module path, and then select the template “AppveyorCI.” Next, enter the path where you’d like your module directory to be created. The next step is a little tricky if you haven’t used Plaster in Code before: after you’ve specified a directory, Plaster will begin prompting you for template-specific parameters, but this happens in the integrated terminal rather than the command palette. If the terminal window is not visible, it will look like Plaster’s execution has stopped. Press Ctrl+` to bring up the terminal window and you’ll see that Plaster is waiting patiently for your input. Answer each prompt and you’ll soon have a new instance of Code with your module’s directory opened.

The prompts you see in the integrated terminal are parameters for the AppveyorCI Plaster template and are important for scaffolding the module correctly. The GitHub username parameter in particular is important to get right, as it will be used in several places to facilitate testing.

Plaster From the Shell

Of course, since Plaster is a PowerShell module, you can also spin up your new module directory using Plaster at the command line rather than in Code. To create the same module project as in the Code example above, run this command:

Invoke-Plaster -TemplatePath "C:\Program Files\WindowsPowerShell\Modules\PlasterTemplates\AppveyorCI\" `
               -DestinationPath 'C:\Repos\NewModule' `
               -Author "Matt McNabb" `
               -ModuleName NewModule `
               -Description 'a new module' `
               -MinimumPowerShellVersion '3.0' `
               -GithubUserName mattmcnabb

Push to GitHub

Now that you’ve got a project folder ready to go, you’ll need to initialize it as a git repository and push it to GitHub. Learning git and GitHub is much too large a topic for this post, so if this is unfamiliar to you please refer to the getting started tutorial here. Your GitHub repo will act as a central source for your code changes and will trigger your builds in AppVeyor.
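If you just want the commands, the basic flow looks something like this (a sketch; the remote URL is a placeholder for your own repo):

git init
git add .
git commit -m "Initial commit from Plaster template"
git remote add origin https://github.com/<username>/<module>.git
git push -u origin master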

Add Your Project to AppVeyor

Once you have an active GitHub repo containing your new module built with Plaster, you’ll need to add the project to AppVeyor. AppVeyor supports signing in with your GitHub account, which makes it easy to add your repos. Once signed in, click on “New Project” and select GitHub as the source. Then select your new module repo from the list of available repos. From this point on, when you commit changes to your GitHub repo, a new build job will be triggered in your AppVeyor project and all your tests will be run.

Build Locally

Now that all the project details are in place, you can begin working on the actual code in your new module. As you write new functions, tests, and documentation, you’ll want to check that everything is working before you push to your GitHub repo. To facilitate this, the AppveyorCI template comes with a handy build task that can be run in Code. Press Ctrl+Shift+B to build the project and run all your tests (some basic tests are included with the template). In order for your local builds to work, you’ll need to make sure you have some dependencies installed:

  1. PSScriptAnalyzer
  2. PlatyPS
  3. Pester
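
If any of these are missing, the quickest route is the PowerShell Gallery (a sketch; assumes PowerShellGet is available, which it is out of the box on PowerShell 5.0 and later):

# Install all three build dependencies for the current user
Install-Module -Name PSScriptAnalyzer, PlatyPS, Pester -Scope CurrentUser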

I decided not to install these dependencies automatically on local builds so that the build task runs quickly. Just know that if they aren’t installed, the build won’t work.

A note on versioning: as you make changes to your code you may want to increment your module version. The AppveyorCI template starts at version 0.1.0. Don’t make any changes to the version number in the module manifest itself; instead, increment the module version in the appveyor.yml file. The module version entry looks like 0.1.{build}. Increment the first two numbers as you see fit, and each successive AppVeyor build number will be appended as the final number.
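For reference, the relevant entry at the top of the generated appveyor.yml looks something like this (a sketch; check your own file for the exact layout):

version: 0.1.{build}

Change 0.1 to 0.2 (or whatever you like) and AppVeyor will keep substituting its build counter for {build}.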

Build in AppVeyor

AppVeyor will now run a build on your repo any time a new commit is pushed to your GitHub repo. If you’d like to skip a build, make sure your commit message includes the text “[skip ci]” or “[ci skip].” Each AppVeyor build runs in a fresh environment, so you may have to wait a few minutes for the environment to get up and running. Also note that each build will download and install some dependencies from the PowerShell Gallery.

If you believe your module is ready for publishing to the PowerShell Gallery, the AppveyorCI template can do that for you. You’ll need to make sure you’ve added your PowerShell Gallery API key to the appveyor.yml file in the project. To do this, you’ll first need to register an account with the PowerShell Gallery and copy out your API key. Next, go to your AppVeyor account and generate a secure variable from your PowerShell Gallery API key. You can then save the value of this secure variable in the appveyor.yml file in your module repo.
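In appveyor.yml, a secure variable looks roughly like this (a sketch; the variable name NuGetApiKey is my own choice, and the encrypted string is a placeholder for the value AppVeyor generates for you):

environment:
  NuGetApiKey:
    secure: <encrypted-value-from-appveyor>

Because the value is encrypted with a key only your AppVeyor account holds, it’s safe to commit to a public repo.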

Now, when you make a commit and you’re ready to deploy to the Gallery, make sure you include the text “!Deploy” at the beginning of your commit message. Assuming all your tests pass in the AppVeyor build, your module will be uploaded to the PowerShell Gallery using your secure API key.
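For example (the message text is illustrative; only the leading “!Deploy” matters):

git commit -m "!Deploy - release 0.2.0"
git push origin master

# and conversely, to push without triggering a build at all:
git commit -m "Fix readme typo [skip ci]"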

Future Plans

I have a few ideas I’d like to build into the AppveyorCI template in the future, if I can figure out how.

For one, I’ve been kicking around ideas regarding autogeneration of format XML files. A high-quality PowerShell module should include formatting for custom types, but writing the XML for these format files can be a real pain. I’d like to be able to write a simpler document or a short PowerShell script that can be parsed to generate these files for me, similar to the way PlatyPS works. If that project ever comes to fruition, the AppveyorCI template will be updated to support it.

Another idea I’ve thought about is conditional testing/build scripts. There is already some separation between how tests are run locally and how they are run in AppVeyor. This separation could grow, and it might be worthwhile to invest time in the build task to support test conditions, such as Pester tags. There might be a use case for different build tasks or prompts that let you test only certain features, or even for testing on Linux and Mac.

Make it Your Own

If you find this Plaster template useful but are expanding into new territory with your testing and release pipeline, please feel free to fork it and take off in a new direction. For instance, if you’d like to add support for any of the build tools out there, such as psake or PSDeploy, you can absolutely do that. My goal here was to get started with as small a learning curve as possible, and so far it’s working for me. To see an example of a project using this template, check out my OneLogin project on AppVeyor.


This Wednesday Microsoft announced the release of PowerShell 6.0 Alpha 17. One new feature in particular intrigued me, and that’s the capability to connect to custom remoting configurations. This opens up the possibility of connecting to an Exchange endpoint, including Office 365. I just had to give this a try to see if and how it works. My test setup is a Hyper-V VM with Ubuntu 16.04 installed, and I installed PowerShell directly from the GitHub releases page.

Here’s how you can create an implicit remoting connection to Exchange Online:
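The shape of it is the classic implicit remoting pattern (a sketch, assuming basic authentication against the standard Exchange Online endpoint):

$Credential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
                         -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
                         -Credential $Credential `
                         -Authentication Basic `
                         -AllowRedirection
# Pull the Exchange cmdlets into the local session
Import-PSSession $Session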

Read more...

Have you heard of this new thing called Pester? Seriously though, Pester is all over the place in the PowerShell world right now, and is even included in Windows 10 out of the box. Pester was created in 2011 by Scott Muc to satisfy his need for a unit testing framework in PowerShell, and since 2012 it has been lovingly developed by Dave Wyatt, Jakub Jares, and others. Currently at version 4.0.2, Pester is responsible for teaching developer practices to us lowly PowerShell scripters. One noticeable trend I’ve seen lately is using Pester to test things it was not designed for; infrastructure testing is the new buzzword, and toward that end some Microsoft folks have offered up the Operation Validation Framework, a thin wrapper around Pester that allows for organizing infrastructure test code.

Read more...

DSLs seem to be all the rage right now in the PowerShell world. Pester has quickly become a staple in many PowerShell developers’ toolbelts, and new DSLs seem to crop up on a regular basis. Simplex, PScribo, and even DSC are all examples of DSLs written in or for use in PowerShell. I’m not a big fan of DSLs, and I’ll explain why.

DSL stands for Domain Specific Language: a programming language designed to tackle a very “specific” problem domain. Most common programming languages like C#, Java, and even PowerShell are general purpose and can be applied to a wide variety of uses and applications. In contrast, a DSL has a narrow scope of focus. A common reason for using DSLs is that they can use more natural language that is closer to the idiom of the problem domain itself. Pester, for instance, expresses test conditions using natural language elements like “It” and “Should” to imitate the way that we think about units of code.
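A trivial Pester test shows this off (a sketch; Add-Numbers is a made-up function):

Describe 'Add-Numbers' {
    It 'adds two numbers' {
        # Reads almost like English: "it should be 5"
        Add-Numbers 2 3 | Should Be 5
    }
}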

Read more...

During a recent talk I gave at the Cincinnati PowerShell User Group, I briefly demonstrated the technique I use to create advanced functions that accept credentials. I’ve been using this approach for a while and thought it would be great to show it off so others can take advantage of it. Of course, like most demos, it failed miserably. Here’s why:

PowerShell 4.0 - The Way We Were

In PowerShell 2.0 and above we could specify that a function parameter should accept objects of type System.Management.Automation.PSCredential, or use the type accelerator [PSCredential] starting in v3.0:
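The declaration looks like this (a sketch; Connect-Service is a made-up function name):

function Connect-Service
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true)]
        [System.Management.Automation.PSCredential]  # or just [PSCredential] in v3.0+
        $Credential
    )

    # use $Credential to authenticate against the service here
}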

Read more...

This will be a quick post to detail the steps I took to resolve an issue in Exchange Online where we had a very specific use case for mailbox compliance.

We have a type of user that only needs a mailbox for a certain period of time; once this time has passed, our policy dictates that access to the mailbox be removed. However, other services in Office 365, such as OneDrive for Business and the Office Pro Plus subscription, will need to be retained. Our compliance policy also dictates that the mailbox data be retained for an extended period of time. We use Litigation Hold to achieve this retention.
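For reference, putting a mailbox on Litigation Hold is a one-liner in Exchange Online PowerShell (the identity is a placeholder):

Set-Mailbox -Identity 'jdoe@contoso.com' -LitigationHoldEnabled $true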

When a mailbox is on Litigation Hold and the corresponding user is deleted, the mailbox is converted to “Inactive” and all its data is retained. Microsoft’s guidance for this all centers on employees who are leaving your organization, which is why the trigger for converting to an inactive mailbox is the deletion of the user.

Read more...

This will be a short blog post that serves as a warning to any folks out there thinking of employing Office 365 Preservation Policies in their organization.

We recently decided to employ the new Office 365 Preservation Policies and have become aware of a glaring issue that affects end users. It seems that when a Preservation Policy is applied to a SharePoint site, one unexpected effect is that folders cannot be deleted if they contain any other items. The solution should be to delete the items within a folder first and then, once it’s empty, delete the folder.

The real problem arises when a folder contains a OneNote notebook. SharePoint sees the notebook as a folder that contains files and will not allow the user to delete it. Users in our organization essentially can’t perform common folder cleanup tasks because of an administrative feature that, according to its description, should only affect files after they are deleted. This is a big problem.

Read more...

I ran into a problem recently when running a TeamCity build process for a PowerShell module that I have published to the PowerShell Gallery. The build task continually returned errors and I couldn’t quite figure out what was going on. I decided to run Publish-Module locally to troubleshoot the issue and was surprised to see this error returned:

C:\Publish.ps1 : Failed to publish module 'O365ServiceCommunications': 'Failed to process request. 'A nuget package's Tags
property extracted from the PowerShell manifest may not contain spaces: 'Office 365'.'.
The remote server returned an error: (500) Internal Server Error..
'.
    + CategoryInfo          : InvalidOperation: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : FailedToPublishTheModule,Publish.ps1
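
The error points straight at the Tags entry in the module manifest: each tag must be a single word. A space-free version would look something like this (a sketch of the relevant psd1 section; the tag values are illustrative):

PrivateData = @{
    PSData = @{
        # One word per tag - no spaces allowed
        Tags = @('Office365', 'Exchange', 'Reporting')
    }
}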
Read more...

Series: Part 1 Part 2 Part 3 Part 4

Full Automation

Overview

So far in this blog series I’ve covered the basics of user licensing in Office 365 with PowerShell by demonstrating how to add and modify license SKUs. This is very useful for scripting the service entitlements for new users, but not all users are static; in many cases you’ll need to manage licenses as users move between positions and licensing needs change. This isn’t as easy as it sounds (or as it should be), and comes with a couple of obstacles.

Problem 1 - DisabledPlans

The biggest drawback to configuring user licenses via PowerShell lies in the design of the New-MsolLicenseOptions cmdlet. The problem is that the -DisabledPlans parameter is inherently the wrong approach to license automation. For example, let’s say we’ve set up a script that licenses users for the EnterprisePack and adds SharePoint to the disabled plans. In its original state, this would have enabled Exchange, Skype for Business, and Yammer. However, last year Microsoft added a new service plan to the license: Sway. This means that as soon as Sway became available as an assignable license in your tenant, it would have been assigned to your users, because it was never explicitly added to the list of disabled plans.
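In code, that setup looks something like this (a sketch using the MSOnline module; the sku and plan names are illustrative for an E3 tenant):

# Disable only SharePoint; any plan NOT listed here, including plans
# Microsoft adds to the sku later (like Sway), ends up enabled
$Options = New-MsolLicenseOptions -AccountSkuId 'contoso:ENTERPRISEPACK' `
                                  -DisabledPlans 'SHAREPOINTENTERPRISE'
Set-MsolUserLicense -UserPrincipalName 'jdoe@contoso.com' -LicenseOptions $Options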

Read more...

Series: Part 1 Part 2 Part 3 Part 4

Modifying License Assignments

Overview

In the first two parts of this series I showed you how to find available licenses to assign to an Office 365 user, how to assign those licenses, and how to enable only selected service plans from those licenses. All of the guidance I have seen to date around this topic stops at this point, but this is not where the work actually ends. For instance, how do we manage licensing for a user whose role changes within the organization? For many companies using Office 365 this means a change in entitlements as well. They may get different security and distribution group membership, but what if your company’s needs also include a different set of services in Office 365? How can we achieve that?

Finding currently assigned service plans

The first step in modifying a user’s license assignments is finding out which services they have currently been assigned. To do this, we use the Get-MsolUser cmdlet:
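The call itself is simple (a sketch; the UPN is a placeholder):

$User = Get-MsolUser -UserPrincipalName 'jdoe@contoso.com'
# ServiceStatus lists each service plan alongside its provisioning status
$User.Licenses.ServiceStatus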

Read more...