Roberts Blog

The House of SCCM and Intune on System Center Street


ConfigMgr HA and losing the Content Library

You'd think completely losing your ConfigMgr Content Library (with no backup) would be quite a dramatic event from a bumpy-road perspective. I found that it isn't that traumatic at all. There are only two key activities: the first is some brief file-system jiggery-pokery, and the second is that the network is going to get a bit of a hammering, as all content will need to be resent (not redistributed) out to the DPs to get ConfigMgr to put it back into the Content Library.

If you have a backup, restore that puppy and get out of jail totally. But with no backup, well, read on.

A while back I lost a HDD that was a member of a pair of 2 TB disks in a lazily striped RAID volume. I was presenting that volume as a shared disk to two Hyper-V VMs running SQL for my XL High Availability lab.

Losing that RAID volume was a bit of a problem. As I said, it was presented to two VMs on the same Hyper-V host as a shared disk; they both used it to create a clustered SMB share, to which I had moved the Content Library as part of the prep work to switch to HA.

My content library was literally gone!

It’s only a lab, so when things like this happen, hmmm, interesting, time to investigate!

So only recently I put another disk in to replace the faulted one and brought the clustered SMB share back to life. Obviously there's no content there, but the share was back online and writable.

While the Content Library is unavailable, no new content can be added to ConfigMgr. Not the end of the world, but worth noting from an HA perspective.

I left the lab alone for a day, came back, and saw that ConfigMgr had attempted, unprompted, to distribute the built-in Configuration Manager Client package to the Content Library (CL), but had failed to complete the task.

It failed because ConfigMgr hadn't recreated the CL's top-level folder structure: it had created the DataLib folder but failed to create the FileLib folder, and stalled right there.

Here’s the textual transcript:

  • Started package processing thread for package 'HA200002', thread ID = 0x159C (5532)
  • Sleep 60 seconds...
    SYS=L3CMN4.LAB1.COM SITE=HA2 PID=5364 TID=5532 GMTDATE=Sun Oct 06 20:10:46.881 2019 ISTR0="HA200002" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400 AVAL0="HA200002"
  • Retrying package HA200002
  • Start updating the package HA200002...
  • CDistributionSrcSQL::UpdateAvailableVersion PackageID=HA200002, Version=18, Status=2300
  • Taking package snapshot for package HA200002 from source \\\SMS_HA2\Client
  • GetDiskFreeSpaceEx failed for
  • GetDriveSpace failed; 0x80070003
  • Failed to find space for 104867131 bytes.
  • CFileLibrary::FindAvailableLibraryPath failed; 0x8007050f
  • CFileLibrary::AddFile failed; 0x8007050f
  • CContentDefinition::AddFile failed;
  • Failed to add the file. Please check if this file exists. Error 0x8007050F
  • SnapshotPackage() failed. Error =
    SYS=L3CMN4.LAB1.COM SITE=HA2 PID=5364 TID=5532 GMTDATE=Sun Oct 06 20:10:47.362 2019 ISTR0="\\\SMS_HA2\Client" ISTR1="Configuration Manager Client Package" ISTR2="HA200002" ISTR3="30" ISTR4="32" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400 AVAL0="HA200002"
  • CDistributionSrcSQL::UpdateAvailableVersion PackageID=HA200002, Version=17, Status=2302
    SYS=L3CMN4.LAB1.COM SITE=HA2 PID=5364 TID=5532 GMTDATE=Sun Oct 06 20:10:47.415 2019 ISTR0="Configuration Manager Client Package" ISTR1="HA200002" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400
  • Failed to process package HA200002 after 68 retries, will retry 32 more times
  • Exiting package processing thread for package HA200002.

I created the FileLib and PkgLib folders and kept an eye on DistMgr; magic began to happen. The client source was snapshotted and put into the CL, and then it went out to the DPs just fine.
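If you find yourself in the same spot, recreating the missing top-level folders is trivial. Here's a minimal sketch; the helper name and the UNC path are mine for illustration, not anything ConfigMgr ships:

```python
from pathlib import Path

def ensure_content_library_folders(cl_root):
    """Create the three top-level Content Library folders if they are missing."""
    for name in ("DataLib", "FileLib", "PkgLib"):
        Path(cl_root, name).mkdir(parents=True, exist_ok=True)

# Point this at your own Content Library location (placeholder path below)
# ensure_content_library_folders(r"\\cluster\share\SCCMContentLib")
```

Once the folders exist, DistMgr takes care of repopulating them as content is resent.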

From there I needed to find all the content I wanted to put back into the CL, in my own time, and perform an Update DPs action on it. Content flowed from the source to the CL and then on to the DPs as expected.

In the real world you'd just leave the content alone. It's no doubt already on the DPs, and clients can still retrieve it; new content can be added now that the Content Library is back online. Content only really needs to be resent to the DPs if the source has changed, so the redistribution that puts it back into the CL doesn't need to happen until that content has actually been iterated.

So, what can you take away from this? If you lose your Content Library and for some reason find yourself without a backup, the outcome isn't that bad, beyond some controllable network utilisation. As for recreating the folders, I've asked the PG to look at the code: it recreates one folder (DataLib) but fails to create another (FileLib). I didn't check whether it would also fail to create the last remaining folder (PkgLib). If they make that piece of code consistent in folder creation, then recovery will focus solely on network utilisation, as long as you haven't lost your content source as well!

In an ideal world ConfigMgr would recover from this rare and sloppily managed (no backup) event without redistributing all that content to the DPs, so that the problem pivots solely around the content source and the Content Library. Perhaps this can be made to happen now, I'm not sure. Besides, in a lab this can be brushed off; in production, backups … are … everything.

Windows 10 Automation–Changing Language–B1903

A customer of mine is in the process of bringing the image factory back in-house, leveraging their ConfigMgr installation, hosted in Azure, to deliver Windows 10 task sequences (build and upgrade) to intranet devices and, eventually via their CMG, internet-based devices.

The quality bar set by the MSP responsible for purchasing, preparing and shipping devices to the end-users is relatively basic, so it hasn't taken long to spin up OSD in ConfigMgr, match the MSP's build results, and produce a better-tooled, more customised build that meets the customer's needs and goes several steps further, while they handle purchasing via the MSP and do the delivery themselves.

Managing the Windows 10 image factory using ConfigMgr is an interim measure for this specific customer. At some point they will swing towards using AutoPilot as part of the modernisation and cost-reduction plan that we've come up with, which includes the ultimate objective a company can have nowadays, or an IT pro can have on their radar, the biggy: winding up Active Directory.

Part of this customer's image-factory requirements is that the build starts out life in English (en-GB), so that their build engineers can customise the OS further with a bunch of tasks that haven't been brought into the task sequence at this point, due to time constraints or Windows 10 B1903-related bugs (VPN settings annoyances when set up in SYSTEM context, SCCM-delivered Wi-Fi profile password woes …). Finally, the customer wants to be able to switch the build's language to that required for the target user, just before they close the lid and begin shipping.

In this post I'm going to show how I handled the customer's language requirements in Windows 10 using SCCM OSD, leaving some footprints on ground already well trodden by notable others.

Straight out of the gate I was experiencing issues setting the language reliably in Build 1903.

I tried the unattended setup file, that 'trusty' old horse, and when that wouldn't play ball, I turned to brutality with DISM and PowerShell cmdlets at the tail end of the task sequence, in an attempt to coerce the operating system into doing my bidding. Failure is a spur towards success for the less weak-of-heart, is what I say when things just don't work. Surely there has to be a way.

I cruised the net. I saw much chatter about language issues in various Windows 10 builds, most of it seemingly unrelated noise. I noticed a post by Dan Padgett where he uses a different method, RUNDLL32 and an XML file (or two), but passed it by; I recall at the time thinking it was most likely for an older version of Windows 10, and it looked pig ugly.

At my wits' end, I reached out to Paul Winstanley, who promptly pointed me back at Dan's post as the only reliable way he could get it all to work at present.

Dan's post is actually quite comprehensive, and is in part a derivative of some of the groundwork carried out by Nicolas Lacours [Link here]. There isn't much more for me to add, if anything; a sterling job indeed. Instead I'll show how I leveraged the proposed method to switch languages during and after OSD.

So yeah, I implemented Dan's write-up on using the RUNDLL32 method, and voilà: after a bit of tinkering to match up with the task sequence variables in use, and after ironing out SillinessFromMe™, I was able to produce a build in any of the languages the customer needed.

Now that the language in the newly built OS was controllable (thanks Dan and Nicolas, and Paul for circling me back there!), the next step was to force it to build with en-GB, while storing away in the registry the destination language chosen when UI++ launches, so that it can be read back and processed later to do the final language switch.

At this point the build engineer has an en-GB build to log into, and do whatever they want in readiness for the user.

The next piece was the final language switch. I used another task sequence, with all the language steps from the main task sequence copied across and some additional bits added, deployed as Available to the OSD build collection.

This then showed up in Software Center, and could be run as the final task before the device is powered off and shipped.

I'll now go over the parts of the OSD build where it differs from Dan's and has notes worth pointing out; as I said, there wasn't much need to change what he has already etched out.

At the front-end of the task sequence, UI++ runs and interviews the build engineer:

  1. Launch UI++, buzz the engineer for build details and store the selections in task sequence variables
  2. Store the resulting OSDUILanguage value in a new task sequence variable called StoredOSDUILanguage, which is used at the tail end of the task sequence as part of the branding\tattooing (not MDT tattooing) of the device
  3. Force OSDUILanguage to en-GB to model the experience needed; this is the override that will force all builds to be en-GB initially

After the Setup Windows and ConfigMgr step, we break into the steps to handle the language.

Pretty much how Dan does it. I think the only differences are that I put the language-pack logic on the steps, and added the UK keyboard instead of the US one.

The final part of the solution in my task sequence runs just before the task sequence finishes up, and is used to poke the value stored away in the task sequence variable StoredOSDUILanguage, into the registry for later use alongside a few other settings.

Aside from my modifications, follow Dan's guide and you'll get a perfect result every time. Very nice.

The task sequence that does the final language switch is a part-clone of the OSD build task sequence steps, along with some customisations.

As you can see the structure of the change language task sequence is a copy\pasta of the OSD build task sequence with some added bits:

What’s happening:

  1. OSDUILanguage, the task sequence variable doing all the language donkey work, is set to en-GB as a default in case anything goes awry
  2. The registry value OSDUILanguage is retrieved from the registry and poked into OSDUILanguage; I do this using a PowerShell script stored in the same package hosting the language-injection script from Dan
  3. A task sequence variable that I use to confirm whether the language can be changed, BeginProcessing, is initialised as False
  4. OSDUILanguage is used to drive the dynamic-variable step; each rule sets the BeginProcessing variable to True
  5. The Begin group has logic on it that will skip the group if BeginProcessing isn't True, which essentially ends the task sequence's execution
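To make the flow of those steps concrete, here's a toy model of the variable handling; the registry dictionary and the supported-language set are stand-ins for the real registry read and the dynamic-variable rules, not the actual scripts:

```python
# Toy model of the change-language task sequence logic (steps 1-5).
def plan_language_switch(registry, supported):
    ts_vars = {
        "OSDUILanguage": "en-GB",    # step 1: safe default if anything goes awry
        "BeginProcessing": "False",  # step 3: assume we cannot proceed
    }
    stored = registry.get("OSDUILanguage")  # step 2: read the tattooed value
    if stored:
        ts_vars["OSDUILanguage"] = stored
    if ts_vars["OSDUILanguage"] in supported:  # step 4: a dynamic rule matched
        ts_vars["BeginProcessing"] = "True"
    # step 5: the Begin group only runs when BeginProcessing is "True"
    return ts_vars

result = plan_language_switch({"OSDUILanguage": "de-DE"}, {"en-GB", "fr-FR", "de-DE"})
print(result)  # OSDUILanguage becomes de-DE and BeginProcessing flips to True
```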

The rest is identical to the OSD build task sequence: the language is laid down, three reboots occur, and bosh, the language has changed for new users (those who have not already logged in). I will no doubt finesse this a bit more to add error handling and an existence check on the registry key, triggering an abort if it's missing.

The PoSh to retrieve the registry value, which most likely can be done better, is here:

$tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment
$regValue = Get-ItemProperty -Path 'HKLM:\Software\<COMPANY NAME HERE>\Branding' -Name OSDUILanguage | Select-Object OSDUILanguage
$tsenv.Value("OSDUILanguage") = $regValue.OSDUILanguage
Write-Host ("Found language " + $regValue.OSDUILanguage)

We could probably collapse that into a single line executed via a Run Command Line step and do away with the script. It would be ideal if ConfigMgr let us read directly from a 'repository' such as the registry as a task sequence step, and poke the value into a task sequence variable.

With this mechanism in place, simply changing the registry key from say fr-FR to de-DE and running the task sequence from Software Center will swap languages for new users only.

I couldn’t get it to switch the language for any profiles that were already created on the device.

The build engineer's profile remains en-GB; however, the destination user, once they log in for the first time, will have the correct language set.

Now the customer has a handle on their Windows 10 builds, and language is slotted into place to fit their needs, initially configured as en-GB, then configured with one of several other languages supported by the company in readiness for delivery.

All in all a job well done I thought.

I might take a look into changing the language for all users, not just new users, and whittle up a post on it at some point if it's doable without the user having to be logged in.

ConfigMgr CB1906–Management Insights–NTLM Fallback

I remember a few years back at an MVP Summit watching a PG member show us the mock-ups they had prepared for how Management Insights would "look" in the console, while gauging our response and taking in feedback.

The feature has certainly come a long way since then. If you've not paid much attention to Management Insights, now would be a good time to visit the feature and see what insights it gives you for your site.

From what I can recall, the motivation for Management Insights was the desire to make administrators' lives easier overall: bringing to light the "house chores" needed to keep SCCM running fluidly, highlighting or giving insights into the operational capability of a site (empty collections, fast evaluation rules, etc.). It has since extended out to highlight best practice for certain parts of the product (an example being the site's current Client Push NTLM fallback state).

There’s a new Management Insight (MI) in CB 1906, called “NTLM Fallback disabled” which I’ll quickly run over now.

This MI checks the ConfigMgr site to see if the Client Push Installation property Allow connection fallback to NTLM is enabled:


When it is enabled, the MI will report Action Needed:


When Allow connection fallback to NTLM is disabled in the Client Push Installation properties, and the MI is re-evaluated (right-click), the MI reports a Completed state, which means we're compliant:


The reason you would disable Client Push attempts using NTLM is to force site-to-client authentication to take place using Kerberos, falling in line with modern security practice, which sees NTLM as insecure (rightly so) and something we should all be drifting away from, as partially noted in the docs:


When using the client push method of installing the Configuration Manager client, the site can require Kerberos mutual authentication. This enhancement helps to secure the communication between the server and the client. For more information, see How to install clients with client push.

At a lower level you can disable NTLM fallback for the operating system itself, with consequences that should be thought out first, using either domain or local GPO settings. This isn't something you do without wising up on the consequences.

The new Management Insight is not checking the OS, and it isn't checking IIS or SQL for relevance and state; they have their own options for handling NTLM fallback.

Here’s a shot of local GPO on the site server for tinkering with restricting NTLM:

Note that once GPO changes are made, remote devices attempting RDP to the site server that are not patched may encounter the "Encryption Oracle Remediation" issue.

Changing any NTLM setting requires some preparation work, at the least an understanding of what might break in your environment.

DP_Locality flags or bitmask in SMS_UpdatesAssignment

I recently had to figure out how to properly set the DP_Locality property in the WMI SMS_UpdatesAssignment server SMS Provider class, as part of some work being done on PatchMaster, which automates the deployment of patches.

Eventually I figured it out, but at first I couldn't quite grasp what was going on. I had to query the PG to verify my thinking (thank you, Hugo!), and in this post I'm going to go into as much detail as possible, so as to spam the bejesus out of the subject.

To frame things better, every time you create a new Software Updates deployment in SCCM, a new SMS_UpdatesAssignment instance is created in WMI on the Site server to represent it.

Here’s a shot of the property sheet of a deployment, note the Neighbour DP and fall-back DP panels at the top and middle, and the WUMU\Metering panel at the bottom:

This is what it looks like behind the wizard's curtain, using WBEMTEST to take a peek; DPLocality is an unsigned 32-bit integer:

* Note that you can do the same with PowerShell: Get-WmiObject -Namespace "root\sms\site_<YOUR SITES SITE CODE>" -Query "select * from SMS_UpdateGroupAssignment where AssignmentID = <DeploymentID>" | Format-List -Property DPLocality

The DPLocality property's value shows 786512 (0xC0050). We'll decode this in a moment.

That integer actually represents a bit mask, or a set of flags. This is noted in the documentation under qualifiers for that property as bits.

When I set about decoding DPLocality, I came up short: the documentation on Microsoft Docs doesn't quite explain the bit positions properly. We'll go over this now.

Firstly, the information you need is spread across two pages; shown below is the class, and then the class it inherits from:

SMS_UpdatesAssignment Server WMI Class

The property we’re looking for is DPLocality:

As is shown above, this class is actually derived from the SMS_CIAssignmentBaseClass.

Deeper into the doc for the SMS_UpdatesAssignment class we get some information on the property we want to set, DPLocality:

First note that Qualifiers calls out that this is an integer formed using a bitmask (bits), concocted using flags.

Qualifiers: [not_null, bits]

It is also noted that the DPLocality property defaults to the flag combination DP_DOWNLOAD_FROM_LOCAL | DP_DOWNLOAD_FROM_REMOTE (0x50).

We have an assertion here, it states that when these two flags are combined they weigh in at 0x50 (hex), or 80 in decimal.


So we know that DP_DOWNLOAD_FROM_LOCAL and DP_DOWNLOAD_FROM_REMOTE are both set by default, and if we build ourselves a bit table in Excel, which I do a little further into this post, we can essentially figure out those two flags' bit positions.

When these two flags are set the following radio button switches to “Download…”:

This panel defines whether the local DP or a neighbour DP can be used by clients for this deployment.

If you remove the DP_DOWNLOAD_FROM_REMOTE flag, the “Do not…” radio button is enabled instead.

That’s LOCAL or REMOTE DP defined, now let’s quickly look at the fall-back DP option and return to it in more detail further into the post:

To toggle the fall-back DP radio control we need to set another flag called DP_NO_FALLBACK_UNPROTECTED (detailed in the SMS_CIAssignmentBaseClass doc) so as to disable the fall-back DP option, or remove the flag to enable the option as highlighted in the shot above.

There isn’t much more noted for this class in this doc page, but we are told to go visit the base class that this one was inherited from, there is more for us there, so let’s go take a look.

SMS_CIAssignmentBaseClass Server WMI Class

Nothing worthy of note here but I include a shot of the properties for this base class:

Let's cut straight to the chase, no need to dilly-dally; scroll down until you find the DPLocality property:

We know this is a bit-mask, and here we have all 5 flags and their bit positions listed for us.

Yes, those are bit positions, and consequently they start at 0.

With these flags and bit positions (adjusted in just a moment) that’s all we need to construct and deconstruct the DPLocality value.

As I hinted, three of the bit positions are actually noted incorrectly; we'll take a gander at that in a moment.

With this information I put together a bit-mask table in Excel for easy reference, to figure out what the values should be; it gives me the decimal and hex value represented by each bit position.

Here’s the bit-mask lookup table as a shot:

So, armed with that initial assertion of the value 0x50 (decimal 80) from the two DP_DOWNLOAD flags being turned on, we can see that decimal 16 and 64 add together to yield 80 (hex 0x50), which verifies bit positions 4 and 6 nicely.
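If Excel isn't to hand, the same lookup table can be knocked up in a few lines of throwaway Python; nothing here is ConfigMgr-specific, it's just the bit/sequence/decimal/hex columns from the reference table:

```python
# Bit-mask lookup table: bit position (0-based), sequence (1-based),
# decimal and hex values, mirroring the Excel reference table.
for bit in range(20):
    value = 1 << bit
    print(f"bit {bit:>2}  seq {bit + 1:>2}  dec {value:>6}  hex {value:#x}")

# The two DP_DOWNLOAD flags sit at bit positions 4 and 6:
assert (1 << 4) + (1 << 6) == 80  # 0x50, the documented default
```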

So, we’re told that bit positions 4 and 6 are for the DP_DOWNLOAD flags, and bit positions 17, 18 and 19 are for the other three flags.

The problem is that the references to the flags start out using the bit position (the bit column in the table above, starting from 0) for the DP_DOWNLOAD flags, and then the sequence (the sequence column in the table above, starting from 1) for the other three flags, giving incorrect bit positions.

If we purely use bit positions, the flags actually reside in positions 4, 6, 16, 17 and 18, and not 4, 6, 17, 18 and 19. It's an easy amendment to the docs, which I'll try to submit soon.

Worth noting that you just need to add these flags' decimal values together to produce a valid integer.

There are four combinations for the neighbour and fall-back DP options; these are:

  • Neighbour Yes – Fall-back DP Yes = DP_DOWNLOAD_FROM_LOCAL | DP_DOWNLOAD_FROM_REMOTE (80)
  • Neighbour Yes – Fall-back DP No = DP_DOWNLOAD_FROM_LOCAL | DP_DOWNLOAD_FROM_REMOTE | DP_NO_FALLBACK_UNPROTECTED (131152)
  • Neighbour No – Fall-back DP Yes = DP_DOWNLOAD_FROM_LOCAL (16)
  • Neighbour No – Fall-back DP No = DP_DOWNLOAD_FROM_LOCAL | DP_NO_FALLBACK_UNPROTECTED (131088)

To handle WUMU and metering, just add their decimal values in once you've calculated the remote\local\fall-back DP options.
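As a sanity check on the arithmetic, here's the encode and decode in throwaway Python; the flag names and decimal values are lifted from the C# enum used later in this post, and 786512 is the DPLocality value seen in WBEMTEST earlier:

```python
from enum import IntFlag

class DPLocality(IntFlag):
    # Values as used in this post's C# DPLocalityBitMask enum
    DP_DOWNLOAD_FROM_LOCAL = 16
    DP_DOWNLOAD_FROM_REMOTE = 64
    DP_NO_FALLBACK_UNPROTECTED = 131072
    DP_ALLOW_WUMU = 262144
    DP_ALLOW_METERED_NETWORK = 524288

# Encoding: the documented default is LOCAL | REMOTE
default = DPLocality.DP_DOWNLOAD_FROM_LOCAL | DPLocality.DP_DOWNLOAD_FROM_REMOTE
print(int(default), hex(default))  # 80 0x50

# Decoding: break 786512 (0xC0050) back into its component flags
observed = DPLocality(786512)
print([flag.name for flag in DPLocality if flag in observed])
# LOCAL, REMOTE, WUMU and METERED are set; no-fallback is not
```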

And that’s about all you need to handle DPLocality.

Since I was handling this for a C# project, I might as well pass on what I wrote. I'm sure it can be done better; this is enabling code, not the height of perfection.

Below is a method used to encode the bitmask based on some .NET controls' state, i.e. whether a checkbox is checked or unchecked:

private int handledplocalityBitmask()
{
    if (!globalObjects.GlobalClass.disableProcessing)
    {
        int tosendInt = 80;  // default: LOCAL | REMOTE

        if (cb_deployment_downloadNeighbour.Checked && cb_deployment_defaultFallback.Checked) tosendInt = 80;
        if (cb_deployment_downloadNeighbour.Checked && !cb_deployment_defaultFallback.Checked) tosendInt = 131152;
        if (!cb_deployment_downloadNeighbour.Checked && cb_deployment_defaultFallback.Checked) tosendInt = 16;
        if (!cb_deployment_downloadNeighbour.Checked && !cb_deployment_defaultFallback.Checked) tosendInt = 131088;

        if (cb_deployment_usemsUpdates.Checked) tosendInt += 262144;
        if (cb_deployment_allowMetered.Checked) tosendInt += 524288;

        return tosendInt;
    }

    return 0;
}


The four variations on Default and Neighbour are covered, and WUMU and Metering are added in at the end.

Decoding the bitmask requires some more interesting coding.

First define some flags and respective decimal values that can be referenced later on:

[Flags]
public enum DPLocalityBitMask
{
    DP_DOWNLOAD_FROM_LOCAL = 16,
    DP_DOWNLOAD_FROM_REMOTE = 64,
    DP_NO_FALLBACK_UNPROTECTED = 131072,
    DP_ALLOW_WUMU = 262144,
    DP_ALLOW_METERED_NETWORK = 524288
}

And here is the code sitting in a method that does the decoding:

// Handle DPLocality bitmask combinations (except Metering and WUMU)
var DPLocalityMask = (DPLocalityBitMask)adeploymentProperty.DPLocality;
var decodedDPLocality = new List<DPLocalityBitMask>();

foreach (DPLocalityBitMask DPLocalityBit in Enum.GetValues(typeof(DPLocalityBitMask)))
{
    if (DPLocalityMask.HasFlag(DPLocalityBit))
        decodedDPLocality.Add(DPLocalityBit);
}

// Neighbour, fall-back
if (decodedDPLocality.Contains(DPLocalityBitMask.DP_DOWNLOAD_FROM_LOCAL) && decodedDPLocality.Contains(DPLocalityBitMask.DP_DOWNLOAD_FROM_REMOTE))
    cb_deployment_downloadNeighbour.Checked = true;

if (decodedDPLocality.Contains(DPLocalityBitMask.DP_DOWNLOAD_FROM_LOCAL) && !decodedDPLocality.Contains(DPLocalityBitMask.DP_DOWNLOAD_FROM_REMOTE))
    cb_deployment_downloadNeighbour.Checked = false;

if (decodedDPLocality.Contains(DPLocalityBitMask.DP_NO_FALLBACK_UNPROTECTED))
    cb_deployment_defaultFallback.Checked = false;
else
    cb_deployment_defaultFallback.Checked = true;

// Metered network
if (decodedDPLocality.Contains(DPLocalityBitMask.DP_ALLOW_METERED_NETWORK))
    cb_deployment_allowMetered.Checked = true;
else
    cb_deployment_allowMetered.Checked = false;

// Use WUMU
if (decodedDPLocality.Contains(DPLocalityBitMask.DP_ALLOW_WUMU))
    cb_deployment_usemsUpdates.Checked = true;
else
    cb_deployment_usemsUpdates.Checked = false;


ConfigMgr–CMG and the DMZ

Patching servers or ‘managing Windows assets’ in a DMZ has always been a challenge.

Trying to manage assets with one systems management solution across two domains, the DMZ (external network) and the intranet (internal network), broadens the challenge further.

All designs put forward for managing assets in a DMZ using the internal network's systems management solution essentially depend on spinning services, or in ConfigMgr lingo 'roles', out into the DMZ, so that the devices residing there do not need to reach back into the internal network for content and communications purposes.

The holy grail of DMZ design is to eliminate all communications from the DMZ into the intranet, while allowing communications from the intranet into the DMZ.

That’s a tall order.

To give a sense of what I mean, here is an analogy using tennis!

Imagine a tennis court, a net, two players with racquets in hand, and a tennis ball.

To have a sensible game of tennis both players need to be able to bounce a ball over the net and attempt to return the ball if it is received.

Now imagine that the net is the DMZ’s firewall, player 1 is the Site server, player 2 is an asset in a DMZ, the tennis ball is the content and communications.

In our imaginary game of tennis, player 2 cannot return the ball as the net always blocks it, thus the game cannot be played. Game over. Insert coins to continue.

In some rare configurations, IPsec and other security services are employed to tunnel communications from a SPOC, such as an MP, from the DMZ back into the internal network to overcome the challenges. For those that cannot tolerate this, providing a solution that can manage assets in both the internal and DMZ networks gets complicated quickly.

In the past I’ve managed to achieve complete DMZ compliance while using ConfigMgr to service DMZ assets. And yes it was complicated.

I achieved this by introducing a Management Point Replica, so that a client's communications with an MP in the DMZ do not result in the MP talking to the DB located within the intranet, as well as by deploying a DP and a SUP, all rounded off by reversing the site system to site server communications (set on the site system itself). Site systems in the DMZ never talk back to the intranet, and assets always talk to those site systems. Job done.

Another way to overcome this is to utilise IBCM, if it is set up for client management over the internet. Placing an MP, a DP and a SUP in the DMZ, and letting those internet-based devices, and most importantly the servers in the DMZ, access those roles seals the deal.

As you can see, there are many ways to go at this problem.

With the cloud-first approach Microsoft has adopted for the last decade, we're seeing heavy integration taking place as cloud products and services evolve, yielding some very interesting tooling.

With the introduction of the Cloud Management Gateway (CMG), we have another solution for managing DMZ clients.

Let the servers in the DMZ go out onto the internet via the CMG and back into the on-premises roles, while using the Cloud DP to issue content.

It is worth noting that the Cloud DP can now be installed along with the CMG itself, and does not require another virtual machine to house it, reducing the cost of the solution somewhat.

There are so many options available right now; you can even throw an IBCM DP into the mix alongside a CMG running just an MP and SUP, just to negate content delivery costs, but I suspect this is a short-term solution, and that over the long term it might cost more than using a Cloud DP with its associated metered download costs.

In the simplified shot above we see the DMZ layers, protecting company assets from external interference and, as a general rule of thumb, preventing any communications originating from the DMZ into the internal network.

The servers will reach out to the CMG and not talk directly to the ConfigMgr Stand-alone hierarchy.


So today, to service (patch\update) assets in the DMZ we can go one of several routes:

  • Abandon the idea of servicing DMZ assets using the production ConfigMgr hierarchy, and instead stand up a new ConfigMgr hierarchy located in the DMZ itself
  • Opt to use stand-alone WSUS
  • Service directly using Microsoft Update, while employing some form of control using local Group Policy to configure the Windows Update Agent
  • Employ Management Point Replicas and site systems using reverse replication (SMB level) with the SQL replication reversed (important so as to maintain DMZ compliance), and deploy DPs and SUPs
  • Join to Azure and service from there
  • Use Operations Manager Suite (token nod to Sam Erskine, who bangs the drum for servicing using OMS!)
  • Implement IBCM
  • Service using ConfigMgr and the super-amazing Cloud Management Gateway

The list isn’t exhaustive by the way …

In this article, I’m going to walk through how to setup a method for managing DMZ assets that involves ConfigMgr, CMG and PKI, with Certificate issuance to workgroup clients being handled by Certificate Enrollment Policy and Certificate Enrollment Services (CEP\CES).

I do not have a DMZ, and my PKI infrastructure is on the intranet. If you were aiming to test this with a real DMZ you’ll need to ponder how to deploy the PKI infrastructure appropriately.

Some of you may already have NDES setup, as part of your Intune implementations or explorations, and can use that to service the workgroup assets.

In a real production environment you'd work out how you're going to place a CA\CEP\CES\CDP in the DMZ that the assets can utilise, while making sure those PKI services in the DMZ are not reaching back into the intranet.

In the shot above is a callout for the Primary showing 6 servers. This is my stand-alone Primary, which is configured for High Availability and really does consist of 6 servers:

  • 2 Primaries (Active/Passive) running Build 1806
  • 2 Site systems (single instance roles, duplication of non-single-instance roles, SMS Providers, some Shares)
  • 2 SQL servers running SQL AlwaysOn on top of a Windows Cluster, File Cluster services to accommodate the Content Library

I also have:

  • 1 Domain Controller
  • 2 Certificate Authority servers
    • 1 running CA, CA Web Enrollment, NDES (IIS HTTP)
    • 1 running CEP, CES (IIS HTTPS)
  • 4 Windows 10 test clients
    • 2 AD/AAD joined
    • 2 Workgroup joined

I've 'faked' having a client in a DMZ by blocking all communications to the ConfigMgr intranet servers using the HOSTS file on one of my workgroup-based Windows 10 test clients. It cannot directly talk to ConfigMgr, but it has unfettered access to the internet.
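For reference, the HOSTS trick is just a handful of entries pointing the site systems at an unroutable address; the server names below are made-up placeholders, not my actual lab machines:

```
# C:\Windows\System32\drivers\etc\hosts on the workgroup test client
# 0.0.0.0 is unroutable, so any attempt to reach these names fails fast
0.0.0.0  siteserver.corp.example.com
0.0.0.0  mp01.corp.example.com
0.0.0.0  dp01.corp.example.com
```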

I had a few issues along the way that burnt a serious number of hours: firstly, not having an HTTP CDP for the CRL tripped me up, and then figuring out what the CEP URI is supposed to be caused delays.

With hindsight, this is pretty easy stuff; the CEP URI can be found in several places, and I didn't need to do much to enable an HTTP CDP.

Here's where you turn on the HTTP CDP. It has to be done before you begin issuing certificates, or existing certificates will need to be renewed to pick up this metadata change:

Tick the Include in CRLs and the Include in the CDP extension tick boxes.

Another thing to tick off is the FriendlyName for the IIS virtual directory running within default website on the CA running CEP\CES:

Open IIS, head to the virtual directory residing in the default website containing CEP and UsernamePassword, visit its Application Settings and change FriendlyName to include something, I chose Lab1. You will see this show up later on when requesting certificates.

Once that was all taken care of, the entire chain of proceeding activities fell into place nicely.

The next piece is about preparing the PKI certificates needed to allow the ConfigMgr client to talk to the CMG: a Trusted Root CA certificate and a computer certificate with Client Authentication present.

The root certificate can be exported from any domain-joined device, or from the Certificate Authority server in your lab, here’s a guide.

ConfigMgr itself can deploy the root certificate, but the client first has to be installed and have access to the intranet roles. Since you can service devices in workgroup mode, if you are already set up to use ConfigMgr to service your DMZ assets, delivery of the Root CA is a cinch using Certificate Profiles, forming one of the activities needed to transition from internal management to external via the CMG.

Or you can drop the Root CA in manually.

We now need a computer certificate which has client authentication capability. In my lab’s CA I created one by cloning the Workstation Authentication certificate template:

On your CA, open the Certificate Authority console, right click Certificate Templates and select Manage. Nose around and find the Workstation Authentication template:

Right click it, and select Duplicate Template. A new template will be prepared, and its property sheet shown, so that we can populate the template before committing it:

Populate the Template display name, mine is called Lab 1 Client Certificate.

Head to the Request Handling tab:

Untick Allow private key to be exported.

Head to the Subject Name tab:

Select Supply in the request.

Select the Security tab: