Roberts Blog

The House of SCCM on System Center Street


Intune–The curious behaviour of Require and Not configured

While using super cool Intune Standalone recently, I did some head-scratching for a while over settings still being applied to enrolled devices when they shouldn't be. It didn't take long to figure out, and it's easily worked around.

Let’s look at the default properties of an iOS device compliance policy:

The System Security setting is not being applied, and note that the greyed out properties are all defaults.

If you assign this to a group you’ll get the desired behaviour.

Here’s the same, but with System Security enabled and the properties configured:

Again assign this to a group and you’ll get the desired behaviour.

Now if you toggle System Security to Not configured, and leave the properties non-defaulted, the properties will still be applied:

It is easy to work around the issue by toggling back to Require, defaulting all the properties, and finally setting it back to Not configured before you save:

I believe this applies to the entire UI when it comes to enabling\disabling settings.

The Intune peeps in Redmond are aware and on it; a fix is coming ASAP.

It was already found and reported by another EM MVP, Oliver Kieselbach, but I didn't notice. Much kudos, Oliver!

In the meantime follow the advice above and you can easily work around this temporary glitch.

Intune–BYO and relaxing device security

One of my Intune SA customers wants to allow BYO devices without a device PIN, but with the managed apps used to access corporate data secured with a PIN, while still enforcing a device PIN on corporate (CORP) devices.

A pretty simple ask, but they were caught up in an underlying issue with Intune SA showing a setting as Not configured, while still enforcing any custom values entered in the setting before it was set to that state. Changing the setting to Require, defaulting everything, then setting it back to Not configured, while slotting in a save or two, worked around the problem.

They have a bunch of users who use their own personally-owned iOS and Android devices to access corporate email. They do not want to enforce a password on these devices; instead, they want to use managed application policies to tighten down and secure the corporate data on them. They also have a bunch of devices that are owned by the company and are not BYO, which do need further configuration and tightening down.

Pairing off the BYO from the CORP devices or users can be carried out using Azure Active Directory groups, along with associated membership queries to tease them apart, much as we would in SCCM with collections and AD security groups. We'd then use these groups to assign (deploy) policies and configurations.
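
As an aside, if you prefer to script the group split rather than click it together in the portal, here's a minimal sketch using Microsoft Graph. It assumes you already have an access token with Group.ReadWrite.All, the group name is hypothetical, and the device.deviceOwnership rule is just one way of teasing the BYO devices apart from the CORP ones:

```python
# Minimal sketch: create a dynamic Azure AD device group for BYO devices via Microsoft Graph.
# Assumptions: you already have an OAuth access token with Group.ReadWrite.All, and the
# display name / membership rule below are illustrative rather than prescriptive.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access token acquired with your preferred auth library>"  # placeholder

byo_group = {
    "displayName": "BYO Devices",                      # hypothetical group name
    "mailEnabled": False,
    "mailNickname": "byodevices",
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    # Personally-owned devices; swap "Personal" for "Company" to build the CORP group.
    "membershipRule": '(device.deviceOwnership -eq "Personal")',
    "membershipRuleProcessingState": "On",
}

resp = requests.post(
    f"{GRAPH}/groups",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=byo_group,
)
resp.raise_for_status()
print("Created group:", resp.json()["id"])
```

Create the CORP group the same way with the rule flipped, and you have the two buckets ready for assignments.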

The basics of getting an iOS or Android device configured so that the device does not require a PIN, or any security other than what the user has already defined (biometric, PIN, gesture, etc.), while prompting for a PIN when managed applications are accessed, are pretty straightforward.

Hive off the BYO devices into a group, using manual or dynamic (query-based) group types, then create the following specifically for the BYO devices and assign them to the BYO group:

Within the device profile that is used to restrict the device, Password is set to Not configured:

Note that the defaults are in place; no customisation of the settings was made before Not configured was selected.

And within the device compliance policy, both Email and System Security are tweaked. Here’s the managed email profile being enabled:

And here, system security is set so that having no device password does not make the device non-compliant:

Finally, the application protection policy is configured so that a password is required. Here we can see the application protection policies for both Android and iOS:


And here we have the iOS application protection policy’s access requirements configured:

The end result is that a newly enrolled device, for this specific group of users or devices, has no device password enforced; however, when the user attempts to access their corporate email, they are prompted to set up a 4-digit PIN, or however you have set up that aspect of security in your environment.
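
For those who like to script this rather than click through the portal, here's a minimal sketch of the iOS application protection side via Microsoft Graph. It assumes an access token is already in hand, the property names follow the managedAppProtection resource as I understand it, and the display name and PIN length are illustrative; the policy still needs apps targeted and the BYO group assigned, which the portal steps above take care of:

```python
# Minimal sketch: create an iOS app protection (MAM) policy that requires an app PIN.
# Assumptions: token with DeviceManagementApps.ReadWrite.All, illustrative display name,
# and a 4-digit PIN to match the behaviour described above.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access token>"  # placeholder

policy = {
    "displayName": "BYO iOS - App Protection",  # hypothetical name
    "pinRequired": True,        # prompt for an app PIN when managed apps are opened
    "minimumPinLength": 4,      # the 4-digit PIN mentioned above
    "maximumPinRetries": 5,     # attempts allowed before the app PIN has to be reset
}

resp = requests.post(
    f"{GRAPH}/deviceAppManagement/iosManagedAppProtections",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=policy,
)
resp.raise_for_status()
print("Created app protection policy:", resp.json()["id"])
```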

This is just a light touch on a subject with many avenues to explore.

Intune has matured into a really strong platform for device and data management. There's so much extensibility now that if you've already begun your journey from Active Directory to Azure Active Directory and the modern management landscape (EM+S), you cannot help but feel inundated with options and flexibility that we simply didn't have a year or two ago.

An interesting observation I've made recently is that businesses are beginning to switch on to de-wiring parts of their existing corporate network, to reduce cost and align with their modern management roadmap using SDWAN or vWAN technology. They are reconfiguring their small to medium branch offices with far more cost-effective internet connections for devices to VPN over, while installing network appliances pre-configured to VPN to their Azure network at their larger sites. The need to go sideways is diminishing, with smaller, more agile companies leading the charge on shunting or setting up most, if not all, of their assets in the cloud.

Azure, AAD, AD, Intune, and SCCM alone yield a rich landscape from which to design a modern infrastructure.

ConfigMgr–CMG and the DMZ

Patching servers or ‘managing Windows assets’ in a DMZ has always been a challenge.

Trying to manage assets with one systems management solution across two domains, the DMZ (external network) and the intranet (internal network), broadens the challenge further.

All designs put forward to manage assets in a DMZ using the internal network's systems management solution essentially depend on spinning services, or in ConfigMgr lingo 'roles', out into the DMZ, so that the devices that reside there do not need to reach back into the internal network for content and communications purposes.

The holy grail of DMZ design is to literally eliminate all communications from the DMZ into the intranet, while allowing communications from the intranet into the DMZ.

That’s a tall order.

To give a sense of what I mean, here is an analogy using tennis!

Imagine a tennis court, a net, two players with racquets in hand, and a tennis ball.

To have a sensible game of tennis both players need to be able to bounce a ball over the net and attempt to return the ball if it is received.

Now imagine that the net is the DMZ’s firewall, player 1 is the Site server, player 2 is an asset in a DMZ, the tennis ball is the content and communications.

In our imaginary game of tennis, player 2 cannot return the ball as the net always blocks it, thus the game cannot be played. Game over. Insert coins to continue.

In some rare configurations, IPSEC and other security services are employed to tunnel communications from a SPOC such as an MP, from the DMZ, back into the internal network to overcome the challenges. For those that do not tolerate this, providing a solution that can manage assets in both the internal and DMZ networks gets complicated quickly.

In the past I’ve managed to achieve complete DMZ compliance while using ConfigMgr to service DMZ assets. And yes it was complicated.

I achieved this by introducing a Management Point Replica, so that a client's communications with an MP in the DMZ do not result in the MP talking to the database located within the intranet, as well as by deploying a DP and a SUP, all rounded off by reversing the Site system to Site server communications (set on the Site system itself). Site systems in the DMZ never talk back to the intranet, and assets always talk to those site systems. Job done.

Another way to overcome this is to utilise IBCM, if it is set up for client management over the internet. Place an MP, a DP and a SUP in the DMZ, and let the internet-based devices, and most importantly the servers in the DMZ, access those roles; that seals the deal.

As you can see, there are many ways to go at this problem.

With the cloud-first approach Microsoft have adopted for the last decade, we're seeing heavy integration taking place as cloud products and services evolve, and it is yielding some very interesting tooling.

With the introduction of the Cloud Management Gateway (CMG), we have another solution for managing DMZ clients.

Let the servers in the DMZ go out onto the internet via the CMG, and back into the on-premises roles, while using the Cloud DP to issue content.

It is worth noting that the Cloud DP can now be installed along with the CMG itself, and does not require another virtual machine to house it, reducing the cost of the solution somewhat.

There are so many options available right now; you can even throw an IBCM DP into the mix along with a CMG running just an MP and SUP, purely to negate content delivery costs, but I suspect this is a short-term solution, and that over the long term it might cost more than using a Cloud DP with its associated metered download costs.

In the simplified shot above we see the DMZ layers, protecting company assets from external interference and, as a general rule of thumb, preventing any communications originating from the DMZ into the internal network.

The servers will reach out to the CMG and not talk directly to the ConfigMgr Stand-alone hierarchy.

Outstanding.

So today, to service (patch\update) assets in the DMZ we can go down one of several routes:

  • Abandon the idea of servicing DMZ assets using the production ConfigMgr hierarchy instance, and instead stand up a new ConfigMgr hierarchy located in the DMZ itself
  • Opt to use stand-alone WSUS
  • Service directly from Microsoft Update, while employing some form of control using Local Group Policy to configure the Windows Update Agent (see the sketch after this list)
  • Employ Management Point Replicas and Site systems using reverse replication (SMB level) with the SQL replication reversed (important so as to maintain DMZ compliance), and deploy DPs and SUPs
  • Join to Azure and service from there
  • Use Operations Management Suite (a token nod to Sam Erskine, who bangs the drum for servicing using OMS!)
  • Implement IBCM

or

  • Service using ConfigMgr and the super-amazing Cloud Management Gateway

The list isn’t exhaustive by the way …
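
For the Microsoft Update plus Local Group Policy route above, the control ultimately boils down to the well-known Windows Update Agent policy values in the registry. Here's a minimal sketch that writes them directly on a DMZ server; the schedule chosen is just an example, and it needs to run elevated:

```python
# Minimal sketch: configure the Windows Update Agent via its local policy registry values,
# for DMZ servers being serviced directly from Microsoft Update.
# Assumptions: run elevated on the server; the schedule below (auto download, install
# daily at 03:00) is an example, not a recommendation.
import winreg

AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_KEY, 0, winreg.KEY_WRITE) as key:
    winreg.SetValueEx(key, "NoAutoUpdate", 0, winreg.REG_DWORD, 0)          # automatic updates enabled
    winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 4)             # auto download and schedule install
    winreg.SetValueEx(key, "ScheduledInstallDay", 0, winreg.REG_DWORD, 0)   # 0 = every day
    winreg.SetValueEx(key, "ScheduledInstallTime", 0, winreg.REG_DWORD, 3)  # 03:00
```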

In this article, I'm going to walk through how to set up a method for managing DMZ assets that involves ConfigMgr, CMG and PKI, with certificate issuance to workgroup clients being handled by Certificate Enrollment Policy and Certificate Enrollment Services (CEP\CES).

I do not have a DMZ, and my PKI infrastructure is on the intranet. If you were aiming to test this with a real DMZ you’ll need to ponder how to deploy the PKI infrastructure appropriately.

Some of you may already have NDES setup, as part of your Intune implementations or explorations, and can use that to service the workgroup assets.

In a real production environment you'd work out how you're going to place a CA\CEP\CES\CDP in the DMZ for the DMZ assets to utilise, while making sure these PKI services in the DMZ are not reaching back into the intranet.

In the shot above is a callout for the Primary showing 6 servers. This is my stand-alone Primary, which is configured for High Availability and really does consist of 6 servers:

  • 2 Primaries (Active/Passive) running Build 1806
  • 2 Site systems (single instance roles, duplication of non-single-instance roles, SMS Providers, some Shares)
  • 2 SQL servers running SQL AlwaysOn on top of a Windows Cluster, with File Cluster services to accommodate the Content Library

I also have:

  • 1 Domain Controller for Lab1.com
  • 2 Certificate Authority servers
    • 1 running CA, CA Web Enrollment, NDES (IIS HTTP)
    • 1 running CEP, CES (IIS HTTPS)
  • 4 Windows 10 test clients
    • 2 AD/AAD joined
    • 2 Workgroup joined

I've 'faked' having a client in a DMZ by blocking all communications to the ConfigMgr intranet servers using the HOSTS file on one of my workgroup-based Windows 10 test clients. It cannot directly talk to ConfigMgr, but it has unfettered access to the internet.
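
For the curious, the 'faking' amounts to nothing more than hosts-file entries pointing the ConfigMgr site systems at an unroutable address. Here's a minimal sketch; the server names are placeholders for my lab, and it needs to run elevated so the hosts file is writable:

```python
# Minimal sketch: simulate a DMZ on a workgroup client by black-holing the ConfigMgr
# intranet servers in the hosts file. The server names are placeholders; run elevated.
from pathlib import Path

HOSTS = Path(r"C:\Windows\System32\drivers\etc\hosts")
blocked = ["cm01.lab1.com", "cm02.lab1.com", "cmsql01.lab1.com"]  # hypothetical site systems

entries = "\n".join(f"0.0.0.0    {name}" for name in blocked)
with HOSTS.open("a") as hosts_file:
    hosts_file.write("\n# Fake DMZ - block ConfigMgr intranet roles\n" + entries + "\n")
```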

I had a few issues along the way that burnt a serious number of hours: firstly, not having an HTTP CDP for the CRL tripped me up, and then figuring out what the CEP URI is supposed to be caused delays.

With hindsight, this is pretty easy stuff: the CEP URI can be found in several places, and I didn't need to do much to enable an HTTP CDP.

Here's where you turn on the HTTP CDP. It has to be done before you begin issuing certificates, or existing certificates will need to be renewed to pick up this metadata change:

Tick the Include in CRLs and the Include in the CDP extension tick boxes.

Another thing to tick off is the FriendlyName for the IIS virtual directory running within the Default Web Site on the CA running CEP\CES:

Open IIS, head to the virtual directory residing in the Default Web Site whose name contains CEP and UsernamePassword, visit its Application Settings, and change FriendlyName to something meaningful; I chose Lab1. You will see this show up later on when requesting certificates.

Once that was all taken care of, the entire chain of subsequent activities fell into place nicely.

The next piece is about preparing the PKI certificates needed to allow the ConfigMgr client to talk to the CMG: a Trusted Root CA certificate and a computer certificate with Client Authentication present.

The root certificate can be exported from any domain-joined device, or from the Certificate Authority server in your lab; here's a guide.

ConfigMgr itself can deploy the root certificate, but the device first has to have the client installed and access to the intranet roles. Since you can service devices in workgroup mode, if you are already set up to use ConfigMgr to service your DMZ assets, delivery of the Root CA is a cinch using Certificate Profiles, forming one of the activities needed to transition from internal management to external via the CMG.

Or you can drop the Root CA in manually.
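
If you do go the manual route, importing the exported Root CA into the workgroup client's machine Trusted Root store is a one-liner with certutil. Here's a sketch that shells out to it; the .cer path is a placeholder for wherever you copied the export, and it needs an elevated prompt:

```python
# Minimal sketch: import the exported Root CA certificate into the local machine's
# Trusted Root Certification Authorities store using certutil. Run elevated.
import subprocess

root_cer = r"C:\Temp\Lab1-RootCA.cer"  # hypothetical path to the exported root certificate

subprocess.run(
    ["certutil", "-addstore", "Root", root_cer],
    check=True,  # raise if certutil reports a failure
)
```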

We now need a computer certificate which has client authentication capability. In my lab’s CA I created one by cloning the Workstation Authentication certificate template:

On your CA, open the Certificate Authority console, right click Certificate Templates and select Manage. Nose around and find the Workstation Authentication template:

Right click it, and select Duplicate Template. A new template will be prepared, and its property sheet shown, so that we can populate the template before committing it:

Populate the Template display name; mine is called Lab 1 Client Certificate.

Head to the Request Handling tab:

Untick Allow private key to be exported.

Head to the Subject Name tab:

Select Supply in the request.

Select the Security tab:

By all means make changes here, but for my lab I've left it as-is, as my Domain Admin account will be used to initiate the certificate enrolment from the client.

You can select OK now and commit this new template.

Head back to the Certificate Authority console, right click Certificate Templates and this time select New > Certificate Template to Issue:

Once you've selected Certificate Template to Issue, select your newly created certificate template from the list produced. This will make it available for anyone with sufficient permissions, as defined in the Security tab of the template, to request the certificate.

Now on the client, we need to add a Certificate Enrollment Policy server using the Certificates MMC.

Run the MMC as an administrator (or accept the UAC prompt), and choose Computer account when adding the Certificates snap-in.

Right click Certificates > Personal (or Certificates > Personal > Certificates if any certificates have already been issued), select All Tasks, then Advanced Operations, and finally select Manage Enrollment Policies…:

We can now add and remove Enrollment policy servers:

Select Add…

You can obtain the format needed to create the CEP URI by using CERTUTIL.EXE, which, as shown below, returns the CES URI; just replace CES with CEP:

Or by visiting IIS:

Bound to be other ways to find it.

For my example it is:

https://L1CA.Lab1.com/ADPolicyProvider_CEP_UsernamePassword/service.svc/CEP

So I punch that in as the enrollment policy server (URI) value, and select Validate Server.

We’re now prompted for credentials to access the CEP:

In my lab I chose the Domain Admin again.

You should now get some information back on what happened; here the validation of the Certificate Enrollment point was successful:

Our policy server is in place; we can request certificates using it:

Select OK.

Just as an aside, if you do not set the FriendlyName as we did earlier in the article, you’ll get a warning when you validate, and the Name will show as a GUID: