
Azure Security – Means of Control

Hi there! In my last post (yes, more than 15 months ago), I started describing an overall framework that can be used to securely design and implement applications in Azure.

Secure administration is one of the foundation blocks of overall security for applications deployed into Azure. Over the next few blog posts we’ll discuss different aspects of secure administration over Azure resources and some ideas on how to make it all work.

One of the main principles of secure administration is trust in the user identity that is used to perform administration over resources. This trust assures us that the identity is not compromised and is not subject to compromise by the bad guys. To have this trust we need a solid administrative architecture that ensures privileged accounts can be used only from approved, properly managed devices with no, or a very limited, attack surface. The goal of such an architecture is to ensure that a trusted identity can be used only from managed, trusted devices and that its use on any other device is denied.

Before we can look at different administration models of Azure based resources, we need to understand how resources in Azure can be controlled via different technical controls.

Means of Control over Azure based Resources

Resources deployed in Azure can be controlled via the following mechanisms.

1.     Administration of Azure based resources via Azure Control Plane

Azure Control Plane is the de facto access mechanism to control anything in Azure. All Azure AD accounts with administrative roles must be protected to reduce the likelihood of compromise of the customer environment.

  • What is controlled: All Azure resources, such as Azure AD, Resource Groups, RBAC, Networking, VM resources, Encryption, SQL DBs, i.e., any resource in Azure
  • Tools used on the client side:
    • Browser,
    • PowerShell,
    • Cross Platform CLI,
    • Visual Studio,
    • VSTS,
    • Visual Studio Code,
    • GitHub and many others
  • Azure technologies and controls used for administration:
    • Account resets,
    • ARM templates,
    • Azure DSC,
    • Azure Automation, and others
  • Accounts used for administration:
    • Azure AD account with assigned RBAC permission to accomplish the task
    • The account can be cloud-only or synchronized from on-premises AD DS
    • Main types of accounts with admin access:
      • Account Admins in Enterprise Portal
      • Tenant level Admin roles:
        • Global Admins in Azure Tenant
        • Other AAD Roles
      • Subscription level Admin roles:
        • Owner
        • Service Admin and Co-Admin (access to the legacy portal)
        • User Access Administrator
      • Accounts with RBAC roles assigned at the Subscription level, resource group level or resource level
  • Means of Control over the resources in Azure:
    • Directly, via an AAD account with assigned permissions on the Azure resource
    • Indirectly, via one of the Azure controls that has the ability to modify an Azure resource or the state of a compute resource (like a VM)
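To make the control-plane path concrete, here is a rough sketch of assigning and reviewing an RBAC role using the AzureRM PowerShell module of that era; the account, subscription and resource group names are made-up examples, not anything from a real environment:

```powershell
# Sketch: grant a user the Contributor role at resource group scope.
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "Production"

New-AzureRmRoleAssignment `
    -SignInName "admin@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "App1-RG"

# Review who has access at this scope
Get-AzureRmRoleAssignment -ResourceGroupName "App1-RG"
```

The same assignment can of course be done in the portal or via the cross-platform CLI; the point is that all of these tools go through the same control plane and the same RBAC model.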

It is important to understand that the Azure control plane can be used not only to configure Azure resource settings as they relate to Azure; it is also used to configure and manage internal settings and applications on the resource itself. For example, it can be used to configure a VM with specific roles and applications and then ensure that this VM stays configured exactly as needed, all without ever logging into the VM via traditional remote access mechanisms.
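As one illustration of configuring the inside of a VM purely from the control plane, the Custom Script Extension can push a script into a VM without any RDP session. The resource names, location and script URL below are placeholders, so treat this as a sketch rather than a recipe:

```powershell
# Sketch: run a configuration script inside a VM via the Azure control plane.
Set-AzureRmVMCustomScriptExtension `
    -ResourceGroupName "App1-RG" `
    -VMName "Web01" `
    -Name "ConfigureIIS" `
    -FileUri "https://contoso.blob.core.windows.net/scripts/install-iis.ps1" `
    -Run "install-iis.ps1" `
    -Location "West US"
```

Azure DSC and Azure Automation provide similar capabilities with ongoing enforcement of the desired state, rather than a one-time script run.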

The following diagram shows different means of control over Azure via Azure Control Plane:


2.     RDP, SSH or remote management into target VM

Console based access or access via remote management tools is the classic way to manage most operating systems.

This can be done via one of the following mechanisms:

  • Connect to the target VM via a direct connection, such as ExpressRoute, Site-to-Site VPN or Point-to-Site VPN. This requires a routed connection from the client to the target VM
  • Connect via Jump Server VM
  • Connect via Jump Server VM configured with VM JIT
  • Accounts used for administration:
    • Operating System based account, such as Windows local accounts or AD DS based domain accounts
  • AuthZ within VM is done via OS based authorization model


3.     Manage via tools like project “Honolulu”

  • What it is: Project “Honolulu” is the code name for a new set of remote management tools that use the browser as their client.
  • Management of the target system is done via one of the following:
    • Direct connection from the client to the target system, with OS based account
    • Connection via Web Server Management Proxy that has direct connection to the target systems, authN is done via OS based account

Behind all of these “means of control” mechanisms are identities (user accounts) which must be properly secured. In the next post we’ll look at current administrative practices and identify where they fall short.


Deploying Applications in Microsoft Azure with Security in mind

It’s been a while since I blogged here. Perhaps writer’s block or something. Not that I wasn’t busy over the last 18 or so months since my last post. I have been doing different types of work related to the security of the Microsoft platform, most of it around on-premises implementations, not a whole lot of cloud-based solutions. This is starting to change, and recently I’m getting more involved in security-related work with the Microsoft Cloud, i.e., Azure.

Azure adoption is going like crazy. Customers are starting to implement their new apps in Azure IaaS or move their existing apps from on-premises to Azure IaaS. Many security-conscious customers are asking how to deploy in the right way, using Azure-provided security controls, and how to shift their management strategies from the way things were done in on-premises environments, where a lot happens via brick and mortar, to a pretty much software-only environment.

If you have done any work in Azure and think about security, then you have already found that there is a lot of Azure documentation available on all types of topics; many are very detailed, some maybe not as much as one would wish. The Azure Security team is starting to consolidate many topics under a single umbrella here, which is totally awesome. There are many personal blogs from folks as well, documenting different Azure features. This is all good: having documentation available for specific technical areas.

But there is something missing. It is a framework that takes a comprehensive look at all the different areas in Azure and provides guidelines and recommendations on how to design security around an application that is being deployed in Azure IaaS. If we had a framework that can adapt to the constantly improving Azure security controls, along with practical guidelines on how to apply it to our own applications, then I think companies would have a better way to deploy their applications in Azure securely.

I have been thinking about it, read a bunch of documentation and other blogs, and came up with an initial security framework that I think every application deployed in Azure IaaS should be designed against. This is of course a work in progress and subject to change.


If you wonder what it is all about, in the next blog post I’ll try to get a bit more detailed about each of the main pillars in this framework. Thanks!

Token Replay Detection

Token replay detection is used to protect applications against replay of tokens issued by an Identity Provider Security Token Service. Active Directory Federation Services (AD FS) provides this capability when it is installed with SQL Server as its configuration store. If AD FS is installed with the Windows Internal Database (WID), this capability is not provided. When implementing AD FS, a common question is whether it should be installed with WID or with a full SQL back end. A SQL back end provides other benefits besides token replay detection, so it should be evaluated for its full capabilities. But if you are wondering whether to use SQL specifically for token replay detection, you should first understand in which federation topologies this capability will actually be used.

The following couple of diagrams show when token replay detection is actually used and when a SQL back end should be used with an AD FS implementation. Open them to see all the details.



AD FS Audit

Hello everyone! Hopefully you had a great summer. I had a good one, and didn’t even bother to post anything on this blog. I do post almost daily on my travel blog, some cool photos from the places I happen to visit, but this blog requires a bit more thought about the technical stuff; it is not as easy as posting travel photos. Now that summer is over, I’ll try to get back into the publishing mood with technical content as well.

Today we are going to review a few different ways we can audit usage of the AD FS service.

By default, when you install AD FS it does not provide any insight into how users use the service. For example, if our AD FS is configured with a couple of Identity Providers (IdPs) and a few Relying Parties (RPs), and we want to know which users from which IdP are using which RP, and potentially what types of claims are being passed, there is no easy way to collect this data in a manageable and usable form.

Let’s review a few different ways we can audit AD FS.

1. Use AD FS Debug Tracing Log

See here how to enable it. While this log collects information about issued tokens, the way it is collected is not the most convenient for analysis: each transaction is spread across multiple events that must be parsed to extract the relevant data. AD FS debug tracing logs are really designed for troubleshooting purposes and should not be used for any type of long-term data collection.
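For reference, on Windows Server 2012 R2 the debug trace channel can be toggled roughly like this; the channel name may differ on older AD FS versions, so treat this as a sketch:

```shell
rem Enable the AD FS debug trace channel, and disable it again when done
wevtutil sl "AD FS Tracing/Debug" /e:true
wevtutil sl "AD FS Tracing/Debug" /e:false
```

Remember to turn it off once troubleshooting is finished, as the trace log is verbose.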

2. Use Security Audit Logs 

See here how to enable it. We can enable Success Audit and Failure Audit in the AD FS properties. This will log all activity to the Windows Security log. The key thing to note is that everything will be logged: all information in the assertion will end up in the Security log. If we have a farm of AD FS servers, each will log locally, so these logs will need to be pushed to some central collection system, and we will then have to figure out how to do some intelligent analysis against that data. As with the AD FS trace logs, each transaction is collected via multiple event entries, linked to each other by a correlation ID.

While security auditing allows us to collect all the necessary data, it has a few drawbacks:

  • Logs from each AD FS server will need to be pushed to a central collection system
  • To make any sense of the collected data, software is needed that can comb through the events and pull out the data we are interested in analyzing
  • As mentioned, these logs will contain all information in the security assertion, potentially including PII and other sensitive data.

So while it is possible to collect the data and perform an audit this way, running this type of collection on a long-term basis might be fairly complicated to implement and manage.
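If you do want to try this option, the configuration boils down to two steps per farm node. This is a sketch based on the Windows Server 2012 R2 cmdlets; verify the parameter values in your environment before relying on it:

```powershell
# Sketch: enable AD FS success/failure auditing (run on each farm node).

# 1. Allow the AD FS service account to write audits to the Security log:
auditpol.exe /set /subcategory:"Application Generated" /success:enable /failure:enable

# 2. Turn on success and failure audits in AD FS itself:
Set-AdfsProperties -LogLevel ((Get-AdfsProperties).LogLevel + @('SuccessAudits','FailureAudits'))
```

After this, token issuance activity shows up in the Windows Security log with the correlation IDs described above.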

3. Use SQL Attribute store

This is a custom way to collect information, but it provides the most flexibility in what we can collect, and it allows us to create custom reports against the collected data in a more streamlined way. The following diagram shows the concept of this solution:


1. The user attempts to access the application and is redirected to the AD FS server (ADFS Server 1 or 2).

2. They are redirected to authenticate to their IdP (not shown) and come back to AD FS to get a token for the application.

3. The AD FS server processes the claims rules for the application and issues the required claims. At the same time it connects to the SQL attribute store and inserts the specified claims into the SQL DB.

4. The SQL DB can be exposed to the administrator via different mechanisms, such as a reporting web service or manual data transfer from the DB into a text-format document.

The type of data collected and how it is collected is really up to you to design and implement. There are different ways to make this work. Let’s take a look at one of them. To use SQL Attribute store to collect data we’d need to perform the following configuration:

1. Create a DB on the SQL server with a minimum of one table. You can use on-premises SQL or cloud-based SQL; I use Azure SQL in my lab.

2. The table should have columns for the types of claims we want to collect. For example:
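As an illustration, a table matching the stored procedure used later in this post might look like the following; the table name, column names and types here are my own choices, so adjust them to the claims you actually plan to collect:

```sql
CREATE TABLE [dbo].[AuditLog] (
    [Id]           INT IDENTITY (1, 1) PRIMARY KEY,
    [LogTime]      DATETIME       DEFAULT (GETUTCDATE()),
    [RequestId]    NVARCHAR (64),
    [Inside]       NVARCHAR (8),
    [RpId]         NVARCHAR (256),
    [ForwardIP]    NVARCHAR (48),
    [ClientIP]     NVARCHAR (48),
    [AuthMethod]   NVARCHAR (256),
    [AuthInstance] NVARCHAR (64),
    [AuthRef]      NVARCHAR (256),
    [Upn]          NVARCHAR (256),
    [AppName]      NVARCHAR (256),
    [Company]      NVARCHAR (256)
);
```

The timestamp default gives you the "when" for free, so it does not need to come in as a claim.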


Create a stored procedure that will be used to inject data into this table:
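Here is a hedged sketch of such a stored procedure; the name [dbo].[LogRequest] and its parameter names match the claims rule shown later in this post, while the table name and data types are illustrative:

```sql
CREATE PROCEDURE [dbo].[LogRequest]
    @RequestId    NVARCHAR (64),
    @Inside       NVARCHAR (8),
    @RpId         NVARCHAR (256),
    @forwardIP    NVARCHAR (48),
    @clientIP     NVARCHAR (48),
    @AuthMethod   NVARCHAR (256),
    @AuthInstance NVARCHAR (64),
    @AuthRef      NVARCHAR (256),
    @Upn          NVARCHAR (256),
    @AppName      NVARCHAR (256),
    @Company      NVARCHAR (256)
AS
BEGIN
    SET NOCOUNT ON;

    -- Record one row per issued token
    INSERT INTO [dbo].[AuditLog]
        (RequestId, Inside, RpId, ForwardIP, ClientIP,
         AuthMethod, AuthInstance, AuthRef, Upn, AppName, Company)
    VALUES
        (@RequestId, @Inside, @RpId, @forwardIP, @clientIP,
         @AuthMethod, @AuthInstance, @AuthRef, @Upn, @AppName, @Company);
END
```

The account AD FS uses to connect to the attribute store needs execute permission on this procedure in addition to the table permissions mentioned below.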


Configure AD FS with the SQL attribute store and configure a claims rule on each RP to send claims to this attribute store. You’ll need to make sure that the account you use to connect to the SQL server has write permissions on the specified table.

Here is an example claims rule that works with the stored procedure and our table:

c1:[Type == ""]
&& c2:[Type == ""]
&& c3:[Type == ""]
&& c4:[Type == ""]
&& c5:[Type == ""]
&& c6:[Type == ""]
&& c7:[Type == ""]
&& c8:[Type == ""]
&& c9:[Type == ""]
&& c10:[Type == ""]
&& c11:[Type == ""]
=> add(store = "AuditLog", types = (""), query = "EXEC [dbo].[LogRequest] @RequestId = {0}, @Inside = {1}, @RpId = {2}, @forwardIP = {3}, @clientIP = {4}, @AuthMethod = {5}, @AuthInstance = {6}, @AuthRef = {7}, @Upn = {8}, @AppName = {9}, @Company = {10}", param = c1.Value, param = c2.Value, param = c3.Value, param = c4.Value, param = c5.Value, param = c6.Value, param = c7.Value, param = c8.Value, param = c9.Value, param = c10.Value, param = c11.Value);

(Substitute the claim type URIs you want to capture inside the empty quotes.)

For this rule to work, you’d need to make sure that all eleven claim types are present in the pipeline. If one of them is missing, the rule will not trigger the “add” action and will not run the stored procedure on the target SQL server.

If it is all properly configured, every time a user gets a new token for the target application, AD FS will insert a new row into the SQL DB. From that entry we will have information about their IP address, their UPN, their company, the application they accessed, a timestamp, etc. You can add or remove columns as needed.

Of course, the table can be designed differently as well, for example have only two columns: claim type and claim value.

Now we can do some targeted logging and very intelligent reporting on who uses our applications, where they are coming from, their affiliation, the rate of access, etc. There is really no limit to the flexibility of what can be done via this logging mechanism.

Pass the Hash White Paper v2

An updated version of the Pass-the-Hash white paper was just released. It is a good read on how to protect your AD DS environments from this attack.

Azure AD Application Proxy

In the last few days some interesting previews lit up in Azure AD; one of them is Azure AD Application Proxy. This is a unique service that allows you to publish your on-premises applications to external users via a cloud-based SaaS reverse proxy solution. Currently it is in preview and provides very basic functionality, but it is a great concept, and we’ll have to see what new features are added to the service over the coming weeks and months.

To read more about it you can visit this post.

To enable Azure AD Application Proxy you’ll need an Azure AD Premium subscription. The great news is that you can now get a free 90-day trial of AAD Premium. Check out this video on how to do that.

How to enable Azure AD Premium Trials


Not directly related to AAD, another interesting preview is the RemoteApp feature provided via Azure Cloud Services. I just turned on the trial for it and will need to play with it a bit over the next few weeks.

Good day!

Logging into AAD via 3rd level Federated Account

There are two common ways to authenticate to Azure/O365:

  1. With a non-federated account residing in AAD (what I call a “managed account”), where the user is authenticated against AAD with a user ID and password, and
  2. With a federated account, where a shadow account resides in AAD and the account the user actually authenticates against resides in the on-premises environment.

With a “managed account” there are not many options: you basically authenticate against AAD, maybe with MFA on top. With a federated account many variations come into play, mainly around which vendor product is used to provide the identity provider services. AD FS is one of the most common products, but many others have been certified to provide this service as well. This topology usually assumes that the user authenticates against the directory where the identity provider is installed. There is currently no easy way to log into Azure via a second-level Identity Provider, i.e., an Identity Provider that is trusted by the on-premises IdP.

One such way I discussed in my blog posts back in the fall of 2013. The idea is that we register external identity provider identities in our on-premises AD DS, then use DirSync to get these registration accounts into AAD. We use AD FS to transform the token from the external identity provider so it uses the AD DS based accounts, and get those users authenticated into AAD. You can check out a more detailed explanation of how it can be done here.

While this works, I feel it is a bit of a heavyweight solution for giving external users access to AAD. Users must have accounts in on-premises AD DS, and DirSync must be implemented, which means that newly registered accounts won’t be able to access AAD for quite some time: the default DirSync schedule is 3 hours, and while it can be reduced, reducing it to a very small interval is usually not recommended.

So a couple of months ago I started thinking about other ways to use currently available technologies to access AAD via external identities, without using DirSync and without storing registration accounts in on-premises AD DS.

The Graph API comes to the rescue. With the Graph API a developer can write an application that creates accounts directly in AAD, so an account is available immediately after registration. This is good, but how do we authenticate against it without having an on-premises account? To solve this problem we can use the power of AD FS, some development, and the Graph API. As you probably know, AD FS provides support for custom attribute stores (CAS). A CAS is basically a way to pull information about the user from external data stores (not from AD DS, AD LDS or SQL), like text files, other LDAP directories, or, as you can guess where we are going, from AAD itself. All we need to do is write a CAS that uses the Graph API to query our AAD tenant for user information and then build a SAML token that will be passed to AAD for authentication.

To do this we need to create a mapping between the external identity and the AAD based account. One of the easiest ways of doing this is to take the user’s Name Identifier claim from the external IdP, make sure its value is less than 64 characters, and put it into the ImmutableID attribute of the user account in AAD; this should be done during registration of the user in AAD. As part of registration a unique UPN will be generated as well, since it is a required attribute of the user account.

After the mapping between the external account and the AAD account is done, we’ll configure the CAS to use the Name Identifier as the lookup value of the user account in AAD to find its corresponding UPN. Now AD FS will have the information required to build the outgoing SAML token to authenticate to AAD: the user’s UPN and ImmutableID.
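The registration-time mapping logic can be sketched like this. This is my own illustration, not production code: the helper names, the hashing scheme for over-long identifiers, and the tenant domain are all made up; the only grounded fact is the 64-character limit on the value placed in ImmutableID:

```python
import base64
import hashlib

MAX_IMMUTABLE_ID_LEN = 64  # the ImmutableID length limit mentioned above

def to_immutable_id(name_identifier: str) -> str:
    """Return a value that fits in the AAD ImmutableID attribute.

    If the external Name Identifier already fits, use it as-is;
    otherwise derive a stable short value by hashing it.
    """
    if len(name_identifier) < MAX_IMMUTABLE_ID_LEN:
        return name_identifier
    digest = hashlib.sha256(name_identifier.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")  # always 44 characters

def to_upn(immutable_id: str, tenant_domain: str = "contoso.onmicrosoft.com") -> str:
    """Generate a unique UPN for the registered account (illustrative scheme)."""
    local_part = hashlib.md5(immutable_id.encode("utf-8")).hexdigest()
    return local_part + "@" + tenant_domain
```

The important property is that the derived ImmutableID is stable: the CAS lookup at sign-in must reproduce exactly the value stored at registration.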

Smart Card Enrollment

A few days ago one of my friends asked if I knew how to enroll smart cards from Windows AD CS without using any type of specialized smart card management system. I have done this type of enrollment a few years ago, but truth be told, enterprise environments usually use smart card management systems, and I have not seen anyone using the out-of-the-box enrollment capabilities. I looked on the Internet and nothing came up in the search; then I remembered that I might have a sample configuration somewhere in my archives. So I found the configuration steps on how to enroll a smart card from AD CS and tested them in my lab. It was not a straightforward test; at first I could not make it work. My test environment is running in Azure IaaS, and my physical PC, where I attach the smart card, is not part of the AD DS of the test environment, so in order to enroll from the Enterprise Issuing CA I use an RDP connection to one of the workstations in the Azure IaaS environment. The workstation in IaaS did not recognize my Gemalto .NET smart card. It was supposed to be detected via plug-and-play, with the drivers installed automatically, but that did not happen. It took me a few tries to figure out that I had to download the proper drivers and install them manually. Then everything worked beautifully and I was able to enroll my test smart card with a cert for my test user.

You might ask, what is the configuration on AD CS to make it all work? I don’t want to dig through my archives again if I ever need to remember how to do this, so I decided to share it here.

Assumption: I assume that you already have an Enterprise Issuing CA implemented in your environment. In this day and age I would recommend having it on Windows Server 2012 R2, but it will work on the 2008, 2008 R2 and 2012 versions as well. It might even work against a 2003 based OS, but that is too ancient by now and I hope everyone is moving to the latest and greatest OS.

OK, you have AD CS running. To enroll a user with a smart card certificate we are going to use the “Enroll on Behalf” concept. This means that we designate one user (the SC-Enroll user account in the following steps) with special permissions to enroll smart cards for other users in the environment. Usually it would be someone from the security department. To configure and then issue a user cert to a smart card, follow these steps:

  1. Configure Enrollment Agent Template
  2. Configure Smart Card Logon Template
  3. Assign created templates to Contoso Issuing CA
  4. Enroll for Enrollment Agent certificate
  5. Smart Card Certificate Enrollment for the end user


Configure Enrollment Agent Template

  1. Log on as Administrator to ADCS server.
  2. Open Certificate Authority MMC
  3. Right click on Certificate Template and click Manage
  4. It will open the Certificate Templates MMC (Certtmpl.msc).
  5. In the details pane, right-click on Enrollment Agent, and then click Duplicate Template.
  6. Choose “Windows Server 2012 R2” template.
  7. On the General tab, enter the Template display name as Contoso Smart Card Enrollment Agent, and then click Apply.
  8. Click on Security tab. Click on Add button. Type SC-Enroll and click on Check names button. Click OK. Give it Allow Enroll permission.
  9. While in Security tab select Domain Admins and uncheck Enroll check box.
  10. While in Security tab select Enterprise Admins and uncheck Enroll check box.
  11. Click OK to save and close Contoso Smart Card Enrollment Agent template.


Configure Smart Card Logon Template

  1. In the details pane, right-click on Smartcard Logon, and then click Duplicate Template.
  2. Choose “Windows Server 2012 R2” template. Click OK.
  3. On the General tab, enter the Template display name as Contoso Smart Card Logon, and then click Apply.
  4. Click on Issuance Requirements tab. Enable checkbox “This number of authorized signatures:” Make sure it requires only one (“1”) signature.
  5. Under Policy type required in signature select: Application Policy
  6. Under Application Policy select: Certificate Request Agent
  7. Under Cryptography tab, select “Requests must use one of the following providers:” and then select “Microsoft Base Smart Card Crypto Provider
  8. Click on Security tab. Click on Add button. Type SC-Enroll and click on Check names button. Click OK. Give it Allow Enroll permission.
  9. While in Security tab select Domain Admins and uncheck Enroll check box.
  10. While in Security tab select Enterprise Admins and uncheck Enroll check box.
  11. Click OK to save and close Contoso Smart Card Logon template


Assign created templates to Contoso Issuing CA

  1. Switch back to Certificate Authority MMC
  2. Right click on Certificate Templates, click New and Certificate Template to Issue.
  3. In selection window select newly created templates (Contoso Smart Card Logon and Contoso Smart Card Enrollment Agent) and click OK.


Enroll for Enrollment Agent certificate

  1. Logon to Enrollment Station (this is computer in the same domain where Issuing CA is implemented) as SC-Enroll
  2. Open MMC and add Certificates snap-in for the current user.
  3. Right click on Personal and select All Tasks and click on Request a New Certificate
  4. Click Next
  5. In the Request Certificates dialog box select the Contoso Smart Card Enrollment Agent check box. Click on Details and click on the Properties button.
  6. In the General tab, type the Friendly name for this certificate (for example: Contoso SC-Enroll Certificate)
  7. In Description type the following: This cert is used for issuing Smart Cards to users
  8. Click the Apply button.
  9. Explore the other tabs but do not change any properties.
  10. Click OK to close the Certificate Properties.
  11. Make sure Contoso Smart Card Enrollment Agent is still selected and click on the Enroll button.
  12. Since we didn’t require Certificate Manager approval, the certificate should be issued and the installation result status should be Success.
  13. Click Finish.
  14. Open the Personal certificate store and verify that the certificate is present. Open it and verify that the private key is present on this computer.


Smart Card Certificate Enrollment

  1. Logon to Enrollment Station as SC-Enroll. Insert Smart Card reader into USB port.
  2. Insert Smart Card into the smart card reader.
  3. Open MMC and add Certificates snap-in for the current user.
  4. Right click on Personal and select All Tasks, click on Advanced Operations and click on Enroll on behalf of
  5. Click Next
  6. In Select Enrollment Agent Certificate click on Browse button, select Enrollment Agent certificate and click OK
  7. Click Next
  8. In Request Certificates select Contoso Smart Card Logon template and click Next
  9. In Select a User click on the Browse button, type “User Name” (the user you need to enroll the smart card for…) and click on Check Names, then click OK
  10. Click on the Enroll button. You should be prompted for the PIN of the inserted smart card. Type the PIN and the card will be enrolled with the certificate for the “User Name” user account from AD DS.
  11. When asked to enroll for the next user, cancel.
  12. Open a command prompt and type the following command: Certutil –scinfo
  13. Type the PIN when prompted. You’ll see information about the enrolled certificate on the smart card.


Smart card is ready and can be used for authentication.

Multiple AAD as RPs with Single AD FS?

In my last post I discussed how we can configure multiple Azure AD tenants as Identity Providers with the same AD FS instance.

This time I decided to reverse the situation and see if we can configure multiple Azure AD (AAD) tenants as relying parties with the same AD FS instance, so that AD FS acts as an Identity Provider and allows a Single Sign-On experience into cloud-based applications hosted by different AAD tenants. The desired configuration is shown in the following diagram.


The process to enable SSO into an Azure AD tenant via on-premises AD FS is well documented and I’m not going to get into all the details; at a high level it is fairly simple and straightforward:

  1. You need to configure a custom domain in your AAD tenant.
  2. You run a couple of PowerShell commands from your AD FS server. The first connects to the specified AAD tenant. The second converts the target custom domain for SSO and configures AD FS to act as an Identity Provider for AAD.
  3. You’d also need some type of solution that synchronizes accounts from on-premises AD DS to the AAD tenant, such as DirSync.

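For reference, the two commands in step 2 look roughly like this, using the MSOnline (Azure AD) PowerShell module; the domain name is a placeholder:

```powershell
# Run on the AD FS server.
Connect-MsolService                                   # 1. connect to the target AAD tenant

# 2. convert the custom domain to federated and configure AD FS as its IdP
Convert-MsolDomainToFederated -DomainName "contoso.com"
```

It is this conversion step that registers the AD FS identifier with the tenant, which matters for what follows.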
So I did this on my first AAD tenant and was able to SSO into it by using credentials from on-premises ADDS.

Then the time came to configure the second AAD tenant for SSO, and unfortunately it did not work for me. The PowerShell conversion complained about the AD FS identifier already being in use. I was hoping for a bit more granularity, to be able to configure each AAD as its own Relying Party with the same AD FS, but that is not the case. Maybe something will show up in the future that allows us to do this.

Until we have some workaround (maybe there is one out there already; please share if you know of one), there has to be a one-to-one relationship between the AD FS IdP and the AAD RP, like this:


Multiple AAD with the same ADFS Service

A few months ago I discussed how to configure Azure Active Directory as an Identity Provider to AD FS and access claims-enabled applications. The following diagram shows that specific configuration:


What I didn’t realize at the time is that it is not possible to configure multiple AAD tenants as Identity Providers with the same AD FS service. The reason is that all of them use the same Azure AD signing certificate, and AD FS does not allow different IdPs that use the same signing certificate. Basically, you can have only one AAD tenant in a direct trust relationship with a given AD FS service, as shown in the following diagram:


Fortunately, there is a fairly easy way to get around this with the current capabilities of Access Control Services (ACS). ACS does not have this certificate limitation; each ACS instance has its own signing certificate. So if you need to configure multiple AAD tenants as IdPs with the same AD FS service, configure a separate instance of ACS for each of your AAD tenants, configure them with trusts, then configure each ACS as an IdP with your AD FS service. The following diagram illustrates this workaround: