
Permissions Management, AWS and Federated Access

In the last post I talked about my first impressions of Permissions Management with an AWS account. Since then, I have had a chance to play a bit more with this configuration. Specifically, I configured federated access from my Azure AD tenant to the AWS account. One of the reasons I wanted to test this configuration was to see whether a privileged access workstation (PAW) that is joined to the Azure AD tenant, managed via Azure-based controls, and signed in with an Azure AD account can be used to manage the AWS account. The configuration is shown in the following diagram.

Federated access to AWS does not require an account in AWS IAM. If we look in the AWS account's IAM, we will not see an AWSAdmin user account. Instead, we create a role in the AWS account and assign it the necessary policy; in the diagram below I configured it with the highest permission, the AdministratorAccess policy. As part of this configuration the role is synchronized to the Azure AD application used for AWS federated access, and Azure AD can then assign an Azure AD user account to this role.

At a high level, the authentication flow works in the following order:

  • The user logs on to their PAW with an Azure AD account, in this example the AWSAdmin account
  • The user initiates IdP-initiated SSO to the AWS account
  • The user is authenticated to the AWS account with their federated AWSAdmin account and has permissions in the AWS account based on the role's policy, in this example the AdministratorAccess policy (a quick verification sketch follows this list)
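If you want to double-check from the AWS side what identity the federated session actually maps to, here is a minimal sketch assuming the modular AWS Tools for PowerShell (my tooling assumption; the AWS CLI works just as well) and that the current shell already holds the temporary credentials obtained via SAML:

# Assumes AWS.Tools.SecurityToken and AWS.Tools.IdentityManagement are installed
# and the shell has the temporary credentials from the federated sign-in.
Import-Module AWS.Tools.SecurityToken, AWS.Tools.IdentityManagement

# Shows the assumed-role session, e.g. arn:aws:sts::<account>:assumed-role/<RoleName>/AWSAdmin
Get-STSCallerIdentity | Format-List Account, Arn, UserId

# Confirms there is no AWSAdmin user object in AWS IAM itself
Get-IAMUserList | Select-Object UserName, Arn

The Arn returned by Get-STSCallerIdentity is an assumed-role ARN rather than a user ARN, which is exactly why the federated administrator never shows up in the IAM user list.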

So what does it have to do with Entra Permissions Management?

I was pleasantly surprised to find that Permissions Management reports federated accounts used to access the AWS account as privileged accounts and shows them as SAML user accounts. This is very valuable information when we need to perform account disposition on an AWS account.

To be fair, there should be a way to identify these accounts by looking at AWS logging or through some other native mechanism, but as mentioned before, I'm not an expert in AWS, and having a tool like Entra Permissions Management surface the accounts that require attention in a single user interface is very helpful.
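For completeness, here is a rough sketch of the native route I had in mind, assuming the AWS.Tools.CloudTrail module; federated sign-ins show up in CloudTrail as AssumeRoleWithSAML events:

Import-Module AWS.Tools.CloudTrail

# Look up federated sign-ins recorded by CloudTrail over the last 30 days.
$events = Find-CTEvent -LookupAttribute @{ AttributeKey = 'EventName'; AttributeValue = 'AssumeRoleWithSAML' } -StartTime (Get-Date).AddDays(-30)

foreach ($e in $events) {
    $detail = $e.CloudTrailEvent | ConvertFrom-Json
    [pscustomobject]@{
        Time      = $e.EventTime
        Role      = $detail.requestParameters.roleArn
        Provider  = $detail.requestParameters.principalArn   # the SAML identity provider
    }
}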

The following diagram shows a couple of accounts from my Azure AD tenant that have Super Identity access to the AWS account.

Till next time my friends!

Permissions Management and AWS Account

Howdy folks. This is the third installment in my testing of Microsoft Entra Permissions Management. The first post had my initial overall impressions. The second post was about Azure subscriptions. This installment is about collecting permissions from an AWS account.

I have to say up front that I'm not an expert in securing AWS, like I am (or pretend to be) with Azure. And here I see a potential problem with Permissions Management. If you read my second post, you will see that Permissions Management does not give a clear picture of all the accounts that can control an Azure subscription or a subset of it. I only know that because I have pretty good experience with securing Azure and can see where Permissions Management is not telling me what I need to know.

With AWS, I have to fully depend on Permissions Management to be authoritative about what it sees and to report factual information about all accounts that can control the AWS account. If I don't know what it is potentially missing (as in the Azure case), then I'm blinded by the lack of that information.

I have a single AWS account that I use for some limited testing. I configured it with a few accounts, assigning them different permissions that seemed fairly powerful to me. I did all of this with the root account, which has MFA enabled. Note up front that Permissions Management does not know anything about the root account, as it only looks at AWS IAM.

Account Name | Policy Assigned | MFA | Type
admin | AdministratorAccess | no | user
admin1 | AdministratorAccess | no | user
admin2 | AmazonEC2FullAccess | no | user
admin3 | AWSSSODirectoryAdministrator | no | user
admin4 | IAMFullAccess | no | user
admin5 | IAMUserChangePassword | no | user
admin6 | SecurityAudit | no | user
admin7 | AWSCloudShellFullAccess | no | user
admin8 | AWSCodeDeployFullAccess | no | user
admin9 | AWSLambdaDynamoDBExecutionRole | no | user
admin10 | AWSMarketplaceFullAccess | no | user
admin11 | SecretsManagerReadWrite | no | user
testuser1 | ReadOnlyAccess | no | user
mciem-oidc-connect-role-msft |  | n/a | role
mciem-collection-role-msft | SecurityAudit | n/a | role

The last two roles in the table are roles configured as part of setting up Permissions Management to collect data from AWS Account.
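Before looking at what Permissions Management flags, here is the kind of raw inventory I would normally pull natively; a minimal sketch assuming the AWS.Tools.IdentityManagement module:

Import-Module AWS.Tools.IdentityManagement

# Inventory every IAM user with its attached managed policies and MFA state.
Get-IAMUserList | ForEach-Object {
    $policies = (Get-IAMAttachedUserPolicyList -UserName $_.UserName).PolicyName -join ', '
    $hasMfa   = [bool](Get-IAMMFADevice -UserName $_.UserName)
    [pscustomobject]@{
        UserName = $_.UserName
        Policies = $policies
        MFA      = $hasMfa
    }
}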

The Permissions Management interface for AWS is the same as for Azure, but it has some additional categories.

After refreshing data in Permissions Management, I have the following observations:

  • Two accounts are marked as Super Identities: Admin and Admin1, both assigned the same “AdministratorAccess” policy. None of the other policies earned the same designation. I have not found an actual list of all AWS policies that would be considered Super Identities. If you have one, please drop a link in the comments!
  • Three accounts are marked as Over-Provisioned Active: the Admin account and both of the roles.
  • All accounts are marked as having no MFA configured, which was nice to see!
  • All accounts that have not been used (most of them in my case) are marked as inactive.
  • Three accounts are marked as “Can access Secrets”: both Super Identity accounts and Admin11 with the “SecretsManagerReadWrite” policy. This is nice, as with Azure it did not report on the AKV RBAC roles.

Same as with Azure, the primary dashboard provides very basic, high-level information. For specific account details you have to go into the Analytics page and drill into each account individually.

So at this point I'm on the fence about the comprehensiveness of Permissions Management's reporting on all accounts that can control an AWS account and its resources. I can see almost 750 different policies available to be assigned to users. I can't test them all, but out of a few random ones with “FullAccess” in the name, it only flagged one policy as Super Identity. Is it missing something, as it does in Azure?
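If you want to see just how large the “FullAccess” family really is, a quick sketch (again assuming the AWS.Tools.IdentityManagement module):

Import-Module AWS.Tools.IdentityManagement

# List AWS-managed policies whose name suggests broad access.
Get-IAMPolicyList -Scope AWS |
    Where-Object { $_.PolicyName -like '*FullAccess*' } |
    Select-Object PolicyName, Arn |
    Sort-Object PolicyName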

I’ll have to do a bit more research on it.

Permissions Management and Azure Subscriptions

Let's take a look at how Microsoft Entra Permissions Management works against Azure subscriptions. My primary focus in this test is to identify what type of information it gives me on the current RBAC and anything actionable we can do to quickly ensure that RBAC is not over-permissioned.

I have only a couple of personal subscriptions where I test different workloads. Usually RBAC in those subscriptions is pretty simple and straightforward, as it is just one account that I use to test different workloads. To make this test a bit more comprehensive, I created a few user accounts and a few applications and configured one of my subscriptions with the following RBAC using the new accounts:

Scope | DisplayName | RoleDefinitionName | ObjectType
/subscriptions/SUBID | TestAdmin2 | Automation Contributor | User
/subscriptions/SUBID | TestAdminApp6 | Automation Contributor | ServicePrincipal
/subscriptions/SUBID | PremiumTenant Admin | Classic Admin | User
/subscriptions/SUBID | Win11 | Contributor | ServicePrincipal
/subscriptions/SUBID | WS22 | Contributor | ServicePrincipal
/subscriptions/SUBID | TestAdmin1 | Contributor | User
/subscriptions/SUBID/resourceGroups/CloudVM | TestAdmin8 | Contributor | User
/subscriptions/SUBID | TestAdminApp1 | Contributor | ServicePrincipal
/subscriptions/SUBID/resourceGroups/CloudVM | TestAdminApp2 | Contributor | ServicePrincipal
/subscriptions/SUBID | TestAdmin4 | Key Vault Administrator | User
/subscriptions/SUBID | TestAdmin5 | Key Vault Secrets Officer | User
/subscriptions/SUBID | TestAdmin6 | Log Analytics Contributor | User
/subscriptions/SUBID | Admin | Owner | User
/subscriptions/SUBID/resourceGroups/CloudVM | TestAdmin7 | Owner | User
/subscriptions/SUBID | TestAdminApp3 | Owner | ServicePrincipal
/subscriptions/SUBID/resourceGroups/CloudVM | TestAdminApp7 | Owner | ServicePrincipal
/subscriptions/SUBID | Cloud Infrastructure Entitlement Management | Reader | ServicePrincipal
/ | Admin | User Access Administrator | User
/subscriptions/SUBID | Cloud Infrastructure Entitlement Management | User Access Administrator | ServicePrincipal
/ | Tenant Admin | User Access Administrator | User
/subscriptions/SUBID | TestAdminApp4 | User Access Administrator | ServicePrincipal
/subscriptions/SUBID/resourceGroups/CloudVM | TestAdminApp8 | User Access Administrator | ServicePrincipal
/subscriptions/SUBID | TestAdminApp5 | Virtual Machine Administrator Login | ServicePrincipal
/subscriptions/SUBID | TestAdmin3 | Virtual Machine Contributor | User

As you can see, I configured it fairly comprehensively across different types of accounts and a few of the most critical RBAC roles that I could think of. Of course, it does not cover every possible RBAC role, but it covers the most commonly used ones, and I wanted to see how it works at the resource group level as well.
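For reference, assignments like these can be created with the Az PowerShell module; a minimal sketch, where the contoso.com UPN, the app ID and the SUBID/CloudVM values are placeholders taken from the table above:

# Assumes the Az.Resources module and a connected session (Connect-AzAccount).
$subId = 'SUBID'   # placeholder, as in the table above

# Subscription-scope assignment for a user
New-AzRoleAssignment -SignInName 'TestAdmin1@contoso.com' `
    -RoleDefinitionName 'Contributor' `
    -Scope "/subscriptions/$subId"

# Resource-group-scope assignment for a service principal (by application ID)
New-AzRoleAssignment -ApplicationId '<TestAdminApp8 app id>' `
    -RoleDefinitionName 'User Access Administrator' `
    -Scope "/subscriptions/$subId/resourceGroups/CloudVM"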

After refreshing data in Permissions Management, I have the following observations:

  • Three accounts are marked as Super Identities. Permissions Management classifies super identities as follows: “A super identity can be a human or machine entity with authority to perform actions equivalent to root access”. The following accounts are marked as such:
    • PremiumTenant Admin – has Classic Administrator permission (equal to Owner)
    • Admin – has Owner at subscription level
    • TestAdminApp3 – has Owner at subscription level
  • I was surprised not to see accounts with the “User Access Administrator” RBAC role in this category, as those accounts can grant themselves or any other account the Owner role. If any of these accounts is compromised, the game for this subscription is pretty much over. That seems like an oversight to me.
  • I'm also surprised that none of the accounts with the Owner role on the resource group (and, for that matter, User Access Administrator) are marked as Super Identities. While those accounts can't do anything at the subscription level, they can do anything at the resource group level; if compromised, they can control existing resources in the target RG or create anything new there as well.
  • I was less surprised not to see any account with the “Contributor” role marked as needing attention. As you can observe in the table, I have six of those accounts, and the main dashboard really didn't bring my attention to any of them. Again, if any of those accounts is compromised, it can do everything in the target scope except update RBAC. I absolutely want to know about every user in those roles.
  • And, of course, I was even less surprised not to see any other accounts in what I would call powerful roles, for instance “Key Vault Administrator”. Got any secrets in an AKV?
  • It does have a category of over-provisioned accounts, but it only marked the Super Identity accounts as such, and not any other accounts.

There is an “Analytics” view in the portal, which provides information on each individual account. You can see which accounts have a high Permission Creep Index (PCI), drill into them, and see why Permissions Management gave them such a high score. It is useful if you have a lot of time to look at each account individually, but it does not provide (at least I have not spotted one) a unified view of all accounts.

Other data that I could not find in the portal relates to each user account, for example whether it is enabled for MFA. For applications it actually provides information on certificates, password credentials, usage and owners. Yet it did not say whether a password credential is expired or valid (I didn't configure any apps with certificates, but I guess it won't tell that either). Overall this is still useful data, as it gives an indicator of who to track down for application conversations and how often the app is used.
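Checking credential expiry yourself is quick with Az PowerShell; a rough sketch (property names such as Id and EndDateTime differ slightly between Az versions, so treat this as an outline):

# Assumes Az.Resources and a connected session.
Get-AzADApplication | ForEach-Object {
    $app = $_
    Get-AzADAppCredential -ObjectId $app.Id | ForEach-Object {   # older Az uses ObjectId on the app object
        [pscustomobject]@{
            App     = $app.DisplayName
            KeyId   = $_.KeyId
            Expires = $_.EndDateTime                              # EndDate in older Az versions
            Expired = $_.EndDateTime -lt (Get-Date)
        }
    }
}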

I'll continue to explore the capabilities of this solution to see if it will be able to tell me at a glance about all the accounts that need attention, but for now we'll continue to use PowerShell scripts to gather the required data in a format that is easy to massage and act on quickly, as we usually have to do with customers in critical situations.
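For illustration, the core of such a script can be as small as this sketch; the list of roles to flag is my own assumption, not an official list:

# Assumes Az.Resources and a connected session scoped to the subscription of interest.
$watchRoles = @(
    'Owner', 'Contributor', 'User Access Administrator',
    'Key Vault Administrator', 'Key Vault Secrets Officer',
    'Virtual Machine Administrator Login'
)

Get-AzRoleAssignment |
    Where-Object { $_.RoleDefinitionName -in $watchRoles } |
    Select-Object DisplayName, SignInName, ObjectType, RoleDefinitionName, Scope |
    Sort-Object RoleDefinitionName, Scope |
    Format-Table -AutoSize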

In the next installment I'll take a look at similar reporting against an AWS account.

See you around!

Permissions Management First Look

When we help customers harden their Azure AD, Azure and on-premises AD DS, one of the first things we do is what we call Account Disposition. As part of Account Disposition in Azure, we review the following:

  • Azure AD privileged roles – the goal is to identify all accounts that are members of these roles so we can plan what to do with them (a small enumeration sketch follows this list),
  • Azure AD applications (SPNs) – review them for any risky permissions (more on that in a future post),
  • RBAC on Azure subscriptions – review who has what type of access to control resources in subscriptions.
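For the first item, here is a minimal sketch using the Microsoft Graph PowerShell SDK (my tooling assumption; the same data is available in the portal or other modules):

# Requires the Microsoft.Graph modules and read access to directory roles.
Connect-MgGraph -Scopes 'RoleManagement.Read.Directory','Directory.Read.All'

# Enumerate every activated directory role and its members.
Get-MgDirectoryRole | ForEach-Object {
    $role = $_
    Get-MgDirectoryRoleMember -DirectoryRoleId $role.Id | ForEach-Object {
        [pscustomobject]@{
            Role   = $role.DisplayName
            Member = $_.AdditionalProperties['displayName']
            Type   = $_.AdditionalProperties['@odata.type']
        }
    }
}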

Permissions Management is designed to help us with analyzing Azure, AWS and GCP to provide comprehensive visibility into permissions assigned to all identities (users and workloads), actions, and resources. I’m hoping to use this product in our engagements to assist with Account Disposition, not only in Azure, but also in other clouds.

As I started testing it in Azure, I quickly discovered that it does not actually look at Azure AD permissions – that is, it does not provide analysis of Azure AD roles and it does not look at application (SPN) permissions. It only looks at permissions in connected Azure subscriptions (via a management group or individual subscriptions). For some reason I expected that it would look comprehensively at both Azure AD and subscriptions, yet that is not a current capability.

After reading the documentation in more detail, I found it never explicitly states that it looks at Azure AD. Also, one of the names for this technology, “cloud infrastructure entitlement management” (CIEM), kind of suggests that it only looks at infrastructure and not at the directory service that provides access to that infrastructure.

I hope the product team will expand Permissions Management to look at Azure AD and applications (SPNs) so we have one truly comprehensive view over all permissions assignments in Azure.

For now, you have to look at and analyze Azure AD and applications using some other process.

In the next few posts, I'll share some observations on Permissions Management over Azure subscriptions and an AWS account, which are the main focus of this product.

See you around!

Microsoft Entra

Microsoft just announced a new brand name – Microsoft Entra. You can read about it here https://www.microsoft.com/security/blog/2022/05/31/secure-access-for-a-connected-worldmeet-microsoft-entra/

Currently it covers the following products: Azure AD, Permissions Management (used to be called CloudKnox) and Verified ID. I think it will probably cover more products as they get developed.

All of it, in some form, is part of the cybersecurity landscape that we, in Microsoft Cybersecurity, help our customers manage and properly configure, so the bad guys don't have super easy ways to get into their environments.

I hope this new brand name will stick around for a long time, and that is why I just changed the primary URL of this blog to https://entra.blog. It can still be accessed via the prior names, like cloudidentity.blog and cloudidentityblog.com.

See you around!

I am back. Maybe.

Well, after a few years of absence, I’ll try to write here once in a while.

Something about cloud security… mainly Microsoft technology stack… as that is what I am most familiar with 🙂

See you soon!

Azure Security – Means of Control

Hi there. In my last post (yes, more than 15 months ago), I started describing an overall framework that can be used to securely design and implement applications in Azure.

Secure administration is one of the foundation blocks of overall security for applications deployed into Azure. Over the next few blog posts we’ll discuss different aspects of secure administration over Azure resources and some ideas on how to make it all work.

One of the main principles of secure administration is trust in the user identity that is used to administer resources. This trust assures us that the identity is not compromised and is not easily subject to compromise by the bad guys. To have this trust we need a solid administrative architecture that ensures privileged accounts can be used only from approved, properly managed devices with no, or a very limited, attack surface. The goal of such an architecture is to ensure that a trusted identity can be used only from managed, trusted devices and that its use on any other device is denied.

Before we can look at different administration models of Azure based resources, we need to understand how resources in Azure can be controlled via different technical controls.

Means of Control over Azure based Resources

Resources deployed in Azure can be controlled via the following mechanisms.

1.     Administration of Azure based resources via Azure Control Plane

Azure Control Plane is the de facto access mechanism to control anything in Azure. All Azure AD accounts with administrative roles must be protected to reduce the likelihood of compromise of the customer environment.

  • What is controlled: All Azure resources, such as Azure AD, Resource Groups, RBAC, Networking, VM resources, Encryption, SQL DBs, i.e., any resource in Azure
  • Tools used on the client side:
    • Browser,
    • PowerShell,
    • Cross Platform CLI,
    • Visual Studio,
    • VSTS,
    • Visual Studio Code,
    • GitHub and many others
  • Azure technologies and controls used for administration:
    • Account resets,
    • ARM templates,
    • Azure DSC,
    • Azure Automation and others
  • Accounts used for administration:
    • Azure AD account with the RBAC permission assigned to accomplish the task
    • Account can be cloud-only or synchronized from on-premises AD DS
    • Main types of accounts with admin access:
      • Account Admins in the Enterprise Portal
      • Tenant level admin roles:
        • Global Admins in the Azure tenant
        • Other Azure AD roles
      • Subscription level admin roles:
        • Owner
        • Service Admin and Co-Admin (access to the legacy portal)
        • User Access Administrator
      • Accounts with RBAC roles assigned at the subscription, resource group or resource level
  • Means of control over the resources in Azure:
    • Direct, via an Azure AD account with assigned permissions on the Azure resource
    • Indirect, via one of the Azure controls that has the ability to modify the Azure resource or the state of the compute resource (such as a VM)

It is important to understand that the Azure control plane can be used not only to configure Azure resource settings as they relate to Azure, it is also used to configure and manage internal settings and applications on the resource itself. For example, it can be used to configure a VM with specific roles and applications and then ensure that the VM stays configured exactly as needed, all without ever logging into the VM via traditional remote access mechanisms.
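As one concrete illustration (a sketch only, assuming the Az.Compute module and an existing VM; the resource group and VM names are placeholders), Run Command executes a script inside the guest purely through the control plane, authorized by Azure RBAC, with no RDP or SSH session:

# Assumes Az.Compute and a connected session; 'CloudVM' and 'WS22' are placeholders.
$script = @'
Install-WindowsFeature -Name Web-Server
Get-Service W3SVC | Select-Object Status, Name
'@
Set-Content -Path .\configure-iis.ps1 -Value $script

# Executed inside the VM via the Azure control plane.
Invoke-AzVMRunCommand -ResourceGroupName 'CloudVM' -VMName 'WS22' `
    -CommandId 'RunPowerShellScript' -ScriptPath .\configure-iis.ps1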

The following diagram shows different means of control over Azure via Azure Control Plane:

[Diagram: means of control over Azure via the Azure Control Plane]

2.     RDP, SSH or remote management into target VM

Console based access or access via remote management tools is the classic way to manage most operating systems.

This can be done via one of the following mechanisms:

  • Connect to the target VM via a direct connection, such as ExpressRoute, Site-to-Site VPN or Point-to-Site VPN; this requires a routed connection from the client to the target VM
  • Connect via a Jump Server VM
  • Connect via a Jump Server VM configured with VM JIT access
  • Accounts used for administration:
    • Operating system based accounts, such as Windows local accounts or AD DS domain accounts
  • AuthZ within the VM is done via the OS-based authorization model

[Diagram: means of control via RDP, SSH or remote management into the target VM]

3.     Manage via tools like project “Honolulu”

  • What it is: Project “Honolulu” is the code name for a new set of remote management tools which use the browser as their client.
  • Management of the target system is done via one of the following:
    • Direct connection from the client to the target system, with an OS-based account
    • Connection via a Web Server Management Proxy that has a direct connection to the target systems; authN is done via an OS-based account
  • https://aka.ms/ProjectHonolulu

Behind all of these “means of control” mechanisms are identities (user accounts) which must be properly secured. In the next post we’ll look at current administrative practices and identify where they fall short.

 

Deploying Applications in Microsoft Azure with Security in mind

It's been a while since I blogged here. Perhaps writer's block or something. Not that I wasn't busy over the last 18 or so months since my last post. I have been busy doing different types of work related to the security of the Microsoft platform, most of it related to on-premises implementations, not a whole lot of cloud-based solutions. This is starting to change, and recently I'm getting more involved in doing a bit of security-related work with the Microsoft Cloud, i.e. Azure.

Azure adoption is growing like crazy. Customers are starting to implement their new apps in Azure IaaS or move their existing apps from on-premises to Azure IaaS. Many security-conscious customers are starting to ask how to deploy the right way, using Azure-provided security controls, and how to shift their management strategies from how things were done in on-premises environments, where a lot is handled via brick and mortar, to a pretty much software-only environment.

If you have done any work in Azure and think about security, then you have already found that there is a lot of Azure documentation available on all types of topics; many are very detailed, some maybe not as much as one would wish. The Azure Security team is starting to consolidate many topics under a single umbrella here, which is totally awesome. There are many personal blogs from folks as well, documenting different Azure features. This is all good, having documentation available for specific technical areas.

But there is something missing. It is a framework that would take a comprehensive look at all the different areas in Azure and provide guidelines and recommendations on how to design security around an application being deployed in Azure IaaS. If we had a framework that can adapt to the constantly improving Azure security controls, along with some practical guidelines on how to apply it to our own applications, then I think companies would have a better way to deploy their applications in Azure securely.

I have been thinking about it, read a bunch of documentation and other blogs, and came up with an initial security framework that I think every application deployed in Azure IaaS should be designed against. This is of course a work in progress and subject to change.

[Diagram: Azure security framework pillars]

If you wonder what it is all about, in the next blog post I'll try to go into a bit more detail about each of the main pillars in this framework. Thanks!

Token Replay Detection

Token Replay Detection is used to protect applications against replay of tokens issued by an Identity Provider Security Token Service. Active Directory Federation Services (AD FS) provides this capability when it is installed with SQL as its configuration store database. If AD FS is installed with the Windows Internal Database (WID), this capability is not available. When implementing AD FS, a common question is whether it should be installed with WID or with a full SQL back end. The SQL back end provides other benefits besides token replay detection, so it should be evaluated for its full capabilities. But if you are considering SQL specifically for token replay detection, you should first understand in which federation topologies this capability is actually used.
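For reference, once AD FS runs on a SQL configuration store, replay detection is toggled through the farm properties; a minimal sketch (the interval value is just illustrative):

# Run on an AD FS server in a SQL-backed farm.
# Enables token replay detection and sets how long issued tokens stay in the replay cache (minutes).
Set-AdfsProperties -PreventTokenReplays $true -ReplayCacheExpirationInterval 60

# Verify the settings.
Get-AdfsProperties | Select-Object PreventTokenReplays, ReplayCacheExpirationInterval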

The following couple of diagrams show when token replay detection is actually used and when a SQL back end should be used with an AD FS implementation. Open them to see all the details.

[Diagrams: federation topologies showing when token replay detection applies]

AD FS Audit

Hello everyone! Hopefully you had a great summer. I had a good one and didn't even bother to post anything on this blog. I do post almost daily on my travel blog, some cool photos from a few places I happened to visit, but this one requires a bit more thought about the technical stuff; it is not as easy as posting travel photos. Now that summer is over, I'll try to get back into the publishing mood with technical stuff as well.

Today we are going to review a few different ways we can audit and track usage of the AD FS service.

By default, when you install AD FS it does not provide any intelligence into how users use the service. For example, if our AD FS is configured with a couple of Identity Providers (IdPs) and a few Relying Parties (RPs), and we want to know which users from which IdP are using which RP, and potentially what type of claims are being passed, there is no easy way to collect this data in a manageable and usable form.

Let's review a few different ways to audit AD FS.

1. Use AD FS Debug Tracing Log

See here how to enable it. While this log will collect information about issued tokens, the way it is collected is not the most convenient for analysis: each transaction is spread across multiple events from which the relevant data would need to be extracted. AD FS Debug Tracing logs are really designed for troubleshooting purposes and should not be used for any type of long term data collection.
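If you just want to flip the trace log on for a short troubleshooting window, this sketch shows the idea; the channel name “AD FS Tracing/Debug” is my assumption for WS2012 R2 and later, so check Event Viewer for the exact name in your version:

# List analytic/debug channels related to AD FS, then enable the tracing channel.
wevtutil.exe el | Select-String 'AD FS'

# Enable the debug channel (disable it again when troubleshooting is done).
wevtutil.exe sl "AD FS Tracing/Debug" /e:true

# ... reproduce the issue ...

wevtutil.exe sl "AD FS Tracing/Debug" /e:false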

2. Use Security Audit Logs 

See here how to enable it. We can enable Success Audit and Failure Audit in the AD FS properties. This will log all activity into the Windows Security log. The key thing to note is that it logs everything: all the information in the assertion ends up in the Security log. If we have a farm of AD FS servers, each will log locally, so the logs will have to be pushed to some central collection system, and then we will have to figure out how to do intelligent analysis against that data. As with the AD FS trace logs, each transaction is collected across multiple event entries, linked to each other by a correlation ID.

So while Security Audit allows us to collect all the necessary data, it has a few drawbacks:

  • Logs from each AD FS server will need to be pushed to a central collection system
  • To make any sense of the collected data, there will need to be software that can comb through the events and pull out the data we are interested in analyzing
  • As mentioned, these logs will contain all the information in the security assertion, potentially including PII and other sensitive data.

So while it is possible to collect data and audit this way, doing this type of collection on a long term basis might be fairly complicated to implement and manage.
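For reference, enabling the audits themselves is a two-step affair, at the farm level and in the local audit policy; a sketch (run on each AD FS server, and note the AD FS service account also needs the “Generate security audits” user right):

# 1. Tell AD FS to emit success and failure audits (farm-level setting).
Set-AdfsProperties -LogLevel 'Errors','Warnings','Information','SuccessAudits','FailureAudits'

# 2. Allow the OS to record application-generated audits in the Security log (per server).
auditpol.exe /set /subcategory:"Application Generated" /success:enable /failure:enable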

3. Use SQL Attribute store

This is a custom way to collect information, but it provides the most flexibility in what we can collect, and it allows us to create custom reports against the collected data in a more streamlined way. The following diagram shows the concept of this solution:

[Diagram: AD FS servers writing selected claims to a SQL attribute store while issuing tokens]

1. The user attempts to access the application and is redirected to an AD FS server (AD FS Server 1 or 2).

2. They are redirected to authenticate at their IdP (not shown) and come back to AD FS to get a token for the application.

3. The AD FS server processes the claims rules for the application and issues the required claims. At the same time it connects to the SQL attribute store and inserts the specified claims into the SQL DB.

4. The SQL DB can be exposed to the administrator via different mechanisms, such as a reporting web service or a manual export from the DB into a text document.

The type of data collected and how it is collected is really up to you to design and implement. There are different ways to make this work. Let’s take a look at one of them. To use SQL Attribute store to collect data we’d need to perform the following configuration:

1. Create a DB on the SQL server with a minimum of one table. You can use on-premises SQL or cloud-based SQL. I use Azure SQL in my lab.

2. The table should have columns for the types of claims we want to collect. For example:

[Screenshot: example table columns for the collected claim types]

3. Create a stored procedure that will be used to insert data into this table:

[Screenshot: example stored procedure used to insert the data]
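Since the screenshots are not reproduced here, this is roughly what the table and the [dbo].[LogRequest] procedure could look like, driven from PowerShell with Invoke-Sqlcmd. The server, database and column names are my own guesses, inferred from the parameters in the claims rule further below:

# Assumes the SqlServer module; server, database and credentials are placeholders.
$conn = @{ ServerInstance = 'myserver.database.windows.net'; Database = 'AdfsAudit' }

Invoke-Sqlcmd @conn -Query @"
CREATE TABLE dbo.TokenLog (
    Id           INT IDENTITY(1,1) PRIMARY KEY,
    LoggedAt     DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    RequestId    NVARCHAR(100), Inside       NVARCHAR(10),
    RpId         NVARCHAR(400), ForwardIP    NVARCHAR(50),
    ClientIP     NVARCHAR(50),  AuthMethod   NVARCHAR(400),
    AuthInstance NVARCHAR(100), AuthRef      NVARCHAR(400),
    Upn          NVARCHAR(256), AppName      NVARCHAR(200),
    Company      NVARCHAR(200)
);
"@

Invoke-Sqlcmd @conn -Query @"
CREATE PROCEDURE dbo.LogRequest
    @RequestId NVARCHAR(100), @Inside NVARCHAR(10),  @RpId NVARCHAR(400),
    @forwardIP NVARCHAR(50),  @clientIP NVARCHAR(50), @AuthMethod NVARCHAR(400),
    @AuthInstance NVARCHAR(100), @AuthRef NVARCHAR(400), @Upn NVARCHAR(256),
    @AppName NVARCHAR(200), @Company NVARCHAR(200)
AS
BEGIN
    INSERT INTO dbo.TokenLog (RequestId, Inside, RpId, ForwardIP, ClientIP,
        AuthMethod, AuthInstance, AuthRef, Upn, AppName, Company)
    VALUES (@RequestId, @Inside, @RpId, @forwardIP, @clientIP,
        @AuthMethod, @AuthInstance, @AuthRef, @Upn, @AppName, @Company);

    -- The claims rule's add() expects a result set mapped to the "status" claim type.
    SELECT 'logged' AS status;
END
"@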

4. Configure AD FS with the SQL attribute store and configure a claims rule on each RP to send claims to this attribute store. You'll need to make sure that the account used to connect to the SQL server has the permissions to run the stored procedure and write to the specified table.
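Registering the store itself is one cmdlet; the store name has to match the one referenced in the claims rule (“AuditLog” below), and the connection string here is only a placeholder:

# Run on the primary AD FS server.
Add-AdfsAttributeStore -Name 'AuditLog' -StoreType 'SQL' -Configuration @{
    'name'       = 'AuditLog'
    'Connection' = 'Server=myserver.database.windows.net;Database=AdfsAudit;User ID=adfslog;Password=<secret>;'
}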

Here is an example claims rule that works with the stored procedure and our table:

c1:[Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/client-request-id"]
&& c2:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"]
&& c3:[Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/relyingpartytrustid"]
&& c4:[Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-forwarded-client-ip"]
&& c5:[Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-ip"]
&& c6:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod"]
&& c7:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant"]
&& c8:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences"]
&& c9:[Type == "http://schemas.xmlsoap.org/claims/UPN"]
&& c10:[Type == "http://ssoaad.com/appname"]
&& c11:[Type == "http://ssoaad.com/idp"]
=> add(store = "AuditLog", types = ("http://schemas.contoso.com/status"), query = "EXEC [dbo].[LogRequest] @RequestId = {0}, @Inside = {1}, @RpId = {2}, @forwardIP = {3}, @clientIP = {4}, @AuthMethod = {5}, @AuthInstance = {6}, @AuthRef = {7}, @Upn = {8}, @AppName = {9}, @Company = {10}", param = c1.Value, param = c2.Value, param = c3.Value, param = c4.Value, param = c5.Value, param = c6.Value, param = c7.Value, param = c8.Value, param = c9.Value, param = c10.Value, param = c11.Value);

For this rule to work, you need to make sure that all eleven claim types are present in the pipeline. If one of them is missing, the rule will not trigger the “add” action and the stored procedure will not run on the target SQL server.

If everything is properly configured, every time a user gets a new token for the target application, AD FS will insert a new row into the SQL DB. Based on that entry we will have information about their IP address, their UPN, their company, the application they accessed, a timestamp, etc. You can add or remove columns as needed.

Of course, the table can be designed differently as well, for example with only two columns: claim type and claim value.

Now we can do some targeted logging and very intelligent reporting on who uses our applications, where they come from, their affiliation, the rate of access, etc. There is really no limit to the flexibility of what can be done via this logging mechanism.
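For example, here is a tiny report of who accessed which application over the last month, written against the hypothetical dbo.TokenLog table from the sketch above (server and database names are placeholders):

Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'AdfsAudit' -Query @"
SELECT AppName, Company, Upn, COUNT(*) AS Tokens
FROM dbo.TokenLog
WHERE LoggedAt >= DATEADD(day, -30, SYSUTCDATETIME())
GROUP BY AppName, Company, Upn
ORDER BY Tokens DESC;
"@ | Format-Table -AutoSize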