Archive for the ‘Microsoft’ Category

Configuring Azure Multifactor Authentication with Exchange 2013 SP1

March 2, 2014

Thanks to Raymond Emile from Microsoft COX, who responded to me instantly and pointed me toward OWA + basic authentication. Thanks a lot, Ray…

In case you missed it, Azure has a very cool new feature called Azure Multi-Factor Authentication; using MFA in Azure you can add multifactor authentication to Azure apps and to on-premises apps as well.

In this blog we will see how to configure Azure cloud MFA with Exchange 2013 SP1 on-premises. This will be a long post with multiple steps performed at multiple levels, so I suggest you pay very close attention to the details, because the configuration is tricky to troubleshoot later.

Here are the high-level steps:

  • Configure Azure AD
  • Configure Directory Sync.
  • Configure multifactor Authentication Providers.
  • Install/Configure MFA Agent on the Exchange server.
  • Configure OWA to use basic authentication.
  • Sync Users into MFA agent.
  • Configure users from the desired login type.
  • Enroll users and test the config.

So let's rock and roll:

Setting up Azure AD/MFA:

Setting up Azure AD/MFA is done from the Azure management portal. Here you have two options (I will list both because I had both, and it took me a while to figure it out):

    • If you have never tried Azure, you can sign up for a new account and start the configuration.
    • If you have an Office 365 enterprise subscription, Azure AD is already configured for you; sign in to Azure with the same account you use for Office 365 and you will find the directory there (this was my case, so I had to remove SSO from the previous account and set it up again).

Once you login to the portal, you can setup Azure AD by clicking add:


Since I had an Office 365 subscription, the directory was already configured; if you click on the directory, you will find the list of domains configured in it:


To add a new domain, click Add and enter the desired domain. You will need to prove domain ownership by adding a TXT or MX record; once done, the domain shows as verified and you can configure it. The following screenshots illustrate the verification process:






Once done, go to Directory Integration and choose to activate directory integration:



Once enabled, download the DirSync tool on a computer joined to the domain:


Once installed, run through the configuration wizard, which asks for the Azure account and a domain admin account in order to configure AD sync:
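If you don't want to wait for the scheduled sync cycle after the wizard finishes, a sketch of forcing one from PowerShell (assumes the default DirSync install, which ships this module):

```powershell
# Load the DirSync module installed with the Directory Sync tool
# and trigger a sync cycle immediately instead of waiting for the schedule
Import-Module DirSync
Start-OnlineCoexistenceSync
```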







Once done, check the Users tab in Azure AD to make sure users are synced to Azure successfully:


If you select a user, you can choose Manage Multi-Factor Authentication.


You will be prompted to add a multifactor authentication provider. The provider essentially controls the licensing terms for the directory, since you pay either per user or per authentication. Once created, click Manage to administer it:


Once you click Manage, you will be taken to the PhoneFactor website to download the MFA agent:


Click Downloads to get the MFA agent. You can install this agent on:

  • a server that acts as an MFA agent and provides RADIUS or Windows authentication for other clients, or
  • the Exchange server that will do the authentication (the frontend servers).

Since we are using Exchange, install the agent on the Exchange server. Once installed, activate the server using the email and password you acquired from the portal:


Once the agent is installed, it is time to configure it.

Note: the auto configuration wizard won’t work, so skip it and proceed with manual config.

Another note: FBA with OWA won't work, and neither will auto-detection, so don't waste your time.

Configuring the MFA Agent:

I need to stress how important it is to follow the steps below exactly and edit the configuration as described; otherwise you will spend hours troubleshooting unhelpful error codes and logs. Logging is still poor, in my opinion, and doesn't provide much debugging information.

The first step is to make sure you have the correct namespace and SSL certificate in place. Typically users will access the portal via a specific FQDN, and since this FQDN will point to the Exchange server, you will need to publish either:

  • extra directories on the existing name for the MFA portal, SDK and mobile app, or
  • a new DNS record, with the new name added to the SSL certificate, published alongside it.

In my case I chose to use a single name for Exchange and the MFA apps; I went with MFA, but it is just a name, so it could be OWA, mail or anything.

The SSL certificate plays a very important role, because the portal and the mobile app talk to the SDK over SSL (you will see that later). Make sure the correct certificate is in place, and that the DNS record exists and is resolvable internally.

Once the certificate/DNS work is sorted, you can proceed with the install. First install the user portal; users will use this portal to enroll and to configure their MFA settings.

From the agent console, choose to install user portal:


It is very important to choose the virtual directory carefully. I highly recommend changing the default names because they are very long; in my case I chose MFAPORTAL as the virtual directory.





Once installed, go to the user portal settings and enter the URL (carefully, as there is no auto-detection or validation), and make sure to enable the required options in the portal. I highly recommend enabling phone call and mobile app only, unless you are in a US/EU country, in which case you can enable text-message authentication as well; it didn't work for me because the local provider in Qatar didn't send the reply correctly.


Once done, proceed with the SDK installation. Again, I highly recommend changing the name; I chose MFASDK.



Once installed, you are ready for the third step: installing the mobile app portal. Browse to the MFA agent installation directory and run the mobile app installer; again, choose a short name. I chose MFAMobile.



Once Installed, you will have to do some manual configuration in the web.config files for the portal and the mobile app.

You will have to specify SDK authentication account and SDK service URL, this configuration is a MUST and not optional.

To do so, first create a service account. The easiest way is to open the Active Directory Users and Computers console, find the PFUP_MFAEXCHANGE account and clone it.

Once cloned, open c:\inetpub\wwwroot\<MFA portal directory> and c:\inetpub\wwwroot\<MFA mobile app directory> and edit their web.config files as follows:

For MFA portal:



For MFA mobile App:
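In outline, the edit has the same shape in both files: switch on use of the web service SDK, supply the cloned service account, and point the SDK endpoint at the FQDN on the SSL certificate. A sketch (key names as they appear in the MFA Server web.config files; the domain, account name, password placeholder and URL are examples from this lab's naming):

```xml
<appSettings>
  <!-- Enable calls to the web service SDK and authenticate
       with the cloned PFUP_* service account (example names) -->
  <add key="USE_WEB_SERVICE_SDK" value="true" />
  <add key="WEB_SERVICE_SDK_AUTHENTICATION_USERNAME" value="CONTOSO\PFUP_MFAPORTAL" />
  <add key="WEB_SERVICE_SDK_AUTHENTICATION_PASSWORD" value="ServiceAccountPassword" />
</appSettings>

<!-- Further down in the same file, the SDK service URL must use the
     name on the SSL certificate and resolve internally -->
<setting name="pfup_pfwssdk_PfWsSdk" serializeAs="String">
  <value>https://mfa.contoso.com/MFASDK/PfWsSdk.asmx</value>
</setting>
```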



Once done, you will need to configure the MFA agent to do authentication for IIS.

Configure MFA to do authentication from IIS:
To make the MFA agent kick in for OWA, you will need to configure OWA for basic authentication. I searched for a way to do FBA with MFA but didn't find any clues (if you have any, let me know).

Once you have configured the OWA/ECP virtual directories for basic authentication, open the MFA agent, go to IIS Authentication, then the HTTP tab, and add the OWA URL:
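The virtual directory change itself can be made from the Exchange Management Shell rather than IIS Manager (a sketch; the server and site names are examples for this lab):

```powershell
# Switch OWA and ECP from forms-based to basic authentication
# ("EX01" is an example frontend server name)
Set-OwaVirtualDirectory "EX01\owa (Default Web Site)" -FormsAuthentication $false -BasicAuthentication $true
Set-EcpVirtualDirectory "EX01\ecp (Default Web Site)" -FormsAuthentication $false -BasicAuthentication $true

# Recycle IIS so the change takes effect
iisreset /noforce
```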


Go to Native Module tab, and select the virtual directories where you want MFA agent to do MFA authentication (make sure to configure it on the front end virtual directories only):


Once done, one final step remains: importing and enrolling users.

To import users, go to Users, select Import, and import them from the local AD; you can configure the sync to run periodically:


Once imported, you will see your users. Configure each user with the properties and settings required for the chosen MFA type; for example, to enable phone-call MFA, the user must have the proper phone number and extension (if necessary):


You can also configure a user to do phone app auth:


Once everything is set, you can finally enroll users.

Users enroll by visiting the user portal URL and signing in with their username/password; once signed in, they are taken through the enrollment process.

For phone-call MFA, they receive a call asking for the initial PIN created during their configuration in MFA. Once entered correctly, they are prompted to set a new one; once it is validated, the call ends.

On subsequent logins, they receive a call asking for their PIN; once it is validated, the login succeeds and they are taken to their mailbox.

For the mobile app, which we will see here, users install the app on their phones; once they log in to the portal, they can scan the QR code or enter the URL/code in the app:




Once validated in the app, you will see a screen similar to this:


Next time when you attempt to login to OWA, the application will ask you to validate the login:


Once authentication is successful, you will see:


and you will be taken to OWA.

Final notes:

Again, this is only a first look; there is more to explore, like RADIUS and Windows authentication, which are very interesting. We could also get FBA working by publishing OWA via a firewall or proxy that does RADIUS authentication + FBA.

I hope that this guide was helpful for you.


Configuring Dynamic Access Controls and File Classification-Part4-#winservr 2012 #DAC #microsoft #mvpbuzz

September 12, 2012

Part1: The Windows Server 2012 new File Server–part 1- Access Condition

Part2: The Windows Server 2012 new File Server–part 2- Install AD RMS

Part3: The new file server part3 using file classification & AD RMS:

In previous parts we walked through the new file server features and permissions wizard, data classification, AD RMS installation, and file classification with AD RMS integration. In this final part of the series we will talk about how to implement a new Active Directory feature called claims-based authentication and use it for something called Dynamic Access Control.

But wait a minute, what is claims-based authentication? From this reference:

Claims-based authentication relies on a trusted identity provider. The identity provider authenticates the user, rather than every application doing so. The identity provider issues a token to the user, which the user then presents to the application as proof of identity. Identity is based on a set of information that, taken together, identifies a particular entity (such as a user or computer). Each piece of information is referred to as a claim. These claims are contained in the token. The token as a whole has the digital signature of the identity provider to verify the authenticity of the information it contains.

Windows Server 2012 turns claims into Active Directory attributes. These claims can be assigned to users or devices, using the Active Directory Administrative Center (ADAC). The identity provider is the Security Token Service (STS). The claims are stored inside the Kerberos ticket along with the user’s security identifier (SID) and group memberships.

Once the data has been identified and tagged – either automatically, manually or by the application – and the claims tokens have been issued, the centralized policies that you’ve created come into play.

Now you can turn users' attributes, whatever they are, into security controls. We now have the power to control access to files and set permissions using attributes; we are no longer limited to group permissions alone.

With that in mind, you can set permissions on files based on department attributes, connecting machine, location, or any other Active Directory attribute, without creating specific groups, and the permissions are evaluated on the fly. Not only that: you can base permissions not just on the user's properties but also on the device the user is connecting from, granting full control from corporate devices but read-only access from kiosks or non-corporate devices.

You can also include the attributes of the resource being accessed in the permissions equation, so that on the fly the resource classification is examined and only users with matching attributes are allowed in (for example, files classified with country "Egypt" can be accessed only by users whose country is "Egypt").

Dynamic Access Control (DAC) is a new era for permissions. I am blown away by the power of DAC and how flexible it is; mixed with AD RMS, you get ultimate control over the data within your organization.

Lab Setup:

We will use the steps described in this TechNet article. The steps below illustrate them, and the prior parts of this blog series (parts 1 to 3) serve as the foundation for the final environment:

Implementation steps:

The first thing to configure is the claim types. A claim type defines which user/device/resource attribute is queried and then used in the permission evaluation: if you want to query the country, you create a claim type for it; if you want to use the department, you create a claim type for that.

In our Lab we will create a claim type of Department and Country:

To create a claim type, open the AD Administrative Center, go to Claim Types, and from the menu select New:


Create a new claim for Department :


and for Country :


For the Country claim, supply suggested values (to restrict the claim values to Egypt and Qatar):


Note: By default, claims are issued to users; if you want to issue them for computers, you must select that on the claim.
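If you prefer scripting, a sketch of the same two claim types with the AD PowerShell cmdlets (the source attributes and suggested values match this lab; run on a 2012 DC or with RSAT):

```powershell
Import-Module ActiveDirectory

# Department claim sourced from the user's "department" attribute
New-ADClaimType -DisplayName "Department" -SourceAttribute "department" -Enabled $true

# Country claim sourced from the "c" attribute, with suggested values,
# issued to computers as well so device claims work
$egypt = New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("Egypt", "Egypt", "")
$qatar = New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("Qatar", "Qatar", "")
New-ADClaimType -DisplayName "Country" -SourceAttribute "c" `
    -SuggestedValues $egypt, $qatar -AppliesToClasses @("user", "computer") -Enabled $true
```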

Create a new reference resource property for Claim Country:


Now go to Resource Properties and enable the Department claim:



Now let us create a Central Access Rule. This rule holds the template permissions that are applied when the claims match the conditions defined in the CAR:


In the rule, specify the security principals you want to use. In this demo we grant Finance Admins full control and Finance Execs read-only access, applied to all files ("resources") classified under the Finance department. We can also bring in device claims, such as the device's country or any other property we can query about the device:




The final rules will be:


Now create a Central Access Policy that will be applied using GPO to all file servers and the Administrator can select and apply them on individual folders:


In the CAP, include the finance data rule:
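The CAP can also be created and populated from PowerShell (a sketch; the policy name is an example, and the member name assumes the Finance Data rule created above):

```powershell
Import-Module ActiveDirectory

# Create the Central Access Policy and add the finance rule to it
New-ADCentralAccessPolicy -Name "Finance Policy"
Add-ADCentralAccessPolicyMember -Identity "Finance Policy" -Members "Finance Data"
```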


Now you need to apply this CAP using a GPO and make it available to file servers. Create a GPO and link it to the file servers OU:


In the Group Policy Management Editor window, navigate to Computer Configuration, expand Policies, expand Windows Settings, and click Security Settings.

Expand File System, right-click Central Access Policy, and then click Manage Central access policies.

In the Central Access Policies Configuration dialog box, add Finance Data, and then click OK.


You need now to allow the Domain Controllers to issue the Claims to the users, this is done by editing the domain controllers GPO and specify the claims settings:

Open Group Policy Management, click your domain, and then click Domain Controllers.

Right-click Default Domain Controllers Policy, and then click Edit.

In the Group Policy Management Editor window, double-click Computer Configuration, double-click Policies, double-click Administrative Templates, double-click System, and then double-click KDC.

Double-click KDC Support for claims, compound authentication and Kerberos armoring. In the KDC Support for claims, compound authentication and Kerberos armoring dialog box, click Enabled and select Supported from the Options drop-down list. (You need to enable this setting to use user claims in central access policies.)

Close Group Policy Management.

Open a command prompt and type gpupdate /force.

Testing the Configuration:

Going to the file server and clicking on our finance data file, we can now find the data classifications we specified in the claims:


Now let us classify the data as Finance Department.


Note: In order for DAC permissions to come into play, grant Everyone NTFS Full Control and let DAC restrict it; if the user doesn't have NTFS permissions, he will be denied access even if DAC grants him access.

Now checking the permissions on the folder:


going to the Central Policy tab and applying the Finance Data Policy:


now let us examine the effective permissions:

for the Finance Admins:

If the user has no claims (so he is a member of the group but not in the finance department and is not located in Egypt) he will be denied access:


Now let us specify that he is from the Finance department. Still no luck. Why?!

This is because he must access the data from a device that has claim type country Egypt:


Now test the Finance Execs Permissions and confirm it is working.

You can also test applying the rule with the following condition set, and see what happens:


Note: the above rule grants a user access when his department matches the file's classification department, so you can have one giant share with a mix of departments, and permissions are granted per file based on each user's department.


Mixing DAC with AD RMS and file classification is a powerful combination that helps organizations with the DLP dilemma; with Windows Server 2012, organizations have total control, for the first time, over files and the data within them. Please try the lab and let me know your feedback.

The new File Server–Part3-Using File Classification & ADRMS #Microsoft #winserv 2012 #mvpbuzz

September 10, 2012

Part1: The Windows Server 2012 new File Server–part 1- Access Conditions #Microsoft #winserv 2012 #mvpbuzz
Part2: The Windows Server 2012 new File Server–part 2- Install AD RMS #Microsoft #winserv 2012 #mvpbuzz

In part 1 we took a look at the new conditions that can be applied in the new security permissions GUI in Windows Server 2012, and in part 2 we continued our lab and set up AD RMS to prepare the stage for part 3.

In Part3, we will delve into the file classification infrastructure in Windows Server 2012, and we will see how to utilize file classification infrastructure and integrate it with the Active Directory RMS.

But first, what is file classification in Windows Server? FCI (File Classification Infrastructure) is not new in Windows Server 2012; it has existed since Windows Server 2008, but as a separate set of tools and commands that classify files at the file server level.

FCI scans folders and file shares, reads the files inside them, and stamps (classifies) those files based on specific attributes. Once classification is done, it can be read by the Windows file server or by third-party products, which can take actions according to each file's classification. Below is a screenshot of a classified file: it is tagged with country "Egypt" and department "Finance". You can classify documents with any number of attributes, including priority, sensitivity, location, security clearance, etc.


How are files and folders classified?

You can classify folders/files manually by right-clicking the folder/file, viewing its properties, and going to the Classification tab. In the screen below I can select the country classification, either "Egypt" or "Qatar", and specify the department from a wide range of departments provided by default (the list is, of course, customizable):



How to classify the files automatically?

To classify files and folders automatically in Windows Server 2012, install the File Server Resource Manager; you can do that by adding the role from Server Manager.

After installing the File Server Resource Manager, open its MMC console; alongside quotas, shares and file screening, you will find the new section for file classification:


The File Classification Management node has two sections:

  • Classification Properties: used to define the classification attributes, like Country/Department in our example.
  • Classification Rules: used to define the rules that classify files automatically.


In the screen above you will find two attributes (Country and Department) whose scope is global, because they are defined in AD (configuring these is explained in detail in part 4, on Dynamic Access Control); you can also define your own local attributes, like file sensitivity, etc.

If you want to classify documents automatically, you need to create a classification rule. The rule classifies documents based on file attributes, scope, or content; let us see how.

Customizing Folder Usage:

Folder usage is a way to identify the kind of data contained in folders. It is not classification itself; it defines what data a folder contains, and that definition can then be used in classification rules later.

To customize folder usage, open Classification Properties and double-click Folder Usage.

By Default, there are 4 types of data:

  • Application data.
  • Backup Data.
  • Group Data.
  • User Files

On this page you can create your own data types:


I will create Engineering and Financial data types:


Now, to define which files are used by the engineering team and which by the finance team, click in the empty space in Classification Properties and select Set Folder Management Properties:


In the property, select Folder Usage and define the folders used by each team (or containing each data type); you can have any number of folders and definitions. Again, this is not classification; it defines folder usage, which our classification rule will use later. So select the file path and define the data usage:


The final settings will be as follows:
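The same folder-usage tagging can be scripted with the FSRM PowerShell module that ships with Server 2012 (a sketch; the paths and usage values are this lab's examples):

```powershell
Import-Module FileServerResourceManager

# Tag folders with a Folder Usage value (built-in FolderUsage_MS property;
# "Financial Data" / "Engineering Data" are the custom types created above)
Set-FsrmMgmtProperty -Namespace "D:\Shares\Finance" -Name "FolderUsage_MS" -Value "Financial Data"
Set-FsrmMgmtProperty -Namespace "D:\Shares\Engineering" -Name "FolderUsage_MS" -Value "Engineering Data"

# Verify what is set on a folder
Get-FsrmMgmtProperty -Namespace "D:\Shares\Finance"
```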



Create Classification Rules:

Now let us create some classification rules. From File Classification Rules, create a new rule:


In the Rule Name field, specify a name; in this rule I will classify a folder as financial data:


In the Scope, specify the data usage to be classified automatically. We will use Financial Data, plus a manually specified folder (Share 1) that should also be classified as financial data. When you select Financial Data, the folder selection includes all the paths you defined in the previous step; you can also add paths manually. The final settings will be as follows:


In the classification tab we have 2 ways to set classification:

  • Folder Classification: this classifies all the files in the folder with the specified classification value.
  • Content Classification: this searches files for specific patterns and keywords; using regular expressions, you can go deep searching your data for specific contents, and when a match is found the files are classified accordingly. Examples include credit card numbers, project codes, etc.

This first rule will classify by folder; we will create another rule that classifies by content. The rule will be as follows:


Note: The Department/Country classifications are organization-wide and created for dynamic access rules; you will learn how to create them in detail in the next blog post (part 4). If you want to follow along with the lab without jumping to the DAC part yet, create local properties and use them instead.

In the evaluation cycle, you can choose whether to evaluate the data continuously, and whether to overwrite or aggregate existing values. In my example I will overwrite, which ensures that any user-level settings are overridden by the company rules defined here:


Now the rule is ready, let us create another rule that does content classification:


This rule classifies the data by country, so I will include all the engineering and financial data usages:


In the classification, I will choose content, and classify data that matches the rule as country Egypt:


In the Parameters section, click Configure; you will find fields for regular expressions and for string matching, both plain and case-sensitive:


In my case I will search documents for the word "Egypt" and classify on that. You can use regular expressions and complex statements, even multiple conditions, and you can define minimum and maximum occurrences to fine-tune the rule:


The Final Rules will be as following:


Now let us test. In each folder I have two files, one containing the word Egypt and one not; I placed this file group in the Financial and R&D folders. Right now nothing is classified:



Now if we go and run the classification rules:


Let us see how it works by examining the classification report:


It worked as expected. Sweet!
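For reference, rules like these can also be scripted with the FSRM cmdlets (a sketch; property names must match what appears in your Classification Properties list, and the dynamic-namespace/parameter strings follow the documented FSRM conventions, so treat them as assumptions to verify in your lab):

```powershell
Import-Module FileServerResourceManager

# Folder rule: the financial usage plus Share 1 is tagged Department = Finance
New-FsrmClassificationRule -Name "Financial data" `
    -Property "Department_MS" -PropertyValue "Finance" `
    -Namespace @("[FolderUsage_MS=Financial Data]", "D:\Share1") `
    -ClassificationMechanism "Folder Classifier" `
    -ReevaluateProperty Overwrite

# Content rule: files containing the word Egypt are tagged Country = Egypt
New-FsrmClassificationRule -Name "Egypt data" `
    -Property "Country" -PropertyValue "Egypt" `
    -Namespace @("[FolderUsage_MS=Financial Data]", "[FolderUsage_MS=Engineering Data]") `
    -ClassificationMechanism "Content Classifier" `
    -Parameters @("StringEx=Min=1;Expr=Egypt") `
    -ReevaluateProperty Overwrite

# Run classification now instead of waiting for the schedule
Start-FsrmClassification -Confirm:$false
```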

Until now we have done nothing with the classification; we have just tagged the data as Egypt/financial or not. So what is the point? There are two things we can use data classification for:

  • Encrypt the files using AD RMS.
  • Control file access using Windows Server 2012 Dynamic Access Control (DAC).

In this post we will see how to use the AD RMS, in part4 we will use the Dynamic Access Control.

Encrypt Files Dynamically based on Data Classification:

So far we are doing great: we identified the folder usage and tagged the files with the proper classifications. Now we will take action based on those classifications; in the steps below we will encrypt the documents using AD RMS.

Configuring RMS to Allow File Server to request Certificate:

To allow the file server to automatically request a certificate and encrypt documents, you must configure some permissions on ServerCertification.asmx on the RMS server:

  • Read and Execute permissions for the File Server machine account.
  • Read and Execute permissions for the AD RMS Service Group
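Those two ACEs can be granted from an elevated prompt on the RMS server (a sketch; the path is the default AD RMS location, and the domain/file-server names are examples):

```powershell
# Grant Read & Execute on ServerCertification.asmx to the file server's
# machine account and to the local AD RMS Service Group (example names)
$asmx = "C:\inetpub\wwwroot\_wmcs\Certification\ServerCertification.asmx"
icacls $asmx /grant "CONTOSO\FILESERVER$:(RX)"
icacls $asmx /grant "AD RMS Service Group:(RX)"
```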

Create File Server Management Task:

From File Management Tasks, create a new task:


In the General tab, give the rule a meaningful name:


In the scope we can select Financial or Engineering scopes or select custom folder, I will select Financial scope and “Share 1” which is a custom path:


In the Action you have 3 options:

  • Custom: create your own command that performs the action; you can use PowerShell scripts, etc.
  • Expire: expire the files, in other words move them to another folder (the expiry folder) for review and deletion.
  • RMS Encryption: apply an RMS template or custom permissions to files matching the criteria.

In this article we will apply RMS encryption. You can choose between a predefined RMS template and custom permissions; I will use custom permissions, where Everyone gets read-only access and only "Finance User" has full control:


In the Notification section, you can send a notification to an email address, perhaps the folder manager, department head or administrator:


In the Conditions, I will set the rule to encrypt all documents that belong to Finance; you can also add time conditions, such as days since last accessed, modified or created, or file name patterns:


In the Schedule, specify when to run the rule; you can also choose to run it continuously and monitor for new files:
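The whole task can be approximated with the FSRM cmdlets as well (a sketch; the RMS-permission parameters mirror the custom permissions chosen above, and the account and path names are examples to verify against your environment):

```powershell
Import-Module FileServerResourceManager

# RMS action with custom permissions: Everyone read-only, Finance User full control
$action = New-FsrmFmjAction -Type Rms `
    -RmsFullControlUser "CONTOSO\FinanceUser" `
    -RmsReadUser "Everyone"

# Condition: only files classified Department = Finance
$condition = New-FsrmFmjCondition -Property "Department_MS" -Condition Equal -Value "Finance"

# Continuous job over the financial usage plus the custom Share 1 path
New-FsrmFileManagementJob -Name "Encrypt finance files" `
    -Namespace @("[FolderUsage_MS=Financial Data]", "D:\Share1") `
    -Action $action -Condition $condition -Continuous
```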


Now the rule is ready and configured, let us run it and see the report:


As expected, the files were encrypted; based on their tagging, everyone now has read-only access and only the finance user has full control. Super!

This was a long article. We covered data classification, folder usage, and RMS encryption integration using File Management Tasks; with the above knowledge you can enforce and control data within your organization and massively improve data leak control.

In Part4, we will speak about Dynamic Access Control and how to control access on the fly using Windows Server 2012 DAC.

The Windows Server 2012 new File Server–part 2- Install AD RMS #Microsoft #winserv 2012 #mvpbuzz

September 10, 2012

Part1: The Windows Server 2012 new File Server–part 1- Access Conditions #Microsoft #winserv 2012 #mvpbuzz


In part 2 of this blog series we continue our exploration of the new file server functionality. To complete our journey, we will stop by one of my favourite but less fortunate features: Active Directory Rights Management Services.

Active Directory Rights Management Services, or AD RMS, has been around for several years, and for hidden and secret reasons it hasn't been adopted by many customers, although I believe it is one of the most important features of Windows Server.

What is Active Directory Rights Management Services?

An AD RMS system includes a Windows Server® 2008-based server running the Active Directory Rights Management Services (AD RMS) server role that handles certificates and licensing, a database server, and the AD RMS client. The latest version of the AD RMS client is included as part of the Windows Vista® operating system. The deployment of an AD RMS system provides the following benefits to an organization:

  • Safeguard sensitive information. Applications such as word processors, e-mail clients, and line-of-business applications can be AD RMS-enabled to help safeguard sensitive information. Users can define who can open, modify, print, forward, or take other actions with the information. Organizations can create custom usage policy templates such as “confidential – read only” that can be applied directly to the information.
  • Persistent protection. AD RMS augments existing perimeter-based security solutions, such as firewalls and access control lists (ACLs), for better information protection by locking the usage rights within the document itself, controlling how information is used even after it has been opened by intended recipients.
  • Flexible and customizable technology. Independent software vendors (ISVs) and developers can AD RMS-enable any application or enable other servers, such as content management systems or portal servers running on Windows or other operating systems, to work with AD RMS to help safeguard sensitive information. ISVs are enabled to integrate information protection into server-based solutions such as document and records management, e-mail gateways and archival systems, automated workflows, and content inspection.

More Information:

In this blog we will install AD RMS on a new Windows Server 2012 machine; this machine will be used later, in my next blog post, for data classification and policy enforcement.

Installing Active Directory Rights Management Server in Windows Server 2012:

The AD RMS setup has been dramatically improved. In the old days it was hard, and even the improved setup experience in Windows Server 2008 is no match for Windows Server 2012's. As you might expect, everything is driven from Server Manager: to install AD RMS, open Server Manager, select Add Roles and Features, and select AD RMS. Once installed, Server Manager will tell you there is pending configuration.
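Adding the role can equally be done in one line of PowerShell before running the pending configuration wizard:

```powershell
# Add the AD RMS role plus its management tools; the post-install
# configuration wizard still has to be run afterwards
Install-WindowsFeature ADRMS -IncludeManagementTools
```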


In the following screen, select Perform Additional Configuration:


and in the welcome screen click next:


In the AD RMS Cluster step, since this is the first server, we will create a new cluster:


In the Configuration Database step, I will use the internal database since this is a lab environment; make sure to have a proper SQL installation in place if you are setting up AD RMS in production:


In the Service Account step, enter a designated service account; this is a normal account with special permissions (if you are installing AD RMS on a DC for testing, this account must be a member of the built-in Administrators group):


In the Cryptographic Mode step, select Mode 2, as it is much more secure:


In the Key Storage, I will choose to use AD RMS to store the Key:


In the key password, supply a password to protect the key:


In the AD RMS Website, Select the Web Site that will host the AD RMS web services:


In the Cluster Address step, specify the FQDN that will be used by the clients to communicate with the AD RMS server, along with the transport protocol. I will keep it simple and choose HTTP; however, you might want to use HTTPS since it is more secure:


In the Server Licensor Certificate name, specify a name for the certificate, and click next:


In the AD RMS Service Registration step, register the AD RMS SCP now, unless for mysterious reasons you want to do it later:


In the installation summary, review the installation and click install:


Congrats, once the wizard finishes you have completed the AD RMS installation, and you can now configure templates and any additional settings.

In the next blog post, we will see how we can use the AD RMS and Data classification infrastructure to protect valuable and confidential data, on file shares.

The Windows Server 2012 new File Server–part 1- Access Conditions #Microsoft #winserv 2012 #mvpbuzz

September 9, 2012 9 comments

Part2: The Windows Server 2012 new File Server–part 2- Install AD RMS #Microsoft #winserv 2012 #mvpbuzz

I am so excited about the new Windows Server 2012: a lot of nice features and a lot of enhancements, but one particular enhancement I am especially interested in is around file servers.

For years, file servers have been the same: a normal share that resides on the server and is accessed by users. That is what they are and what they do, nothing new to introduce.

But with the recent increase in security demands, the huge need for DLP (data leak prevention), and the belief that most leaks happen through employees rather than hackers or intruders, companies kept looking to enhance their file servers.

The question nowadays is not about who is accessing the files; it is about auditing that access, continuously enforcing it, controlling it, and additionally knowing what is on that share, what sort of data is inside it, and from where it is accessed.

Let us take a normal example: a file share located on the corporate network. In the old days, control was enforced only by the file share and NTFS permissions, but there are some catches:

  • If the user has permissions to access the file share, he can access it from everywhere: from a kiosk at the hotel, from his iPad or tablet device, without any control. As long as he has access to the data through permissions, he can access it from anywhere (provided that there is remote access).
  • If he got access to the share, does that mean he is allowed to access all the data within it? For example, a share created for the R&D team contains all the R&D files, but not all R&D team members have the same level of access. Now, if a confidential file has been mistakenly placed on the share, all of the users who have access to the share can see the confidential data. Although users should be aware of data confidentiality, the company must be able to continuously control access on the data files themselves without worrying about human mistakes, which do happen; this is a big portion of the DLP controls.
  • Controlling access criteria using groups is really tricky; more often than not, groups are created to reflect access criteria, so we have a group for Egypt’s accountants, another group for Qatar’s accountants, a third group for Egypt’s accountants with confidential data, and so on, and the group count can grow to thousands and thousands of groups to reflect the needed levels of access.
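The group-explosion problem in that last point can be sketched with some simple arithmetic (the criteria below are hypothetical, matching the accountants example):

```python
# Hypothetical access criteria, as in the accountants example above.
countries = ["Egypt", "Qatar", "KSA"]
roles = ["Accountant", "Auditor"]
sensitivity = ["Normal", "Confidential"]

# Modeling every combination as its own group ("Egypt Accountants with
# confidential data", ...) needs one group per combination:
groups_needed = len(countries) * len(roles) * len(sensitivity)

# Modeling each criterion as an attribute evaluated in a condition only
# needs one value per criterion:
attributes_needed = len(countries) + len(roles) + len(sensitivity)

print(groups_needed, attributes_needed)  # 12 7
```

With ten countries and a few more roles, the group count runs into the hundreds while the attribute count stays in the teens, which is exactly why conditions scale better than group proliferation.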

Windows Server 2012 comes with a lot of handy features that we will explore in this blog series, talking about Access Conditions, Data Classification, Dynamic Access Controls and Rights Management enforcement.

In Part1, we will explore the new security permissions wizard and the new device permissions in Windows Server 2012.

(My lab setup contains only 1 Domain Controller and 1 file Server both running Windows Server 2012 ENT Edition).

NTFS permissions and the new Device Rules:

I have now a normal file share that is shared with the finance admin group:


This is a normal group created in AD that contains one user account (Finance User), who is a finance admin with read-only access permissions; this is what we have been doing for the past 20 years.

Now, the company wants him to access the share only from a specific group of computers (for the sake of this blog we will use a normal group; in part 3 we will talk about claims-based authentication, where we will be able to query the device claims on the fly for more properties and dynamic control).

Now I created a group and placed Finance User1’s authorized computer in it (in this case the file server itself); this means that if he logs on from the DC to that file share, he will not be able to access it. Let us see how:

If we go to the security properties and the advanced share permissions, we can see the FinanceAdmin read and execute permission entry. If we click Edit:


We Will see the new security permission wizard:


The above wizard has been enhanced for more usability and control over the process, and it also has a new section called Conditions; let us explore it.

If you click Add a condition, you will get a new condition line to control the access:

Now we can place conditions on the user who is accessing, the resource he is trying to access, or the device he is accessing from. Let us create a condition that gives the user access only from a specific device. For now, a device can only be queried for its group membership (in a later blog post we will see how to query more properties using claims); we can select whether it is a member of any, a member of each, or not a member of specific groups. I will use “member of any” and specify my group:


My rule will control access based on the AllowedFinancePCs group, which contains the computers the FinanceAdmin group members can use to access the files; they can log in to any device in the corporate network, but can only access the files if they use one of those specific devices, “Sweeeeeeeet”:
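Conceptually, the resulting access check combines the ordinary permission with the device condition. Here is a minimal Python sketch of that decision (the group names are the ones used in this lab; the function itself is purely illustrative, not how Windows evaluates ACEs internally):

```python
# Sketch of the access decision: the NTFS entry grants read to FinanceAdmin,
# and the condition additionally requires the device to be a member of
# AllowedFinancePCs ("member of any" semantics).
def can_read(user_groups, device_groups):
    has_permission = "FinanceAdmin" in user_groups
    condition_met = any(g in device_groups for g in ["AllowedFinancePCs"])
    return has_permission and condition_met

print(can_read({"FinanceAdmin"}, {"AllowedFinancePCs"}))  # True
print(can_read({"FinanceAdmin"}, {"2008DC"}))             # False: wrong device
```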


Now, The final Security permissions will be like:


Now let us try it:

I logged on locally to the file server; when I try to access the file, I can’t. Although I have the permission and logged on locally, I am not using an authorized machine:


We can also examine the permissions using the effective permissions view: if the user tries to log in from the 2008DC machine, he will have no permissions:


But if he tries from another machine from the allowedFinancePC group, he will have read permissions:


Note: During my lab I have tried the above setup and didn’t work, although conditions worked correctly for users, it looks like something that needs to be enabled or configured in specific way, I am pinging Microsoft folks and when I reach a solution I will update this blog.


In this lab we have explored the new options for setting access permissions; this is very powerful for controlling who can access the data and from where.

In the next blog we will see the power of data classification in Windows Server 2012, Stay Tuned.

Join me at the next event, Microsoft private cloud using Hyper-v and System Center hosted by Microsoft MEA Academic Center

August 28, 2012 Leave a comment

Next Wednesday, I will be speaking at one of the Microsoft MEA Academic Center events. I will speak about private cloud concepts and patterns, then delve into private cloud architecture using Microsoft Hyper-V and System Center, then move on to the private cloud use case and future innovation possibilities.

from the event description:

In this session we will explore the cloud concepts and principles, setting the ground for cloud knowledge, then take further steps into how to build the private cloud using Windows Server 2012 and System Center, and finalize with the integration and extensibility options of private, public and hybrid clouds and their use cases.

I have built this session on top of the amazing session by Tom Shinder, “Private Cloud Concepts and Patterns”. I believe that this is the most important session in 2012, not only because it contains valuable information but because it clearly defines what the cloud is, its architecture, principles and concepts, before delving into the actual implementation and use case.

You Can Join us using the following Link:

I will be waiting for you.


Automate patch & restart management in the #datacenter using #Microsoft Orchestrator and #wsus #sysctr #automation #mvpbuzz

August 18, 2012 3 comments


I have been working on a very interesting task this week for our cloud, which is patch management automation.

One of the challenges we face as a service provider (or as a cloud provider, if you are not a service provider) is patch management within our infrastructure and the cloud.

For years there have been tools and applications that can push updates from vendors to our servers; WSUS and SCCM are great examples of those, but there has been a missing piece of the puzzle.

What about restart management for those servers and applications? How do we manage the relationship between server patches, restarts and restart order? Let us take a deeper look at that.

Suppose that you have a typical infrastructure; this could be based on the cloud or not, This infrastructure consists of the following:

  • 2 Domain Controllers.
  • 1 SQL cluster; 2 Nodes.
  • 2 IIS Front-End Servers running a web application.
  • 2 TMG 2010 servers.

Suppose that you use WSUS/SCCM, specified a restart schedule, approved the updates, and are waiting for the servers to restart. You have two options here:

  • Have all the servers use a single restart option; this means that all servers will reboot at the same time.
  • Configure multiple schedules based on OU/GPO, so servers restart on different schedules for different roles, which is fine.

In the first option, the IIS servers will usually restart faster than the SQL cluster, so their web application might not start because SQL is not running; the IIS servers might restart before the domain controllers and might not find the credentials needed to start the web applications; the same goes for the SQL cluster, which might reboot before the DCs, causing the cluster to fail. At the end of the day, who knows?!

The second option is cool; however, you will have a larger maintenance window. You don’t know when servers will finish rebooting, so you will have to wait: assign 30 minutes for the DC reboot, for example, then another 30 minutes for the SQL servers’ reboot, and so on. This hurts your SLA and increases your maintenance window.

The Solution:

Somehow, you know your infrastructure’s requirements, so you know the restart order and priority for your servers. You need to have this relationship mapping first, before anything else, as it will be the foundation.

You don’t need a fancy Visio diagram or relationship table; all you need is a simple table saying, for example:

Server Name Restart Order
Server1 1
Server2 2

This is just an example; you can go as complex as you want.
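As a minimal sketch (with the hypothetical names from the table above), the mapping is nothing more than a server-to-priority dictionary, and the restart sequence is that dictionary sorted by priority:

```python
# Hypothetical restart-order mapping, as in the table above.
restart_order = {"Server1": 1, "Server2": 2}

# The restart sequence is simply the servers sorted by their order value.
sequence = sorted(restart_order, key=restart_order.get)
print(sequence)  # ['Server1', 'Server2']
```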

Later, you can use System Center Orchestrator to automate your patching and restarts based on the relationship you defined. This is a very effective way to save your time: Orchestrator can interpret your restart order and force the servers that need a restart to restart in the order you specified, on the schedule you need, or you can kick off the whole process manually; it doesn’t make a difference.

The How:

Disclaimer: use this article at your own risk. The solution described here is not a complete one; you need to do further testing, customization and modification for it to be enterprise-ready. The scripts, files and workflows here are provided AS-IS without any warranty.

Building the blocks: In this section we explore the high-level architecture of the solution and its components and then we proceed with its implementation.

The requirement is very simple: we are using WSUS to deploy updates to servers, we have a restart order like the table above, and we want to restart our servers according to that order.

The Lab Setup: I am running one domain controller that also hosts my WSUS server, one Orchestrator server running SQL 2008 and Orchestrator, and four servers running Windows 2008 (srv1, srv2, srv3, srv4).

The restart order for the servers is as follows:

Server Name Restart Order
srv1 1
srv2 3
srv3 4
srv4 2

I mapped this restart order in a simple SQL database configured as follows:
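A minimal sketch of that database, using an in-memory SQLite table here instead of the SQL Server database from the lab; the table and column names follow the query the launcher Runbook uses:

```python
import sqlite3

# Sketch of the restart-order table, using in-memory SQLite instead of
# the SQL Server "test" database used in the lab.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE restartordertbl (hostname TEXT, restartorder INT)")
con.executemany(
    "INSERT INTO restartordertbl VALUES (?, ?)",
    [("srv1", 1), ("srv2", 3), ("srv3", 4), ("srv4", 2)],
)

# Same shape as the launcher Runbook's query:
rows = con.execute(
    "SELECT hostname FROM restartordertbl ORDER BY restartorder"
).fetchall()
print([r[0] for r in rows])  # ['srv1', 'srv4', 'srv2', 'srv3']
```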


The Runbooks Architecture:

Orchestrator has 3 Runbooks (RBs) defined to achieve what we want:

    1. The first RB is the launcher: it queries the database using the following simple query (use test select hostname from restartordertbl order by restartorder); it retrieves the server names from the table, ordered by their restart order.
    2. The RB then writes the servers with their restart priority to a text file, which a later RB will use to read the server names (you can write your own script to store that in SQL or a CSV file; I used a text file for simplicity).
    3. The RB sets counters for the number of rows returned and for the incremental counter used in looping, then invokes the Core RB.
    4. The Core RB is the heart of this environment: it gets the two counters and compares them; if they are not equal, it knows that it needs to loop and proceeds with reading from the text file.
    5. You need to know that the link between the Compare Values action and the Append Line action (the purple link) performs the actual decision: it allows the RB to proceed only if the result is false, meaning the values are not equal, and stops if they are equal, which means the loop is complete or no servers were returned by the query.
    6. It executes the following PowerShell script to find out whether the server is pending a reboot or not:

$baseKey = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("LocalMachine", "\`d.T.~Ed/{A7DF762F-4857-4114-9AD9-AD7FE15F7148}.LineText\`d.T.~Ed/")
# The second argument above is the Orchestrator published-data token that
# resolves to the server name read from the text file.
$key = $baseKey.OpenSubKey("Software\Microsoft\Windows\CurrentVersion\Component Based Servicing\")
$subkeys = $key.GetSubKeyNames()
If ($subkeys | Where {$_ -eq "RebootPending"}) {
    throw "updates"
}


The script queries the pending-reboot status of the machine; if the machine is pending a reboot, it breaks by throwing an error; if not, it completes correctly.

  1. The link between the Run PowerShell action and the Restart System action (the red link) allows the RB to take the restart path only if the PowerShell result is Failed, which is caused by the throw, as the server in that case is pending a restart. If not, it takes the other path (the green link), meaning the server is not pending a restart, and starts the “Counter Increaser” RB.
  2. The Counter Increaser RB is the simplest one: it simply increases the incremental counter and invokes the Core RB to loop again.
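The looping pattern the three RBs implement, a row count, an incremental counter, and a compare that ends the loop, can be sketched in plain Python (the registry check and the restart activity are stubbed out; server names are the lab's):

```python
# Sketch of the RB loop: read the ordered server list the launcher wrote out,
# then process one server per "Core RB" pass until the incremental counter
# catches up with the row count.
servers = ["srv1", "srv4", "srv2", "srv3"]  # ordered list from the text file
total, counter = len(servers), 0
restarted = []

while counter != total:          # the "Compare Values" link decision
    server = servers[counter]    # read line <counter> from the text file
    pending_reboot = True        # stand-in for the registry check
    if pending_reboot:
        restarted.append(server) # stand-in for the Restart System activity
    counter += 1                 # the "Counter Increaser" RB

print(restarted)  # ['srv1', 'srv4', 'srv2', 'srv3']
```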

Things to note:

  • In order to loop in Orchestrator you can’t loop within a RB; you need to use another RB for that, which is why I have the Counter Increaser RB.
  • The PowerShell script could restart the machine itself, but that didn’t work for me, so I used the Restart System action.
  • You can check a link’s behaviour by selecting the link and clicking Properties.

Things that need improvement:

These are test RBs; we use different RBs in production that meet our specific environment. You will need to modify the above RBs to:

  • Check whether each server is online or not before acting on it.
  • The RBs restart servers immediately; you will need to add sleep time and a restart check to make sure a server has completed its restart before proceeding with the next one.
  • Make the process parallel, and maybe restart servers that are not directly related to others at the same time.
  • Send notifications to the administrator or customer.
  • Run post-restart checks to make sure the server completed the reboot and its services started successfully.
  • Maybe integrate this with SCSM and drive approvals and workflows from there.

You can go epic with this foundation: be dynamic in the server queries and database names; the possibilities are endless. Use these RBs as your foundation and add more and more blocks to meet your infrastructure’s and customers’ goals. Also feel free to comment or ask questions; I will be glad to answer.

Attached below are the working RBs; they include everything. Make sure to check each step and read the descriptions thoroughly. You can download them from

Until later, and happy Eid!

