In a recent post, the private cloud architecture team published an interesting blog (http://blogs.technet.com/b/privatecloud/archive/2013/02/26/destination-private-cloud-are-we-there-yet.aspx) that discusses the characteristics of the private cloud.
Being one of those who work on the cloud, in the cloud, and with the cloud, I think we can answer: no, we are not there yet.
The blog covers the main characteristics that need to be in place before you can say, "I have a private cloud," but I am speaking about the whole picture.
The whole picture involves a lot of things: hardware integration, network integration, security integration, and more.
Yes, most of the "Private Cloud" providers offer their own end-to-end solution, but it is still locked in; for example, Microsoft has the hardware fast track program, but with a limited set of vendors and hardware providers.
Adding security, backup/DR, and networking to the show gives you an even more complex scene. In my opinion, we don't have cloud-ready security and networking solutions yet; they will come, but we are not there yet.
My 2 cents if you are working on your own "cloud" project: take a deep look, and don't assume it is easy to use, consume, or build a cloud, because we are not there yet.
Configuring Dynamic Access Control and File Classification - Part 4 - #winservr 2012 #DAC #microsoft #mvpbuzz
In previous parts we walked through the new file server features and permissions wizard, data classification, AD RMS installation, and file classification with AD RMS integration. In this final part of the series, we will talk about how to implement a new Active Directory feature called claims-based authentication and utilize it for something called Dynamic Access Control.
But wait a minute: what is claims-based authentication? From this reference: http://www.windowsecurity.com/articles/First-Look-Dynamic-Access-Control-Windows-Server-2012.html
Claims-based authentication relies on a trusted identity provider. The identity provider authenticates the user, rather than every application doing so. The identity provider issues a token to the user, which the user then presents to the application as proof of identity. Identity is based on a set of information that, taken together, identifies a particular entity (such as a user or computer). Each piece of information is referred to as a claim. These claims are contained in the token. The token as a whole has the digital signature of the identity provider to verify the authenticity of the information it contains.
Windows Server 2012 turns claims into Active Directory attributes. These claims can be assigned to users or devices, using the Active Directory Administrative Center (ADAC). The identity provider is the Security Token Service (STS). The claims are stored inside the Kerberos ticket along with the user’s security identifier (SID) and group memberships.
Once the data has been identified and tagged – either automatically, manually or by the application – and the claims tokens have been issued, the centralized policies that you’ve created come into play.
Now you can turn users' attributes, whatever they are, into security controls. We now have the power to control access to files and set file permissions using attributes; we are no longer limited to group permissions only.
With that in mind, you can set permissions on files based on the department attribute, the connecting machine, location, or any other attribute in Active Directory, and you don't have to create specific groups for that; the permissions are evaluated on the fly. Not only that, but you can set permissions based not just on the user's properties but also on the device the user is using: for example, full control from corporate devices, but read-only from kiosk or non-corporate devices.
Not only that, but you can also include the attributes of the resource being accessed in the permissions equation, so you can examine the resource classification on the fly and allow only users with specific attributes to access the resource (for example, files with country classification "Egypt" can be accessed only by users whose country is "Egypt").
Dynamic Access Control (DAC) is a new era for permissions. I am blown away by the power of DAC and how flexible it is; mixed with AD RMS, you can have ultimate control over the data within your organization.
We will use the steps described in this TechNet article: http://technet.microsoft.com/en-us/library/hh846167.aspx#BKMK_1_3. The steps here illustrate that process, and the prior parts of this blog series (parts 1 to 3) serve as the foundation for the final environment:
The first thing to configure is the claim type. Claim types represent the data queried from the user/device/resource attributes and then used in the permission evaluation: if you want to query the country, you create a claim type for that; if you want to use the department, you create a claim type for that.
In our lab we will create claim types for Department and Country:
To create a claim type, open the AD Administrative Center, go to Claim Types, and from the menu select New:
Create a new claim for Department:
And for Country:
For Country, supply suggested values (to specify the claim values, such as Egypt and Qatar):
Note: By default, claims are issued to users; if you want to issue them for computers, you must select that on the claim.
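If you prefer PowerShell over ADAC, the same claim types can be created with the Active Directory module. This is a sketch under this lab's assumptions: the source attributes ("department" and "c") and the suggested-value entries are my choices, so adjust them to your own schema:

```powershell
# Sketch: create the Department and Country claim types from PowerShell.
Import-Module ActiveDirectory

# Department claim, sourced from the user's "department" attribute.
New-ADClaimType -DisplayName "Department" -SourceAttribute "department"

# Country claim with suggested values (value, display name, description).
$egypt = New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("Egypt", "Egypt", "")
$qatar = New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("Qatar", "Qatar", "")
New-ADClaimType -DisplayName "Country" -SourceAttribute "c" -SuggestedValues $egypt, $qatar
```

You can verify the result with `Get-ADClaimType -Filter *`, which should list both new claim types.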
Create a new reference resource property for the Country claim:
Now go to Resource Properties and enable the Department claim:
Now let us create a Central Access Rule (CAR). This rule will include the template permissions that will be applied when the claims match the conditions defined in the CAR:
In the rule, specify the security principals you want to use. In this demo we will grant Finance Admins full control and Finance Execs read-only access, applied to all files ("resources") classified under the Finance department. We can also use device claims and test the device's country, or any other property we can query about the device:
The final rules will be:
Now create a Central Access Policy (CAP) that will be applied via GPO to all file servers; the administrator can then select and apply it on individual folders:
In the CAP, include the finance data rule:
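The rule and policy can also be created from PowerShell. This is a sketch, not the exact rule built in the screenshots: the resource-condition expression below assumes the built-in Department resource property (internal name `Department_MS`), and I have omitted the conditional ACL, which is easiest to author in ADAC and copy out:

```powershell
# Sketch: a Central Access Rule scoped to Finance-classified resources,
# wrapped in a Central Access Policy that the GPO will distribute.
Import-Module ActiveDirectory

# Rule: applies to resources whose Department classification is Finance.
New-ADCentralAccessRule -Name "Finance Data Rule" `
    -ResourceCondition '(@RESOURCE.Department_MS Contains {"Finance"})'

# Policy: the container that gets targeted at file servers via GPO.
New-ADCentralAccessPolicy -Name "Finance Data Policy"
Add-ADCentralAccessPolicyMember -Identity "Finance Data Policy" -Members "Finance Data Rule"
```

After that, the "Finance Data Policy" shows up in the GPO step below exactly as the one created in ADAC would.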
Now you need to apply this CAP using a GPO and make it available to the file servers, so create a GPO and link it to the file servers OU:
In the Group Policy Management Editor window, navigate to Computer Configuration, expand Policies, expand Windows Settings, and click Security Settings.
Expand File System, right-click Central Access Policy, and then click Manage Central access policies.
In the Central Access Policies Configuration dialog box, add Finance Data, and then click OK.
You now need to allow the domain controllers to issue the claims to users. This is done by editing the Domain Controllers GPO and specifying the claims settings:
Open Group Policy Management, click your domain, and then click Domain Controllers.
Right-click Default Domain Controllers Policy, and then click Edit.
In the Group Policy Management Editor window, double-click Computer Configuration, double-click Policies, double-click Administrative Templates, double-click System, and then double-click KDC.
Double-click KDC Support for claims, compound authentication and Kerberos armoring. In the KDC Support for claims, compound authentication and Kerberos armoring dialog box, click Enabled and select Supported from the Options drop-down list. (You need to enable this setting to use user claims in central access policies.)
Close Group Policy Management.
Open a command prompt and run gpupdate /force to refresh Group Policy.
Testing the Configuration:
Going to the file server and clicking on our finance data folder, we can now find the data classification that we specified in the claims:
Now let us classify the data as Finance Department.
Note: In order for DAC permissions to come into play, grant Everyone full control NTFS permissions; DAC will then override them. If the user doesn't have NTFS permissions, he will be denied access even if DAC grants him access.
Now checking the permissions on the folder:
Going to the Central Policy tab and applying the Finance Data policy:
Now let us examine the effective permissions:
For the Finance Admins:
If the user has no claims (he is a member of the group but is not in the Finance department and is not located in Egypt), he will be denied access:
Now, let us specify that he is from the Finance department. Still no luck; why?
This is because he must access the data from a device that has the claim type Country = Egypt:
Now test the Finance Execs Permissions and confirm it is working.
You can also test applying this rule when the following condition is set, and see what happens:
Note: the above rule grants a user access when his department matches the file's classification department, so you can have a giant share with a mix of departments, and permissions will be granted per file based on the user's department.
Mixing DAC with AD RMS and file classification is a powerful combination that helps organizations with the DLP dilemma. With Windows Server 2012, organizations have, for the first time, total control over files and the data within them. Please try the lab and let me know your feedback.
Backup & Restore Exchange 2010 mailbox database or mailbox item using ARCserve r16 #msexchange #arcserve
In my ultimate journey of discovering how to back up and restore Exchange 2010 with every single application in our universe, today I blog about how to do it using CA's ARCserve r16 SP1.
We will continue using my single Exchange server, install ARCserve r16 SP1, and then discover how to create a backup job to back up Exchange and restore from our backup.
Installing ARCserve r16 SP1:
There is nothing tricky about installing ARCserve; you may want to plan ahead for the following:
Other than that, the installation itself is a no-brainer: next, next, and OK.
Configuring ARCserve r16 Devices:
Once you finish the installation and open the ARCserve console "Manage", you will be prompted with a very nice tutorial that walks you through the basic configuration of your ARCserve.
In this step we will configure a disk device that we will use for backup to disk, so from Devices choose Launch Device Configuration:
In the Login Server screen, enter your credentials to login to the server:
In the Login Server screen, choose your login server:
In the Device Configuration screen, choose Windows File System Devices to configure a backup folder (a de-duplication device is a folder that can be configured to store multiple backups; ARCserve then divides the backup into small chunks that are compared and de-duplicated using the proprietary ARCserve algorithm), then click Add:
And if you somehow missed the wizard, you can do the same using the device wizard from the Administration menu:
Once the device is configured, we can deploy the agent and start protecting our Exchange server. You can do that from Administration, then Agent Deployment:
Note: In order to back up the Exchange server using ARCserve, you must install MAPI CDO. This is a must because, unlike Symantec, which uses EWS to restore emails, ARCserve uses MAPI CDO to back up and restore individual emails. Also note that MAPI CDO must be installed before installing ARCserve; if you don't, you will get the following error message:
“The request is denied by the agent. The requested agent is not installed.”
When you deploy the agents for the first time, you must specify the ARCserve source to copy the agents from. Once copied, you won't need to do that again, and you will be able to proceed with the deployment:
Once copied, you will proceed with the agent deployment, so specify the Login Server:
In the agent installation options you will normally get Automatic; you might want to choose Custom to fine-tune the installation options:
In the agent selection, select the agents that need to be deployed:
In the host selection, you have a nice option here to discover the Exchange servers and deploy the agent to them automatically:
To discover the Exchange infrastructure, just specify your domain controller and credentials, and ARCserve will discover the Exchange servers for you. Nice!
Backup Exchange 2010 Mailbox Database and Mailboxes using ARCserve:
Creating a backup job is easy: from the Protection & Recovery menu, choose Backup:
From the Job Setup Menu select your Job Setup Type:
In the Source, select the mailbox database. If you want to recover specific mailboxes or mailbox items, you must configure a Document Level backup. Unlike Symantec, which uses one backup type to restore a mailbox database, a mailbox, or a mailbox item, ARCserve uses two types of backup (a database-level backup for the mailbox database, and a Document Level backup for mailboxes and mailbox items):
In the Schedule, select your schedule:
In the Destination, select your destination, in my case I will use the folder I already configured previously:
Once all is set, click the Submit button to run the job.
Restore the Exchange Mailbox Database or Mailbox items from the ARCserve Backup:
Now you can restore either the mailbox database or mailbox items: go to the Restore section, explore the Exchange infrastructure, and select either the mailbox database or the mailbox items:
In this article we explored the basic ARCserve configuration and how to back up and restore an Exchange 2010 mailbox database and mailboxes using ARCserve. It was easy and sweet, although I don't understand why in ARCserve I have to create two duplicate jobs to back up the mailbox database and the mailboxes (Document Level).
So what is the next product? I don't know; I will be waiting for your suggestions, so let me know and I will blog it.
Windows Server 2012 introduces new ways of managing and configuring your Windows infrastructure; one of these components is Active Directory.
First, Microsoft removed the famous "DCPROMO"; the functionality of installing and promoting a new domain controller has moved entirely to Server Manager.
In this lab, we have a single DC, and we would like to move all of its roles to a freshly installed Windows Server 2012.
1- Install your Windows 2012 Server and Join it to the Domain.
2- Open Server Manager and, from Tasks, select "Add Roles and Features":
3- In the Welcome screen click next:
4- In Select Installation Type, select Role-based:
5- In Select Server, select the desired server or server group (for server groups, refer to my previous article "Windows 2012 first look"):
6- From the list of roles, select Active Directory Domain Services:
7- Active Directory Domain Services in Windows Server 2012 depends on other roles/features, and you must add them. The wizard will add them if they are not already installed, so accept adding the missing roles/features:
8- In the installation summary, review your selection; you might also want to restart the server automatically after the installation completes:
Up to this point, we have not actually configured the server as a domain controller; we have just added the role. After the installation completes, the wizard will inform you that post-installation configuration is required to configure this server as a domain controller; select More.
In the following screen you will find the post deployment tasks are pending:
1- When you select "Promote this server to a domain controller", the following wizard opens:
From the previous screen you can select to install a new forest, a new domain, or an additional domain controller; in our case we are upgrading, so select "Add a domain controller to an existing domain".
Note: you have the option to select the domain information if you have multiple domains.
Important Note: if this is the first Windows Server 2012 DC to be installed in the forest and you didn’t extend the schema yet, then you will need to make sure that this account has the necessary permissions to extend the schema (Enterprise Admin/Schema Admin), otherwise the setup will fail.
In Windows Server 2012, you don’t need to extend the schema separately as the wizard will handle this for you, unless you really want to perform it in a separate step.
If you do not run adprep.exe command separately and you are installing the first domain controller that runs Windows Server 2012 in an existing domain or forest, you will be prompted to supply credentials to run Adprep commands. The credential requirements are as follows:
- To introduce the first Windows Server 2012 domain controller in the forest, you need to supply credentials for a member of Enterprise Admins group, the Schema Admins group, and the Domain Admins group in the domain that hosts the schema master.
- To introduce the first Windows Server 2012 domain controller in a domain, you need to supply credentials for a member of the Domain Admins group.
- To introduce the first read-only domain controller (RODC) in the forest, you need to supply credentials for a member of the Enterprise Admins group.
2- In Domain Controller Options, select whether this server will be a Global Catalog and a DNS server. Since we are upgrading, we need to make sure this server is a DNS server and a GC; also select the site to which this server will be assigned:
3- In the DNS delegation page, click Next:
4- In Additional Options, you can select Install from Media or replication from a specific DC, or let the wizard choose automatically:
5- Review the paths for NTDS and SYSVOL, and customize them if needed:
6- In the Prerequisites Check, make sure you passed successfully, then click Install.
7- After the installation finishes, the server will reboot; you will find the AD DS role installed and the server identified as a DC:
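The whole Server Manager flow above also maps to two PowerShell cmdlets from the ADDSDeployment module, if you prefer scripting it. This is a sketch; the domain name, site name, and account are this lab's assumptions:

```powershell
# Sketch: install the AD DS role and promote the server to a DC
# in an existing domain, mirroring the wizard steps above.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

Install-ADDSDomainController `
    -DomainName "contoso.local" `
    -InstallDns `
    -SiteName "Default-First-Site-Name" `
    -Credential (Get-Credential "CONTOSO\Administrator") `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password")
```

Just like the wizard, `Install-ADDSDomainController` runs the prerequisites check (including the adprep/schema work) before promoting, and the server reboots when it finishes.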
You can now run "DCPROMO" on the old server to demote it. In a single-server environment, the FSMO roles will be moved to the 2012 DC; if you have multiple servers, you can move them as before from the ADUC and ADDT MMCs.
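Instead of the MMC snap-ins, the FSMO transfer can also be done in one line of PowerShell; "DC2012" below is an assumed name for the new domain controller:

```powershell
# Sketch: transfer all five FSMO roles to the new 2012 DC.
Move-ADDirectoryServerOperationMasterRole -Identity "DC2012" `
    -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster
```

You can confirm the move afterwards with `netdom query fsmo`.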
Raising the Forest/Domain Functional level:
Raising the forest/domain functional level is needed only to enable one new feature: support for Dynamic Access Control and Kerberos armoring. The KDC administrative template policy has two settings (Always provide claims and Fail unarmored authentication requests) that require the Windows Server 2012 domain functional level. Otherwise, if you are not using these and are not yet comfortable raising the forest/domain functional level, don't.
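If you do decide to raise the levels, it is two cmdlets (a sketch; the domain name is assumed, and note the change is effectively one-way, so be sure first):

```powershell
# Sketch: raise domain and forest functional levels to Windows Server 2012.
Set-ADDomainMode -Identity "contoso.local" -DomainMode Windows2012Domain
Set-ADForestMode -Identity "contoso.local" -ForestMode Windows2012Forest
```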
You have successfully upgraded your domain controller. Congrats!
So Scale-Out File Server is a super cool feature of Windows Server 2012, but is it for every file server use? Let us see:
You should not use Scale-Out File Server if your workload generates a high number of metadata operations, such as opening files, closing files, creating new files, or renaming existing files. A typical information worker would generate a lot of metadata operations. You should use a Scale-Out File Server if you are interested in the scalability and simplicity that it offers and you only require technologies that are supported with Scale-Out File Server.
And the table below is a nice reference from the same page comparing traditional clustered file servers vs. scale-out ones:
So what are the design and selection criteria for Scale-Out File Servers? The usual answer is "it depends".
From my point of view, Scale-Out File Servers are not for every use. Although they offer greater scalability and performance for workloads like SQL Server clusters and Hyper-V, they don't really suit regular end-user file server usage, which generates a lot of metadata operations; you also lose a lot of handy features like de-duplication, FCI, and DFS.
So be careful when selecting a Scale-Out File Server and make sure that you really need it; they are not for every use.
Microsoft Visual Studio 2012 is a powerful application development environment that ensures quality code throughout the entire Application Lifecycle Management (ALM) process: a proven set of tools and practices that helps organizations manage the entire lifespan of application development, reduce cycle times, and eliminate waste. ALM integrates different teams, platforms, and activities, enabling a continuous flow of business value.
In this session we will introduce the major new features and improvements in Visual Studio 2012. Expect to see the new enhanced User Interface, Agile Planning Tools, Requirements Gathering Tool, Stakeholder Feedback Tool, Updates to the Developer and Tester Experience, Version Control Improvements, and DevOps Integration.
Please join us at the "Development using Microsoft Visual Studio 2012" session:
Date: Monday September 10, 2012
Time: 10:30 AM – 12:30 PM
Session: “Development using Microsoft Visual Studio 2012”
Speaker: Mohamed Radwan
Venue: Microsoft building – Smart Village
Now, you can join the session online through the below links:
Note: please make sure you are using a good internet connection.
Join online meeting
Join by Phone
Find a local number
Conference ID: 95592439
Join me at the next event: Microsoft Private Cloud using Hyper-V and System Center, hosted by the Microsoft MEA Academic Center.
Next Wednesday, I will be speaking at one of the Microsoft MEA Academic Center events. In this event I will speak about Private Cloud concepts and patterns, then delve into Private Cloud architecture using Microsoft Hyper-V and System Center, and then move on to Private Cloud use cases and future innovation possibilities.
From the event description:
In this session we will explore cloud concepts and principles, setting the ground for cloud knowledge; then take extra steps into how to build a private cloud using Windows Server 2012 and System Center; and finalize with the integration and extensibility options of private, public, and hybrid clouds, along with use cases.
I have built this session on top of the amazing session by Tom Shinder, "Private Cloud Concepts and Patterns". I believe that session is the most important session of 2012, not only because it contains valuable information but because it clearly defines what the cloud is, along with its architecture, principles, and concepts, before delving into the actual implementation and use cases.
You can join us using the following link:
I will be waiting for you.
Public Folders provide an awesome way to collaborate. For years there were rumors that Microsoft would drop PFs, starting with the introduction of Exchange 2007; Microsoft saw obstacles in PFs because they use different management, hierarchy, and architecture from regular mailboxes.
With the introduction of Exchange 2013, Microsoft made PFs leap into the future with the changes it introduced to PF storage. So what happened to PFs in 2013? Let us take a look:
- PFs are now stored in PF mailboxes: previously, PFs were stored in a PF database, which prevented the use of modern protection technologies offered by Exchange 2007/2010 such as replication/DAG. In Exchange 2013, PFs are stored in a special type of mailbox called a PF mailbox; this mailbox stores the PF hierarchy and the content of the PFs created in that mailbox.
- PFs no longer utilize the PF replication architecture: in previous versions of Exchange, PFs used a separate replication architecture, inherited from earlier releases, that was managed separately and required its own monitoring and management. With the new architecture, PFs no longer replicate as before: the mailbox itself can now be replicated using the DAG architecture, offering mailbox resiliency and protection, but the content is not replicated across mailboxes. Each content mailbox holds its own content and is the only holder of that content; the mailbox is replicated by the underlying DAG architecture, not the content across PF mailboxes.
With the new architecture we now have a new type of mailbox called the "Public Folder mailbox", which comes in two types:
- Master Hierarchy PF mailbox: the Master Hierarchy mailbox is a special kind of PF mailbox that you create either to import your hierarchy from previous versions or to hold your PF hierarchy; this is usually the first PF mailbox you create.
- PF mailbox: all later PF mailboxes are of this kind. There is a very important difference between PF mailboxes and the Master PF mailbox: the Master PF mailbox holds a writable copy of the hierarchy, while the other PF mailboxes hold a read-only copy (note: you can promote a PF mailbox to the master one at any time, but at any given time there is only one writable copy of the hierarchy; another note: all PF mailboxes get a copy of the hierarchy, but it is read-only).
With the new architecture there is a very important point to note (PF contents are not replicated): organizations that are geographically dispersed and use PF replication to provide local access to Public Folders must reconsider how their PF hierarchy is planned, because for a user to access PF content he will need to access the content PF mailbox directly, and that might happen over the WAN if content distribution is not well planned.
Some people might have concerns about that last point, but with all traffic between clients and CAS now over HTTPS, I can imagine that with the use of WAN optimizers and proper planning this will offer organizations greater flexibility and even better management.
From the end user's perspective, PFs in mailboxes are the same as PFs in older versions of Exchange; the storage of the PFs is different from the admin's point of view, but users are not aware of that change.
The other thing you might want to consider is the PF mailbox storage limit: a mailbox in Exchange 2013 supports 100 GB. Although that is fine for normal mailboxes, you will need to think seriously about it if your organization uses PFs heavily and you have PF trees larger than this limit.
The only other thing you need to know is that at RTM launch, PFs will be available from Outlook only; OWA access to PFs is not ready yet.
At the time this article is being written, any of the secondary hierarchy mailboxes can be promoted to a primary one, but this is not documented yet; I will update this article with a pointer to the new information. To identify which mailbox is the master hierarchy mailbox, you can use this cmdlet:
Get-OrganizationConfig | fl DefaultPublicFolderMailbox
PF Migration from earlier versions:
As this article is being written, Exchange 2010 SP3 is the only source from which migration can be done. Exchange 2007 is planned to be supported for coexistence with Exchange 2013, but the update that allows such coexistence is unknown so far and will be released later.
The migration high-level steps are as follows:
- Generate a CSV file that contains your hierarchy from your older Exchange server. Keep in mind that you can open that CSV and edit its folder-to-PF-mailbox mapping if you would like to spread your content across mailboxes for geo-access or for proper distribution.
- Create a Master Hierarchy PF mailbox and import that CSV into it.
- Create a new PF migration request.
- Lock down access to the PFs: at the final stage, a lockdown is placed that prevents users from accessing the PFs in order to finalize the migration.
- Complete the request and resume the migration.
The steps are detailed here: http://technet.microsoft.com/en-us/library/jj150486(v=exchg.150). Once my lab is done, I will post a blog post about editing the CSV before migration.
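The high-level steps above roughly map to the following Exchange Management Shell sketch, based on the linked TechNet procedure. The file paths, mailbox name, and server names are assumptions for illustration:

```powershell
# 1. Against Exchange 2010: export folder statistics and generate the
#    folder-to-mailbox map (scripts ship in the Exchange 2013 \Scripts folder).
.\Export-PublicFolderStatistics.ps1 C:\PFMigration\stats.csv EX2010
.\PublicFolderToMailboxMapGenerator.ps1 10000000000 C:\PFMigration\stats.csv C:\PFMigration\map.csv

# 2. On Exchange 2013: create the master hierarchy PF mailbox,
#    held for migration so users cannot use it yet.
New-Mailbox -PublicFolder -Name "MasterHierarchy" -HoldForMigration:$true

# 3. Start the migration request from the legacy PF database, feeding it the map.
New-PublicFolderMigrationRequest -SourceDatabase (Get-PublicFolderDatabase -Server EX2010) `
    -CSVData (Get-Content C:\PFMigration\map.csv -Encoding Byte)

# 4. For finalization: lock the source PFs, then allow and resume completion.
Set-OrganizationConfig -PublicFoldersLockedForMigration:$true
Set-PublicFolderMigrationRequest -Identity \PublicFolderMigration -PreventCompletion:$false
Resume-PublicFolderMigrationRequest -Identity \PublicFolderMigration
```

Treat this as a map of the moving parts rather than a runbook; follow the TechNet article for the verification steps between each stage.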
I hope you enjoyed the post, and happy Public Foldering!