Archive

Archive for the ‘Cloud’ Category

Destination: Private cloud… are we there yet? No, we are not

March 6, 2013

The private cloud architecture team recently published an interesting post, http://blogs.technet.com/b/privatecloud/archive/2013/02/26/destination-private-cloud-are-we-there-yet.aspx, which discusses the characteristics of the private cloud.

As someone working on the cloud, in the cloud and with the cloud, I think we can answer: no, we are not there yet.

The post covers the main characteristics that need to be in place before you can say "I have a private cloud", but I am talking about the whole picture.

The whole picture involves a lot of pieces: hardware integration, network integration, security integration and much more.

Yes, most "private cloud" vendors offer their own end-to-end solution, but it is still locked down. Microsoft, for example, runs the Hyper-V Fast Track program, but only with a limited set of hardware vendors and providers.

Add security, backup/DR and networking to the mix and you get an even more complex scene. In my opinion, we don't have truly cloud-ready security and networking solutions yet; they will come, but we are not there yet.

My two cents: if you are working on your own "cloud" project, take a hard look at it, and don't assume it is easy to build, consume or use a cloud, because we are not there yet.

Categories: Cloud

Join me at the next event: Microsoft private cloud using Hyper-V and System Center, hosted by the Microsoft MEA Academic Center

August 28, 2012

Next Wednesday, I will be speaking at one of the Microsoft MEA Academic Center events. I will cover private cloud concepts and patterns, then delve into private cloud architecture using Microsoft Hyper-V and System Center, and close with private cloud use cases and possibilities for future innovation.

from the event description:

In this session we will explore cloud concepts and principles, setting the ground for cloud knowledge; then take further steps into how to build a private cloud using Windows Server 2012 and System Center; and finish with the integration and extensibility options of private, public and hybrid clouds, along with use cases.

I built this session on top of the amazing "Private Cloud Concepts and Patterns" session by Tom Shinder. I believe it is the most important session of 2012, not only because it contains valuable information but because it clearly defines what the cloud is, its architecture, principles and concepts, before delving into the actual implementation and use cases.

You can join us using the following link:

https://join.microsoft.com/meet/b-amshad/F9CLHSSD

I will be waiting for you.

Mahmoud

Automate patch & restart management in the #datacenter using #Microsoft Orchestrator and #wsus #sysctr #automation #mvpbuzz

August 18, 2012

Introduction:

I have been working on a very interesting task for our cloud: patch management automation.

One of the challenges we face as a service provider (or cloud provider, if you are not a service provider) is patch management across our infrastructure and the cloud.

For years there have been tools and applications that push vendor updates to our servers; WSUS and SCCM are great examples, but one piece of the puzzle has been missing.

What about restart management for those servers and applications? How do we manage the relationship between server patches, restarts and restart order? Let us take a deeper look.

Suppose you have a typical infrastructure, cloud-based or not, consisting of the following:

  • 2 Domain Controllers.
  • 1 SQL cluster (2 nodes).
  • 2 IIS front-end servers running a web application.
  • 2 TMG 2010 servers.

Suppose you use WSUS/SCCM, specify a restart schedule, approve the updates and wait for the servers to restart. You have two options here:

  • Use a single restart schedule for all servers, which means all servers reboot at the same time.
  • Configure multiple schedules based on OU/GPO, so servers restart on different schedules per role, which is fine.

With the first option, the IIS servers will usually restart faster than the SQL cluster, so their web application might not start because SQL is not running yet. The IIS servers might also restart before the domain controllers and fail to find the credentials needed to start the web applications, and likewise the SQL cluster might reboot before the DCs and fail to come up. At the end of the day, who knows?!

The second option is better, but you end up with a larger maintenance window: you don't know when servers will finish rebooting, so you have to allow, say, 30 minutes for the DC reboot, then another 30 minutes for the SQL servers, and so on. This hurts your SLA and stretches your maintenance window.

The Solution:

You know your infrastructure's requirements, so you know the restart order and priority for your servers. You need this relationship mapping before anything else, as it will be the foundation.

You don't need a fancy Visio diagram or relationship model; all you need is a simple table, for example:

Server Name    Restart Order
Server1        1
Server2        2

This is just an example; you can make it as complex as you want.

You can then use System Center Orchestrator to automate patching and restarts based on the relationships you defined; this is a very effective way to save time (and your sanity). Orchestrator can interpret your restart order and force the servers that need a restart to reboot in the order you specified, on the schedule you need, or you can kick off the whole process manually; it makes no difference.

The How:

Disclaimer: use this article at your own risk. The solution described here is not complete; it needs further testing, customization and modification to be enterprise-ready. The scripts, files and workflows here are provided AS-IS without any warranty.

Building the blocks: In this section we explore the high-level architecture of the solution and its components and then we proceed with its implementation.

The requirements are very simple: we use WSUS to deploy updates to the servers, we have a restart order like the table above, and we want to restart our servers according to that order.

The Lab Setup: one domain controller that also hosts my WSUS server, one Orchestrator server running SQL 2008 and Orchestrator, and four servers running Windows Server 2008 (srv1, srv2, srv3, srv4).

The restart order for the servers is as follows:

Server Name    Restart Order
srv1           1
srv2           3
srv3           4
srv4           2

I mapped this restart order in a simple SQL database configured as follows:

[screenshot: the restart-order table in the SQL database]
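If you want to reproduce this mapping yourself, here is a minimal sketch of the schema and sample data. It assumes the database is named test (to match the launcher runbook's query below) and uses Invoke-Sqlcmd from the SQL Server PowerShell tools; adjust names and the server instance to your environment.

# Hypothetical helper: creates and seeds the restart-order table used by the launcher RB.
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
USE test;
CREATE TABLE restartordertbl (
    hostname     NVARCHAR(64) NOT NULL,
    restartorder INT          NOT NULL
);
INSERT INTO restartordertbl (hostname, restartorder)
VALUES ('srv1', 1), ('srv4', 2), ('srv2', 3), ('srv3', 4);
"@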

The Runbooks Architecture:

Orchestrator has three runbooks (RBs) defined to achieve what we want:

    1. The first RB is the launcher. It queries the database with a simple query (use test; select hostname from restartordertbl order by restartorder) to retrieve the server names sorted by their restart order.
    2. The RB then writes the servers, with their restart priority, to a text file; a later RB reads the server names back from that file (you could implement this step against SQL or a CSV file instead; I used a text file for simplicity).
    3. The RB sets a counter with the number of rows returned and the incremental counter used for looping, then invokes the Core RB. [screenshot: the launcher runbook]
    4. The Core RB does the main work for this environment: it gets the two counters and compares them; if they are not equal it knows it still needs to loop, and it proceeds to read from the text file.
    5. Note that the link between the Compare Values activity and the Append Line activity (the purple link) makes the actual decision: it lets the RB proceed only if the result is false, meaning the counters are not equal, and stops if they are equal, meaning the loop has completed or the query returned no servers.
    6. It executes the following PowerShell script to determine whether the server has a pending reboot:

$baseKey = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("LocalMachine", "{Line Text from 'Read Line'}")   # the server name, subscribed from the Read Line activity's published data
$key = $baseKey.OpenSubKey("Software\Microsoft\Windows\CurrentVersion\Component Based Servicing\")
$subkeys = $key.GetSubKeyNames()
$key.Close()
$baseKey.Close()
If ($subkeys | Where {$_ -eq "RebootPending"})
{
    # A reboot is pending: throw so the activity fails on purpose and the
    # runbook takes the restart path.
    throw "updates"
}

The script queries the machine's pending-reboot status: if a reboot is pending it throws an error on purpose, and if not it completes successfully.

  1. The link between the Run PowerShell activity and the Restart activity (the red link) lets the RB take the restart path only if the PowerShell result is Failed, which the throw above causes when the server has a pending restart. Otherwise it takes the other path (the green link), meaning the server has no pending restart, and starts the "Counter Increaser" RB. [screenshot: the Core runbook]
  2. The Counter Increaser RB is the simplest one: it increments the loop counter and invokes the Core RB again, looping once more.
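To make the loop concrete, here is a rough standalone PowerShell equivalent of what the three RBs do together. It is a sketch only: it assumes the launcher has already written one hostname per line, in restart order, to C:\Temp\restartorder.txt (a hypothetical path), and it uses a fixed sleep instead of a proper restart check.

# Sketch of the runbook loop as plain PowerShell (illustration, not production).
$servers = Get-Content "C:\Temp\restartorder.txt"
foreach ($server in $servers) {
    # Same pending-reboot test the Core RB runs, against the remote registry.
    $baseKey = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("LocalMachine", $server)
    $key     = $baseKey.OpenSubKey("Software\Microsoft\Windows\CurrentVersion\Component Based Servicing\")
    $pending = $key.GetSubKeyNames() -contains "RebootPending"
    $key.Close(); $baseKey.Close()
    if ($pending) {
        Restart-Computer -ComputerName $server -Force
        Start-Sleep -Seconds 300   # crude wait; replace with a real "is it back?" check
    }
}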

Things to note:

  • To loop in Orchestrator, you can't loop within a single RB; you need another RB for that, which is why I have the Counter Increaser RB.
  • PowerShell itself could restart the machine, but that didn't work for me, so I used the Restart activity instead (see the sketch after this list).
  • You can check a link's behaviour by selecting the link and clicking Properties.
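For reference, a PowerShell-only restart could look like the sketch below. It assumes PowerShell 3.0 or later, where Restart-Computer gained -Wait; older runbook servers won't have it, which may be exactly why it didn't work for me.

# Hypothetical alternative to the Restart activity (requires PowerShell 3.0+).
# -Wait blocks until WinRM answers again, covering the restart check noted below.
Restart-Computer -ComputerName "srv1" -Force -Wait -For WinRM -Timeout 600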
Things that need improvement:

These are test RBs; in production we use different RBs tailored to our specific environment. You will need to modify the RBs above to:

  • Check whether each server is online.
  • The RBs restart servers directly; add sleep time and a restart check to make sure a server has completed its restart before proceeding to the next one.
  • Make the process parallel, perhaps restarting servers that don't directly depend on others at the same time.
  • Send notifications to the administrator or customer.
  • Run post-restart checks to make sure the server completed the reboot and its services started successfully.
  • Perhaps integrate with SCSM and drive approvals and workflows from there.

You can go big with this foundation: make the server query and database names dynamic, and the possibilities are endless. Use these RBs as your foundation and add more blocks to meet your infrastructure's and customers' goals. Feel free to comment or ask questions; I will be glad to help.

The working RBs are attached below and include everything; make sure to check each step and read the descriptions thoroughly. You can download them from https://skydrive.live.com/embed?cid=6B566FD2C47B21C4&resid=6B566FD2C47B21C4%21130&authkey=AB25TJ854Zc4IT0

Until next time, and happy Eid.

Mahmoud

Microsoft Egypt Open Doors, what to expect, meet us there #Microsoft #Egypt #Cairo #opendoors

February 15, 2012

It has been a while since the last blog entry; I am excited to write this post and proud of it. We are thrilled about the upcoming Microsoft Egypt Open Doors event, which will be held next Monday, 20/2/2012.

I will not be speaking at the event; Microsoft decided that this year's speakers should be Microsoft employees. But my team and I will be running some very cool demos in the demo area; we have three main ones:

  • I will be presenting Exchange/SQL workloads on Hyper-V and the benefits and challenges of running those workloads on top of Hyper-V.
  • Karim Hamdy and I will be presenting a DR-site how-to with Windows Server 2008 R2 Datacenter edition, Hyper-V and NetApp for Active Directory, SQL, Exchange and Hyper-V workloads, demystifying the building blocks of a DR site for your main site on top of Hyper-V.
  • Mai Fawzi will be demoing a large VDI workload on top of Windows Server 2008 R2 Datacenter with Citrix XenDesktop.

We will be waiting for you at the event, and will also be happy to talk with you about any specific technical workload you have questions about.

You can register at this link: https://msevents.microsoft.com/CUI/InviteOnly.aspx?EventID=39-14-67-BB-B1-DE-12-3D-F4-CC-1B-89-E1-F2-07-4F&Culture=en-EG

See you there

#Microsoft Office 365 is now available in #UAE, #Kuwait, #Qatar, #South Africa, #Saudi Arabia, #Egypt and #Turkey

December 2, 2011

We all know that Office 365 has been released in Europe and the USA; now it is our turn. Microsoft has launched the program in selected countries, and trials are available.

   

I got this message this morning and would like to share it with you:

Microsoft is delighted to announce the availability of Office 365 trial services to its MVPs, partners and customers in UAE, Kuwait, Qatar, South Africa, Saudi Arabia, Egypt and Turkey.

Microsoft Office 365 brings together online versions of our trusted email and collaboration software* with our familiar Office Professional Plus suite; it is designed to help meet your needs for robust security, 24/7 reliability, and user productivity in the cloud.

If you are in these geographies we encourage you to try Office 365 by signing up on our trial page. We are currently working on the web experience; the way to activate the trial properly is to go to these sites and select the "Free Trial" button.

 

Enjoy

The 3 Copies Benefit for Hosting Companies #Exchange #Exchange2010 #Microsoft

October 25, 2011

Last week I had a very interesting discussion with one of my customers, who is implementing one of the largest Exchange clouds in the region; they are currently in the planning phase.

The discussion took place during our design session for the overall implementation, and it started with the question of storage design and the required RAID level.

The storage calculated to host the projected mailboxes was about 26 TB of data for two copies (around 13 TB per copy).

The question we got: now we need to design our RAID storage, so how many disks are needed? We will need a lot of disks!

I paused a little and asked: why would you need RAID at all? There is actually a better way that will save you money and even let you offer better and more services to your customers. I paused (as part of building the excitement) and said: "Use 3 copies with Exchange 2010."

Background:

My customer has a reputable SAN from a very reputable vendor; they have shelves and the money to buy more. Yet every penny added to the solution affects the final offering to their customers, which is fair and true.

For example, suppose you want to host 10,000 users with a 512 MB quota, and suppose your Exchange factors stay the same as for enterprise use (deleted item retention, overheads, etc.). The Exchange calculator will lead you to around 10 TB per copy (DB + logs), 20 TB per environment, and a total requirement of 1,200 IOPS (to reproduce the calculation, use the Exchange Storage Calculator).
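As a sanity check, the raw quota arithmetic behind that figure looks like this; it gives roughly 6.1 TiB, and the Exchange Storage Calculator layers log space, deleted item retention and other overheads on top to reach roughly 10 TB per copy. This is a sketch only; use the calculator for real sizing.

# Back-of-the-envelope mailbox capacity (illustration only).
$users    = 10000
$quotaMB  = 512
$overhead = 1.25                      # growth/overhead factor used later in this post
$rawMiB   = $users * $quotaMB * $overhead
"Raw mailbox data per copy: {0:N1} TiB" -f ($rawMiB / 1MB)   # /1MB = /1024^2, converts MiB to TiB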

Of course, I am not counting the penetration and over-subscription factors you as an ISP might apply in the calculation; you might assume 10% concurrency and usage, or go with 90%. That is entirely your marketing and strategy teams' decision.

So, to host the 10,000 users' 10 TB of data on RAID 5 or even RAID 10, let us see how many disks are needed.

Using http://www.wmarow.com/strcalc/goals.html you can enter the storage and required IOPS. Assuming 15K, 450 GB disks, you will find that to accommodate all the databases on a single big RAID 5 LUN, it must be built from 36 disks, which provides the required storage and around 2,471 IOPS (obviously a lot of wasted IOPS).

The same calculation with SATA disks leaves you with about 21 x 1 TB disks, giving 619 IOPS (a limited IOPS budget) and about 13 TB of storage.

Now back to my suggestion: use no RAID protection at all, with SATA disks and three Exchange nodes hosting the Exchange 2010 databases. Let us compare the options:

                                                  Using 15K, 450 GB Disks   Using 1 TB SATA Disks   Using 3-Node Copies (single 2 TB disk per DB)
Usable Storage                                    10,058 GiB                14,901 GiB              1 TB
IOPS                                              2,574                     644                     70
No. of Disks per Copy                             36                        21                      14
Total Disks for Environment                       72                        42                      42
User Quota                                        512 MB                    512 MB                  512 MB
Possible Increase in Quota                        None                      250 MB                  250 MB
No. of Users per 1 TB (max recommended DB size)   1,562                     1,200                   1,400
Max No. of Supported Users                        10,000                    10,000                  14,000 (assumes 512 MB quota with 1.25 overhead)

The possible increase in quota is calculated as the space still available on the disk divided by the number of users hosted per 1 TB LUN.

For SAS/FC disks, the maximum users per DB is based on the recommended maximum DB size of 1 TB, with size overheads set to 1.25, so the maximum number of users is limited by the available usable space.

For SATA disks, the maximum number of users is limited by the IOPS the RAID group can deliver.
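The users-per-database row falls out of the same numbers. Here is a quick sketch of the SAS/FC column (the SATA column, as noted above, is capped by the RAID group's IOPS instead):

# Users per database at a 1 TB (decimal) max DB size, 512 MB quota, 1.25 overhead.
$quotaMB  = 512
$overhead = 1.25
$dbMaxMB  = 1000000                   # recommended max DB size: 1 TB expressed in MB
[math]::Floor($dbMaxMB / ($quotaMB * $overhead))   # ~1562 users per database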

It should be much clearer now that with three copies you might use the same number of disks as with 1 TB RAID 5, but you can offer your customers larger mailboxes without adding any cost to your investment, or host more users without sacrificing performance, plus the ability to drop backups and greatly simplify storage management (with the option of eliminating the SAN altogether).

NetApp and VMware View 5,000-Seat Performance Report #Netapp #vmware #vdi

September 5, 2011

I got involved over the past few months in designing and implementing a large VDI solution. That might sound weird for an Exchange MVP, but I love virtualization technology and couldn't resist the temptation.

One of the ugliest parts of a VDI project is the storage design; in fact, every VDI architect knows that storage sizing is one of the most painful aspects and one of the most critical factors in a VDI deployment's success.

I spent hours trying to figure out the best model for IOPS and storage calculations for an optimal user experience, and after hundreds of documents from Citrix, NetApp and Microsoft I found my method.

To start, here is a nice link that will help you understand how things work and spare me re-explaining the process:

http://blogs.citrix.com/?s=Finding+a+Better+Way+to+Estimate+IOPS+for+VDI&submit_button=Search

For Citrix's side of the story (note that the CTX article holds a lot of NetApp data, although Citrix doesn't use or recommend NetApp): http://support.citrix.com/article/CTX130632

And finally, a closer look at storage performance from NetApp. I have to say this is one of the best-written documents on storage, storage performance and storage reporting; it can be read here:

http://media.netapp.com/documents/tr-3949.pdf

 

What I really loved is that the report shows storage performance passing through several life-cycle stages within a VDI project; the biggest IOPS hits come during the first login attempts, as shown in table 11 of the TR:

[screenshot: table 11 from the NetApp TR]

What made me excited is that I developed my own IOPS predictor for my projects, and happily my calculations came within 1,000 IOPS of the actual testing. WOOHOO!

[screenshot: my IOPS predictor results]

I will put the calculator through further testing, and it should be published later this month.

 

Happy VDI sizing.

#Lync Client #Virtualization the full story #ucoms #Citrix #xendesktop #xenapp

April 27, 2011

If you have been following closely, Citrix released a document on the subject, published here: http://support.citrix.com/article/CTX128831

At that time, I knew internally that Microsoft didn't support client virtualization for OCS/Lync, although if you had been reading about, or even attended, Citrix XenApp 6 or XenDesktop training, you would hear a lot about Lync/OCS client delivery with XenApp or XenDesktop.

On 14/4, Microsoft released a document that lays out the supportability statement for XenApp and XenDesktop and the virtualization techniques it does and does not support.

the document is available here http://www.microsoft.com/downloads/en/details.aspx?FamilyID=f865e66d-1163-46ef-ba9c-d585376dfbae.

In summary, Microsoft now supports client virtualization through full desktop or application delivery/streaming, with some considerations (check the document for details). It is great to see Microsoft finally release such a support statement and move away from the earlier rigid, flat "no".

Automating #Linux Machines #provisioning on #Microsoft Hyper-v #Cloud using #opalis #hyperv

April 23, 2011

One of my customers assigned me the task of automating Linux machine provisioning on the Hyper-V cloud they are running. They are still evaluating the Hyper-V Cloud capabilities and were wondering whether they could automate provisioning Linux machines onto their Hyper-V servers.

Because they are still evaluating, the request and automation process is not yet fully clear in their minds, but the ask was simple: automate the process of copying and configuring the machine, especially since they run lots of Linux virtual machines.

The setup:

– 2 Hyper-V nodes running in a cluster, each with 128 GB of memory; SCCM, SCOM, SCVMM 2008 R2 SP1 and DPM 2010.

– Requests will come from a help desk or purchasing system; this is not decided yet.

Before we start, here are some notes for anyone working on this:

– I spent a couple of days trying to figure out how a sysprep equivalent could be done on Linux machines and how to script it. The important note is that Linux has no SID-like information bound to the machine, so copying and renaming a machine yields a totally new machine in the cloud. reference here.

– The Linux machine name can be configured in several places. Keep in mind that if you use the hostname command to set it, it reverts to the default name after a restart; to set it permanently, you need to set the host name in the /etc/sysconfig/network file.

– To execute commands remotely you will need to SSH into the machine.

Now let us rock 'n' roll:

I have no experience with Linux scripting, so the steps here are just guidelines and placeholders for others to build on and kick off their own implementations; I don't claim they are the best way to do it.

The workflow you will configure requires the following:

– Create a template Linux virtual machine by building a normal machine on any Hyper-V host.

– Install the Linux Integration Components for Hyper-V; note that you will need the development tools installed on the machine so it can compile the source successfully.

– After installing the integration components, assign a static IP to the machine (Opalis will use it later to SSH in and run the configuration commands).

– Shut down the machine and, from the SCVMM 2008 R2 admin console, copy the virtual machine to the library (if the Hyper-V hosts are in a different forest or a DMZ, this can be done with a file copy).

Now let us start:

– Install Opalis; the best video walkthrough can be found here.

– Import the SCVMM 2008 R2 Integration pack.

– Create the provisioning workflow as follows:

[screenshot: the provisioning workflow]

The workflow does the following:

– Generate a random name to assign to the machine. This is just a placeholder; the name could instead be retrieved from a text file, a SQL DB, etc.

[screenshot: the random-name generation activity]

– Create a VM from the VM template on the SCVMM 2008 R2 server and assign it the name generated by the previous task; the name will be Linux-randomtextvalue.

[screenshot: the Create VM from Template activity]

To assign the name linux-randomtextname, type linux- in the VM name field, then right-click in the field, choose Subscribe > Published Data, and pick the random text result from the previous step.

– The next step gets the VM; set the name to linux-randomtextname the same way as in the previous step.

– The next step starts the VM and passes the VM ID retrieved by the "Get VM" task; since this task requires the VM ID, use Subscribe > Published Data to pass it from the "Get VM" task.

– The link between "Start VM" and the next SSH command waits 300 seconds (5 minutes) to allow the machine to fully start.

– The next SSH command connects to the machine's static IP and changes the name by editing /etc/sysconfig/network, searching for the default name "localhost.localdomain" and replacing it with the random text result:

[screenshot: the SSH rename activity]

The command will be: sed -i 's/localhost.localdomain/Linux-{Random Text from "Generate VM name"}/g' /etc/sysconfig/network

– The next step configures the machine to use DHCP, via the same kind of SSH step; the command will be: sed -i 's/none/dhcp/g' /etc/sysconfig/network-scripts/ifcfg-eth0

– The final SSH command restarts the VM to apply the settings.

And you are done.

Again, you can play with the workflow and create your own; there are guides on the internet for automating requests coming from SCSM into Opalis, etc. This article is meant to give you a general idea of how the Linux machine configuration is done.

#Microsoft #Hyper-V #Cloud Fast Track with #Dell technology

January 10, 2011

Curious about the cloud?

  • Should you use public cloud offerings from providers, build your own private cloud, or develop a hybrid of both?
  • What cloud-based services are right for you?
  • What are the best practices and proven process for implementing cloud technologies that minimize risk and maximize success?

Microsoft in partnership with Dell

Microsoft Hyper-V Cloud Fast Track is a reference architecture for building private clouds that combines Dell technology, including servers, networking and storage, with Microsoft software, technical guidance and validated configurations.

Hyper-V Cloud Fast Track solutions offer a turnkey approach to delivering scalable, preconfigured, validated infrastructure platforms for on-premises private cloud implementations. With local control over data and operations, your IT can dynamically pool, allocate, secure and manage resources for agile IaaS. Likewise, business units can deploy line-of-business applications with speed and consistency using self-provisioning and automated data center services in a virtualized environment.

Hyper-V Cloud Fast Track solutions offer:

  • Faster deployment — Rich features and support make private clouds easy to deploy.
  • Reduced risk — Validated configurations mean you can implement with confidence.
  • Dell advantage — Dell provides business-ready configurations for virtualization that are optimized for Microsoft Hyper-V.

Dell Business-Ready Configurations for Microsoft Hyper-V Cloud Fast Track

Dell offers a range of pre-engineered, business-ready configurations that conform to Microsoft’s Hyper-V Fast Track reference architecture:

Source http://virtualisationandmanagement.wordpress.com/2011/01/10/microsoft-hyper-v-cloud-fast-track/

Categories: Cloud