It has been a while since the last blog entry, and I am excited to be back with this post: we are all looking forward to the upcoming Microsoft Egypt Open Doors event, which will be held next Monday, 20/2/2012.
I will not be speaking at the event; Microsoft has decided that this year's speakers should be Microsoft employees. However, my team and I will be running some very cool demos in the demo area. We have 3 main demos:
- I will be presenting Exchange/SQL workloads on Hyper-V and the benefits and challenges of running those workloads on top of Hyper-V.
- Karim Hamdy and I will be presenting a DR-site how-to with Windows Server 2008 R2 Datacenter edition, Hyper-V, and NetApp for Active Directory, SQL, Exchange, and Hyper-V workloads, demystifying the building blocks of a DR site for your main site on top of Hyper-V.
- Mai Fawzi will be demoing a large VDI workload on top of Windows Server 2008 R2 Datacenter with Citrix XenDesktop.
We will be waiting for you at the event, and we will also be happy to talk with you about any specific technical workload you have questions about.
You can register at this link: https://msevents.microsoft.com/CUI/InviteOnly.aspx?EventID=39-14-67-BB-B1-DE-12-3D-F4-CC-1B-89-E1-F2-07-4F&Culture=en-EG
See you there
Last week, I had a very interesting discussion with one of my customers, who is implementing one of the largest Exchange clouds in the region; they are currently in the planning phase.
The discussion took place during our design session for the overall implementation, and it started with a question about storage design and the required RAID level.
The storage calculated to host the projected hosted mailboxes was about 26 TB of data across 2 copies (around 13 TB per copy).
The question we got: now we need to design our RAID storage, so how many disks are needed? We will need a lot of disks!
I paused a little and said: why would you need RAID at all? There is actually a better way that will save you money and even let you offer more and better services to your customers. I paused, "as part of the excitement process," and said: "Use 3 copies with Exchange 2010."
My customer has a reputable SAN from a very reputable vendor; they have shelves and the money to buy more shelves. But every penny you add to the solution affects the final offering to the customers, which is fair and true.
Let us take an example: suppose you want to host 10,000 users with a 512 MB quota, and suppose your Exchange factors stay the same as for enterprise use (deleted-item retention, overheads, etc.). The Exchange calculator will show that you need around 10 TB per copy (DB + logs), 20 TB per environment, and a total of 1,200 IOPS. (To reproduce this calculation, use the Exchange Storage Calculator.)
Of course, I am not talking about the penetration and over-subscription factors that you as an ISP might apply during the calculation; you might assume 10% concurrency and usage, or you might go with 90%. That is entirely a decision for your marketing and strategy teams.
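To make the arithmetic concrete, here is a minimal sketch of the per-copy capacity math. The overhead factors below are illustrative assumptions only; the real Exchange Storage Calculator models many more inputs (deleted-item retention windows, log generation, content indexing, and so on), which is how it arrives at the ~10 TB figure above.

```python
def per_copy_storage_gib(users, quota_mb, overhead_factor=1.25,
                         log_and_growth_factor=1.20):
    """Very rough GiB per database copy for a given user count and quota.

    overhead_factor and log_and_growth_factor are assumed, not the
    calculator's actual values.
    """
    mailbox_gib = users * quota_mb / 1024        # raw mailbox data
    db_gib = mailbox_gib * overhead_factor       # deleted-item retention, whitespace, ...
    return db_gib * log_and_growth_factor        # logs, content index, growth

# 10,000 users at 512 MB each:
print(round(per_copy_storage_gib(10_000, 512)))  # GiB per copy under these assumptions
```

Double everything for the two-copy environment. The gap between this rough figure and the calculator's ~10 TB comes from the extra inputs the calculator accounts for.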
So, to host the 10,000 users and 10 TB of data on RAID 5 or even RAID 10, let us see how many disks are needed.
Using http://www.wmarow.com/strcalc/goals.html you can enter the required storage and IOPS. Assuming 15K RPM, 450 GB disks, you will find that to accommodate all of the databases on a single big RAID 5 LUN, you need 36 disks, which provide the required storage and around 2,471 IOPS (obviously a lot of wasted IOPS).
The same calculation can be done with SATA disks, which leaves you with about 21 × 1 TB disks delivering 619 IOPS (a limited number of IOPS) and about 13 TB of storage.
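The spindle-count logic behind these numbers can be sketched as follows. The per-disk capacity and IOPS figures here are assumptions, and a real tool such as wmarow's calculator also models RAID write penalties and controller limits, which is why its disk counts come out higher; the point is only that you buy whichever is larger: the disks capacity demands or the disks IOPS demand.

```python
import math

def disks_needed(required_gib, required_iops, disk_gib, disk_iops,
                 raid_capacity_overhead=1.0):
    """Disks to satisfy BOTH capacity and IOPS; the larger demand wins."""
    for_capacity = math.ceil(required_gib * raid_capacity_overhead / disk_gib)
    for_iops = math.ceil(required_iops / disk_iops)
    return max(for_capacity, for_iops)

# 10 TiB and 1,200 IOPS on assumed 450 GB / 15K disks (~418 GiB usable,
# ~170 IOPS each, RAID 5 capacity overhead approximated as 1.1):
print(disks_needed(10_240, 1_200, 418, 170, raid_capacity_overhead=1.1))
```

With fast disks the capacity requirement dominates (hence the wasted IOPS the calculator shows); with SATA the IOPS requirement can become the limiting factor instead.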
Now let us go back to my suggestion: no RAID-level protection at all, using SATA disks and 3 Exchange nodes hosting the Exchange 2010 databases. Let us investigate the options:
| | Using 15K, 450 GB disks | Using 1 TB SATA disks | Using three-node copies (single 2 TB disk per DB) |
| --- | --- | --- | --- |
| Usable storage | 10,058 GiB | 14,901 GiB | 1 TB |
| No. of disks per copy | 36 | 21 | 14 |
| Total disks for the environment | 72 | 42 | 42 |
| User quota | 512 MB | 512 MB | 512 MB |
| Possible increase in quota | None | 250 MB | 250 MB |
| No. of users per 1 TB of storage (max recommended DB size) | 1,562 | 1,200 | 1,400 |
| Max. no. of supported users | 10,000 | 10,000 | 14,000 (based on a calculation assuming 512 MB with 1.25 overhead) |
The "possible increase in quota" is calculated as the space still remaining on the disk divided by the number of users hosted per LUN (1 TB in size).
For SAS/FC disks, the maximum number of users per DB is based on the recommended maximum DB size of 1 TB; size overheads are set to 1.25, so space and maximum users are limited by the available usable space.
For SATA disks, the maximum number of users is determined and limited by the maximum IOPS that can be generated from the RAID group.
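As a sketch, the "possible increase in quota" figure can be reproduced like this; the inputs follow the table (1,200 users at 512 MB on a 1 TB LUN, 1.25 overhead), and small differences from the 250 MB shown come from rounding and TB-vs-TiB assumptions.

```python
def quota_headroom_mb(lun_gib, users_per_lun, quota_mb, overhead=1.25):
    """Extra MB per mailbox available from the space left over on the LUN."""
    used_gib = users_per_lun * quota_mb * overhead / 1024  # consumed by mailboxes
    free_gib = lun_gib - used_gib                          # leftover on the LUN
    return free_gib * 1024 / users_per_lun                 # spread across the users

# e.g. 1,200 users at 512 MB (1.25 overhead) on a 1 TiB LUN:
print(round(quota_headroom_mb(1024, 1200, 512)))
```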
It should be much clearer now that with 3 copies you may use the same total number of disks as 1 TB RAID 5, but you can offer your customers larger mailboxes without adding any cost to your investment, or host more users without sacrificing performance. You also gain the ability to remove the need for backup and get much simpler storage management (with the option of eliminating the SAN entirely).
[Blog Post] Timeout error when Trying to connect to a disk drive over iSCSI using #Netapp #snapdrive
Consider the following scenario:
You have a NetApp SAN; you created a FlexVol and a SAN LUN to be accessed from a system over iSCSI. When you connect to the LUN with SnapDrive through the iSCSI initiator, you get the following error:
Error code: A timeout of 120 secs elapsed while waiting for volume arrival notification from the operating system.
If you create the LUN from System Manager or the CLI, the drive MUST be formatted first before adding it with SnapDrive; otherwise, you must create the LUN using the SnapDrive wizard.
To solve this issue, create the LUN using the SnapDrive wizard, or follow these steps:
1- Assign the LUN(s) to the server that will access them by mapping the LUN(s) to the server's initiator group.
2- Format the drive, but don't map a drive letter to it.
3- Disconnect the LUN(s) from the host by removing the LUN mapping to the initiator group.
4- Connect to the drive using the SnapDrive software.
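As a rough sketch, steps 1-3 might look like the following on a Data ONTAP 7-Mode controller and the Windows host. The volume, LUN, and igroup names are hypothetical, and the disk number in diskpart will differ on your system.

```shell
# On the NetApp controller: map the LUN to the host's initiator group (step 1).
lun map /vol/myvol/mylun winhost_igroup

# On the Windows host: format the new disk WITHOUT assigning a drive letter
# (step 2), e.g. from diskpart:
#   DISKPART> list disk
#   DISKPART> select disk 2
#   DISKPART> create partition primary
#   DISKPART> format fs=ntfs quick

# Back on the controller: remove the mapping so SnapDrive can connect it (step 3).
lun unmap /vol/myvol/mylun winhost_igroup
```

Then connect the drive from the SnapDrive interface as usual (step 4).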
It is worth mentioning that the best practice is to create LUNs from the SnapDrive interface, unless you really need to pre-create them.
Speaking on Wednesday 28/9 at Microsoft about VDI building blocks with #Microsoft, #Citrix & #Netapp #mvpbuzz #xendesktop
Next Wednesday I will speak at the Microsoft Hero event about VDI building blocks with Microsoft, Citrix, and NetApp solutions.
The session will be level 300-350, going from design to implementation.
No marketing stuff; it is all hot technical material, so drink a lot of coffee :). The session is for Arabic speakers.
Book your calendar; you can confirm your registration and share it on LinkedIn or Facebook. The session content will be:
- Introduction to desktop virtualization and what it means.
- Benefits of VDI for the enterprise
- Building blocks for VDI:
  - Understand hypervisor requirements: Hyper-V, SCVMM
  - Understand connection broker requirements: XenDesktop
  - Understand application delivery requirements (Terminal Services and XenApp)
  - Understand VDI types and OS delivery types.
- Get your VDI on the right track:
  - Sizing your hypervisor correctly, including memory, processor, and storage.
  - Designing operating system delivery
  - Sizing your application delivery infrastructure
  - Sizing remote access and the network
  - Storage optimization matrix for VDI deployments (deduplication, thin provisioning, and snapshots)
  - Designing backup and restore
- Lab for an end-to-end solution implementation
See you there,
I got involved over the past few months in designing and implementing a large VDI solution. That may seem odd for an Exchange MVP, but I love virtualization technology and couldn't resist the temptation.
One of the ugliest parts of a VDI project is the storage design; in fact, every VDI architect knows that storage sizing is one of the most painful aspects and one of the most critical factors in a VDI deployment's success.
I spent hours trying to figure out the best model for the IOPS and storage calculations for the best and most consistent user experience, and after going through hundreds of documents from Citrix, NetApp, and Microsoft, I found my method.
To start, here is a nice link that will help you understand how things work and spare me re-explaining the process.
To better understand Citrix's side of the story (watch out: the CTX article holds a lot of NetApp data, although Citrix doesn't use or recommend NetApp): http://support.citrix.com/article/CTX130632
Finally, for a closer look at storage performance from NetApp: I have to say this is one of the best-written documents on storage, storage performance, and storage reporting. The document can be read here.
What I really loved is that the report says storage performance goes through several stages over the life cycle of a VDI project. The biggest IOPS hits are received during the first login attempts, as shown in table 11 in the TR:
What made me really excited is that I developed my own IOPS predictor and used it in my projects; happily, my calculations came within 1,000 IOPS of the actual testing results. WOOHOO!
I will put the calculator through further testing, and it should be published later this month.
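While the actual predictor isn't published yet, the general shape of such a phase-based estimate might look like this toy sketch. All per-user figures and multipliers here are illustrative assumptions, not values from the TR or from my calculator.

```python
def vdi_iops_estimate(desktops, steady_iops_per_user=10,
                      login_multiplier=3.0, boot_multiplier=5.0,
                      concurrency=1.0):
    """Return (steady-state, login-storm, boot-storm) IOPS for a desktop pool.

    The per-user figure and the storm multipliers are assumptions; the whole
    point of a real predictor is measuring these for your own image and users.
    """
    steady = desktops * steady_iops_per_user * concurrency
    return steady, steady * login_multiplier, steady * boot_multiplier

steady, login, boot = vdi_iops_estimate(500)
print(steady, login, boot)
```

Sizing the storage for the steady state alone is what gets deployments in trouble; the login and boot phases are where the big hits land, as the TR's measurements show.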
Have a nice VDI sizing.