Archive for January, 2014

A Slick Way to Bypass Terminal Services RemoteApp / Citrix XenApp and Gain Command-Line Access from Internet Explorer

January 20, 2014 2 comments

Today, a friend of mine who works in our security team, shared with me a slick way to bypass published applications (in our case IE) to gain command line and PowerShell access.

Although the user will still only have access based on his permissions (so a standard user won't be able to do much), in my opinion this bypasses the whole point of RemoteApp / Citrix XenApp: it gives the user execution capabilities on the server, and if he is knowledgeable enough, he may be able to compromise the server.


Lab setup: a XenApp 6.5 server on Windows Server 2008 R2 with all patches installed; only IE is published.

How to:

Since only IE is published, we assume the user has no other execution capabilities on the server. To gain access to PowerShell or the command line, do the following:

  • From IE, open Help.
  • Within Help, search for notepad.
  • Click on "How can I use my devices and resources in a Remote Desktop session?"
  • Scroll down and click the "Open Notepad" link.
  • Once Notepad opens (note that we now have access to another application), type PowerShell in the file and save it as filename.bat.
  • Once the file is saved, go back to Internet Explorer, choose File, Open, open the saved file and voilà: you have PowerShell and cmd access.
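The batch file itself needs nothing more than the name of the binary to launch; its entire content is the single line typed into Notepad above:

```
powershell
```

When IE opens the .bat file, the command processor runs it and a PowerShell prompt appears inside the published session.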

Although we could debate for years whether this is a security issue or not, I believe it is for some organizations, and it sheds some light on an area where people can bypass a specific published application and gain an execution mechanism on servers. Any thoughts?!


You receive the error message "ERROR: MsiGetActiveDatabase() failed. Trying MsiOpenDatabase()." while installing VMware SRM 5.5 and the installation fails

January 16, 2014 Leave a comment

If you are installing VMware SRM 5.5, you might get the following error message in the installer:

Can not start VMware Site Recovery Manager Service.

When digging into the installation logs, you will find the following error message:

ERROR: MsiGetActiveDatabase() failed. Trying MsiOpenDatabase().

To fix this issue, change the logon account of the VMware Site Recovery Manager service from Local System to an account that has the db_owner role on the SRM database.
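If you prefer to script the change rather than use the Services MMC, the logon account can be set from an elevated command prompt. The snippet below assumes the SRM service short name is vmware-dr and uses a placeholder account and password; verify the actual service name with sc query first:

```
sc query state= all | findstr /i "recovery"
sc config vmware-dr obj= "DOMAIN\srm-svc" password= "P@ssw0rd"
sc start vmware-dr
```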

Categories: VMware, VMware SRM

vCloud Director Automation via Orchestrator, Automating Org, vDC, vApp Provisioning via CSV, Part 4 – Adding Approval and Email Notifications

January 12, 2014 Leave a comment

This is the final post of this series. In the previous three parts we explored how to automate most of the cloud provisioning elements, including organizations, vDCs, vApps, and virtual machines, and how to customize their properties, like adding vNICs, virtual disks, and memory/CPU.
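As a reminder of the input format, each CSV row describes one provisioning request. The parsing below is a minimal sketch; the column names are purely illustrative and are not the script's actual schema:

```javascript
// Hypothetical CSV row layout: org,vdc,vapp,vm,cpuCount,memoryMB
// (illustrative columns only, not the real schema used by the workflow)
function parseRow(line) {
  var parts = line.split(",").map(function (s) { return s.trim(); });
  return {
    org: parts[0],
    vdc: parts[1],
    vapp: parts[2],
    vm: parts[3],
    cpuCount: Number(parts[4]),
    memoryMB: Number(parts[5])
  };
}

var row = parseRow("Acme, Acme-vDC, Web-vApp, web01, 2, 4096");
console.log(row.org + " / " + row.vm); // Acme / web01
```

The workflow iterates over rows like this one and feeds each field into the corresponding vCloud Director creation step.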

In this final part, we will explore how to add an approval cycle to the above provisioning.

In our scenario, we will send an email notification to the administrator that includes the CSV file used to generate the cloud as an attachment, plus a hyperlink to approve/deny the request. Let us see how we can do it.

Import the PowerShell Plugin:

We will use PowerShell to send the email notifications. I tried to use JavaScript but had no luck attaching the CSV; PowerShell comes to the rescue here. So you need to import the PowerShell plugin into your Orchestrator through the Orchestrator configuration interface:


Once you import the PowerShell plugin, make sure to restart vCO.

When the restart completes, go add a PowerShell host. You need to make sure that PowerShell remoting is enabled on the target server, then kick off the Add a PowerShell host workflow:
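Enabling remoting on the target Windows server is a one-liner from an elevated PowerShell prompt, and you can sanity-check the WinRM listener right after:

```
Enable-PSRemoting -Force
Test-WSMan -ComputerName localhost
```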




If you are adding a Kerberos host, make sure to type the username in UPN or email format; otherwise you will get this weird error: Client not found in Kerberos database (6) (Dynamic Script Module name : addPowerShellHost#16)

Once added you are ready to go.

Building the Approval workflow:

Build a workflow that includes a user interaction and a decision element, as follows:


The attributes are defined as follows:


The scriptable task sends the email notification with attachments, as we said. Let us see the JavaScript portion of it:

// URL that will open a page directly on the user interaction, so that the
// approver can enter the corresponding inputs
var urlAnswer = workflow.getAnswerUrl().url;
var orcdir = "C:\\orchestrator\\";

// build a small HTML file containing the approval link
var fileWriter = new FileWriter(orcdir + name + ".html");
fileWriter.open();
fileWriter.writeLine("<p>Click here to <a href=\"" + urlAnswer + "\">review it</a></p>");
fileWriter.close();

var output;
var session;
try {
    session = host.openSession();
    var arg = name + ".html";
    Server.log(arg);
    // invoke the external PowerShell script, passing the HTML file name
    var script = '& "' + externalScript + '" ' + arg;
    output = System.getModule("com.vmware.library.powershell").invokeScript(host, script, session.getSessionId());
} finally {
    // always release the PowerShell session
    if (session) {
        host.closeSession(session.getSessionId());
    }
}

The JavaScript builds the HTML file (which contains the link to approve the request), then starts the PowerShell script on the host, passing the HTML file name as an argument. The PowerShell script attaches both the CSV file and the HTML file to the email. Let us see the PowerShell script:

Param ($filename)
$file = "c:\orchestrator\customer.csv"
$htmlfile = "C:\orchestrator\" + $filename
$smtpServer = ""

$att = new-object Net.Mail.Attachment($file)
$att1 = new-object Net.Mail.Attachment($htmlfile)

$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient($smtpServer)

$msg.From = ""
$msg.To.Add("")   # the administrator's address goes here

$msg.Subject = "New Cloud is requested"
$msg.Body = "A new cloud service is requested, attached is the generation file, you can approve the request using the below link"

$msg.Attachments.Add($att)
$msg.Attachments.Add($att1)

$smtp.Send($msg)

Now you are ready to go. Let us see the outcome:

If you run the script successfully, you will receive the following email notification:


You can see the link to approve the request and the CSV file included in the email. If you click on the link, you can see the request and approve/deny it:


What is next:

You might think this is the end; however, it is not. This blog series is the foundation of cloud automation and just a starting point; cloud automation can go epic. Here are some improvement suggestions for people who might want to take it further:

  • Add error checking; the script currently has none, which might raise serious issues in a production environment.
  • Add More logging.
  • Add automation to network provisioning and vShield operations.
  • Automate application provisioning on top of provisioned VMs.

The above is a small list, and we could spend years adding to it, but those are the areas I will be working on in the upcoming version of this script.

Till next.

Optimizing WAN Traffic Using Riverbed Steelhead – Part 2: Optimizing Exchange and MAPI Traffic

January 6, 2014 2 comments

In part one we explored how to optimize SMB/CIFS traffic using Steelhead appliances; in part 2 we will explore how to optimize MAPI connections.

WARNING: Devin Ganger, a fellow Microsoft Exchange MVP, warned me that MAPI traffic optimization works only in very specific scenarios, so you might want to test before relying on it. I checked the documentation and tried it in my lab and it worked, but of course my lab doesn't reflect real-life scenarios.

Joining Steelhead to Active Directory Domain:

In order to optimize MAPI traffic, you must join the Steelheads to the Active Directory domain. If you don't, the Steelheads will see the MAPI traffic but won't be able to optimize it because it is encrypted; to allow the Steelhead to decrypt the traffic, you need to join it to Active Directory and configure delegation.


As you can see above, the Steelhead compressed the traffic but had no visibility into the contents, so it couldn't optimize further. Now let us see what to do.

To join the Steelhead to Active Directory, visit Configuration / Windows Domain and add the Steelhead as an RODC, or as a Workstation if you prefer:


(You need to do this on the Steelheads on both sides.)

Once done, you will see the Steelhead appear in AD as an RODC:


Now you need to configure account delegation. Create a normal AD account with a mailbox (I will call this account MAPI), then add the SPN to it as follows:

setspn.exe -A mapi/delegate MAPI
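You can confirm the SPN registered correctly before moving on:

```
setspn -L MAPI
```

The output should list mapi/delegate under the MAPI account.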

Once done, add the delegation to the Exchange MDB service in the Delegation tab:


Once added, go to Optimization / Windows Domain Auth and add the account:


Test the delegation and make sure it works fine:


Now go to Optimization / MAPI and enable Outlook Anywhere optimization and MAPI delegated optimization:


And restart the optimization service, then configure the other Steelhead with the same config.

Now let us test the configuration and see whether the Steelhead optimization works.


While checking the real-time monitoring, the first thing you will notice is that the appliance now detects the traffic as Encrypted MAPI:


I will send a 5 MB attachment from my client, which resides at the remote branch, to myself (sending and receiving). Let us see the report statistics:



You can now see the traffic flows. Since the traffic is decrypted, it has been compressed and reduced in size: the LAN traffic is 3 MB while the WAN traffic is 1.8 MB. Then, while receiving the email, the client received it as 5 MB, but look at the WAN traffic: it is only 145 KB, because the attachment wasn't sent over the WAN; it was served to the client from the Steelhead.

Now let us send the same attachment again and see how the numbers move this time.


Can you see the numbers? The WAN traffic was around 150 KB (the email headers, etc.), but the attachment didn't travel over the WAN. It is clear the attachment traveled over the LAN on both sending and receiving but never traversed the WAN, so the WAN traffic was massively reduced. Impressive, huh?
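The behavior above, where a repeat transfer of the same attachment costs almost nothing on the WAN, can be sketched in a few lines. This is a toy model of chunk caching, not Riverbed's actual SDR implementation:

```javascript
// Toy sketch of WAN data reduction: the appliances on both sides keep a cache
// of chunks they have already exchanged, so a repeated transfer costs only
// small references instead of the payload itself.
function wanCost(data, cache, chunkSize) {
  var cost = 0;
  for (var i = 0; i < data.length; i += chunkSize) {
    var chunk = data.slice(i, i + chunkSize);
    if (cache.has(chunk)) {
      cost += 8;               // a cached chunk crosses the WAN as a tiny reference
    } else {
      cache.add(chunk);
      cost += chunk.length;    // new data has to cross the WAN once in full
    }
  }
  return cost;
}

// stand-in for the 5 MB attachment: 50 distinct 100-byte chunks
var attachment = Array.from({ length: 50 }, function (_, i) {
  return String(i).padStart(100, ".");
}).join("");

var cache = new Set();
var first = wanCost(attachment, cache, 100);   // everything is new: 5000 bytes
var second = wanCost(attachment, cache, 100);  // everything cached: 50 * 8 = 400
console.log(first, second);
```

The second pass costs a small fraction of the first, which is exactly the shape of the numbers in the report above.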

Enhancing WAN Performance Using Riverbed Steelhead – Part 1: File Share Improvements

January 2, 2014 Leave a comment

The WAN is the issue. I loved what Riverbed said in their document explaining WAN bandwidth (it is a scapegoat), and yes it is: if something is not right with the apps, it becomes the WAN's issue.

I have decent networking experience and have dealt with several networking products in the past, but this is the first time I have seen a product with such easy configuration steps that can expose deep insights about what is happening in the network, and that simply works.

Installing Riverbed Steelhead virtual appliance:

You can download the virtual appliance from here; keep in mind that you will need to ask for 2 demo keys because Riverbed appliances work in pairs.

Once downloaded, import the OVF (you can import it into ESXi, Hyper-V, or VMware Workstation). The only note here is to pay attention to network card connectivity; when you import the appliance into your hypervisor, the NIC ordering is as follows:


Running the Configuration wizard:

Once the appliance starts, you will be prompted with the configuration wizard; alternatively, you can start it by going into enable mode and running:

# configure terminal

(config) # configuration jump-start

The wizard will ask you several questions (check the list in the Virtual Steelhead 8.5.1 installation and configuration guide), but here are several notes that will help you place and configure your Steelhead appliance:

– The Steelhead is preferably deployed physically in-line with the traffic, meaning that traffic destined for the WAN passes through the Steelhead appliance.

– The Steelhead appliance is not a routing device; it passes traffic transparently between clients/servers/routers/switches, like a bridge. It optimizes the traffic on the fly without altering source/destination IPs or ports (unless it is installed as a proxy, which is a separate discussion).

– Steelhead appliances have a LAN interface connected to the LAN side and a WAN interface connected to the WAN side (router). A virtual in-path interface is also created and assigned an IP; the in-path interface is used when configuring peering rules and in-path rules.

– The WAN/LAN interfaces can't be connected to the same layer-2 domain, or a loop error will be logged and the interfaces shut down.

– To be able to start the configuration wizard, the LAN/WAN interfaces must not be shut down. To achieve that, I had to issue the command no in-path lsp enable, which disables link state propagation, because when I ran the configuration wizard I kept getting a "Setting IP address on invalid interface" error.

Lab Setup:

The lab setup is very simple, but you will need to pay attention to the Steelhead cabling, or you will get errors or optimization will not work.

My lab setup is:

DC (IP => Riverbed 1 (In Path Interface => Windows Machine as a router (NIC 1 IP is (NIC 2 IP is => Riverbed 2 (IP => File Server (IP

If you configure the Windows router machine correctly and point the machines to the router as their default gateway, you will be fine and ping should work correctly.
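If the Windows machine in the middle refuses to forward packets between its two NICs, the usual fix is to enable IP routing in the registry and reboot. This is the standard Windows setting, nothing Steelhead-specific:

```
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f
```

After the reboot, the machine routes between its two subnets and the ping test across the "WAN" should succeed.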

Now let us configure a VERY simple peering rule to optimize the traffic.

Peering rules allow the Steelhead appliance to react to probe queries from other appliances; think of it as defining another peer in a remote site to optimize the traffic with directly.

You can also use in-path rules, but those rely on auto-discovery, so I believe peering rules are much simpler.

To create a peering rule, from the configuration menu select Peering Rules and configure a rule to match all traffic coming from all sources to all destinations and optimize it with the other peer's IP (you will have to do the same on the other side's appliance).


That is it! Really, you are done. Let us see the effect of the optimization.

To see it, I had to record a video, because it was unbelievable. I haven't edited the video in any way; just watch it below. Here I am copying a 50 MB file (the .NET Framework 4 ISO): the first copy is not optimized and the WAN speed is about 500 KB/s; the second copy is optimized. Let us see how fast it was.
