A collection of really silly OCS questions
- In web conferencing, I know that the conferencing server has built-in load balancing functionality, so if we have a pool of servers and a new conference starts, the least loaded server is selected to host it. Now suppose user 1 schedules a conference in a pool with 3 conferencing servers, and server 1 is elected to host the conference. Later, users 8 and 9 want to join the conference, but server 1 has become loaded. What exactly happens: will users 8 and 9 be hosted on server 2, or will they be unable to join the conference?
- In VoIP, when a user forwards a call to his mobile, how are the caller and receiver charged? User 1 calls user 2, who forwards the call to his mobile. Will user 1 be charged for a mobile call, or does user 2's phone system get charged for the forwarded leg?
- In the OCS planning examples I found a note indicating that only one Edge Server in the OCS topology should serve external IM access, even for multiple pools. That doesn't seem logical to me: why can't I have an Edge Server per site, with users configured with the FQDN of the Edge Server in their site and logging in through it?
- If the note in point 3 is right, this means I will have to plan the bandwidth at the central location to serve all the clients logging in, and I will need a Director as well. Another point: does the Access Proxy redirect the traffic or proxy it?
- What is the HA plan for the Media Gateway? I searched for that and didn't find it mentioned anywhere.
- How do I schedule a telephony meeting?
What is a passive SUP for?
It has been a long time since I blogged; sorry, I was busy with a relocation to a new place, etc.
I was recently involved in a nice discussion about the benefit of having a passive SUP in the SCCM hierarchy. The answer: almost NOTHING.
You do need a non-active SUP in two cases: to install the active Internet-based SUP (selected and configured in the Software Update Point Component Properties), and when creating an NLB for the active SUP. In the NLB case, a non-active SUP is installed on each server in the NLB, and the active SUP is then configured as the NLB itself.
Other than this, you cannot use it, for example, for reporting. I thought that clients in branch sites could report to a local WSUS server and that this information would somehow be replicated to the parent WSUS; well, that is wrong. So if you will not deploy WSUS, you don't need passive SUPs.
Regards,
Mahmoud.
SCCM across forests – one final note
Hello,
One final note regarding this topic: if you install an SCCM site in a different forest, Microsoft doesn't support installing a secondary site in the new forest; you will have to install a child site in the new forest instead. So all the planning has to be done based on child site planning.
PXE boot creates a new record for computers after reinstalling the OS using PXE
Hi,
I spent this week trying to understand this issue with the help of a lot of SCCM folks; thank God I finally got it, so I would like to share it with you:
You take a laptop, PXE boot it, and drop an image onto it. The ConfigMgr client installs, and the laptop joins the site and applies packages. Cool! Now you clear the PXE advertisement and roll out an OS onto that system again. Nothing at all has changed on the laptop, so the BIOS GUIDs etc. are identical; however, a new ConfigMgr resource record is created and a new ConfigMgr GUID is assigned to the machine. The machine keeps its domain SID, MAC address, SMBIOS GUID, etc., so why is SCCM creating a new record?
This is the default behavior. If the site is in mixed mode and the option to manually resolve conflicts is not enabled, the rebuilt machine gets a new SCCM identity (GUID). If the site setting to manually resolve conflicts is enabled, those records appear in the conflicting records node instead. In native mode this should not occur.
The basic problem is that when a computer is re-imaged from bare-metal in mixed mode security, ConfigMgr has no way to know if it’s really the same computer and you want it to have the same identity, or whether it is some rogue computer trying to usurp the identity of an already managed computer. PXE is a very insecure protocol, and things like the MAC address and SMBIOS are easily spoofed.
The “Manually resolving conflicting records” option is a site-wide setting, but if you set it, resolving each conflict requires IT admin intervention. The current behavior is not considered a bug, though arguably there should be an “automatically merge” option that doesn't require IT admin intervention.
In native mode you don't have this issue because of the certificates. In native mode, SCCM essentially punts the problem to whoever (or whatever) is issuing the certificates. If the certificate issuer thinks it's the same computer, then the newly issued certificate should have the same subject name, and #2 under Native Mode Security in the slide below applies. If the certificate issuer doesn't think it's the same computer, or doesn't know, then the newly issued certificate should have a different subject name, and #3 under Native Mode Security applies.
But how does the certificate issuer know whether the computer is the old one or a new one, then?
It could be any of a wide spectrum of different ways, depending on how certain the certificate issuer wants to be that it really is the same computer and not a rogue.
At the least secure, but most automatic, end of the spectrum, the certificate could be issued automatically by AD for any computer joining the domain, with the subject name set to the FQDN of the computer. In this case, if the computer runs an OS deployment task sequence, it will be able to join the domain (since the task sequence has the domain-join credentials) and it will automatically get a certificate from AD without the IT admin doing anything. Obviously, this isn't very secure.
At the other end of the spectrum, an IT admin might have to physically visit the computer, verify its identity, and only then install the certificate from removable media. This is the most secure approach, but the hardest because of the manual steps required.
Any given customer will have to decide what approach meets their needs on the security/ease-of-use tradeoff.
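The behavior described above boils down to a simple decision tree. The sketch below is my own illustration (the function name and return strings are hypothetical, not ConfigMgr APIs), assuming the three inputs discussed: the site mode, the manual conflict resolution setting, and whether the certificate issuer issued a certificate with the same subject name.

```python
def identity_after_reimage(site_mode, manual_resolve, cert_subject_matches=None):
    """Illustrative decision tree for what happens to a re-imaged machine.
    Hypothetical helper for this blog post, not a ConfigMgr API."""
    if site_mode == "native":
        # The certificate issuer decides: same subject name means same identity.
        return "same identity" if cert_subject_matches else "new identity"
    # Mixed mode: ConfigMgr cannot verify the machine's identity itself.
    if manual_resolve:
        # The record lands in the conflicting records node for the admin.
        return "pending in conflicting records node"
    # Default mixed-mode behavior: a new GUID/resource record is created.
    return "new identity"
```

This is only a summary of the prose above; the actual logic lives inside the site server.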
In the case of PXE boot, the MAC address and/or the SMBIOS UUID of the computer are matched against entries in the ConfigMgr database. Assuming a matching entry is found, the computer name from the database entry is sent back to the computer running the task sequence and assigned to it. This is how SCCM recognizes the machine in the PXE case, where no certificate is installed because the machine has been wiped.
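As a rough sketch of that lookup (hypothetical names and data; the real matching happens on the site server against its database):

```python
def lookup_computer_name(db_records, mac=None, smbios_uuid=None):
    """Match a PXE-booting machine against known records by MAC address
    and/or SMBIOS UUID; return the stored computer name, or None if unknown."""
    for rec in db_records:
        if (mac and rec.get("mac") == mac) or \
           (smbios_uuid and rec.get("smbios_uuid") == smbios_uuid):
            # The stored name is sent back and assigned to the machine.
            return rec["name"]
    return None  # unknown computer: no existing record to reuse

# Hypothetical database contents for illustration only.
records = [{"mac": "00:11:22:33:44:55", "smbios_uuid": "abc-123", "name": "LAPTOP01"}]
```

Since the MAC and SMBIOS values are easily spoofed, this lookup is exactly the weak link the mixed-mode behavior guards against.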
SCOM is not monitoring the DHCP server cluster resource
Here is some interesting info:
My customer has a DHCP server running on a cluster node. All of the other cluster groups are monitored, but not the DHCP group; as a result, the virtual server is not included in the Windows 2003 DHCP group and DHCP on it is not monitored. I found the issue: the discovery checks the Start registry value and expects 2 (Automatic), but on the cluster node the value is set to 3 (Manual), since the cluster service controls the DHCP service:
<Configuration>
<ComputerName>$Target/Property[Type="WindowsLibrary!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
<RegistryAttributeDefinitions>
<RegistryAttributeDefinition>
<AttributeName>Microsoft_Windows_DHCP_Server</AttributeName>
<Path>SYSTEM\CurrentControlSet\Services\DHCPServer\Start</Path>
<PathType>1</PathType>
<AttributeType>2</AttributeType>
As I searched, and as most of my fellow consultants found too, it looks like the DHCP MP is not cluster-aware; this will be fixed in the next version. So be careful, as this is not mentioned anywhere in the documentation of either the DHCP MP or the cluster MP.
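To make the failure mode concrete: the Windows service Start value is 2 (Automatic) on a standalone DHCP server, but a clustered DHCP service is set to 3 (Manual) because the cluster service starts it. A minimal sketch of that check follows, with the registry simulated in plain Python (on a real box the value lives under HKLM\SYSTEM\CurrentControlSet\Services\DHCPServer\Start):

```python
# Windows service start types relevant here.
START_AUTOMATIC = 2   # standalone DHCP server
START_MANUAL = 3      # clustered DHCP: the cluster service controls startup

def dhcp_discovered(start_value):
    """Mimic a discovery rule that only accepts Start == 2 (Automatic).
    This is a simulation of the behavior described above, not the MP itself."""
    return start_value == START_AUTOMATIC
```

A standalone node passes the check and gets discovered; a cluster node, with Start set to 3, is silently skipped, which is exactly the symptom above.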
Bye
SCCM distributed application is not monitored – SCOM 2007
Hi,
Here are some tips for people who cannot monitor SCCM using the latest SCOM management pack:
– Make sure the agent proxy is enabled on the SCCM server.
– Make sure that you have created the SMS_INSTALL_DIR_PATH system variable.
– Make sure that you installed the x86 SCOM agent; you cannot monitor SCCM using the x64 agent. This is a known issue, so you might need to install the agent manually.
I am pretty sure that the SCCM distributed application will be monitored now :)
What information is lost when using restore-mailbox
Here is some nice info you have to consider:
When restoring mailboxes from an RSG, most information is retained, including special folders, the dumpster, and calendar items. Rules, views, forms, and ACLs, however, are not recoverable when you use an RSG, so make sure to document them beforehand and inform the end user, to avoid any trouble.
Deploying an OS using SCCM to unknown computers
Because people might want to know how, Michael Niehaus posted a great article about it here: http://blogs.technet.com/mniehaus/archive/2008/01/19/microsoft-deployment-configmgr-boot-media-unknown-computers-web-services.aspx. Visit it; you will like it.
Backing up Exchange 2007 using Veritas 11d
Well, most of us have had issues backing up Exchange 2007 using Veritas 11d. I fought for over two months to get it working, and finally I did. I found a lot of folks out there fighting with it with no luck, but after opening many cases with Symantec and Microsoft we managed to get it to work. Most of us got the ugly error:
Completed status: Failed
Final error: 0xe0008488 – Access is denied.
Final error category: Security Errors
For additional information regarding this error refer to link V-79-57344-33928
Errors
Click an error below to locate it in the job log
Backup- Exchange2007.lab.local\Microsoft Information Store\First Storage Group V-79-57344-33928 – Access is denied.
Access denied to database Log files.
Backup- Exchange2007.lab.local\Microsoft Information Store\Second Storage Group V-79-57344-33928 – Access is denied.
Access denied to database Log files.
Exceptions
Click an exception below to locate it in the job log
Backup- Exchange2007.lab.local\Microsoft Information Store\First Storage Group WARNING: “Exchange2007.lab.local\Microsoft Information Store\First Storage Group\Log files” is a corrupt file.
This file cannot verify.
Backup- Exchange2007.lab.local\Microsoft Information Store\Second Storage Group WARNING: “Exchange2007.lab.local\Microsoft Information Store\Second Storage Group\Log files” is a corrupt file.
This file cannot verify.
Verify- Exchange2007.lab.local\Microsoft Information Store\First Storage Group WARNING: “Log files” is a corrupt file.
This file cannot verify.
Verify- Exchange2007.lab.local\Microsoft Information Store\Second Storage Group WARNING: “Log files” is a corrupt file.
This file cannot verify.
This guide contains the detailed steps required to configure Veritas 11d to make things work.
Here is the guide.
Edge server is not applying recipient filtering
I implemented a big Exchange organization (a couple of clustered mailbox servers, plus Hub, CAS, and Edge servers). However, when I tested email filtering on the Edge server, I found that the Hub server was delivering the NDR, not the Edge, as shown below:
Microsoft Mail Internet Headers Version 2.0
Received: from mail.ourdomain.corp ([z.z.z.z]) by mail2.ourdomain.corp with Microsoft SMTPSVC(6.0.3790.3959);
Wed, 4 Jul 2007 22:38:23 +0300
Received: from mail-relay.Domain.com ([x.x.x.x]) by mainserver.domain.com with Microsoft SMTPSVC(6.0.3790.3959);
Wed, 4 Jul 2007 23:38:00 +0400
Received: from Edge.Domain.local (Edge.Domain.com [y.y.y.y])
by mail-relay.Domain.com (Postfix) with ESMTP id 519514F856
for <user@mydomain.com>; Wed, 4 Jul 2007 15:34:02 -0400 (EDT)
Received: from Hub-cas.Domain.local (10.10.20.12) by edge.Domain.com
(10.10.10.10) with Microsoft SMTP Server (TLS) id 8.0.700.0; Wed, 4 Jul 2007
22:33:27 +0300
MIME-Version: 1.0
From:
To:
Date: Wed, 4 Jul 2007 22:34:00 +0300
Content-Type: multipart/report; report-type=delivery-status;
boundary="b5fe7bbb-1255-4b2c-a096-a956efb3c516"
Content-Language: en-AU
Message-ID:
In-Reply-To: <5486BE6683AFD54F935039CCF748F6645EB83E@EGMAIL02.SPSEGY.synergyps.corp>
References: <5486BE6683AFD54F935039CCF748F6645EB83E@EGMAIL02.SPSEGY.synergyps.corp>
Thread-Topic: sdkfj
Thread-Index: Ace+chJwkNcA93RTS0eP7gFPEivGtwAADLvH
Subject: Undeliverable: sdkfj
Return-Path: <>
X-OriginalArrivalTime: 04 Jul 2007 19:38:00.0514 (UTC) FILETIME=[D4908A20:01C7BE72]
--b5fe7bbb-1255-4b2c-a096-a956efb3c516
Content-Type: multipart/alternative; differences=Content-Type;
boundary="2b3dfb83-92ed-49eb-83af-e3cc6a5daa71"
--2b3dfb83-92ed-49eb-83af-e3cc6a5daa71
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
--2b3dfb83-92ed-49eb-83af-e3cc6a5daa71
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
--2b3dfb83-92ed-49eb-83af-e3cc6a5daa71--
--b5fe7bbb-1255-4b2c-a096-a956efb3c516
Content-Type: message/delivery-status
--b5fe7bbb-1255-4b2c-a096-a956efb3c516
Content-Type: message/rfc822
Although the agents were enabled in the GUI, they showed as disabled when I listed them using the Get-TransportAgent cmdlet; enabling them with Enable-TransportAgent solved the issue :)