In large Microsoft Systems Management Server (SMS) 2003 hierarchies that have many sites, site-to-site replication slows down.
The volume of files may be larger than you expect in the following folders on a site server:
These files represent site-to-site replication data that has been queued for processing by several components of the SMS_EXECUTIVE service. A baseline for the site is required to determine whether the counts are larger than expected. Large queues of replication data are occasionally expected and are typical when specific conditions exist.
Note The baseline is defined here as some historical measure of the volume of files in the Inboxes folder structure.
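One way to build such a baseline is to record a periodic snapshot of the file counts in each subfolder of the Inboxes tree and compare later snapshots against it. The following sketch illustrates the idea; the path C:\SMS\inboxes and the snapshot_inbox_counts name are illustrative assumptions, not part of the product.

```python
import os
from datetime import datetime

def snapshot_inbox_counts(inboxes_root):
    """Return a {subfolder: file_count} snapshot of an Inboxes tree.

    Keys are paths relative to inboxes_root; "." is the root itself.
    """
    counts = {}
    for dirpath, _dirnames, filenames in os.walk(inboxes_root):
        rel = os.path.relpath(dirpath, inboxes_root)
        counts[rel] = len(filenames)
    return counts

if __name__ == "__main__":
    # Hypothetical path; substitute the actual SMS installation folder.
    snapshot = snapshot_inbox_counts(r"C:\SMS\inboxes")
    print(datetime.now().isoformat())
    for folder, count in sorted(snapshot.items()):
        print(f"{folder}\t{count}")
```

Run on a schedule and logged to a file, such snapshots give the historical measure that the note describes, so that an unusually large queue stands out against typical daily volumes.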
The following conditions can cause backlog scenarios:
Network or other infrastructure issues prevent the sender component from completing pending replication work.
Poor disk performance or slow I/O occurs because of contention for disk resources.
SMS bandwidth restrictions limit the throughput of the sender component. This behavior keeps more send requests and jobs active for longer periods.
When addresses are unavailable, the SMS Scheduler component cannot schedule send requests by using the sender for the given address. This issue delays scheduling of the send request until the address becomes available.
Distributing many or large packages in a short time creates a high load on the components that are involved in site-to-site replication.
Overly aggressive schedules exist for discovery data generation, inventory collection, collection evaluation, and so on.
In a hierarchy that has three or more tiers, middle-tier sites that have many child sites handle larger volumes of jobs and replication objects. This behavior occurs because of site-to-site replication routing. The load of a middle-tier site is increased for each child site that is attached. Therefore, reducing the number of attached sites can, in some cases, reduce this load.
Sites are removed from the hierarchy incorrectly.
In most cases, when the conditions that cause significant replication queuing have been corrected or when these conditions have subsided, the queued replication data is processed and then cleared.
When the SMS Scheduler component processes large quantities of active jobs and send requests, the throughput of the Scheduler component decreases. This behavior occurs because the processing overhead increases with the quantity of queued objects.
In some instances, if a large enough queue of data forms, it can take days or even weeks to be completely processed. The time that is required to process the queued data depends on many variables that affect replication performance in the hierarchy and in the environment. These variables include disk I/O performance, network speed, bandwidth restrictions, the size of the queued data, and the object count. After a large queue of backlogged replication data has formed, any additional load increases the time that is required to process all data.
In most cases, the appropriate action for a large backlog of replication data is to first correct any issue that may be preventing processing of replication data. Next, you may have to reduce the quantity of site-to-site replication traffic. Finally, make sure that the SMS_EXECUTIVE service can run uninterrupted to complete processing in a timely manner. Service restarts can add significant overhead. Limiting SMS_EXECUTIVE service restarts is important because the initialization work for the SMS Scheduler component is proportional to the number of jobs, send requests, and routing requests that are currently queued for processing.
Note The SMS_EXECUTIVE service hosts the SMS Replication Manager, SMS Scheduler, and SMS Sender components.