General information


The Sippy Standby server gives customers three primary use cases for keeping their critical business operating. The Sippy Standby server is a secondary instance of the Sippy Softswitch that can be promoted to the primary production server in the event of failure. This adds a functional layer of network redundancy to enhance the operational stability of all voice network architectures. The Standby server can also be promoted to the primary role during scheduled maintenance, allowing for continuity of service. By setting up a Standby server you can achieve active failover (hot Standby), passive failover (warm Standby), or data replication for reporting purposes.


Streaming data replication ensures the mirroring of data to a redundant Sippy database, delivering data securely to a live standby server. Sippy employs Slony-based PostgreSQL replication for this purpose, providing flexible and stable mirroring of configuration and call accounting data for your business continuity in the event of failure.


The Sippy Standby server can be configured with a degree of flexibility to deliver redundancy benefits for different business implementations. When an issue has been resolved and the primary server becomes available again, any changes made to the standby server's copy of the database must be restored back to the primary server. A reversion switchover procedure is performed by Sippy Support, returning the original network architecture to normal.


Active Failover/Hot Standby model (limited to a single subnet)


The Sippy Standby server can be configured to provide Active Failover (Hot Standby) for service continuity. The main difference between active and passive failover is that with active failover the relocation of services (service IP, database, and SIP/Web services) onto the Standby server is a fully automatic process, triggered when the Primary server is down or is deemed temporarily unavailable.


When a failover from the Primary to the Standby server occurs, the system drops the replication to make the Standby server independent and ready for normal call processing. Because of this, the reverse (Standby to Primary) replication must be reconfigured from scratch once the Primary server is back up and ready to resume traffic processing. When the Primary becomes available after a crash, all changes from the Standby server must be copied back to the Primary server; otherwise, those changes would be lost.
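
For illustration only, the sketch below shows the general shape of such an automatic failover check: the Standby periodically probes the Primary and, after several consecutive missed probes, takes over the service IP and promotes its local database. The addresses, intervals, and takeover steps are hypothetical placeholders, not Sippy's actual failover script or CARP.

```python
#!/usr/bin/env python3
"""Conceptual sketch of an automatic (hot standby) failover check.

Hypothetical example only -- not Sippy's actual failover mechanism.
Addresses, intervals, and takeover commands are placeholders.
"""
import subprocess
import time

PRIMARY_IP = "203.0.113.11"   # hypothetical Primary address
CHECK_INTERVAL = 5            # seconds between probes
FAILURE_THRESHOLD = 3         # consecutive failures before takeover


def primary_is_alive() -> bool:
    """Return True if the Primary answers a single ICMP echo request."""
    try:
        result = subprocess.run(
            ["ping", "-c", "1", PRIMARY_IP],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=2,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False


def take_over() -> None:
    """Placeholder for the takeover steps described above: assign the
    service IP locally, stop replication, and promote the Standby
    database so it can process calls independently."""
    print("Primary unreachable: activating service IP and promoting Standby DB")
    # subprocess.run([...])  -- actual commands depend on the deployment


def main() -> None:
    failures = 0
    while True:
        failures = 0 if primary_is_alive() else failures + 1
        if failures >= FAILURE_THRESHOLD:
            take_over()
            break
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    main()
```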


Additionally, there are a few conditions that must be met to configure an active failover:


 1. A minimum of three (3) IP addresses must be provided (one for the Primary, one for the Standby, and one service IP).

 2. All IPs must belong to the same network subnet (because in the event of a failover the service IP will be re-assigned to the Standby server).

 3. The Standby server must have a hardware specification sufficient to perform the primary role under your traffic load.



As an additional option, active failover can be configured to use a CARP (Common Address Redundancy Protocol) interface. The Common Address Redundancy Protocol, or CARP, is an inherent part of the FreeBSD OS network stack; it provides the “heartbeat” private network connection between the two Sippy instances, performing error detection and presenting a virtual IP shared between them.


Advantages of CARP in comparison to Sippy's script:

  • Support for multiple service IPs (floating IPs). In other words, more than one IP can float between the Primary and the Standby. However, all of these IPs must still satisfy conditions #1 and #2 from the list above.
  • Configurable failover sensitivity (timeout), to deliver minimal downtime in the event of a failover while avoiding unnecessary switchovers when the Primary drops out of the Standby's sight only briefly.


Illustration of active failover within one subnet:


[Diagram: Sippy Standby - normal operation]


  1. Under normal operating conditions, the designated primary Sippy Softswitch server handles all normal traffic and operational procedures from the primary floating/service IP, either managed by CARP or without it (in which case Sippy's script pings the Primary periodically to check its availability).

  2. The Sippy DB is dedicated to all primary DB operations.

  3. CARP or Sippy's script (if CARP isn't configured) maintains the operational “floating”/service IP. The failover interval (CARP only) can be configured so that the failover procedure tolerates network and traffic abnormalities that may affect your voice network.

  4. DB replication is performed from the Primary Sippy DB (Sippy DB) to Sippy DB2 (the Standby's DB); a replication lag check is sketched after this list.
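
Replication health on the Standby can be verified at any time. As a rough, hypothetical sketch (the psycopg2 driver, connection parameters, and query are illustrative only, and the exact query depends on the replication mechanism in use), a lag check might look like this:

```python
#!/usr/bin/env python3
"""Sketch: check replication lag on the Standby database (Sippy DB2).

Hypothetical example -- connection parameters are placeholders and the
exact query depends on the replication mechanism in use.
"""
import psycopg2

# Hypothetical connection settings for the Standby database.
conn = psycopg2.connect(host="192.168.0.2", dbname="sippy", user="monitor")

with conn, conn.cursor() as cur:
    # For PostgreSQL's built-in streaming replication, the standby reports
    # how far behind it is via the last replayed transaction timestamp.
    # A Slony-based setup would instead expose lag through its own status views.
    cur.execute("SELECT now() - pg_last_xact_replay_timestamp() AS lag")
    (lag,) = cur.fetchone()
    print(f"Standby replication lag: {lag}")

conn.close()
```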


[Diagram: Sippy Standby - failed Primary]


  1. In the event of a failure on the Primary server, CARP activates the service IP on the Standby server and Sippy breaks the replication. The database on the Standby server is activated to support all manipulations and activities. After the failover, the Standby server takes the role of Primary and passes all traffic as normal.

  2. When the original Primary server has been brought back online, the Sippy support team configures a reverse replication (after the client's approval) to copy the latest changes from Standby to Primary.

    [Diagram: Standby DB to Master DB switchover]

  3. When all data has been fully synced, a manual switchover is performed by Sippy Support to change the active DB from the Standby (Sippy DB2) back to the original Primary (Sippy DB). A short, scheduled downtime is required for this procedure. Normal operations are resumed.

Additional recommendations:


It is recommended to split DB replication and services (SIP+Web) between different network interfaces to gain more flexible redundancy and avoid unnecessary replication consistency issues. In other words, the same three (3) IP addresses used to configure active failover (via either Sippy's script or CARP) are assigned to one (public) network interface, while a separate network interface carries DB replication between the servers. Replication can even be set up over a private network (e.g. 192.168.0.0/30).


The main benefit of this setup is avoiding unnecessary DB failovers if something happens to the service (public) IP/network, and vice versa.

Possible cases (illustrated by the sketch after this list) are:

  • The public interface/network goes down on the Primary server. In this case, the services + service IP are moved to the Standby (replication over the private interface keeps running on the Primary server).
  • The private interface/network goes down on the Primary server. In this case, the DB is switched to the Standby (the services + service IP stay on the Primary server).
  • The Primary server goes down. In this case, both the services + service IP and the DB are switched to the Standby server (the usual active failover), because both the private and public interfaces are unavailable on the Primary server.
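
As a purely conceptual illustration of these three cases (the addresses are placeholders and the actions are stubs, not Sippy's actual failover logic), the decision could be sketched as follows:

```python
#!/usr/bin/env python3
"""Conceptual sketch of the split-interface failover decisions above.

Hypothetical example only: addresses are placeholders and actions are
stubs. The public (service) and private (replication) paths to the
Primary are probed independently, matching the three cases listed.
"""
import subprocess

PRIMARY_PUBLIC_IP = "203.0.113.11"   # service/public interface (hypothetical)
PRIMARY_PRIVATE_IP = "192.168.0.1"   # replication/private interface (hypothetical)


def reachable(ip: str) -> bool:
    """Single ICMP probe; returns True if the address answers."""
    try:
        return subprocess.run(
            ["ping", "-c", "1", ip],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=2,
        ).returncode == 0
    except subprocess.TimeoutExpired:
        return False


def decide_failover() -> str:
    public_ok = reachable(PRIMARY_PUBLIC_IP)
    private_ok = reachable(PRIMARY_PRIVATE_IP)

    if public_ok and private_ok:
        return "no action: Primary healthy on both interfaces"
    if not public_ok and private_ok:
        return "move services + service IP to Standby (replication stays on Primary)"
    if public_ok and not private_ok:
        return "switch DB to Standby (services + service IP stay on Primary)"
    return "full active failover: services, service IP and DB move to Standby"


if __name__ == "__main__":
    print(decide_failover())
```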


Passive Failover/Warm Standby/Manual failover (geographical distribution model)


The Sippy Standby server can be configured for geographical redundancy to provide a secondary peripheral instance of Sippy in the event of a catastrophic failure within your primary/production network. A warm-standby model requires manual intervention in the event of failure to switch normal operations over to the Standby server. Its advantage over the hot standby configuration is that the Primary and Standby servers can be maintained in geographically dispersed networks and still replicate data between each other. Its notable disadvantage is that the automatic failover and the shared service IP used by the Primary and Standby in a hot standby model cannot be employed.


[Diagram: Sippy Standby - geographic distribution]


  1. Under normal operating conditions, the designated primary Sippy Softswitch server handles all normal traffic and operational procedures.

  2. The Sippy DB is dedicated to all primary DB operations.

  3. Data replication is streamed to the geographically distributed Standby database (Sippy DB2).

  4. In the event of failure, the Standby sends an email to our support team, which is automatically turned into a support ticket for further investigation. In some cases it is just a momentary network issue; in others the issue can be more severe. Our support team makes an assessment accordingly.

  5. Upon investigation, if the support team deems it necessary, a manual switch-over procedure is completed by Sippy Support to activate the Standby server (which takes the role of Primary). The IP of the environment must also be changed, because the servers are in different subnets and the Primary's service IP cannot be used. This process can take 10-30 minutes.

  6. Normal operations are resumed on the Standby node after the IP change and the activation of the Standby database.

  7. When the original Primary server has been brought back online, the Sippy support team configures a reverse replication (after the client's approval) to copy the latest changes from Standby to Primary. When all data has been fully synced, a manual switchover is performed by Sippy Support to change the active DB from the Standby (Sippy DB2) back to the original Primary (Sippy DB).


Load Allocation model


The Sippy Standby server can be configured to offload load-intensive processes, such as reporting, from your mission-critical Primary server to the Standby server for decreased congestion and additional stability. During peak times, a server's performance can be impeded by CPU-intensive reporting duties. As a benefit of streaming data replication, your otherwise idle Standby server can perform your reporting duties without imposing additional load on your mission-critical Primary server.


[Diagram: Sippy Standby - reporting]


  1. Under normal operating conditions, the Primary Sippy Softswitch server handles all normal traffic and operational procedures.

  2. The Sippy DB is dedicated to all primary DB operations.

  3. Data replication is streamed to the Standby database (Sippy DB2).

  4. Read-only queries are directed to the Standby DB (Sippy DB2) for reporting, whereas write queries are all directed to the primary database, maintaining mission-critical process optimization on the Primary system; a routing sketch follows this list.
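
As a rough illustration of this read/write split (the psycopg2 driver, hostnames, credentials, and table names below are hypothetical placeholders, not Sippy's schema), reporting code might route its queries like this:

```python
#!/usr/bin/env python3
"""Sketch: route reporting reads to the Standby DB, writes to the Primary DB.

Hypothetical example -- hosts, credentials, and table names are placeholders.
"""
import psycopg2

# Hypothetical connection settings for the two databases.
primary = psycopg2.connect(host="sippy-db1.example.com", dbname="sippy", user="app")
standby = psycopg2.connect(host="sippy-db2.example.com", dbname="sippy", user="report")

# A heavy, read-only reporting query goes to the Standby (Sippy DB2),
# keeping the CPU load off the mission-critical Primary.
with standby, standby.cursor() as cur:
    cur.execute(
        "SELECT count(*) FROM cdrs WHERE connect_time >= now() - interval '1 day'"
    )
    print("Calls in the last 24h:", cur.fetchone()[0])

# Writes (call accounting, configuration changes) always go to the Primary.
with primary, primary.cursor() as cur:
    cur.execute("INSERT INTO audit_log (note) VALUES (%s)", ("daily report generated",))

primary.close()
standby.close()
```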


Who is eligible to deploy a Standby License, and what are the additional conditions?


The Sippy Standby Server is available to all exclusive Sippy Softswitch license operators. These include Sippy Dedicated Hosted customers, Flex Subscription Rental licensees, and Sippy Perpetual License owners. A Sippy Support contract is required, as both instances must run the same Sippy version to maintain DB replication and, consequently, to support your traffic in the event of failure.


Sippy Dedicated Hosted customers employ a premium hosted service, which includes managed hardware and infrastructure, the Sippy Softswitch license, Sippy 24x7 Support, priority software upgrades, and bug fixes. The Sippy Dedicated Standby server is recommended predominantly for high-value businesses due to its high economic outlay. It is a solution that requires hardware and infrastructure rental for a server with the same performance as your Primary server so that it can gracefully resume traffic in the event of failure. The monthly rental of this redundant node therefore almost doubles monthly operating expenses.


Sippy Flex Subscription Rental customers operate from their own hardware and infrastructure on a license that is inclusive of Sippy 24x7 Support. The Sippy Flex Standby Subscription license is invoiced on a month-to-month basis in addition to the primary Sippy license rental. The Standby license also includes Sippy Support, so no additional charges are required to maintain synced versions for replication purposes.


Sippy Perpetual License holders are the official title holders of a Sippy Softswitch license for long-term, successful voice business management. The Perpetual Sippy Standby license is a secondary active, although non-production, Sippy license that is installed in your own voice network. Both instances must run the same Sippy version to maintain DB replication. A current Sippy Support contract is required on both the Primary and Standby licenses, keeping both instances in sync and your traffic protected.


Sippy Standby server hardware requirements


The Sippy Standby server's hardware configuration must be sufficient for it to perform as the Primary server. Once traffic has failed over, the Standby server can then assume the role of Primary without any performance limitation affecting normal processes. Because Sippy is a converged, single-site platform, integrating the Sippy Standby server into your network only requires adding one server to your existing single-server network. Minimum specification guidelines can be found in the Sippy Hardware Requirements document.