The SRP service is designed to run on multiple independent service nodes. In the overall network architecture, an STP or switch distributes traffic round-robin across the SRP nodes, according to the configuration of the network switches.

As each SRP node is independent, the failure of one node does not directly cause the failure of another. To achieve this independence, there are no shared connections between the SRP nodes, and a service node does not rely on a connection to a centralised (or remote) database. Each node must therefore hold the complete set of active audio files and SRP announcement configuration.

To maintain this independence, the SRP GUI synchronises the working copy of the data to all service nodes on administrator request. Depending on the deployment model, this synchronisation is achieved using either:

  1. A file-system sync of the disk-based audio file working set maintained by N2SRP, using rsync over ssh.
  2. Database replication of the PostgreSQL n2in database to replica copies of the database running on each service node.

File synchronisation requires secure, passwordless access from the primary SRP to each active SRP service node. With file-based synchronisation, only the active files are copied to each service node, minimising disk storage requirements.

Database replication relies on PostgreSQL full-database replication from the primary node to a replica running on each service node. A full copy of all files (including all historically used files) is replicated to each service node, which must be accounted for in database sizing.
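As a rough sketch of what configuring such a replica involves, the fragment below shows a standard PostgreSQL streaming-replication setup. The hostnames (`srp-primary`, `srp-node1`), replication user (`repl`), and data directory are assumptions for illustration; the actual names and paths depend on the deployment.

```shell
# On the primary, enable replication in postgresql.conf:
#   wal_level = replica
#   max_wal_senders = 4
#
# and permit the replication user from each service node in pg_hba.conf:
#   host  replication  repl  srp-node1/32  scram-sha-256
#
# On each service node, seed the replica from the primary.
# -R writes the standby configuration so the node follows the primary:
pg_basebackup -h srp-primary -U repl -D /var/lib/postgresql/data -R -P
```

Because this replicates the entire n2in database, including rows for historically used audio files, each replica's storage must be sized for the full history rather than just the active set.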