[clug] Postgres sync / duplication

Paul Wayper paul.wayper at anu.edu.au
Mon May 8 01:38:59 GMT 2006

Robert Edwards wrote:
> This is such an artificially restricted situation, that it is hard to
> imagine how it would apply in real life, but, given the scenario, you could run a job when
> the user logs out that:
>  - inspects the logs on each PostgreSQL server looking for inserts,
> updates and deletes
>  - determines in which direction the syncs need to take place
> (checkpoint each log)
>  - apply the equivalent inserts, updates and deletes to the other server
A company that I worked for in Melbourne did something like this (with 
Ingres, too).  Replication for us meant having a local site where data 
was entered, and a remote site kept as a mirror or archive of the local 
site.  You could do what you wanted on the remote site and it would 
never be replicated back.  It worked like this:

Have a 'replication' table for each table to be replicated, containing 
just the unique ID fields of the replicated table.  The replication 
table has no unique index, though, so the same ID can appear in 
multiple rows.
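A minimal sketch of that layout (SQLite via Python standing in for Ingres/Postgres here; the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- the table being replicated; unique_id is its key
    CREATE TABLE replicated_table (
        unique_id INTEGER PRIMARY KEY,
        payload   TEXT
    );
    -- companion replication table: same key column, but deliberately
    -- no unique index, so one row's ID can be queued several times
    CREATE TABLE replicated_table_replication (
        unique_id INTEGER
    );
""")
# the same ID can be recorded more than once without error
conn.execute("INSERT INTO replicated_table_replication VALUES (1)")
conn.execute("INSERT INTO replicated_table_replication VALUES (1)")
```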

Have a trigger on insert and update that calls a stored procedure to 
insert the row's unique ID into the replication table.
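A hedged sketch of that capture step, again simulated with SQLite from Python (Postgres would spell this as a PL/pgSQL function plus CREATE TRIGGER, and the names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE replicated_table (unique_id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE replicated_table_replication (unique_id INTEGER);

    -- queue the row's ID on every insert ...
    CREATE TRIGGER log_insert AFTER INSERT ON replicated_table
    BEGIN
        INSERT INTO replicated_table_replication VALUES (NEW.unique_id);
    END;
    -- ... and again on every update
    CREATE TRIGGER log_update AFTER UPDATE ON replicated_table
    BEGIN
        INSERT INTO replicated_table_replication VALUES (NEW.unique_id);
    END;
""")
conn.execute("INSERT INTO replicated_table VALUES (1, 'first')")
conn.execute("UPDATE replicated_table SET payload = 'second' WHERE unique_id = 1")
queued = [r[0] for r in conn.execute("SELECT unique_id FROM replicated_table_replication")]
print(queued)  # → [1, 1]: one entry per change, both for the same row
```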

The replication process (run each night, for example) does something like:
CREATE TEMPORARY TABLE replicated_table_temp AS
  SELECT * FROM replicated_table WHERE unique_id IN
    (SELECT DISTINCT unique_id FROM replicated_table_replication);
COPY replicated_table_temp TO 'replicated_table.repl.txt';
DROP TABLE replicated_table_temp;

Then you copy the created text file over to the remote server, load it 
into the database as a temporary table, delete any rows from the 
replicated table that exist in the replication copy, insert the 
replicated copy in, and drop the temporary table again.
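The remote half of that round trip can be sketched under the same assumptions (SQLite standing in for the real servers; a 'staging' temporary table plays the part of the loaded text file):

```python
import sqlite3

remote = sqlite3.connect(":memory:")
remote.executescript("""
    CREATE TABLE replicated_table (unique_id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TEMPORARY TABLE staging (unique_id INTEGER PRIMARY KEY, payload TEXT);
""")
# the remote already holds a stale version of row 1
remote.execute("INSERT INTO replicated_table VALUES (1, 'stale')")
# pretend these rows were loaded from the transferred text file
remote.executemany("INSERT INTO staging VALUES (?, ?)",
                   [(1, 'fresh'), (2, 'new row')])
# delete any rows that also appear in the replicated copy ...
remote.execute("""DELETE FROM replicated_table
                  WHERE unique_id IN (SELECT unique_id FROM staging)""")
# ... insert the replicated copy, and drop the temporary table again
remote.execute("INSERT INTO replicated_table SELECT * FROM staging")
remote.execute("DROP TABLE staging")
```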

The actual replication script was a piece of Perl written before the 
days of DBI, so all the database calls involved echoing a command to the 
sql interpreter, redirecting the output into a text file, reading the 
text file and working out what the results were.  It was bizarre.

When I pointed out that it didn't cope with deletes, Development made 
it even more bizarre: they added a separate module, used by the 
original script, which _did_ use the DBI, passing the database handle 
around as a parameter to most of the functions.  It was a classic example of 
Development not understanding the language they were writing in.  The 
fact that I had to support it and knew more about Perl and the DBI than 
they did made for interesting relations.

Hope this helps,


More information about the linux mailing list