INSERT statements. A couple have 30000 or 40000 INSERTs. As a
result, these modules take several minutes to load, and generate a LOT
of screen output.
I'm wondering whether storing the data as delimited text files and
loading them with psql's COPY command would speed things up
significantly. It seems to in my ad hoc testing: 42K records are
copied into us_zipcodes from a file in under ten seconds, while the
same 42K INSERT statements take two or three minutes (and print 42K
lines of junk to the screen in the process, unless you use the -q
switch, which the OpenACS 4 package installer doesn't seem to do).
Even with -q the INSERTs take 100 seconds, ten times slower than COPY.
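For comparison, here's roughly what the two load styles look like. This is just a sketch; the column names are assumptions for illustration, and the COPY syntax shown is the server-side form (psql's \copy is the client-side equivalent):

```sql
-- Row-at-a-time load: one statement (and one line of psql output) per record.
-- Column names here are hypothetical.
INSERT INTO us_zipcodes (zipcode, city, state) VALUES ('99501', 'Anchorage', 'AK');
-- ...repeated ~42,000 times...

-- Bulk load: one COPY statement reads the whole delimited file in a single pass,
-- so there is no per-row parse/plan overhead and no per-row screen output.
COPY us_zipcodes FROM '/path/to/us_zipcodes.dat' USING DELIMITERS '|';
```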
Is there any reason not to switch to COPY for reference data files
with more than (say) 1000 records in them? The equivalent bulk-load
utility for Oracle could be used as well, unless Oracle is already
fast enough at this sort of thing?