Forum OpenACS Development: Re: Limitation on length of bind variables

Posted by Dirk Gomez

I ran this:


SQL> exec runstats_pkg.rs_start;

PL/SQL procedure successfully completed.

SQL> begin
  for i in 1 .. 1000
  loop
    insert into test values (i);
  end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> exec runstats_pkg.rs_middle

PL/SQL procedure successfully completed.

SQL> begin
  for i in 1 .. 10000
  loop
    insert into test values (i);
  end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> exec runstats_pkg.rs_stop
Run1 ran in 968 hsecs
Run2 ran in 1201 hsecs
run 1 ran in 80.6% of the time
	
Name				    Run1      Run2	Diff
LATCH.FOB s.o list latch	       0	 1	   1
LATCH.dml lock allocation	       1	 0	  -1
LATCH.list of block allocation	       0	 1	   1
LATCH.ncodef allocation latch	       1	 0	  -1
LATCH.session timer		       3	 4	   1
LATCH.transaction branch alloc	       1	 0	  -1
STAT...index scans kdiixs1	       0	 1	   1
STAT...parse count (total)	       5	 6	   1
STAT...shared hash latch upgra	       0	 1	   1
STAT...opened cursors cumulati	       5	 6	   1
STAT...bytes received via SQL*	     940       939	  -1
LATCH.sort extent pool		       1	 0	  -1
LATCH.session switching 	       1	 0	  -1
LATCH.session idle bit		      12	13	   1
LATCH.channel operations paren	       6	 8	   2
LATCH.session allocation	       0	 2	   2
STAT...table fetch by rowid	       0	 2	   2
LATCH.simulator lru latch	       0	 4	   4
STAT...no work - consistent re	       0	 4	   4
LATCH.Consistent RBA		       3	 8	   5
STAT...buffer is not pinned co	       0	 6	   6
STAT...enqueue requests 	       0	 6	   6
STAT...messages sent		       0	 6	   6
STAT...enqueue releases 	       0	 6	   6
LATCH.lgwr LWN SCN		       2	 9	   7
LATCH.mostly latch-free SCN	       2	 9	   7
LATCH.row cache enqueue latch	       0	 8	   8
LATCH.row cache objects 	       0	 8	   8
LATCH.checkpoint queue latch	     178       192	  14
STAT...calls to kcmgcs		       5	19	  14
STAT...cleanout - number of kt	       5	19	  14
STAT...consistent gets - exami	       5	19	  14
STAT...active txn count during	       5	19	  14
LATCH.undo global data		       6	22	  16
LATCH.enqueue hash chains	      10	28	  18
LATCH.enqueues			      10	29	  19
LATCH.redo writing		      18	37	  19
STAT...consistent gets		       6	26	  20
LATCH.messages			      33	68	  35
STAT...CPU used by this sessio	       9	53	  44
STAT...recursive cpu usage	       6	50	  44
STAT...CPU used when call star	       9	53	  44
STAT...free buffer requested	       0	62	  62
LATCH.cache buffers lru chain	       1	64	  63
LATCH.SQL memory manager worka	     201       268	  67
LATCH.simulator hash latch	     128       646	 518
LATCH.shared pool		   1,083    10,081     8,998
STAT...execute count		   1,005    10,006     9,001
STAT...calls to get snapshot s	   1,001    10,003     9,002
STAT...redo entries		   1,504    10,507     9,003
STAT...recursive calls		   1,003    10,012     9,009
LATCH.redo allocation		   1,510    10,524     9,014
STAT...consistent changes	   1,512    10,576     9,064
STAT...db block gets		   1,540    10,758     9,218
STAT...session logical reads	   1,546    10,784     9,238
LATCH.library cache		   2,115    20,115    18,000
LATCH.library cache pin 	   2,066    20,072    18,006
STAT...db block changes 	   3,029    21,175    18,146
LATCH.cache buffers chains	   7,652    53,308    45,656
STAT...redo size		 193,276 1,350,144 1,156,868
	
Run1 latches total versus runs -- difference and pct
Run1	  Run2	    Diff     Pct
15,078	 115,563   100,485  13.05%

PL/SQL procedure successfully completed.

(You can download the runstats package from here: http://asktom.oracle.com/~tkyte/runstats.html)
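
For anyone who wants to reproduce this: the session assumes the runstats package from that link is installed and that a trivial test table exists. A minimal setup would look roughly like this (the single numeric column is my assumption; the loop bodies are the ones shown above):

create table test (x number);

-- install runstats_pkg from the link above, then run the two PL/SQL
-- loops between the rs_start, rs_middle and rs_stop calls as shown.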

Note that 10,000 inserts take roughly ten times as many latches as 1,000. A latch is a serialization device in Oracle. The bottom line here is that the more queries per page you run, the more serialization you'll experience - a bad thing on a heavily loaded site. Add to that the overhead of our db_ API.
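
If you want to see where that serialization shows up on a running instance, the v$latch view gives an instance-wide picture; this query is just an illustration of where to look, not part of the run above:

select name, gets, misses, sleeps
  from v$latch
 order by sleeps desc;

-- misses and especially sleeps on latches such as "shared pool" and
-- "library cache" are the contention the numbers above hint at.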

Also note that inserts into temporary tables generate redo, and redo is always written to the hard disk and thus causes I/O. As for temporary tablespace I/O rocketing up for AIESEC: that may have been caused by inappropriate sort_area_size settings or something similar. (It was a quick-fix situation...)
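
You can watch the redo a session generates around a temporary-table insert yourself. A sketch, assuming a hypothetical global temporary table tmp_test:

create global temporary table tmp_test (x number) on commit delete rows;

-- redo generated by this session so far
select s.value as redo_size
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';

insert into tmp_test select rownum from all_objects where rownum <= 10000;

-- run the redo-size query again: the value goes up even though the table
-- is temporary, because the undo for the insert is logged in the redo.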

In ShareNet we used temporary tables as in "insert into temporary_table_name select values from somewhere_else" where the result set was relatively small, was used in two or three subsequent queries, and the CBO had problems finding a decent access path.
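
The pattern was roughly the following; all names here are made up for illustration, only the idea of materializing a small driving set once is the point:

create global temporary table tmp_driving_ids (object_id number)
on commit delete rows;

-- materialize the small, expensive-to-find result set once
insert into tmp_driving_ids
select object_id
  from some_big_view
 where some_expensive_condition = 'Y';

-- the two or three follow-up queries then join against the small
-- temporary table, which gives the CBO a trivial access path
select d.*
  from detail_table d, tmp_driving_ids t
 where d.object_id = t.object_id;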

So in a situation with concurrent accesses and ids outside the database, it seems best to use cursor_sharing=force.
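
A sketch of what cursor_sharing=force buys you; the table and the v$sql check are just for illustration:

alter session set cursor_sharing = force;

-- these two statements differ only in their literals ...
select count(*) from test where x = 1;
select count(*) from test where x = 2;

-- ... and with cursor_sharing=force Oracle rewrites both to a single shared
-- statement with a system-generated bind (something like :"SYS_B_0"), so
-- only one cursor shows up in the shared pool:
select sql_text, executions
  from v$sql
 where lower(sql_text) like 'select count(*) from test where x = :%';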