Forum OpenACS Q&A: Re: Random Book off bookshelf

Posted by Jeff Davis on
Jarkko, the "select ... order by random() limit 1" approach can be pretty slow on a large table. On a users table with 8300 users I get:
head=# explain analyze select * from users order by random() limit 1 offset 0;
                                                     QUERY PLAN                                                      
---------------------------------------------------------------------------------------------------------------------
 Limit  (cost=803.12..803.12 rows=1 width=1225) (actual time=113.42..113.42 rows=1 loops=1)
   ->  Sort  (cost=803.12..823.86 rows=8298 width=1225) (actual time=113.41..113.41 rows=2 loops=1)
         Sort Key: random()
         ->  Seq Scan on users  (cost=0.00..262.98 rows=8298 width=1225) (actual time=0.02..39.11 rows=8298 loops=1)
 Total runtime: 114.64 msec
(5 rows)
whereas if you pick the id randomly from a cached list, the query to fetch that single row takes just a few milliseconds.
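
A minimal sketch of that approach, assuming users has an indexed primary key user_id (the column name and the literal id are stand-ins; the id would really be picked at random from the cached list in application code):

-- build the cache of candidate ids once (or refresh it periodically)
select user_id from users;

-- then, per request, fetch one row by primary key: an index lookup,
-- not a sort of the whole table
select * from users where user_id = 4217;  -- 4217 = randomly chosen cached id

The win is that random() is computed once per cache refresh instead of once per row on every request, and the per-request query never touches more than one row.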

I think caching the IDs is generally a win...