He's got a test table with 10 million rows, containing a single varchar(64) column. Selecting a single value or running a count(*) is very slow, and the query plan shows a sequential scan.
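For concreteness, the queries in question look roughly like this (big_table and val are stand-in names, not his actual schema):

    -- Both of these are very slow on the 10M-row table:
    SELECT count(*) FROM big_table;
    SELECT val FROM big_table WHERE val = 'some-value';

    -- The plan for either one shows a sequential scan:
    EXPLAIN SELECT val FROM big_table WHERE val = 'some-value';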
He tried to build an index, but the resulting load on the server brought his live site to its knees, so he had to kill the process.
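As far as I know it was a plain index build, something along these lines (stand-in names again):

    CREATE INDEX big_table_val_idx ON big_table (val);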
So, two questions:
1. Is building an index the right fix here, or will I also (or instead) need to read up on tuning parameters? I don't usually have to do anything to PG; it Just Works. But we don't generally see tables with that many rows, either.
2. Is there some way to throttle the index building process so it doesn't take up all the system resources? A fallback would be to have him create a new table, put the index on while it's empty, and then programmatically move his data over, putting in delays as necessary to slow it down (rough sketch below). But that's pretty kludgy.
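For what it's worth, the kludge would look something like this (placeholder names again; the loop-and-sleep part would live in a driver script, since I don't know of a way to pause mid-statement in SQL):

    -- New, empty table with the index already in place:
    CREATE TABLE big_table_new (LIKE big_table);
    CREATE INDEX big_table_new_val_idx ON big_table_new (val);

    -- Then a script repeats something like this in batches, sleeping
    -- between iterations and bumping OFFSET each time (the repeated
    -- sort is itself wasteful, which is part of why this feels kludgy):
    INSERT INTO big_table_new (val)
    SELECT val FROM big_table
    ORDER BY val
    LIMIT 100000 OFFSET 0;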
Suggestions? Thanks in advance!