1. If you are going to be spending money anyway, buy the highest-end systems you can afford, prioritizing memory bandwidth.
Then run the front-end AOLserver on Solaris 10 or OpenSolaris (best multi-threading, a better-tuned VM system, and the best chance to take advantage of memory via e.g. MPO, Memory Placement Optimization).
If you really want to spend, use OpenSolaris together with the SSD-based read and write caches (the L2ARC and the separate intent log) available in newer versions of ZFS.
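Attaching the ZFS caches is, in outline, a pair of zpool commands; the pool and device names below are illustrative, not from the source:

```shell
# Illustrative pool/device names -- substitute your own.
# Add an SSD as a read cache (L2ARC) for pool "tank":
zpool add tank cache c2t0d0

# Add a mirrored pair of SSDs as a separate intent log (write cache):
zpool add tank log mirror c2t1d0 c2t2d0

# Confirm the cache and log devices are attached:
zpool status tank
```

The log device mainly accelerates synchronous writes (e.g. database commits), while the cache device speeds up reads of a working set larger than RAM.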
Put Pound or another SSL-capable proxy (nginx, lighttpd) on a separate machine to handle SSL negotiation, setup, teardown, etc. Connect Pound and the AOLserver machine with gigabit ethernet links with jumbo frames turned on. If you need even more bandwidth you can combine gigabit links (link aggregation) to get 2, 3, or 4 Gbps.
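A minimal Pound configuration for the SSL-offload role might look like the sketch below; the certificate path and backend address are assumptions, not from the source:

```
# pound.cfg sketch: terminate SSL here, forward plain HTTP to AOLserver
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/server.pem"    # combined key + certificate file
    Service
        BackEnd
            Address 10.0.0.2           # AOLserver box on the gigabit link
            Port    8000
        End
    End
End
```

On the OpenSolaris side, `dladm create-aggr` is the tool for combining the gigabit links into one aggregated interface.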
Architecture thus looks like this:
for both SSL and non-SSL:
Internet -- Pound Web/SSL proxy -- AOLserver front end -- database server
Cost: $2K for the proxy, $5K for the front-end system with SSDs, $8K for the database server with more SSDs (e.g. an 8-drive SAS or SATA system); so $15K and 3-4U of rack space to get started (if you are handling a lot of data you will need to add more rack space for your storage system).
Note that in such a setup you still have single points of failure, but with the SSDs you will have the performance you need. In my experience the most common failures are power supplies, so specify systems that have dual power supplies.
2. The other alternative is to use the nsv_* procs to ensure that the things that matter are shared among multiple front-end servers and that the cache is invalidated when it should be. I think Malte S. spent some time on this, but I don't know the status of his work on it.
#2 will give you more resilience but will require more programming and testing.
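As a rough illustration of option #2, a per-server cache built on nsv arrays might look like the Tcl sketch below. The array and proc names are invented for the example, and each front end would still need some mechanism (e.g. an internal HTTP request to its peers) to trigger cache_flush everywhere when the underlying data changes:

```tcl
# Sketch: thread-shared cache via an nsv array (names are illustrative).
proc cache_get {key script} {
    # Serve from the shared nsv array when the key is present
    if {[nsv_exists my_cache $key]} {
        return [nsv_get my_cache $key]
    }
    # Otherwise compute, store, and return the value
    set value [uplevel 1 $script]
    nsv_set my_cache $key $value
    return $value
}

proc cache_flush {key} {
    # Must be invoked on every front end when the source data changes
    nsv_unset my_cache $key
}
```

Note that nsv arrays are shared among the threads of one AOLserver process; the cross-server part of the problem is exactly the invalidation plumbing, which is why this option needs more programming and testing.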