Here at Innova-UNED we have been testing some uses of the parser and have collected the results.
First of all, we needed to know whether the problems were caused by our development on top of dotLRN or by the dotLRN code itself. To test this we used the test servers and several other internal instances.
These are the differences between the machines:
--
dotaLF-1 (this is the codename of the Innova development)
Debian Etch 4.0
AOLserver: v4.0.10
Tcl: v8.4.12
XOTcl: v1.6.1
libthread: v2.6.5
RAM: 16 GB
RAM bus speed: 667 MHz
2 quad-core Xeon CPUs @ 2.33 GHz
CPU bus speed: 1333 MHz
This instance has an Oracle database with nearly 130,000 users and a 62 GB content repository.
dotaLF-2
Debian Etch 4.0
AOLserver: v4.0.10
Tcl: v8.4.12
XOTcl: v1.5.6
libthread: v2.6.5
RAM: 32 GB
RAM bus speed: 400 MHz
4 dual-core Xeon CPUs @ 3.4 GHz
CPU bus speed: 800 MHz
This instance has an Oracle database with nearly 130,000 users and a 62 GB content repository.
dotLRN-1
Debian Sarge 3.1
AOLserver: v4.0.10
Tcl: v8.4.9
XOTcl: v1.5.3
libthread: v2.6.1
RAM: 1.2 GB
RAM bus speed: 133 MHz
2 Pentium III CPUs @ 1.3 GHz
CPU bus speed: 133 MHz
This instance has an Oracle database with 9 users and a 2.2 MB content repository.
--
We started by issuing some web requests with the developer-support tool activated and checked how much time the parser and the database took.
The URLs tested were like these:
/
/dotlrn/
/dotlrn/communities
/dotlrn/clubs/innova/forums/message-view?message%5fid=10399941
/dotlrn/?page_num=1
/dotlrn/?page_num=2
/dotlrn/clubs/pruebasdeinnova/one-community?page_num=0
/dotlrn/clubs/pruebasdeinnova/one-community?page_num=1
/dotlrn/clubs/pruebasdeinnova/one-community?page_num=2
/dotlrn/clubs/pruebasdeinnova/one-community-admin
/dotlrn/clubs/pruebasdeinnova/community-edit
/dotlrn/clubs/pruebasdeinnova/one-community-portal-configure
/dotlrn/clubs/pruebasdeinnova/member-email
/dotlrn/clubs/pruebasdeinnova/members
/dotlrn/clubs/pruebasdeinnova/uforums/admin/permissions?object_id=16647427
Each URL was requested several times with an external script.
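The external script itself is not included in this post; as a rough sketch, a minimal driver could look like the following (the base URL, the shortened URL list, and the repetition count are illustrative assumptions, not our actual setup):

```python
import time
import urllib.request

# Hypothetical base URL and a shortened URL list; adjust to the instance under test.
BASE = "http://localhost:8000"
URLS = ["/", "/dotlrn/", "/dotlrn/communities"]

def summarize(times):
    """Return (request count, total time, average time) for per-request times in seconds."""
    total = sum(times)
    return len(times), total, total / len(times) if times else 0.0

def run(repetitions=2):
    """Fetch every URL `repetitions` times, timing each request end to end."""
    times = []
    for _ in range(repetitions):
        for url in URLS:
            start = time.perf_counter()
            urllib.request.urlopen(BASE + url).read()
            times.append(time.perf_counter() - start)
    return summarize(times)
```

Note that a client-side script like this only measures end-to-end request time; the split between parser and database time still has to come from the developer-support tool on the server.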
Here are some of the results:
dotaLF-1
286 web requests:
Total time average: 554s
Parser time average: 289s
Database time average: 138s
dotaLF-2
286 web requests:
Total time average: 1100s
Parser time average: 646s
Database time average: 271s
dotLRN-1
170 web requests:
Total time average: 509s
Parser time average: 482s
Database time average: 70s
Besides these measurements, while using the platform we have noticed slow behavior, with 3-4 s requests when only 20 users appeared in the request monitor (logged in within the last 10 minutes). Are there any other tools we could use to test performance in dotLRN?
An interesting finding is that parse time is 50-60% of the total time on the dotaLF instances. That is something we have to analyze properly.
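As a quick check, the parser's share of total time can be derived directly from the averages reported above; note that on dotLRN-1 it is far above 50-60%:

```python
# (total, parser, database) time averages from the measurements above, in seconds
results = {
    "dotaLF-1": (554, 289, 138),
    "dotaLF-2": (1100, 646, 271),
    "dotLRN-1": (509, 482, 70),
}

def parser_share(total, parser):
    """Parser time as a percentage of total request time, rounded to one decimal."""
    return round(100 * parser / total, 1)

for name, (total, parser, db) in results.items():
    print(name, parser_share(total, parser))
# dotaLF-1: 52.2, dotaLF-2: 58.7, dotLRN-1: 94.7
```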
We are missing tests on a clean dotLRN instance with a huge database, say 100,000 users and a 50-100 GB content repository. Could anyone post results from tests like those on their instances?
Has anyone tested the parser on big dotLRN instances so far? That would help us determine whether the problem is platform scalability or some other issue we have not found.