Cluster wikiload project
Purpose
The purpose of this project is to build a distributed program that runs on several workstations, loading a Wiki server farm. The aggregated load from all the workstations can stress the server farm while gathering information that gives an idea of how the server farm performs.
Limitations
This is not a tool for measuring performance; the time needed to build such a system exceeds the time at hand.
Another limitation is that it is targeted at [www.mediawiki.org MediaWiki] rather than being a general workload framework. I will, however, try to separate the skeleton from the MediaWiki-specific parts, to make it easier to adapt to other environments.
Name
The project needs a name to refer to it by. I think I will call it Lobster.
How it works
A Lobster Management Station (LMS) controls a number of Lobster Clients (LCs), telling them how to load the server. From the LMS you configure how the LCs should load the server farm; the LCs then return the workload results to the LMS, where they are presented to you.
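As a sketch of how the LMS might drive the LCs, a simple line-based command protocol would do. The `LOAD` verb and the key=value argument format below are assumptions for illustration, not a fixed design:

```perl
use strict;
use warnings;

# Hypothetical LMS->LC command: a verb followed by key=value pairs, e.g.
#   "LOAD url=http://wiki/index.php clients=10 duration=60"
sub parse_command {
    my ($line) = @_;
    my ($verb, @pairs) = split ' ', $line;
    # Split each pair on the first '=' only, so values may contain '='
    my %args = map { split /=/, $_, 2 } @pairs;
    return ($verb, \%args);
}

my ($verb, $args) =
    parse_command("LOAD url=http://wiki/index.php clients=10 duration=60");
print "$verb: $args->{clients} clients for $args->{duration}s\n";
# prints "LOAD: 10 clients for 60s"
```

An LC would read such lines from its control socket, start the requested number of POE sessions, and later report its results back with a similar line format.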
Implementation
Because of the tight time schedule, I've decided to implement it in Perl, using the following modules:
- WWW::Mechanize as the HTTP client engine.
- POE::Session as the multitasking engine on the LCs for loading the server farm.
- Curses for controlling the screens of the LCs (perhaps also for the LMS).
- IO::Socket::Multicast so the LMS can auto-discover the LCs.
- Tk for building a GUI on the LMS (takes a long time; perhaps I will just use Curses here too).
- Storable for saving results to disk (a bad idea, but very fast to implement).
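To illustrate the last point, Storable ships with core Perl and makes persisting a result structure almost a one-liner. The result fields and filename here are hypothetical:

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

# Hypothetical result record from one LC load run
my $results = {
    client   => 'lc01',
    requests => 1200,
    errors   => 3,
    avg_ms   => 187,
};

store($results, 'lobster-results.stor');        # serialize to disk
my $loaded = retrieve('lobster-results.stor');  # read it back
print "requests: $loaded->{requests}\n";
# prints "requests: 1200"
```

This is "bad" mainly because the on-disk format is Perl-specific and version-sensitive, so it is not a sound long-term archive format, but for getting results off the LCs quickly it is hard to beat.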