Cluster wikiload project

From Teknologisk videncenter
 
=== Skeleton ===

Built as a state machine:
*State 1: Connect to the LMS, then go to state 2

*State 2: Wait for orders from the LMS, then go to state 3

** Send keepalive packets every 10 seconds; on failure go to state 1

*State 3: Fetch one page from the server farm (several attempts with a 10-second timeout); if OK, go to state 10, else go to state 4

*State 4: Report the connect error to the LMS

*State 10: Start the monitor session, then go to state 11

** The monitor session gets results from the workload sessions and reports to the LMS every 10 seconds

*State 11: Start the workload sessions, reporting to the monitor session
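The states above can be sketched as a dispatch table in Perl. This is a minimal sketch: only the state numbers and transitions come from the list above, while the state bodies are placeholders that just record which step ran.

```perl
#!/usr/bin/perl
use strict;
use warnings;

our @trace;    # records which state bodies ran, for illustration only

# Each state is a sub that does its work and returns the next state
# number; returning 0 ends the loop. The error path (3 -> 4 -> 1) is
# wired in but not exercised by these placeholder bodies.
my %states = (
    1  => sub { push @trace, 'connect';  return 2 },   # connect to the LMS
    2  => sub { push @trace, 'wait';     return 3 },   # wait for orders
    3  => sub { push @trace, 'fetch';    return 10 },  # fetch a page; 4 on error
    4  => sub { push @trace, 'report';   return 1 },   # report connect error
    10 => sub { push @trace, 'monitor';  return 11 },  # start monitor session
    11 => sub { push @trace, 'workload'; return 0 },   # start workload sessions
);

my $state = 1;
$state = $states{$state}->() while $state;
print join( ',', @trace ), "\n";    # connect,wait,fetch,monitor,workload
```

Keeping the transitions in one table like this makes it easy to reroute the error path (state 3 to state 4) without touching the other states.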
 
 
 
  
 
== LC - Lobster clients ==

=== Skeleton ===

Revision as of 15:25, 15 March 2009

Purpose

The purpose of this project is to build a distributed program that runs on several workstations, loading a Wiki server farm. The aggregated load from all the workstations can stress the server farm while gathering information that gives an idea of how the server farm performs.

Limitations

This is not a program to measure performance; the time needed to build such a system exceeds the time at hand.
Another limitation is that it is targeted at [www.mediawiki.org MediaWiki] rather than being a general workload framework. I will, however, try to separate the skeleton from the MediaWiki-specific parts, to make it easier to implement in other environments.

Name


The project needs a name to refer to it by. I think I will call it Lobster.

How it works

A Lobster Management Station (LMS) will control a number of Lobster Clients (LCs), telling them how to load the server. From the LMS you set how the LCs should load the server farm; they will return the workload results to the LMS, where they are presented to you.
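The orders an LMS sends to its LCs could travel as simple text lines. This is only a sketch of one possible format; the field names (cmd, sessions, url) are my assumptions, not something the article specifies.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parse a hypothetical semicolon-separated order line from the LMS into
# a hash reference, e.g. 'cmd=load;sessions=5;url=http://wiki.example.org/'.
sub parse_order {
    my ($line) = @_;
    # Split into key=value fields; limit the inner split to 2 so values
    # containing '=' (unlikely here, but possible in URLs) stay intact.
    my %order = map { split /=/, $_, 2 } split /;/, $line;
    return \%order;
}

my $order = parse_order('cmd=load;sessions=5;url=http://wiki.example.org/');
print "$order->{cmd} with $order->{sessions} sessions\n";   # load with 5 sessions
```

A line-based format like this is easy to type by hand when testing an LC over a plain TCP connection.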

Implementation

Because of the tight time schedule, I've decided to implement it using Perl and the following modules:

  • WWW::Mechanize as the HTML client engine.
  • POE::Session as the multitasking engine on the LCs to load the server farm.
  • Curses for controlling the screens of the LCs (perhaps also for the LMS).
  • IO::Socket::Multicast for the LMS to auto-discover the LCs. (Or just enter the IP of the LMS on the LCs - easier.)
  • Tk for making a GUI on the LMS. (Takes a long time. Perhaps I will just use Curses here as well.)
  • Storable for saving results to disk. (A bad idea, but very fast to implement.)
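A single workload step with WWW::Mechanize could look roughly like the sketch below. The URL is a placeholder of my own, and the 10-second timeout mirrors the value used in the state machine; it is not a definitive implementation.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;    # CPAN module; not in the Perl core

# autocheck => 0 stops Mechanize from dying on HTTP errors, so we can
# report the failure to the LMS instead (the state 3 / state 4 split).
my $mech = WWW::Mechanize->new( timeout => 10, autocheck => 0 );

# Hypothetical server-farm URL, for illustration only.
my $response = $mech->get('http://wiki.example.org/index.php/Main_Page');

if ( $response->is_success ) {
    printf "fetched %d bytes\n", length $mech->content;
}
else {
    warn 'fetch failed: ', $response->status_line, "\n";
}
```

Retrying the get() a few times before giving up would reproduce the "several attempts" behaviour described in the skeleton.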

Project Schedule

LMS - Lobster Management Station

Skeleton

Built as a state machine

LC - Lobster clients

Skeleton