Cluster wikiload project

Revision as of 20:12, 15 March 2009

Purpose

The purpose of this project is to build a distributed program which runs on several workstations, loading a Wiki server farm. The aggregated load from all workstations can stress the serverfarm and gather information to get an idea of how the serverfarm performs.

Limitations

This is not a program to measure performance. The time needed to build such a system exceeds the time at hand.
Another limitation is that it is targeted at MediaWiki (www.mediawiki.org) and is not meant as a general workload framework. I will, however, try to separate the skeleton from the MediaWiki-specific parts, to make it easier to implement in other environments.

Name

[Image: Lobster.gif]

The project needs a name to refer to it by. I think I will call it Lobster.

How it works

A Lobster Management Station (LMS) will control a number of Lobster Clients (LCs), telling them how to load the server. From the LMS you set how the LCs should load the serverfarm; the LCs will return the workload results to the LMS, where they will be presented to you.
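The article does not pin down what a workload result looks like, but as a hedged illustration of the LC-to-LMS reporting idea, here is a minimal sketch of one result record as a Moose class (Moose is listed under Implementation below). All field names and values are assumptions, not part of the actual Lobster design.

 #!/usr/bin/perl
 use strict;
 use warnings;
 
 package Lobster::Result;
 use Moose;
 
 # Hypothetical fields for one measurement an LC could send back to the LMS;
 # the real record layout is not fixed anywhere in this article.
 has client_id => ( is => 'ro', isa => 'Str', required => 1 );
 has url       => ( is => 'ro', isa => 'Str', required => 1 );
 has elapsed   => ( is => 'ro', isa => 'Num', required => 1 );   # seconds
 has status    => ( is => 'ro', isa => 'Int', required => 1 );   # HTTP status code
 
 __PACKAGE__->meta->make_immutable;
 
 package main;
 
 # Example: an LC building one result record after fetching a page.
 my $result = Lobster::Result->new(
     client_id => 'lc-01',
     url       => 'http://wiki.example.org/index.php/Main_Page',
     elapsed   => 0.42,
     status    => 200,
 );
 printf "%s fetched %s in %.2fs (HTTP %d)\n",
     $result->client_id, $result->url, $result->elapsed, $result->status;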

Implementation

Because of the tight time schedule I've decided to implement it using Perl and the following modules; a rough sketch of how some of them could fit together follows the list.

  • Moose for object orientation.
  • WWW::Mechanize as the HTML client engine.
  • POE::Session as the multitasking engine on the LCs to load the serverfarm.
  • Curses for controlling the screens of the LCs (perhaps also for the LMS).
  • IO::Socket::Multicast for the LMS auto-discovering the LCs. (Or just enter the IP of the LMS on the LCs - easier.)
  • Tk for making a GUI on the LMS. (Takes a long time. Perhaps I will just use Curses here as well.)
  • Storable for saving results to disk. (A bad idea but very fast to implement.)
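
As a rough sketch of how some of these modules could work together, the snippet below runs one POE session that fetches a page with WWW::Mechanize, records the response time, and dumps the results with Storable when it stops. The URL, file name, timings and fetch count are placeholders, not part of the actual Lobster design.

 #!/usr/bin/perl
 use strict;
 use warnings;
 use POE;
 use WWW::Mechanize;
 use Storable qw(store);
 use Time::HiRes qw(time);
 
 # Hypothetical target URL and output file; adjust to the serverfarm under test.
 my $url     = 'http://wiki.example.org/index.php/Main_Page';
 my $outfile = 'lobster-results.sto';
 my @results;
 
 POE::Session->create(
     inline_states => {
         _start => sub {
             $_[HEAP]{mech} = WWW::Mechanize->new( autocheck => 0 );
             $_[KERNEL]->yield('fetch');
         },
         fetch => sub {
             my $t0   = time;
             my $resp = $_[HEAP]{mech}->get($url);
             push @results, {
                 started => $t0,
                 elapsed => time - $t0,
                 status  => $resp->code,
             };
             # Stop after 10 fetches in this sketch; a real workload
             # session would keep going until told otherwise.
             $_[KERNEL]->delay( fetch => 5 ) if @results < 10;
         },
         _stop => sub {
             store \@results, $outfile;    # quick-and-dirty persistence
         },
     },
 );
 
 POE::Kernel->run();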

Project Schedule

LMS - Lobster Management Station

Skeleton

Built as a state machine.

LC - Lobster clients

Skeleton

  • State 1: Connect to the LMS, then go to state 2. (A Perl sketch of the first states follows this list.)
  • State 2: Wait for orders from the LMS, then go to state 3.
    • Keepalive packets every 10 seconds, else go to state 1.
  • State 3: Fetch one page from the serverfarm (several attempts with a 10 second timeout). If OK, go to state 10, else go to state 4.
  • State 4: Report the connect error to the LMS, then go back to state 3.
  • State 10: Start the monitor session, then go to state 11.
    • The monitor session gets results from the workload sessions and reports every 10 seconds to the LMS.
  • State 11: Start each workload session in state 20.
  • State 20: Report "I am alive and session number N" to the monitor session. If an acknowledge is received from the monitor within 2 seconds, go to state 21, else die.
  • State 21: Send a keepalive every 5 seconds to the monitor session. If three are sent without an acknowledge, send "die" and then die.
    • Load the server and report the load in the keepalive packets.
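
Below is a compressed sketch of states 1 to 4 and 10 as a Perl dispatch table. The LMS address, the line-based commands and the retry count are assumptions made for illustration; the real orders, the monitor session and the workload sessions (states 11, 20 and 21) are left as stubs.

 #!/usr/bin/perl
 use strict;
 use warnings;
 use IO::Socket::INET;
 use WWW::Mechanize;
 
 # Hypothetical addresses and commands; the real LMS protocol is not defined yet.
 my $lms_addr = '192.168.0.10:9000';
 my $test_url = 'http://wiki.example.org/index.php/Main_Page';
 
 my $lms;          # socket to the LMS once state 1 succeeds
 my %states = (
     1 => sub {    # State 1: connect to the LMS
         $lms = IO::Socket::INET->new( PeerAddr => $lms_addr, Timeout => 10 );
         return 2 if $lms;
         sleep 10;
         return 1;
     },
     2 => sub {    # State 2: wait for orders (one blocking read stands in for the keepalive loop)
         print {$lms} "KEEPALIVE\n";
         my $order = <$lms>;
         return defined $order ? 3 : 1;
     },
     3 => sub {    # State 3: fetch one test page, several attempts, 10 s timeout
         my $mech = WWW::Mechanize->new( autocheck => 0, timeout => 10 );
         for ( 1 .. 3 ) {
             my $resp = $mech->get($test_url);
             return 10 if $resp->is_success;
         }
         return 4;
     },
     4 => sub {    # State 4: report the connect error to the LMS, then retry state 3
         print {$lms} "ERROR cannot fetch test page\n";
         sleep 10;
         return 3;
     },
     10 => sub {   # State 10: start monitor and workload sessions (stub)
         print "would start the monitor session and workload sessions here\n";
         return 0; # 0 ends the loop in this sketch
     },
 );
 
 my $state = 1;
 $state = $states{$state}->() while $state;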