Tuesday, January 8, 2013

Further developments

Having worked further on the required changes, I have come to the conclusion that nearly everything is going to be changed.
The current single-threaded, locally running master will become a multithreaded program running remotely (on an EC2 instance), but this has a couple of consequences.
All of the local file operations and job handling are going to be replaced.
The messaging structure is going to be transformed to be compatible with our SNS functionality.
I have also created factory-style handling of the messages.
The master will no longer keep information about the jobs locally; this will all be written to the database. Perhaps some kind of caching mechanism can be put in place to allow batch updates, although this could be dangerous, as the workers might then see out-of-date information.


An EC2 metadata processor has been created; with this functionality it is now possible to query EC2 for information about our instance using the instance metadata service:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html
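A minimal sketch of what such a query boils down to (class and method names here are my own, not the actual implementation): a plain HTTP GET against the link-local metadata address.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch of the EC2 metadata processor: fetches a single metadata key,
    // e.g. "public-hostname", from the link-local metadata service.
    public class EC2MetadataProcessor {

        private static final String METADATA_BASE = "http://169.254.169.254/latest/meta-data/";

        public static String query(String key) throws IOException {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(METADATA_BASE + key).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            try {
                return in.readLine();   // most metadata values are a single line
            } finally {
                in.close();
                conn.disconnect();
            }
        }
    }

Calling EC2MetadataProcessor.query("public-hostname"), for example, returns the instance's public DNS name, which is exactly what the SNS filtering below relies on.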

The HTTP server is up and running, and with it we can now send and receive SNS messages.
To filter out the messages, the Subject field of each SNS message is filled with the receiver's endpoint address (its public DNS).
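A rough sketch of that filtering at the endpoint, using the JDK's built-in HttpServer (the JSON handling is deliberately crude, and handing the payload to the message factory is only indicated by a comment):

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.net.InetSocketAddress;

    // Sketch of the SNS endpoint: accepts POSTed notifications and discards
    // anything whose Subject does not match our own public DNS name
    // (obtained from the metadata processor above).
    public class SnsEndpoint {

        public static void start(final String myPublicDns, int port) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/", new HttpHandler() {
                public void handle(HttpExchange exchange) throws IOException {
                    java.util.Scanner s = new java.util.Scanner(exchange.getRequestBody(), "UTF-8")
                            .useDelimiter("\\A");
                    String body = s.hasNext() ? s.next() : "";
                    String subject = extractSubject(body);
                    if (myPublicDns.equals(subject)) {
                        // hand the Message payload to the message factory here
                    }
                    exchange.sendResponseHeaders(200, -1);   // acknowledge to SNS, no body
                    exchange.close();
                }
            });
            server.start();
        }

        // Crude extraction of the "Subject" field; the real code would use a JSON parser.
        private static String extractSubject(String json) {
            java.util.regex.Matcher m = java.util.regex.Pattern
                    .compile("\"Subject\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
            return m.find() ? m.group(1) : null;
        }
    }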

Using the object persistence model, the DynamoDB entries for the jobs can now be created, edited, and removed.
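A sketch of what such a job entry could look like, assuming the DynamoDBMapper object persistence model from the AWS SDK for Java (the table and attribute names here are illustrative, not the actual schema):

    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

    // Illustrative job entry for the object persistence model.
    @DynamoDBTable(tableName = "Jobs")
    public class JobEntry {

        private String jobId;
        private String status;
        private String jdlLocation;   // S3 key of the JDL / input archive

        @DynamoDBHashKey(attributeName = "JobId")
        public String getJobId() { return jobId; }
        public void setJobId(String jobId) { this.jobId = jobId; }

        @DynamoDBAttribute(attributeName = "Status")
        public String getStatus() { return status; }
        public void setStatus(String status) { this.status = status; }

        @DynamoDBAttribute(attributeName = "JdlLocation")
        public String getJdlLocation() { return jdlLocation; }
        public void setJdlLocation(String jdlLocation) { this.jdlLocation = jdlLocation; }
    }

Creating, editing, and removing a job then maps onto DynamoDBMapper's save(), load() followed by save(), and delete() calls.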

The WorkerManager has been created; it currently only processes the PINGs received from the workers and keeps track of their timers. If a timer fires, the corresponding worker gets terminated by informing the ResourceManager. The ResourceManager is capable of launching and terminating instances asynchronously.
I have also noted that the information stored by the ResourceManager, namely which instances are running, is quite important and should be kept in either S3 or DynamoDB to prevent "resource leaking".
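A rough sketch of the ping/timeout bookkeeping in the WorkerManager (the names, the timeout value, and the ResourceManager interface are placeholders for the real code):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    // Sketch: every PING resets the worker's timer; if the timer fires,
    // the ResourceManager is asked to terminate that worker.
    public class WorkerManager {

        // Assumed interface to the ResourceManager described above.
        public interface ResourceManager {
            void terminateInstance(String workerId);
        }

        private static final long TIMEOUT_SECONDS = 120;

        private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        private final Map<String, ScheduledFuture<?>> timers =
                new ConcurrentHashMap<String, ScheduledFuture<?>>();
        private final ResourceManager resourceManager;

        public WorkerManager(ResourceManager resourceManager) {
            this.resourceManager = resourceManager;
        }

        // Called whenever a PING message from a worker is processed.
        public void onPing(final String workerId) {
            ScheduledFuture<?> old = timers.remove(workerId);
            if (old != null) {
                old.cancel(false);
            }
            timers.put(workerId, scheduler.schedule(new Runnable() {
                public void run() {
                    // No PING received in time: have the ResourceManager kill the instance.
                    resourceManager.terminateInstance(workerId);
                    timers.remove(workerId);
                }
            }, TIMEOUT_SECONDS, TimeUnit.SECONDS));
        }
    }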

As a final point I would like to suggest a small frontend Java program that will handle the JDL creation and the initial communication with the master. This will just be for testing purposes and can later be replaced by other frontends to the system.
The JDL will have the following layout:

  • Prologue={file1,file2, ...}
  • InputSandbox={file1,file2,...}
  • OutputSandbox={name} (Changed, see remarks*)
  • Arguments=args
  • Executable=executableName
I have tried to stick to the JDL standard but needed some additional information.
As can be seen, Prologue and OutputSandbox are new options.
Prologue lists scripts that will run before the execution of the job, to initialize the environment.
OutputSandbox is the name of the file in which the result will be stored (as an archive).
This can of course be extended in future versions.
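As an illustration, a JDL for a hypothetical job could then look as follows (all file names made up):

    Prologue={setupEnvironment.sh}
    InputSandbox={input.dat,parameters.cfg}
    OutputSandbox={results.tar.gz}
    Arguments=-n 1000
    Executable=runSimulation.sh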

With the frontend in use, the JDL will be generated automatically from the files supplied for each option, and the JDL, together with the input archive that has been created, will be stored on S3.
Operation sequence: 
  1. A new task is created on the frontend.
  2. Frontend requests a new job id. 
  3. After receiving the id, the frontend creates the JDL file and the archive.
  4. Upload to S3.
  5. Inform master of the newly added job. 
The frontend needs to know the job id up front in order to prevent naming problems.
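A sketch of steps 2 to 5 of that sequence in the frontend, where MasterClient is an assumed interface towards the master (the real communication would go over SNS), the bucket name is a placeholder, and the JDL file and archive are assumed to have been generated already:

    import java.io.File;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;

    // Sketch of the frontend's submit flow.
    public class Frontend {

        // Assumed interface; method names are hypothetical.
        public interface MasterClient {
            String requestNewJobId();
            void announceJob(String jobId);
        }

        private final AmazonS3 s3 = new AmazonS3Client();
        private final MasterClient master;

        public Frontend(MasterClient master) {
            this.master = master;
        }

        public void submit(File jdlFile, File inputArchive) {
            // Step 2: ask the master for a fresh job id so the S3 keys cannot clash.
            String jobId = master.requestNewJobId();

            // Step 4: upload the JDL and the input archive under that id.
            s3.putObject("job-bucket", jobId + "/job.jdl", jdlFile);
            s3.putObject("job-bucket", jobId + "/input.tar.gz", inputArchive);

            // Step 5: tell the master the new job is ready.
            master.announceJob(jobId);
        }
    }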