
Saturday, 10 December 2011

Scheduling of Scientific Workflows on Data Grids - I

4 Model
4.1 A Workflow Model

A workflow W is represented by a directed acyclic graph G = (V, E), where V = D ∪ C ∪ J and E is the set of edges that maintain execution precedence constraints. A directed edge from jx to jy means that jy cannot start executing until jx has completed. A directed edge from dx to jx means that job jx cannot start executing until the corresponding file has been transferred to, or made available at, the execution host running the job. The components are described as follows:
1. A set of jobs J = {j1, j2, . . . , jn}
2. A set of files F = {f1, f2, . . . , fn}
3. A set of compute-hosts C = {c1, c2, . . . , cn}
4. A set of data-hosts D = {d1, d2, . . . , dn}
A job jx requires a set of files Fx = {f1, f2, . . . , fn} to be staged in for execution. Within this set, we denote by f_k^t a temporary file and by f_k^f a fixed file, distinguishing files produced as a result of executing a job from files already hosted by data-hosts, respectively. A fixed file f_k^f may be hosted by multiple data-hosts.
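To make the model concrete, the following is a minimal Python sketch of the workflow graph described above. The class and field names (File, Job, Workflow) are illustrative assumptions, not taken from the paper; only the structure (jobs, staged-in file sets, precedence edges) follows the definitions.

    # Minimal sketch of the workflow model (illustrative names, not from the paper).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class File:
        name: str
        fixed: bool            # True for a fixed file f_k^f already hosted on data-hosts,
                               # False for a temporary file f_k^t produced by a job

    @dataclass
    class Job:
        name: str
        inputs: frozenset      # the file set F_x that must be staged in before execution
        outputs: frozenset = frozenset()

    class Workflow:
        """Directed acyclic graph G = (V, E): an edge (j_x, j_y) means j_y
        cannot start until j_x has completed."""
        def __init__(self):
            self.jobs = {}     # job name -> Job
            self.edges = set() # (parent job name, child job name)

        def add_job(self, job):
            self.jobs[job.name] = job

        def add_precedence(self, parent, child):
            self.edges.add((parent, child))

        def parents(self, job_name):
            return [p for (p, c) in self.edges if c == job_name]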
 
4.2 Resource Model
A compute resource is a high-performance computing platform, such as a cluster, that has a 'head' node managing a batch job submission system. Each compute-host has its own storage-constrained data-host. There also exist data-hosts that serve only storage purposes and do not provide computation. The communication cost between a compute-host and its own data-host is local and thus minimal in comparison with the communication cost between different hosts [22].
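As a rough illustration of this cost assumption, the sketch below treats a transfer between a compute-host and its own data-host as (near) free, while transfers across sites pay a network cost. The host pairing and the cost values are purely illustrative assumptions.

    # Each compute-host owns a data-host; staging from it is treated as free,
    # while any other (data-host, compute-host) pair pays the network cost.
    local_data_host = {"C1": "D1", "C2": "D2"}   # assumed host pairing
    network_cost = {("D1", "C2"): 3.0, ("D2", "C1"): 2.0,
                    ("D3", "C1"): 5.0, ("D3", "C2"): 4.0}  # assumed costs

    def comm_cost(data_host, compute_host):
        if local_data_host.get(compute_host) == data_host:
            return 0.0                           # local transfer: negligible cost
        return network_cost.get((data_host, compute_host), float("inf"))

    print(comm_cost("D1", "C1"))   # 0.0, local
    print(comm_cost("D3", "C2"))   # 4.0, remote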


Data is organized in the form of datasets that are replicated on the data-hosts by a separate replication process that follows a strategy [2] that takes into consideration various factors such as locality of access, load on the data-host and available storage space. Information about the datasets and their location is available through a catalog such as the Storage Resource Broker Metadata Catalog [13][21].
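The catalog can be thought of as a lookup from a file name to the data-hosts currently replicating it, as in the following toy sketch. The dictionary-based structure is an assumption for illustration and is not the Storage Resource Broker interface.

    # Toy replica catalog: maps a file name to the data-hosts that replicate it.
    class ReplicaCatalog:
        def __init__(self):
            self._locations = {}                 # file name -> set of data-host ids

        def register(self, file_name, data_host):
            self._locations.setdefault(file_name, set()).add(data_host)

        def lookup(self, file_name):
            """Return every data-host known to hold a replica of file_name."""
            return self._locations.get(file_name, set())

    catalog = ReplicaCatalog()
    catalog.register("f1", "Dhs1")
    catalog.register("f1", "Dhs3")
    print(catalog.lookup("f1"))                  # {'Dhs1', 'Dhs3'} (order may vary)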

5 Problem Statement
We now describe the problem of data-host selection and job-to-resource mapping in the presence of a large number of replicated files. A set of data-hosts that hosts the required files for the jobs in the workflow has to be found. Selecting the optimal set of data-hosts for even a single job, given a large number of replicated files, is computationally intensive [21]. The selection is made by applying one of the solutions to the Set-Coverage problem [1], and this selection procedure is itself compute-intensive.
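One standard solution is the greedy Set-Cover approximation; the paper does not say which variant is used, so treating it as the greedy heuristic below is an assumption made purely for illustration.

    # Greedy Set-Cover heuristic for picking data-hosts for one job.
    def greedy_data_host_selection(required_files, hosted_files):
        """required_files: set of file names the job needs.
        hosted_files: dict mapping data-host id -> set of file names it hosts.
        Returns a list of data-hosts that together cover all required files."""
        uncovered = set(required_files)
        chosen = []
        while uncovered:
            # Pick the data-host covering the most still-uncovered files.
            best = max(hosted_files, key=lambda dh: len(hosted_files[dh] & uncovered))
            gained = hosted_files[best] & uncovered
            if not gained:
                raise ValueError("some required files are not hosted anywhere")
            chosen.append(best)
            uncovered -= gained
        return chosen

    hosts = {"Dhs1": {"f1", "f2"}, "Dhs2": {"f2"}, "Dhs3": {"f3"}}
    print(greedy_data_host_selection({"f1", "f2", "f3"}, hosts))   # ['Dhs1', 'Dhs3']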

The mapping of jobs to the compute-resources is an NP-complete problem in the general form. The problem is NP-complete even in two simple cases: (1) scheduling jobs with uniform weights to an arbitrary number of processors and (2) scheduling jobs with weights equal to one or two units to two processors [20]. There are only three special cases for which there exist optimal polynomial-time algorithms.

Figure 2: Data-host and Compute-host Selection Problem.
These cases are (1) scheduling tree-structured job graphs with uniform computation costs on an arbitrary number of processors [10]; (2) scheduling arbitrary job graphs with uniform computation costs on two processors [12]; and (3) scheduling an interval-ordered job graph [8]. However, even in these cases, communication among the jobs of the parallel program is assumed to take zero time. The mapping of workflow jobs to resources in our case differs significantly from the general mapping problem. Given the replicated files and the numerous data-hosts, a proper selection of data-hosts and compute-hosts has to be made for every job in the workflow, such that dependent jobs benefit from the selection sets of their parents. In the best case, each job gets the optimal data-host set and compute-host pair for execution, yielding the optimal make-span and the minimum cost for the workflow. In the naive case, the set is chosen for each job irrespective of its dependencies.

Figure 2 shows a simple workflow with separate compute-hosts and separate data-hosts mapped to each job. Let us first consider the data transfers occurring due to the selection of a compute-host and data-host for job a and job c.

Since the jobs are mapped to two different compute-hosts, the intermediate files need to be transferred from C1 to C2 at cost dt_c12. Job a incurs cost dt_a and job c incurs cost dt_c for transferring data from Dhs1 and Dhs3, respectively. An optimal solution would therefore select a combination of C1, C2, Dhs1 and Dhs3 that minimizes the data-transfer cost (dt_c12 + dt_a + dt_c) together with the execution time on C1 and C2.
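As a rough illustration, the objective for this two-job example could be evaluated as in the sketch below; the numeric transfer costs and execution times are invented purely for illustration and are not from the paper.

    # Costs for the two-job example of Figure 2 (all numbers are assumed).
    transfer = {("Dhs1", "C1"): 4.0,   # dt_a : stage job a's input from Dhs1 to C1
                ("Dhs3", "C2"): 3.0,   # dt_c : stage job c's input from Dhs3 to C2
                ("C1", "C2"): 2.5}     # dt_c12: move intermediate files from C1 to C2
    exec_time = {("a", "C1"): 10.0, ("c", "C2"): 8.0}

    def objective():
        data_cost = sum(transfer.values())                        # dt_c12 + dt_a + dt_c
        run_time = exec_time[("a", "C1")] + exec_time[("c", "C2")]
        return data_cost, run_time                                # the pair to minimize

    print(objective())   # (9.5, 18.0)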

There are several ways to select the data-hosts Dhs1 and Dhs3 and the compute-hosts C1 and C2; we describe two techniques. The first is to consider the proximity, in terms of network distance, of the data-hosts to the set of compute-hosts. The second is to try to maximize the correlation between two sets of data-hosts, depicted as rs_12 in the figure, for jobs that require the same set of files.

The first technique always searches the entire data-host and compute-host set to find the one combination (an NP-complete search) that most closely satisfies the objective function. However, it does not take into account that jobs may share the same set of files; previous iterations may already have found a data-host and compute-host combination that can be applied to subsequent jobs. For example, jobs a and c might require the same set of files, so the compute-intensive selection of candidate data-host sets and compute-hosts can be avoided for the second job, as sketched below.
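The reuse opportunity just noted (jobs a and c requiring the same files) can be captured by memoizing the selection keyed on a job's input file set, as in this sketch; the caching scheme is an assumption for illustration, not the paper's algorithm.

    # Memoize the (data-host set, compute-host) selection by input file set, so a
    # later job with the same inputs skips the expensive search.
    selection_cache = {}   # frozenset of file names -> (data_hosts, compute_host)

    def select_for_job(required_files, expensive_search):
        key = frozenset(required_files)
        if key in selection_cache:
            return selection_cache[key]          # e.g. job c reuses job a's selection
        result = expensive_search(required_files)
        selection_cache[key] = result
        return result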

The second technique would almost always select the same set of data-hosts by trying to maximize their correlation, which also restricts the compute-host set to the proximity of the correlated data-host set. As the number of jobs grows, both the compute-hosts and the data-hosts become overloaded, increasing the waiting time of the jobs. Hence a proper selection algorithm should distribute the load while still satisfying the objective function. The problem is formally stated in Definition 1.
Definition 1. DDCSP(D, C, J, F, G, L). Given a set of data-hosts D, a set of compute-hosts C, a set of jobs J, a set of files F, the workflow DAG G and a workflow execution time bound L, the Dynamic Data-Compute Scheduling Problem (DDCSP) is the problem of finding obtainable mappings of the jobs J, where each job j_i may require more than one input file, to compute-hosts C, with replicated files in data-hosts D, so as to 1) minimize the make-span of the workflow within the time bound L, 2) minimize the total cost of execution, and 3) satisfy system and user constraints.
