16 results for "Ghaderi, Javad"
Search Results
2. Scheduling Parallel-Task Jobs Subject to Packing and Placement Constraints.
- Author
-
Shafiee, Mehrnoosh and Ghaderi, Javad
- Subjects
APPROXIMATION algorithms, PARALLEL programming, SCHEDULING, PARALLEL processing, PROBLEM solving - Abstract
Jobs in modern parallel-computing frameworks, such as Hadoop and Spark, are subject to several constraints. In these frameworks, the data is typically distributed across a cluster of machines and processed in multiple stages. Therefore, tasks that belong to the same stage (job) have a collective completion time that is determined by the slowest task in the collection. Furthermore, a task's processing time is machine dependent, and each machine is capable of processing multiple tasks at a time subject to its capacity. In "Scheduling Parallel-Task Jobs Subject to Packing and Placement Constraints," by Mehrnoosh Shafiee and Javad Ghaderi, multiple approximation algorithms with theoretical guarantees are provided to solve the problem under preemptive and nonpreemptive scenarios. The numerical results, using a real traffic trace, demonstrate that the algorithms yield significant gains over the prior approaches. Motivated by modern parallel computing applications, we consider the problem of scheduling parallel-task jobs with heterogeneous resource requirements in a cluster of machines. Each job consists of a set of tasks that can be processed in parallel; however, the job is considered completed only when all its tasks finish their processing, which we refer to as the synchronization constraint. Furthermore, assignment of tasks to machines is subject to placement constraints, that is, each task can be processed only on a subset of machines, and processing times can also be machine dependent. Once a task is scheduled on a machine, it requires a certain amount of resource from that machine for the duration of its processing. A machine can process (pack) multiple tasks at the same time; however, the cumulative resource requirement of the tasks should not exceed the machine's capacity. Our objective is to minimize the weighted average of the jobs' completion times.
The problem, subject to synchronization, packing, and placement constraints, is NP-hard, and prior theoretical results only concern much simpler models. For the case that migration of tasks among the placement-feasible machines is allowed, we propose a preemptive algorithm with an approximation ratio of (6 + ϵ). In the special case that only one machine can process each task, we design an algorithm with an improved approximation ratio of 4. Finally, in the case that migrations (and preemptions) are not allowed, we design an algorithm with an approximation ratio of 24. Our algorithms use a combination of linear programming relaxation and greedy packing techniques. We present extensive simulation results, using a real traffic trace, that demonstrate that our algorithms yield significant gains over the prior approaches. Funding: This work was supported by the National Science Foundation [Grants CNS-1652115 and CNS-1717867]. Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2021.2198. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Hierarchical cooperation in ad hoc networks: optimal clustering and achievable throughput
- Author
-
Ghaderi, Javad, Xie, Liang-Liang, and Shen, Xuemin 'Sherman'
- Subjects
Algorithm, Mathematical optimization -- Evaluation, Algorithms -- Analysis, Ad hoc networks (Computer networks) -- Usage - Published
- 2009
4. Scheduling Coflows With Dependency Graph.
- Author
-
Shafiee, Mehrnoosh and Ghaderi, Javad
- Subjects
PRODUCTION scheduling, APPROXIMATION algorithms, SCHEDULING, SERVER farms (Computer network management) - Abstract
Applications in data-parallel computing typically consist of multiple stages. In each stage, a set of intermediate parallel data flows (a Coflow) is produced and transferred between servers to enable the next stage to start. While there has been much research on scheduling isolated coflows, the dependency between coflows in multi-stage jobs has been largely ignored. In this paper, we consider scheduling coflows of multi-stage jobs represented by general DAGs (Directed Acyclic Graphs) in a shared data center network, so as to minimize the total weighted completion time of jobs. This problem is significantly more challenging than traditional coflow scheduling, as scheduling even a single multi-stage job to minimize its completion time is shown to be NP-hard. In this paper, we propose a polynomial-time algorithm with an approximation ratio of O(μ log(m)/log(log(m))), where μ is the maximum number of coflows in a job and m is the number of servers. For the special case that the jobs' underlying dependency graphs are rooted trees, we modify the algorithm and improve its approximation ratio. To verify the performance of our algorithms, we present simulation results using real traffic traces that show up to 53% improvement over the prior approach. We conclude the paper by providing a result concerning an optimality gap for scheduling coflows with general DAGs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. On the Power of Randomization for Scheduling Real-Time Traffic in Wireless Networks.
- Author
-
Tsanikidis, Christos and Ghaderi, Javad
- Subjects
TRAFFIC patterns, SCHEDULING, MARKOV processes, DEADLINES - Abstract
In this paper, we consider the problem of scheduling real-time traffic in wireless networks under a conflict-graph interference model and single-hop traffic. The objective is to guarantee that at least a certain fraction of packets of each link are delivered within their deadlines, which is referred to as the delivery ratio. This problem has been studied before under restrictive frame-based traffic models, or under greedy maximal scheduling schemes such as LDF (Largest-Deficit First), which can lead to a poor delivery ratio for general traffic patterns. In this paper, we pursue a different approach through randomization over the choice of maximal links that can transmit at each time. We design randomized policies in collocated networks, multi-partite networks, and general networks that can achieve delivery ratios much higher than what is achievable by LDF. Further, our results apply to any traffic (arrival and deadline) process that evolves as an unknown positive recurrent Markov chain. Hence, this work is an improvement with respect to both efficiency and traffic assumptions compared to the past work. We further present extensive simulation results over various traffic patterns and interference graphs to illustrate the gains of our randomized policies over LDF variants. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. High-Throughput Bin Packing: Scheduling Jobs With Random Resource Demands in Clusters.
- Author
-
Psychas, Konstantinos and Ghaderi, Javad
- Subjects
DISTRIBUTED computing, SCHEDULING, PARALLEL algorithms - Abstract
We consider a natural scheduling problem which arises in many distributed computing frameworks. Jobs with diverse resource demands (e.g., memory requirements) arrive over time and must be served by a cluster of servers. To improve throughput and delay, the scheduler can pack as many jobs as possible in each server; however, the sum of the jobs' resource demands cannot exceed the server's capacity. Motivated by the increasing complexity of workloads in shared clusters, we consider a setting where jobs' resource demands belong to a very large set of diverse types, or in the extreme case even infinitely many types, i.e., resource demands are drawn from a general unknown distribution over a possibly continuous support. The application of classical scheduling approaches that crucially rely on a predefined finite set of types is discouraging in this high (or infinite) type setting. We first characterize a fundamental limit on the maximum throughput in such a setting. We then develop oblivious scheduling algorithms, based on Best-Fit and Universal Partitioning, that have low complexity and can achieve at least 1/2 and 2/3 of the maximum throughput, respectively, without knowledge of the resource demand distribution. Extensive simulation results, using both synthetic and real traffic traces, are presented to verify the performance of our algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
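The Best-Fit packing rule that the algorithms in the abstract above build on can be illustrated with a minimal sketch. This is not the paper's full policy (it omits queueing, job departures, and the Universal Partitioning variant), and all names are illustrative:

```python
def best_fit(jobs, capacities):
    """Place each arriving job (a scalar resource demand) on the feasible
    server with the least remaining capacity; return (job, server) pairs,
    with server=None when no server can fit the job."""
    remaining = list(capacities)
    placement = []
    for demand in jobs:
        # Servers that can still accommodate this demand.
        feasible = [i for i, r in enumerate(remaining) if r >= demand]
        if not feasible:
            placement.append((demand, None))
            continue
        # Best-Fit: choose the tightest remaining capacity among them.
        i = min(feasible, key=lambda j: remaining[j])
        remaining[i] -= demand
        placement.append((demand, i))
    return placement
```

For example, with two unit-capacity servers, jobs of sizes 0.5 and 0.3 pack onto the first server (the tighter fit after the first placement), and a subsequent 0.6 job goes to the second.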
7. Hybrid Scheduling in Heterogeneous Half- and Full-Duplex Wireless Networks.
- Author
-
Chen, Tingjun, Diakonikolas, Jelena, Ghaderi, Javad, and Zussman, Gil
- Subjects
IEEE 802.11 (Standard), SCHEDULING, COMPUTER network protocols, COMPUTER scheduling - Abstract
Full-duplex (FD) wireless is an attractive communication paradigm with high potential for improving network capacity and reducing delay in wireless networks. Despite significant progress on the physical layer development, the challenges associated with developing medium access control (MAC) protocols for heterogeneous networks composed of both legacy half-duplex (HD) and emerging FD devices have not been fully addressed. Therefore, we focus on the design and performance evaluation of scheduling algorithms for infrastructure-based heterogeneous HD-FD networks (composed of HD and FD users). We first show that centralized Greedy Maximal Scheduling (GMS) is throughput-optimal in heterogeneous HD-FD networks. We propose the Hybrid-GMS (H-GMS) algorithm, a distributed implementation of GMS that combines GMS and a queue-based random-access mechanism. We prove that H-GMS is throughput-optimal. Moreover, we analyze the delay performance of H-GMS by deriving lower bounds on the average queue length. We further demonstrate the benefits of upgrading HD nodes to FD nodes in terms of throughput gains for individual nodes and the whole network. Finally, we evaluate the performance of H-GMS and its variants in terms of throughput, delay, and fairness between FD and HD users via extensive simulations. We show that in heterogeneous HD-FD networks, H-GMS achieves 16–30× better delay performance and improves fairness between HD and FD users by up to 50% compared with the fully decentralized Q-CSMA algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. Opinion dynamics in social networks with stubborn agents: Equilibrium and convergence rate
- Author
-
Ghaderi, Javad and Srikant, R.
- Published
- 2014
- Full Text
- View/download PDF
9. Randomized Algorithms for Scheduling Multi-Resource Jobs in the Cloud.
- Author
-
Psychas, Konstantinos and Ghaderi, Javad
- Subjects
CLOUD computing, TASK analysis, COMPLEXITY (Philosophy) - Abstract
We consider the problem of scheduling jobs with multiple-resource requirements (CPU, memory, and disk) in a distributed server platform, motivated by data-parallel and cloud computing applications. Jobs arrive dynamically over time and require a certain amount of multiple resources for the duration of their service. When a job arrives, it is queued and later served by one of the servers that has sufficient remaining resources to serve it. The scheduling of jobs is subject to two constraints: 1) packing constraints: multiple jobs can be served simultaneously by a single server if their cumulative resource requirement does not exceed the capacity of the server, and 2) non-preemption: to avoid costly preemptions, once a job is scheduled in a server, its service cannot be interrupted or migrated to another server. Prior scheduling algorithms rely either on bin packing heuristics, which have low complexity but can have poor throughput, or on MaxWeight solutions, which can achieve maximum throughput but repeatedly require solving or approximating instances of a hard combinatorial problem (Knapsack) over time. In this paper, we propose a randomized scheduling algorithm for placing jobs in servers that can achieve maximum throughput with low complexity. The algorithm is naturally distributed, and each queue and each server needs to perform only a constant number of operations per time unit. Extensive simulation results, using both synthetic and real traffic traces, are presented to evaluate the throughput and delay performance compared to prior algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
10. An Improved Bound for Minimizing the Total Weighted Completion Time of Coflows in Datacenters.
- Author
-
Shafiee, Mehrnoosh and Ghaderi, Javad
- Subjects
PARALLEL programs (Computer programs), SERVER farms (Computer network management), ABSTRACTION (Computer science) - Abstract
In data-parallel computing frameworks, intermediate parallel data is often produced at various stages and needs to be transferred among servers in the datacenter network (e.g., the shuffle phase in MapReduce). A stage often cannot start or be completed unless all the required data pieces from the preceding stage are received. Coflow is a recently proposed networking abstraction to capture such communication patterns. We consider the problem of efficiently scheduling coflows with release dates in a shared datacenter network so as to minimize the total weighted completion time of coflows. Several heuristics have been proposed recently to address this problem, as well as a few polynomial-time approximation algorithms with provable performance guarantees. Our main result in this paper is a polynomial-time deterministic algorithm that improves the prior known results. Specifically, we propose a deterministic algorithm with an approximation ratio of 5, which improves the prior best known ratio of 12. For the special case when all coflows are released at time zero, our deterministic algorithm obtains an approximation ratio of 4, which improves the prior best known ratio of 8. The key ingredient of our approach is an improved linear program formulation for sorting the coflows, followed by a simple list scheduling policy. Extensive simulation results, using both synthetic and real traffic traces, are presented that verify the performance of our algorithm and show improvement over the prior approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
11. Adaptive TTL-Based Caching for Content Delivery.
- Author
-
Basu, Soumya, Sundarrajan, Aditya, Ghaderi, Javad, Shakkottai, Sanjay, and Sitaraman, Ramesh
- Subjects
CONTENT delivery networks, CACHE memory, STOCHASTIC approximation, HETEROGENEITY, STATIONARY processes - Abstract
Content delivery networks (CDNs) cache and serve a majority of the user-requested content on the Internet. Designing caching algorithms that automatically adapt to the heterogeneity, burstiness, and non-stationary nature of real-world content requests is a major challenge and is the focus of our work. While there is much work on caching algorithms for stationary request traffic, the work on non-stationary request traffic is very limited. Consequently, most prior models are inaccurate for non-stationary production CDN traffic. We propose two TTL-based caching algorithms that provide provable performance guarantees for request traffic that is bursty and non-stationary. The first algorithm, called d-TTL, dynamically adapts a TTL parameter using stochastic approximation. Given a feasible target hit rate, we show that d-TTL converges to its target value for a general class of bursty traffic that allows Markov dependence over time and non-stationary arrivals. The second algorithm, called f-TTL, uses two caches, each with its own TTL. The first-level cache adaptively filters out non-stationary traffic, while the second-level cache stores frequently-accessed stationary traffic. Given feasible targets for both the hit rate and the expected cache size, f-TTL asymptotically achieves both targets. We evaluate both d-TTL and f-TTL using an extensive trace containing more than 500 million requests from a production CDN server. We show that both d-TTL and f-TTL converge to their hit rate targets with an error of about 1.3%. However, f-TTL requires a significantly smaller cache size than d-TTL to achieve the same hit rate, since it effectively filters out non-stationary content. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
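The abstract above describes d-TTL as adapting a TTL parameter by stochastic approximation toward a target hit rate. A hedged one-step sketch of that kind of update follows; the paper's actual rule, step sizes, and clipping bounds may differ, and all parameter names here are illustrative:

```python
def d_ttl_update(ttl, hit, target, step=1.0, max_ttl=3600.0):
    """One illustrative stochastic-approximation step for a cache TTL.

    A miss (hit=False) pushes the TTL up, a hit pushes it down, so the
    long-run hit rate drifts toward `target`: in equilibrium the expected
    hit indicator equals the target hit rate.
    """
    ttl += step * (target - (1.0 if hit else 0.0))
    # Keep the TTL within a sensible range (bounds are assumptions).
    return min(max(ttl, 0.0), max_ttl)
```

For example, with a target hit rate of 0.6 and step 1.0, a hit lowers a 10-second TTL to 9.6 seconds, while a miss raises it to 10.6 seconds.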
12. Maximizing Broadcast Throughput Under Ultra-Low-Power Constraints.
- Author
-
Chen, Tingjun, Ghaderi, Javad, Rubenstein, Dan, and Zussman, Gil
- Subjects
BROADCAST engineering, WIRELESS communications, ENERGY consumption, BANDWIDTHS, COMPUTER network protocols - Abstract
Wireless object-tracking applications are gaining popularity and will soon utilize emerging ultra-low-power device-to-device communication. However, severe energy constraints require much more careful accounting of energy usage than what prior art provides. In particular, the available energy, the differing power consumption levels for listening, receiving, and transmitting, as well as the limited control bandwidth must all be considered. Therefore, we formulate the problem of maximizing the throughput among a set of heterogeneous broadcasting nodes with differing power consumption levels, each subject to a strict ultra-low-power budget. We obtain the oracle throughput (i.e., maximum throughput achieved by an oracle) and use Lagrangian methods to design EconCast, a simple asynchronous distributed protocol in which nodes transition between sleep, listen, and transmit states, and dynamically change the transition rates. EconCast can operate in groupput or anyput mode to respectively maximize two alternative throughput measures. We show that EconCast approaches the oracle throughput. The performance is also evaluated numerically and via extensive simulations, and it is shown that EconCast outperforms prior art by 6×–17× under realistic assumptions. Moreover, we evaluate EconCast's latency performance and consider design tradeoffs when operating in groupput and anyput modes. Finally, we implement EconCast using the TI eZ430-RF2500-SEH energy harvesting nodes and experimentally show that in realistic environments it obtains 57%–77% of the achievable throughput. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
13. A Simple Congestion-Aware Algorithm for Load Balancing in Datacenter Networks.
- Author
-
Shafiee, Mehrnoosh and Ghaderi, Javad
- Subjects
DATA flow computing, COMPUTER networks - Abstract
We study the problem of load balancing in datacenter networks, namely, assigning the end-to-end data flows among the available paths in order to efficiently balance the load in the network. The solutions used today rely typically on an equal-cost multipath (ECMP) mechanism, which essentially attempts to balance the load in the network by hashing the flows to the available shortest paths. However, it is well-known that ECMP performs poorly when there is asymmetry either in the network topology or the flow sizes, and thus, there has been much interest recently in alternative mechanisms to address these shortcomings. In this paper, we consider a general network topology where each link has a cost, which is a convex function of the link congestion. Flows among the various source–destination pairs are generated dynamically over time, each with a size (bandwidth requirement) and a duration. Once a flow is assigned to a path in the network, it consumes bandwidth equal to its size from all the links along its path for its duration. We consider low-complexity congestion-aware algorithms that assign the flows to the available paths in an online fashion and without splitting. Specifically, we propose a myopic algorithm that assigns every arriving flow to an available path with the minimum marginal cost (i.e., the path which yields the minimum increase in the network cost after assignment) and prove that it asymptotically minimizes the total network cost. Extensive simulation results are presented to verify the performance of the myopic algorithm under a wide range of traffic conditions and under different datacenter architectures. Furthermore, we propose randomized versions of our myopic algorithm, which have much lower complexity, and empirically show that they can still perform very well in symmetric network topologies. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
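The myopic rule in the abstract above, assigning each arriving flow to the path with minimum marginal cost under a convex link-cost function, can be sketched as follows. The quadratic cost and the data structures are illustrative assumptions, not the paper's exact formulation:

```python
def assign_flow(paths, link_load, flow_size, cost=lambda x: x * x):
    """Route an arriving flow on the candidate path whose assignment yields
    the smallest increase in total network cost, then commit its bandwidth.

    `paths` is a list of candidate paths (each a list of link ids), and
    `link_load` maps link id -> current load. Cost is convex per link
    (quadratic here, purely as an illustration)."""
    def marginal(path):
        # Increase in total cost if the flow were placed on this path.
        return sum(cost(link_load[e] + flow_size) - cost(link_load[e])
                   for e in path)
    best = min(paths, key=marginal)
    for e in best:
        link_load[e] += flow_size  # the flow consumes bandwidth on its path
    return best
```

With convex costs, the marginal cost of adding load grows with congestion, so this rule naturally steers flows away from already-loaded links even when the alternative path is longer.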
14. The Impact of Access Probabilities on the Delay Performance of Q-CSMA Algorithms in Wireless Networks.
- Author
-
Ghaderi, Javad and Srikant, R.
- Subjects
WIRELESS communications, TELECOMMUNICATION systems, DATA transmission systems, ALGORITHMS, ELECTRONIC systems - Abstract
It has been recently shown that queue-based carrier sense multiple access (CSMA) algorithms are throughput-optimal. In these algorithms, each link of the wireless network has two parameters: a transmission probability and an access probability. The transmission probability of each link is chosen as an appropriate function of its queue length; however, the access probabilities are simply treated as arbitrary values since they do not play any role in establishing the network stability. In this paper, we show that the access probabilities control the mixing time of the CSMA Markov chain and, as a result, affect the delay performance of the CSMA algorithm. In particular, we derive formulas that relate the mixing time to the access probabilities and use these to develop the following guideline for choosing access probabilities: each link i should choose its access probability equal to 1/(d_i + 1), where d_i is the number of links that interfere with link i. Simulation results show that this choice of access probabilities results in good delay performance. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
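The guideline from the abstract above, that each link i choose access probability 1/(d_i + 1) where d_i is its number of interfering links, is a one-line computation over the conflict graph. The dictionary-of-sets representation is an illustrative assumption:

```python
def access_probabilities(interference):
    """Compute the recommended Q-CSMA access probability 1/(d_i + 1)
    for each link, where d_i is the number of links interfering with i.

    `interference` maps each link id to the set of links it conflicts with
    (i.e., its neighborhood in the conflict graph)."""
    return {link: 1.0 / (len(nbrs) + 1)
            for link, nbrs in interference.items()}
```

For instance, a link with two interfering neighbors gets access probability 1/3, while its degree-one neighbors each get 1/2.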
15. supp1-3223315.pdf
- Author
-
Ghaderi, Javad, primary
- Full Text
- View/download PDF
16. High risky behavior and HIV/AIDS knowledge amongst street children in Shiraz, Iran.
- Author
-
Motazedian N, Sayadi M, Beheshti S, Zarei N, and Ghaderi J
- Abstract
Background: Street children around the world engage in a wide range of risky behaviors. The most common ones include risky sexual behavior, substance and alcohol abuse, and violence. This study aimed to assess risk behaviors and HIV knowledge of street children in Shiraz. Methods: A total of 329 street children (7-18 years of age who spend days or nights on streets, with or without their family, to earn money) were interviewed through 2014-2016 in Shiraz. Data were collected through a structured interview about high-risk behaviors and HIV/AIDS knowledge based on a form and questionnaire. Street children were asked to identify HIV/AIDS modes of transmission. All correct answers were scored as one (1); incorrect, "don't know", and missing responses were scored as zero. The data were analyzed by SPSS software 16 (SPSS, Inc., Chicago, USA) using the independent t-test, chi-square test, and Pearson's correlation test. A P value < 0.05 was considered statistically significant. Results: The mean ± SD age was 13.46 ± 3.09 years. A total of 86.6% of the children were boys, and 97.6% reported staying with their parents. Street children reported their sleeping place as follows: with their parents (n=312, 94.8%), sharing accommodation with other kids (n=13, 4%), sleeping in parks (n=2, 7%), and one with relatives. The frequencies of smoking, alcohol drinking, and drug abuse were 35 (10.6%), 47 (14.3%), and 6 (1.8%), respectively. A total of 43 (13.1%) street children reported sexual activity; among them, 30 (9.1%) had sexual activity without a condom. The mean ± SD HIV/AIDS knowledge score of street children was 4.1 ± 3.9. Conclusion: Special programs should be implemented in order to reduce high-risk behavior among street children.
Interventions should include increasing awareness about alcohol and drug abuse, HIV/AIDS knowledge, and sexual and verbal abuse through an organized system with the help of peer education. Competing Interests: None declared. (© 2020 Iran University of Medical Sciences.)
- Published
- 2020
- Full Text
- View/download PDF