About this Course
4.4
156 ratings
23 reviews
100% online

Start instantly and learn at your own schedule.

Flexible deadlines

Reset deadlines to fit your schedule.

Intermediate level

Approx. 13 hours to complete

Recommended: Four weeks of study, 4-8 hours/week depending on past experience with sequential programming in Java...

Available languages

English

Subtitles: English...

Skills you will gain

Distributed Computing, Actor Model, Parallel Computing, Reactive Programming

Course syllabus: what you will learn in this course

Week 1
1 hour to complete

Welcome to the Course!

Welcome to Distributed Programming in Java! This course is designed as a three-part series and covers a theme or body of knowledge through various video lectures, demonstrations, and coding projects....
1 video (Total 1 min), 5 readings, 1 quiz
1 video
5 readings
General Course Info (5 min)
Course Icon Legend (2 min)
Discussion Forum Guidelines (2 min)
Pre-Course Survey (10 min)
Mini Project 0: Setup (20 min)
4 hours to complete

DISTRIBUTED MAP REDUCE

In this module, we will learn about the MapReduce paradigm, and how it can be used to write distributed programs that analyze data represented as key-value pairs. A MapReduce program is defined via user-specified map and reduce functions, and we will learn how to write such programs in the Apache Hadoop and Spark projects. The MapReduce paradigm can be used to express a wide range of parallel algorithms. One example that we will study is computation of the Term Frequency – Inverse Document Frequency (TF-IDF) statistic used in document mining; this algorithm uses a fixed (non-iterative) number of map and reduce operations. Another MapReduce example that we will study is parallelization of the PageRank algorithm. This algorithm is an example of iterative MapReduce computations, and is also the focus of the mini-project associated with this module....
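For a flavor of the MapReduce style this module covers, here is a minimal word-count sketch in Apache Spark's Java API (Spark 2.x-style); the application name, master URL, and input path are placeholders for illustration, not part of the course materials.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class WordCount {
    public static void main(String[] args) {
        // Local Spark context for illustration only; a real cluster would use a different master URL.
        SparkConf conf = new SparkConf().setAppName("WordCount").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile("input.txt");  // hypothetical input file

            // "Map" phase: split lines into words and emit (word, 1) pairs.
            JavaPairRDD<String, Integer> ones = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1));

            // "Reduce" phase: sum the counts for each word (key).
            JavaPairRDD<String, Integer> counts = ones.reduceByKey(Integer::sum);

            counts.collect().forEach(pair ->
                System.out.println(pair._1() + " : " + pair._2()));
        }
    }
}
```

The same map/reduce structure scales to the module's TF-IDF and PageRank examples, where the reduce step aggregates per-document or per-page contributions instead of simple counts.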
6 videos (Total 49 min), 6 readings, 2 quizzes
6 videos
1.2 Hadoop Framework (8 min)
1.3 Spark Framework (11 min)
1.4 TF-IDF Example (7 min)
1.5 Page Rank Example (8 min)
Demonstration: Page Rank Algorithm in Spark (4 min)
6 readings
1.1 Lecture Summary (5 min)
1.2 Lecture Summary (5 min)
1.3 Lecture Summary (5 min)
1.4 Lecture Summary (5 min)
1.5 Lecture Summary (5 min)
Mini Project 1: Page Rank with Spark (15 min)
1 practice exercise
Module 1 Quiz (30 min)
Week 2
4 hours to complete

CLIENT-SERVER PROGRAMMING

In this module, we will learn about client-server programming, and how distributed Java applications can communicate with each other using sockets. Since communication via sockets occurs at the level of bytes, we will learn how to serialize objects into bytes in the sender process and to deserialize bytes into objects in the receiver process. Sockets and serialization provide the necessary background for the File Server mini-project associated with this module. We will also learn about Remote Method Invocation (RMI), which extends the notion of method invocation in a sequential program to a distributed programming setting. Likewise, we will learn about multicast sockets, which generalize the standard socket interface to enable a sender to send the same message to a specified set of receivers; this capability can be very useful for a number of applications, including news feeds, video conferencing, and multi-player games. Finally, we will learn about distributed publish-subscribe applications, and how they can be implemented using the Apache Kafka framework....
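As a rough illustration of the sockets-plus-serialization idea described above (not the mini-project's actual file server), here is a minimal, self-contained sketch using only the Java standard library; the Message class, the single connection, and the single-threaded accept are simplifications chosen for this sketch.

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;

public class SerializationSketch {
    // Any object exchanged over a socket stream must implement Serializable.
    static class Message implements Serializable {
        private static final long serialVersionUID = 1L;
        final String text;
        Message(String text) { this.text = text; }
    }

    public static void main(String[] args) throws Exception {
        // Bind first so the client below cannot race ahead of the server.
        ServerSocket listener = new ServerSocket(0);   // 0 = any free port
        int port = listener.getLocalPort();

        // Server side: accept one connection and deserialize the incoming object.
        Thread server = new Thread(() -> {
            try (Socket client = listener.accept();
                 ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
                Message m = (Message) in.readObject();          // bytes -> object
                System.out.println("Server received: " + m.text);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        server.start();

        // Client side: connect and serialize an object into the socket's byte stream.
        try (Socket socket = new Socket("localhost", port);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            out.writeObject(new Message("hello over a socket")); // object -> bytes
        }

        server.join();
        listener.close();
    }
}
```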
6 videos (Total 43 min), 6 readings, 2 quizzes
6 videos
2.2 Serialization/Deserialization (9 min)
2.3 Remote Method Invocation (6 min)
2.4 Multicast Sockets (7 min)
2.5 Publish-Subscribe Model (6 min)
Demonstration: File Server using Sockets (4 min)
6 readings
2.1 Lecture Summary (5 min)
2.2 Lecture Summary (5 min)
2.3 Lecture Summary (5 min)
2.4 Lecture Summary (5 min)
2.5 Lecture Summary (5 min)
Mini Project 2: File Server (15 min)
1 practice exercise
Module 2 Quiz (30 min)
15 minutes to complete

Talking to Two Sigma: Using it in the Field

Join Professor Vivek Sarkar as he talks with Two Sigma Managing Director, Jim Ward, and Senior Vice President, Dr. Eric Allen at their downtown Houston, Texas office about the importance of distributed programming....
2 videos (Total 13 min), 1 reading
2 videos
Industry Professional on Distribution - Dr. Eric Allen, Senior Vice President (6 min)
1 reading
About these Talks (2 min)
Week 3
4 hours to complete

MESSAGE PASSING

In this module, we will learn how to write distributed applications in the Single Program Multiple Data (SPMD) model, specifically by using the Message Passing Interface (MPI) library. MPI processes can send and receive messages using primitives for point-to-point communication, which are different in structure and semantics from message-passing with sockets. We will also learn about the message ordering and deadlock properties of MPI programs. Non-blocking communications are an interesting extension of point-to-point communications, since they can be used to avoid delays due to blocking and to also avoid deadlock-related errors. Finally, we will study collective communication, which can involve multiple processes in a manner that is more powerful than multicast and publish-subscribe operations. The knowledge of MPI gained in this module will be put to practice in the mini-project associated with this module on implementing a distributed matrix multiplication program in MPI....
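For a flavor of the SPMD point-to-point style described above, here is a sketch written against the mpiJava-style API used by libraries such as MPJ Express; the exact class and method names may differ in the MPI bindings used in the course, so treat this as an assumption-laden illustration rather than course code.

```java
import mpi.MPI;

// SPMD sketch: every process runs this same main(); behavior differs only by rank.
// Written against the mpiJava-style API (e.g., MPJ Express); other Java MPI bindings
// may name these methods differently.
public class PingSketch {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();

        int[] buf = new int[1];
        final int TAG = 0;

        if (rank == 0) {
            // Rank 0 sends one value to every other rank (blocking point-to-point sends).
            for (int dest = 1; dest < size; dest++) {
                buf[0] = dest * dest;
                MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, dest, TAG);
            }
        } else {
            // Every other rank blocks until its message from rank 0 arrives.
            MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, TAG);
            System.out.println("Rank " + rank + " received " + buf[0]);
        }

        MPI.Finalize();
    }
}
```

Each process would be launched by the MPI runtime (for example, one process per node or core); the same program text runs everywhere, and only the rank check decides who sends and who receives.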
6 videos (Total 49 min), 6 readings, 2 quizzes
6 videos
3.2 Point-to-Point Communication (9 min)
3.3 Message Ordering and Deadlock (8 min)
3.4 Non-Blocking Communications (7 min)
3.5 Collective Communication (7 min)
Demonstration: Distributed Matrix Multiply using Message Passing (9 min)
6 readings
3.1 Lecture Summary (7 min)
3.2 Lecture Summary (5 min)
3.3 Lecture Summary (5 min)
3.4 Lecture Summary (5 min)
3.5 Lecture Summary (5 min)
Mini Project 3: Matrix Multiply in MPI (15 min)
1 practice exercise
Module 3 Quiz (30 min)
Week 4
4 hours to complete

COMBINING DISTRIBUTION AND MULTITHREADING

In this module, we will study the roles of processes and threads as basic building blocks of parallel, concurrent, and distributed Java programs. With this background, we will then learn how to implement multithreaded servers for increased responsiveness in distributed applications written using sockets, and apply this knowledge in the mini-project on implementing a parallel file server using both multithreading and sockets. An analogous approach can also be used to combine MPI and multithreading, so as to improve the performance of distributed MPI applications. Distributed actors serve as yet another example of combining distribution and multithreading. A notable property of the actor model is that the same high-level constructs can be used to communicate among actors running in the same process and among actors in different processes; the difference between the two cases depends on the application configuration, rather than the application code. Finally, we will learn about the reactive programming model, and its suitability for implementing distributed service-oriented architectures using asynchronous events....
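To make the multithreaded-server idea concrete, here is a minimal echo-server sketch using a fixed thread pool and standard Java sockets; the port, pool size, and echo protocol are illustrative choices, and the mini-project's file server has its own requirements.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal multithreaded echo server: the accept loop stays responsive because each
// connection is handled by a worker thread from a pool.
public class ThreadedEchoServer {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);  // pool size is arbitrary here
        try (ServerSocket listener = new ServerSocket(8080)) {   // port chosen for illustration
            while (true) {
                Socket client = listener.accept();               // blocks until a client connects
                pool.execute(() -> handle(client));              // hand the connection to a worker
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);                    // echo each request line back
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```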
6 videos (Total 44 min), 7 readings, 2 quizzes
6 videos
4.2 Multithreaded Servers (6 min)
4.3 MPI and Threading (7 min)
4.4 Distributed Actors (8 min)
4.5 Distributed Reactive Programming (7 min)
Demonstration: Parallel File Server using Multithreading and Sockets (3 min)
7 readings
4.1 Lecture Summary (5 min)
4.2 Lecture Summary (5 min)
4.3 Lecture Summary (10 min)
4.4 Lecture Summary (5 min)
4.5 Lecture Summary (5 min)
Mini Project 4: Multi-Threaded File Server (15 min)
Exit Survey (10 min)
1 practice exercise
Module 4 Quiz (30 min)
20 minutes to complete

Continue Your Journey with the Specialization "Parallel, Concurrent, and Distributed Programming in Java"

The next two videos will showcase the importance of learning about Parallel Programming and Concurrent Programming in Java. Professor Vivek Sarkar will speak with industry professionals at Two Sigma about how the topics of our other two courses are utilized in the field....
2 videos (Total 10 min), 1 reading
2 videos
Industry Professional on Concurrency - Dr. Shams Imam, Software Engineer, Two Sigma (3 min)
1 reading
Our Other Course Offerings (10 min)
4.4
Career benefit

83%

got a tangible career benefit from this course
Career promotion

50%

got a pay increase or promotion

Top reviews

by DH, Sep 17th 2017

Great course. The first programming assignment was challenging and well worth the time invested; I would recommend it to anyone who wants to learn parallel programming in Java.

by FF, Jan 24th 2018

Excellent course! Vivek is an excellent instructor as well. I appreciate having taken the opportunity to learn from him.

Instructor

Vivek Sarkar

Professor
Department of Computer Science

About Rice University

Rice University is consistently ranked among the top 20 universities in the U.S. and the top 100 in the world. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy....

About the Parallel, Concurrent, and Distributed Programming in Java Specialization

Parallel, concurrent, and distributed programming underlies software in multiple domains, ranging from biomedical research to financial services. This specialization is intended for anyone with a basic knowledge of sequential programming in Java, who is motivated to learn how to write parallel, concurrent and distributed programs. Through a collection of three courses (which may be taken in any order or separately), you will learn foundational topics in Parallelism, Concurrency, and Distribution. These courses will prepare you for multithreaded and distributed programming for a wide range of computer platforms, from mobile devices to cloud computing servers. To see an overview video for this Specialization, click here! For an interview with two early-career software engineers on the relevance of parallel computing to their jobs, click here.

Acknowledgments: The instructor, Prof. Vivek Sarkar, would like to thank Dr. Max Grossman for his contributions to the mini-projects and other course material, Dr. Zoran Budimlic for his contributions to the quizzes, Dr. Max Grossman and Dr. Shams Imam for their contributions to the pedagogic PCDP library used in some of the mini-projects, and all members of the Rice Online team who contributed to the development of the course content (including Martin Calvi, Annette Howe, Seth Tyger, and Chong Zhou)....
Parallel, Concurrent, and Distributed Programming in Java

Frequently Asked Questions

  • Once you enroll for a Certificate, you get access to all course videos, quizzes, and programming assignments (if applicable). Peer review assignments can only be submitted and reviewed once your session has begun. If you choose to explore the course without purchasing, you may not be able to access certain assignments.

  • When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a Certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page; from there, you can print your Certificate or add it to your LinkedIn profile. If you only want to read and view the course content, you can audit the course for free.

  • No. The lecture videos, demonstrations and quizzes will be sufficient to enable you to complete this course. Students who enroll in the course and are interested in receiving a certificate will also have access to a supplemental coursebook with additional technical details.

  • Multicore Programming in Java: Parallelism and Multicore Programming in Java: Concurrency cover complementary aspects of multicore programming, and can be taken in any order. The Parallelism course covers the fundamentals of using parallelism to make applications run faster by using multiple processors at the same time. The Concurrency course covers the fundamentals of how parallel tasks and threads correctly mediate concurrent use of shared resources such as shared objects, network resources, and file systems.

Have more questions? Visit the Learner Help Center.