About this Specialization

100% online courses

Start instantly and learn at your own schedule.

Flexible schedule

Set and maintain flexible deadlines.

Intermediate Level

Hours to complete

Approx. 6 months to complete

Suggested 7 hours/week

Available languages

English

Subtitles: English...

Skills you will gain

Apache Hadoop, Recommender Systems, MapReduce, Apache Spark

How the Specialization Works

Take courses

A Coursera Specialization is a series of courses that helps you master a skill. To begin, enroll in the Specialization directly, or review its courses and choose the one you'd like to start with. When you subscribe to a course that is part of a Specialization, you're automatically subscribed to the full Specialization. It's okay to complete just one course: you can pause your learning or end your subscription at any time. Visit your learner dashboard to track your course enrollments and your progress.

Hands-on project

Every Specialization includes a hands-on project. You'll need to successfully finish the project(s) to complete the Specialization and earn your Certificate. If the Specialization includes a separate course for the hands-on project, you'll need to finish each of the other courses before you can start it.

Earn a Certificate

When you finish every course and complete the hands-on project, you'll earn a Certificate that you can share with prospective employers and your professional network.

There are 5 courses in this Specialization

Course 1

Big Data Essentials: HDFS, MapReduce and Spark RDD

4.1
223 ratings
65 reviews
Have you ever heard about such technologies as HDFS, MapReduce and Spark? Always wanted to learn these new tools but missed concise starting material? Don't miss this course either! In this 6-week course you will:
- learn some basic technologies of the modern Big Data landscape, namely HDFS, MapReduce and Spark;
- be guided both through systems internals and their applications;
- learn about distributed file systems, why they exist and what function they serve;
- grasp the MapReduce framework, a workhorse for many modern Big Data applications;
- apply the framework to process texts and solve sample business cases;
- learn about Spark, the next-generation computational framework;
- build a strong understanding of Spark's basic concepts;
- develop skills to apply these tools to creating solutions in finance, social networks, telecommunications and many other fields.

Your learning experience will be as close to real life as possible, with the chance to evaluate your practical assignments on a real cluster. No mocking: a friendly, considerate atmosphere makes the process of your learning smooth and enjoyable. Get ready to work with real datasets alongside real masters!

Special thanks to:
- Prof. Mikhail Roytberg, APT dept., MIPT, who was the initial reviewer of the project and the supervisor and mentor of half of the BigData team. He was the one who helped to get this show on the road.
- Oleg Sukhoroslov (PhD, Senior Researcher at IITP RAS), who has been teaching MapReduce, Hadoop and friends since 2008. Now he is leading the infrastructure team.
- Oleg Ivchenko (PhD student, APT dept., MIPT), Pavel Akhtyamov (MSc student, APT dept., MIPT) and Vladimir Kuznetsov (Assistant at P.G. Demidov Yaroslavl State University), superbrains who have developed and now maintain the infrastructure used for the practical assignments in this course.
- Asya Roitberg, Eugene Baulin and Marina Sudarikova, who never sleep, babysitting this course day and night to make your learning experience productive, smooth and exciting.
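For a concrete flavor of the MapReduce-style processing this course introduces, below is a minimal word-count sketch in the Spark RDD API. It is an illustrative example only, not a course assignment; it assumes a local PySpark installation and a hypothetical input.txt file.

    # A minimal Spark RDD word count, the classic MapReduce-style example.
    # Assumes a local PySpark installation and a hypothetical input.txt file.
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "WordCount")

    counts = (
        sc.textFile("input.txt")                   # read lines
          .flatMap(lambda line: line.split())      # map: split lines into words
          .map(lambda word: (word, 1))             # emit (word, 1) pairs
          .reduceByKey(lambda a, b: a + b)         # reduce: sum counts per word
    )

    for word, count in counts.take(10):
        print(word, count)

    sc.stop()

The flatMap / map / reduceByKey chain mirrors the map and reduce phases discussed in the course description above.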
Course 2

Big Data Analysis: Hive, Spark SQL, DataFrames and GraphFrames

3.9
70 ratings
14 reviews
No doubt working with huge data volumes is hard, but to move a mountain you have to deal with a lot of small stones. So why strain yourself? Using MapReduce and Spark you tackle the issue only partially, which leaves room for high-level tools. Stop struggling to make your big data workflow productive and efficient; make use of the tools we are offering you.

This course will teach you how to:
- warehouse your data efficiently using Hive, Spark SQL and Spark DataFrames;
- work with large graphs, such as social graphs or networks;
- optimize your Spark applications for maximum performance.

More precisely, you will master:
- writing and executing Hive and Spark SQL queries;
- reasoning about how the queries are translated into actual execution primitives (be it MapReduce jobs or Spark transformations);
- organizing your data in Hive to optimize disk space usage and execution times;
- constructing Spark DataFrames and using them to write ad-hoc analytical jobs easily;
- processing large graphs with Spark GraphFrames;
- debugging, profiling and optimizing Spark application performance.

Still in doubt? Check this out. Become a data ninja by taking this course!

Special thanks to:
- Prof. Mikhail Roytberg, APT dept., MIPT, who was the initial reviewer of the project and the supervisor and mentor of half of the BigData team. He was the one who helped to get this show on the road.
- Oleg Sukhoroslov (PhD, Senior Researcher at IITP RAS), who has been teaching MapReduce, Hadoop and friends since 2008. Now he is leading the infrastructure team.
- Oleg Ivchenko (PhD student, APT dept., MIPT), Pavel Akhtyamov (MSc student, APT dept., MIPT) and Vladimir Kuznetsov (Assistant at P.G. Demidov Yaroslavl State University), superbrains who have developed and now maintain the infrastructure used for the practical assignments in this course.
- Asya Roitberg, Eugene Baulin and Marina Sudarikova, who never sleep, babysitting this course day and night to make your learning experience productive, smooth and exciting.
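As a rough illustration of the ad-hoc analytics described above (not course material), the sketch below runs the same aggregation once through the DataFrame API and once through Spark SQL; the orders.csv file and its country/amount columns are hypothetical.

    # A minimal Spark SQL / DataFrame sketch; orders.csv and its columns
    # (country, amount) are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("AdHocAnalysis").getOrCreate()
    orders = spark.read.csv("orders.csv", header=True, inferSchema=True)

    # DataFrame API: revenue per country, highest first.
    (orders.groupBy("country")
           .agg(F.sum("amount").alias("revenue"))
           .orderBy(F.desc("revenue"))
           .show(10))

    # The same query expressed as SQL against a temporary view.
    orders.createOrReplaceTempView("orders")
    spark.sql("""
        SELECT country, SUM(amount) AS revenue
        FROM orders
        GROUP BY country
        ORDER BY revenue DESC
        LIMIT 10
    """).show()

    spark.stop()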
Course 3

Big Data Applications: Machine Learning at Scale

3.9
48 ratings
12 reviews
Machine learning is transforming the world around us. To become successful, you'd better know what kinds of problems can be solved with machine learning, and how they can be solved. Don't know where to start? The answer is one button away.

During this course you will:
- identify practical problems which can be solved with machine learning;
- build, tune and apply linear models with Spark MLlib;
- understand methods of text processing;
- fit decision trees and boost them with ensemble learning;
- construct your own recommender system.

As a practical assignment you will:
- build and apply linear models for classification and regression tasks;
- learn how to work with texts;
- automatically construct decision trees and improve their performance with ensemble learning;
- finally, build your own recommender system!

With these skills, you will be able to tackle many practical machine learning tasks. We provide the tools; you choose the place of application to make this world of machines more intelligent.

Special thanks to:
- Prof. Mikhail Roytberg, APT dept., MIPT, who was the initial reviewer of the project and the supervisor and mentor of half of the BigData team. He was the one who helped to get this show on the road.
- Oleg Sukhoroslov (PhD, Senior Researcher at IITP RAS), who has been teaching MapReduce, Hadoop and friends since 2008. Now he is leading the infrastructure team.
- Oleg Ivchenko (PhD student, APT dept., MIPT), Pavel Akhtyamov (MSc student, APT dept., MIPT) and Vladimir Kuznetsov (Assistant at P.G. Demidov Yaroslavl State University), superbrains who have developed and now maintain the infrastructure used for the practical assignments in this course.
- Asya Roitberg, Eugene Baulin and Marina Sudarikova, who never sleep, babysitting this course day and night to make your learning experience productive, smooth and exciting.
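For orientation only, here is a minimal sketch of the Spark MLlib linear-model workflow mentioned above; the data.csv file, the f1/f2/f3 feature columns and the 0/1 label column are hypothetical, and this is not an actual course assignment.

    # A minimal Spark MLlib logistic regression sketch; data.csv and its
    # columns (f1, f2, f3 features and a 0/1 label) are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("LinearModelDemo").getOrCreate()
    df = spark.read.csv("data.csv", header=True, inferSchema=True)

    # Pack the raw numeric columns into the single vector column MLlib expects.
    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

    # Fit a linear (logistic regression) model and score the held-out split.
    model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
    model.transform(test).select("label", "prediction").show(5)

    spark.stop()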
Course 4

Big Data Applications: Real-Time Streaming

4.7
3 ratings
There are a significant number of tasks where we need not just to process an enormous volume of data but to process it as quickly as possible. Delays in tsunami prediction can cost people's lives. Delays in traffic jam prediction cost extra time. Advertisements based on users' recent activity are ten times more popular.

However, stream processing techniques alone are not enough to create a complete real-time system. For example, to create a recommendation system we need storage that allows us to store and fetch data for a user with minimal latency. These databases should be able to store hundreds of terabytes of data, handle billions of requests per day and have 100% uptime. NoSQL databases are commonly used to solve this challenging problem.

After you finish this course, you will master stream processing systems and NoSQL databases. You will also learn how to use such popular and powerful systems as Kafka, Cassandra and Redis. To get the most out of this course, you need to know Hadoop and SQL, and you should have a working knowledge of bash, Python and Spark. Do you want to learn how to build Big Data applications that can withstand modern challenges? Jump right in!
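To give a sense of the low-latency serving pattern described above, here is a minimal sketch that consumes events from Kafka and keeps per-user counters in Redis. It is not part of the course; the "user-clicks" topic, the event format, the key layout and the connection settings are assumptions, and it requires the kafka-python and redis client packages plus running Kafka and Redis instances.

    # A minimal stream-to-NoSQL sketch: consume click events from Kafka and keep
    # per-user counters in Redis for low-latency reads. The "user-clicks" topic,
    # event format and key layout are hypothetical.
    import json

    import redis
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "user-clicks",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    store = redis.Redis(host="localhost", port=6379)

    for message in consumer:
        event = message.value                        # e.g. {"user_id": "42"}
        store.incr(f"clicks:{event['user_id']}")     # O(1) update, cheap to read back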

Instructors

Pavel Klemenkov

Chief Data Scientist
NVIDIA

Ivan Mushketyk

Software Engineer, ConsenSys

Evgeny Frolov

Data Scientist, PhD Student @Skoltech
Computational and Data Intensive Science and Engineering

Ilya Trofimov

Principal Data Scientist
Yandex

Ivan Puzyrevskiy

Technical Team Lead

Alexey A. Dral

Founder and Chief Executive Officer
BigData Team

Pavel Mezentsev

Senior Data Scientist
PulsePoint inc

Vladislav Goncharenko

DCAM MIPT, Skoltech

Artyom Vybornov

Lead software engineer at Rambler&Co

Industry Partners

About Yandex

Yandex is a technology company that builds intelligent products and services powered by machine learning. Our goal is to help consumers and businesses better navigate the online and offline world....

Frequently Asked Questions

  • Yes! To get started, click the course card of the course that interests you and enroll. You can enroll and complete the course to earn a shareable Certificate, or you can audit it to view the course materials for free. When you subscribe to a course that is part of a Specialization, you're automatically subscribed to the full Specialization. Visit your learner dashboard to track your progress.

  • This course is completely online, so there's no need to show up to a classroom in person. You can access your lectures, readings and assignments anytime and anywhere via the web or your mobile device.

  • This Specialization doesn't carry university credit, but some universities may choose to accept Specialization Certificates for credit. Check with your institution to learn more.

  • 6 months

  • - Programming experience in Python. It is required to complete the programming assignments.

    - Unix basics. Since the technologies covered throughout the Specialization operate in a Unix environment, we expect you to have a basic understanding of the subject; things like processes and files are assumed to be familiar to the learner.

    - Basic linear algebra and probability theory. To grasp the "Big Data Applications: Machine Learning at Scale" course, you should be familiar with the math primer or complete an introductory course on machine learning.

  • You are expected to take the courses in order, from the first to the last.

  • You will be able to present your portfolio project (Capstone project) to potential employers.

More questions? Visit the Learner Help Center.