Overview
Learn scalable data management, evaluate big data technologies, and design effective visualizations.
This Specialization covers intermediate topics in data science. You will gain hands-on experience with scalable SQL and NoSQL data management solutions, data mining algorithms, and practical statistical and machine learning concepts. You will also learn to visualize data and communicate results, and you’ll explore legal and ethical issues that arise in working with big data. In the final Capstone Project, developed in partnership with the digital internship platform Coursolve, you’ll apply your new skills to a real-world data science project.
Syllabus
Course 1: Data Manipulation at Scale: Systems and Algorithms
- Offered by University of Washington.
Course 2: Practical Predictive Analytics: Models and Methods
- Offered by University of Washington.
Course 3: Communicating Data Science Results
- Offered by University of Washington.
Course 4: Data Science at Scale - Capstone Project
- Offered by University of Washington.
Courses
- Practical Predictive Analytics: Models and Methods
Statistical experiment design and analytics are at the heart of data science. In this course you will design statistical experiments and analyze the results using modern methods. You will also explore the common pitfalls in interpreting statistical arguments, especially those associated with big data. This course will help you internalize a core set of practical and effective machine learning methods and concepts, and apply them to solve real-world problems.
Learning Goals: After completing this course, you will be able to:
1. Design effective experiments and analyze the results
2. Use resampling methods to make clear and bulletproof statistical arguments without invoking esoteric notation (a brief sketch of this idea follows the list below)
3. Explain and apply a core set of classification methods of increasing complexity (rules, trees, random forests), and associated optimization methods (gradient descent and variants)
4. Explain and apply a set of unsupervised learning concepts and methods
5. Describe the common idioms of large-scale graph analytics, including structural query, traversals and recursive queries, PageRank, and community detection
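The following is a minimal illustrative sketch, not course material, of the kind of resampling argument referenced above: a bootstrap percentile interval for the difference in means between two samples. The function name, data, and parameters are assumptions made for illustration only.

    import random

    def bootstrap_diff_in_means(group_a, group_b, n_resamples=10_000, seed=0):
        """Estimate a 95% confidence interval for the difference in means
        between two samples using bootstrap resampling, with no closed-form
        test statistic required."""
        rng = random.Random(seed)
        diffs = []
        for _ in range(n_resamples):
            # Resample each group with replacement and record the difference in means.
            resampled_a = [rng.choice(group_a) for _ in group_a]
            resampled_b = [rng.choice(group_b) for _ in group_b]
            diffs.append(sum(resampled_a) / len(resampled_a) -
                         sum(resampled_b) / len(resampled_b))
        diffs.sort()
        # Percentile interval: middle 95% of the resampled differences.
        lo = diffs[int(0.025 * n_resamples)]
        hi = diffs[int(0.975 * n_resamples)]
        return lo, hi

    # Hypothetical A/B-test data: conversion outcomes for two site variants.
    variant_a = [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
    variant_b = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
    print(bootstrap_diff_in_means(variant_a, variant_b))

If the resulting interval excludes zero, that is a simple, simulation-based argument that the two variants differ, which is the spirit of the "without esoteric notation" goal.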
- Data Manipulation at Scale: Systems and Algorithms
Data analysis has replaced data acquisition as the bottleneck to evidence-based decision making --- we are drowning in it. Extracting knowledge from large, heterogeneous, and noisy datasets requires not only powerful computing resources, but also the programming abstractions to use them effectively. The abstractions that emerged in the last decade blend ideas from parallel databases, distributed systems, and programming languages to create a new class of scalable data analytics platforms that form the foundation for data science at realistic scales. In this course, you will learn the landscape of relevant systems, the principles on which they rely, their tradeoffs, and how to evaluate their utility against your requirements. You will learn how practical systems were derived from the frontier of research in computer science and what systems are on the horizon. Cloud computing, SQL and NoSQL databases, MapReduce and the ecosystem it spawned, Spark and its contemporaries, and specialized systems for graphs and arrays will be covered. You will also learn the history and context of data science, the skills, challenges, and methodologies the term implies, and how to structure a data science project.
Learning Goals: At the end of this course, you will be able to:
1. Describe common patterns, challenges, and approaches associated with data science projects, and what makes them different from projects in related fields.
2. Identify and use the programming models associated with scalable data manipulation, including relational algebra, MapReduce, and other data flow models.
3. Use database technology adapted for large-scale analytics, including the concepts driving parallel databases, parallel query processing, and in-database analytics.
4. Evaluate key-value stores and NoSQL systems, describe their tradeoffs with comparable systems, the details of important examples in the space, and future trends.
5. “Think” in MapReduce to effectively write algorithms for systems including Hadoop and Spark, and write programs in Spark. You will understand their limitations, design details, their relationship to databases, and their associated ecosystem of algorithms, extensions, and languages. (A brief illustrative sketch follows this list.)
6. Describe the landscape of specialized Big Data systems for graphs, arrays, and streams.
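To make the "think in MapReduce" idiom concrete, here is a minimal sketch in plain Python that imitates the map, shuffle, and reduce phases on a toy word-count problem. It is not taken from the course; a real system such as Hadoop or Spark would distribute these phases across a cluster rather than run them in one process.

    from collections import defaultdict

    def map_phase(document):
        """Mapper: emit a (word, 1) pair for every word in a document."""
        for word in document.lower().split():
            yield (word, 1)

    def shuffle(pairs):
        """Shuffle: group all emitted values by key, as the framework would
        do between the map and reduce phases."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        """Reducer: sum the counts for one word."""
        return key, sum(values)

    documents = ["data science at scale", "data manipulation at scale"]  # toy input
    mapped = (pair for doc in documents for pair in map_phase(doc))
    counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
    print(counts)  # {'data': 2, 'science': 1, 'at': 2, 'scale': 2, 'manipulation': 1}

The same mapper and reducer logic carries over almost directly to Hadoop streaming jobs or Spark transformations, which is why "thinking" in terms of these phases is the key skill.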
- Communicating Data Science Results
Important note: The second assignment in this course covers the topic of Graph Analysis in the Cloud, in which you will use Elastic MapReduce and the Pig language to perform graph analysis over a moderately large dataset of about 600 GB. In order to complete this assignment, you will need to make use of Amazon Web Services (AWS). Amazon has generously offered to provide up to $50 in free AWS credit to each learner in this course to allow you to complete the assignment. Further details regarding the process of receiving this credit are available in the welcome message for the course, as well as in the assignment itself. Please note that Amazon, University of Washington, and Coursera cannot reimburse you for any charges if you exhaust your credit. While we believe that this assignment provides an excellent learning experience, we understand that some learners may be unable or unwilling to use AWS. We are unable to issue Course Certificates for learners who do not complete the assignment that requires AWS. As such, you should not pay for a Course Certificate in Communicating Data Science Results if you are unable or unwilling to use AWS, as you will not be able to successfully complete the course without doing so.
Making predictions is not enough! Effective data scientists know how to explain and interpret their results, and communicate findings accurately to stakeholders to inform business decisions. Visualization is the field of research in computer science that studies effective communication of quantitative results by linking perception, cognition, and algorithms to exploit the enormous bandwidth of the human visual cortex. In this course you will learn to recognize, design, and use effective visualizations (a brief illustrative sketch follows this course description).
Just because you can make a prediction and convince others to act on it doesn’t mean you should. In this course you will explore the ethical considerations around big data and how these considerations are beginning to influence policy and practice. You will learn the foundational limitations of using technology to protect privacy and the codes of conduct emerging to guide the behavior of data scientists. You will also learn the importance of reproducibility in data science and how the commercial cloud can help support reproducible research even for experiments involving massive datasets, complex computational infrastructures, or both.
Learning Goals: After completing this course, you will be able to:
1. Design and critique visualizations
2. Explain the state of the art in privacy, ethics, and governance around big data and data science
3. Use cloud computing to analyze large datasets in a reproducible way
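As an illustrative sketch of the kind of clearly labeled chart this course's visualization material is concerned with, the snippet below draws a simple bar chart with matplotlib. The choice of library, the data, and the file name are assumptions for illustration, not course requirements.

    import matplotlib.pyplot as plt

    # Hypothetical model-comparison results; the labels and numbers are illustrative only.
    models = ["rules", "decision tree", "random forest"]
    accuracy = [0.71, 0.78, 0.84]

    fig, ax = plt.subplots(figsize=(5, 3))
    ax.bar(models, accuracy, color="steelblue")
    ax.set_ylim(0, 1)
    ax.set_ylabel("Held-out accuracy")
    ax.set_title("Classifier comparison (hypothetical data)")
    ax.bar_label(ax.containers[0], fmt="%.2f")  # label each bar so readers need not guess values
    fig.tight_layout()
    fig.savefig("classifier_comparison.png", dpi=150)

Labeled axes, an honest y-axis range, and values printed on the bars are the kinds of design choices the course asks you to recognize and critique.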
- Data Science at Scale - Capstone Project
In the capstone, students will engage in a real-world project requiring them to apply skills from the entire data science pipeline: preparing, organizing, and transforming data, constructing a model, and evaluating results. Through a collaboration with Coursolve, each Capstone project is associated with partner stakeholders who have a vested interest in your results and are eager to deploy them in practice. These projects will not be straightforward and the outcome is not prescribed -- you will need to tolerate ambiguity and negative results! But we believe the experience will be rewarding and will better prepare you for data science projects in practice.
Taught by
Bill Howe