Main Page
This is the course website for Data 188, “Introduction to Deep Learning”, Spring 2026, UC Berkeley.
Course Description
From the UC Berkeley course catalog:
This course is an introduction to deep learning (also known as “deep neural networks”), and will cover the fundamental techniques that power deep learning algorithms, as well as exploring the intuitions and various “rules of thumb” behind successful deep learning methods. Topics include: neural network architectures, backpropagation, convolutional neural networks, sequence models (such as the transformer model), applications to computer vision and natural language processing, and more.
Acknowledgements
This course is based on a previous course, Data C182 (Fall 2024), designed by instructors Naveen Ashish and Eric Kim.
This course also adapts material from CMU’s “Deep Learning Systems” course (10-414/714, Fall 2025). Thanks to Professors Zico Kolter, Tianqi Chen, and Tim Dettmers.
Useful Course Links
Important: In this course, we will use Edstem to post announcements and other important information. It is each student’s responsibility to actively monitor Ed for announcements.
Note: Lectures start in Week 01 (2026-01-20). Discussion sections and office hours will not start until Week 02 (2026-01-26).
Textbooks
We will be using the following textbooks, all of which are freely available online:
- "Deep Learning: Foundations & Concepts" by Christopher Bishop & Hugh Bishop. The free-to-use online version is at Bishop Book. Further, UC Berkeley students can access the PDF version via this link (CalNet login required): PDF.
- "Deep Learning" by Ian Goodfellow and Yoshua Bengio and Aaron Courville. Free online link is here.
- (optional) Dive into Deep Learning D2LAI is an excellent interactive online textbook and set of resources for Deep Learning ! (a PDF version of the entire book is also available online)
Lectures
Lectures are Tuesdays and Thursdays, 3:30PM - 5PM, online via Zoom. Lecture slides are provided via this website, and lecture videos are provided via the bCourses “Media Gallery”.
Lecture Slides and Readings
A note on the readings: the reading links are optional, and are provided for students interested in additional resources. Readings will not always be “fair game” for exams: if something isn’t covered in lecture, discussion, or the assignments, it won’t be in scope for exams. The Bishop textbook doesn’t always map cleanly onto our lectures, but I’ve done my best to line things up. Note that Bishop sometimes takes a different approach to concepts (e.g., it is relatively probability-heavy in places) that we won’t always follow in our course.
- Lecture 01 [Week 1, 2026-01-20] Introduction
- Reading: (Bishop) Ch. 1.1
- Lecture 02 [Week 1, 2026-01-22] ML Refresher, Softmax Regression
- Reading: (Bishop) Ch. 1.2, 5.4.4, 7.2
- Lecture 03 [Week 2, 2026-01-27] Manual Neural Networks
- Reading: (Bishop) Ch. 1.2, 1.3, 4.1, 6.2, 6.3, 6.4
- Lecture 04 [Week 2, 2026-01-29] Automatic Differentiation
- Reading: (Bishop) Ch. 8.1, 8.2
- Lecture 05 [Week 3, 2026-02-03] Optimization
Discussion Sections
You can attend any discussion section you like. That said, less crowded sections that fit your schedule offer more opportunities to interact with your TA. See this calendar for the available discussion section times and locations.
Section Notes
- Discussion 01 (Week 2): Matrices, vectors, and gradients. Section Notes, Solutions
Homeworks
All homeworks are graded for accuracy. See the course syllabus for more information about the homework policy. For assignment due dates, see Gradescope.
Tip: For Colab notebooks, it’s recommended to save your own copy of the notebook (“File -> Save a copy in Drive”) before running any cells.
- (optional) Python Colab Tutorial.
- Homework 0: ML warmup, softmax classification (see the softmax sketch below).
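Since Homework 0 centers on softmax classification, here is a minimal sketch of the softmax function in NumPy. This is an illustration only, not the assignment’s starter code; the function name and example values are made up.

```python
import numpy as np

def softmax(logits):
    """Convert a vector (or batch) of logits into probabilities.

    Illustrative sketch, not the Homework 0 starter code.
    """
    # Subtract the max logit for numerical stability; softmax is
    # invariant to adding a constant to all logits.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    exp_z = np.exp(z)
    return exp_z / np.sum(exp_z, axis=-1, keepdims=True)

# Example: scores for three classes.
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs)        # approx [0.659 0.242 0.099]
print(probs.sum())  # 1.0
```

Subtracting the per-row maximum before exponentiating is the standard trick for avoiding overflow with large logits; it leaves the output unchanged because softmax depends only on differences between logits.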