1.2.6 Signed Integers: 2's complement
Class Central Classrooms (beta)
YouTube videos curated by Class Central.
Classroom Contents
Computation Structures (Spring 2017)
- 1 1.2.1 What is Information?
- 2 1.2.2 Quantifying Information
- 3 1.2.3 Entropy
- 4 1.2.4 Encoding
- 5 1.2.5 Fixed-length Encodings
- 6 1.2.6 Signed Integers: 2's complement
- 7 1.2.7 Variable-length Encoding
- 8 1.2.8 Huffman's Algorithm
- 9 1.2.9 Huffman Code
- 10 1.2.10 Error Detection and Correction
- 11 1.2.11 Error Correction
- 12 1.2.12 Worked Examples: Quantifying Information
- 13 1.2.12 Worked Examples: Two's Complement Representation
- 14 1.2.12 Worked Examples: Two's Complement Addition
- 15 1.2.12 Worked Examples: Huffman Encoding
- 16 1.2.12 Worked Examples: Error Correction
- 17 2.2.1 Concrete Encoding of Information
- 18 2.2.2 Analog Signaling
- 19 2.2.3 Using Voltages Digitally
- 20 2.2.4 Combinational Devices
- 21 2.2.5 Dealing with Noise
- 22 2.2.6 Voltage Transfer Characteristic
- 23 2.2.7 VTC Example
- 24 2.2.8 Worked Examples: The Static Discipline
- 25 3.2.1 MOSFET: Physical View
- 26 3.2.2 MOSFET: Electrical View
- 27 3.2.3 CMOS Recipe
- 28 3.2.4 Beyond Inverters
- 29 3.2.5 CMOS Gates
- 30 3.2.6 CMOS Timing
- 31 3.2.7 Lenient Gates
- 32 3.2.8 Worked Examples: CMOS Functions
- 33 3.2.8 Worked Examples: CMOS Logic Gates
- 34 4.2.1 Sum of Products
- 35 4.2.2 Useful Logic Gates
- 36 4.2.3 Inverting Logic
- 37 4.2.4 Logic Simplification
- 38 4.2.5 Karnaugh Maps
- 39 4.2.6 Multiplexers
- 40 4.2.7 Read-only Memories
- 41 4.2.8 Worked Examples: Truth Tables
- 42 4.2.8 Worked Examples: Gates and Boolean Logic
- 43 4.2.8 Worked Examples: Combinational Logic Timing
- 44 4.2.8 Worked Examples: Karnaugh Maps
- 45 5.2.1 Digital State
- 46 5.2.2 D Latch
- 47 5.2.3 D Register
- 48 5.2.4 D Register Timing
- 49 5.2.5 Sequential Circuit Timing
- 50 5.2.6 Timing Example
- 51 5.2.7 Worked Example 1
- 52 5.2.8 Worked Example 2
- 53 6.2.1 Finite State Machines
- 54 6.2.2 State Transition Diagrams
- 55 6.2.3 FSM States
- 56 6.2.4 Roboant Example
- 57 6.2.5 Equivalent States; Implementation
- 58 6.2.6 Synchronization and Metastability
- 59 6.2.7 Worked Examples: FSM States and Transitions
- 60 6.2.7 Worked Examples: FSM Implementation
- 61 7.2.1 Latency and Throughput
- 62 7.2.2 Pipelined Circuits
- 63 7.2.3 Pipelining Methodology
- 64 7.2.4 Circuit Interleaving
- 65 7.2.5 Self-timed Circuits
- 66 7.2.6 Control Structures
- 67 7.2.7 Worked Examples: Pipelining
- 68 7.2.7 Worked Examples: Pipelining 2
- 69 8.2.1 Power Dissipation
- 70 8.2.2 Carry-select Adders
- 71 8.2.3 Carry-lookahead Adders
- 72 8.2.4 Binary Multiplication
- 73 8.2.5 Multiplier Tradeoffs
- 74 8.2.6 Part 1 Wrap-up
- 75 9.2.1 Datapaths and FSMs
- 76 9.2.2 Programmable Datapaths
- 77 9.2.3 The von Neumann Model
- 78 9.2.4 Storage
- 79 9.2.5 ALU Instructions
- 80 9.2.6 Constant Operands
- 81 9.2.7 Memory Access
- 82 9.2.8 Branches
- 83 9.2.9 Jumps
- 84 9.2.10 Worked Examples: Programmable Architectures
- 85 10.2.1 Intro to Assembly Language
- 86 10.2.2 Symbols and Labels
- 87 10.2.3 Instruction Macros
- 88 10.2.4 Assembly Wrap-up
- 89 10.2.5 Models of Computation
- 90 10.2.6 Computability, Universality
- 91 10.2.7 Uncomputable Functions
- 92 10.2.8 Worked Examples: Beta Assembly
- 93 11.2.1 Interpretation and Compilation
- 94 11.2.2 Compiling Expressions
- 95 11.2.3 Compiling Statements
- 96 11.2.4 Compiler Frontend
- 97 11.2.5 Optimization and Code Generation
- 98 11.2.6 Worked Examples
- 99 12.2.1 Procedures
- 100 12.2.2 Activation Records and Stacks
- 101 12.2.3 Stack Frame Organization
- 102 12.2.4 Compiling a Procedure
- 103 12.2.5 Stack Detective
- 104 12.2.6 Worked Examples: Procedures and Stacks
- 105 13.2.1 Building Blocks
- 106 13.2.2 ALU Instructions
- 107 13.2.3 Load and Store
- 108 13.2.4 Jumps and Branches
- 109 13.2.5 Exceptions
- 110 13.2.6 Summary
- 111 13.2.7 Worked Examples: A Better Beta
- 112 13.2.7 Worked Examples: Beta Control Signals
- 113 14.2.1 Memory Technologies
- 114 14.2.2 SRAM
- 115 14.2.3 DRAM
- 116 14.2.4 Non-volatile Storage; Using the Hierarchy
- 117 14.2.5 The Locality Principle
- 118 14.2.6 Caches
- 119 14.2.7 Direct-mapped Caches
- 120 14.2.8 Block Size; Cache Conflicts
- 121 14.2.9 Associative Caches
- 122 14.2.10 Write Strategies
- 123 14.2.11 Worked Examples: Cache Benefits
- 124 14.2.11 Worked Examples: Caches
- 125 15.2.1 Improving Beta Performance
- 126 15.2.2 Basic 5-Stage Pipeline
- 127 15.2.3 Data Hazards
- 128 15.2.4 Control Hazards
- 129 15.2.5 Exceptions and Interrupts
- 130 15.2.6 Pipelining Summary
- 131 15.2.7 Worked Examples: Pipelined Beta
- 132 15.2.7 Worked Examples: Beta Junkyard
- 133 16.2.1 Even More Memory Hierarchy
- 134 16.2.2 Basics of Virtual Memory
- 135 16.2.3 Page Faults
- 136 16.2.4 Building the MMU
- 137 16.2.5 Contexts
- 138 16.2.6 MMU Improvements
- 139 16.2.7 Worked Examples: Virtual Memory
- 140 17.2.1 Recap: Virtual Memory
- 141 17.2.2 Processes
- 142 17.2.3 Timesharing
- 143 17.2.4 Handling Illegal Instructions
- 144 17.2.5 Supervisor Calls
- 145 17.2.6 Worked Examples: Operating Systems
- 146 18.2.1 OS Device Handlers
- 147 18.2.2 SVCs for Input/Output
- 148 18.2.3 Example: Match Handler with OS
- 149 18.2.4 Real Time
- 150 18.2.5 Weak Priorities
- 151 18.2.6 Strong Priorities
- 152 18.2.7 Example: Priorities in Action!
- 153 18.2.8 Worked Examples: Devices and Interrupts
- 154 19.2.1 Interprocess Communication
- 155 19.2.2 Semaphores
- 156 19.2.3 Atomic Transactions
- 157 19.2.4 Semaphore Implementation
- 158 19.2.5 Deadlock
- 159 19.2.6 Worked Examples: Semaphores
- 160 20.2.1 System-level Interfaces
- 161 20.2.2 Wires
- 162 20.2.3 Buses
- 163 20.2.4 Point-to-point Communication
- 164 20.2.5 System-level Interconnect
- 165 20.2.6 Communication Topologies
- 166 21.2.1 Instruction-level Parallelism
- 167 21.2.2 Data-level Parallelism
- 168 21.2.3 Thread-level Parallelism
- 169 21.2.4 Shared Memory & Caches
- 170 21.2.5 Cache Coherence
- 171 21.2.6 6.004 Wrap-up
- 172 An Interview with Christopher Terman on Teaching Computation Structures