Computation Structures (Spring 2017)

Chris Terman via MIT OpenCourseWare

21.2.2 Data-level Parallelism

167 of 172

Class Central Classrooms beta

YouTube videos curated by Class Central.

Classroom Contents

  1. 1.2.1 What is Information?
  2. 1.2.2 Quantifying Information
  3. 1.2.3 Entropy
  4. 1.2.4 Encoding
  5. 1.2.5 Fixed-length Encodings
  6. 1.2.6 Signed Integers: 2's complement
  7. 1.2.7 Variable-length Encoding
  8. 1.2.8 Huffman's Algorithm
  9. 1.2.9 Huffman Code
  10. 1.2.10 Error Detection and Correction
  11. 1.2.11 Error Correction
  12. 1.2.12 Worked Examples: Quantifying Information
  13. 1.2.12 Worked Examples: Two's Complement Representation
  14. 1.2.12 Worked Examples: Two's Complement Addition
  15. 1.2.12 Worked Examples: Huffman Encoding
  16. 1.2.12 Worked Examples: Error Correction
  17. 2.2.1 Concrete Encoding of Information
  18. 2.2.2 Analog Signaling
  19. 2.2.3 Using Voltages Digitally
  20. 2.2.4 Combinational Devices
  21. 2.2.5 Dealing with Noise
  22. 2.2.6 Voltage Transfer Characteristic
  23. 2.2.7 VTC Example
  24. 2.2.8 Worked Examples: The Static Discipline
  25. 3.2.1 MOSFET: Physical View
  26. 3.2.2 MOSFET: Electrical View
  27. 3.2.3 CMOS Recipe
  28. 3.2.4 Beyond Inverters
  29. 3.2.5 CMOS Gates
  30. 3.2.6 CMOS Timing
  31. 3.2.7 Lenient Gates
  32. 3.2.8 Worked Examples: CMOS Functions
  33. 3.2.8 Worked Examples: CMOS Logic Gates
  34. 4.2.1 Sum of Products
  35. 4.2.2 Useful Logic Gates
  36. 4.2.3 Inverting Logic
  37. 4.2.4 Logic Simplification
  38. 4.2.5 Karnaugh Maps
  39. 4.2.6 Multiplexers
  40. 4.2.7 Read-only Memories
  41. 4.2.8 Worked Examples: Truth Tables
  42. 4.2.8 Worked Examples: Gates and Boolean Logic
  43. 4.2.8 Worked Examples: Combinational Logic Timing
  44. 4.2.8 Worked Examples: Karnaugh Maps
  45. 5.2.1 Digital State
  46. 5.2.2 D Latch
  47. 5.2.3 D Register
  48. 5.2.4 D Register Timing
  49. 5.2.5 Sequential Circuit Timing
  50. 5.2.6 Timing Example
  51. 5.2.7 Worked Example 1
  52. 5.2.8 Worked Example 2
  53. 6.2.1 Finite State Machines
  54. 6.2.2 State Transition Diagrams
  55. 6.2.3 FSM States
  56. 6.2.4 Roboant Example
  57. 6.2.5 Equivalent States; Implementation
  58. 6.2.6 Synchronization and Metastability
  59. 6.2.7 Worked Examples: FSM States and Transitions
  60. 6.2.7 Worked Examples: FSM Implementation
  61. 7.2.1 Latency and Throughput
  62. 7.2.2 Pipelined Circuits
  63. 7.2.3 Pipelining Methodology
  64. 7.2.4 Circuit Interleaving
  65. 7.2.5 Self-timed Circuits
  66. 7.2.6 Control Structures
  67. 7.2.7 Worked Examples: Pipelining
  68. 7.2.7 Worked Examples: Pipelining 2
  69. 8.2.1 Power Dissipation
  70. 8.2.2 Carry-select Adders
  71. 8.2.3 Carry-lookahead Adders
  72. 8.2.4 Binary Multiplication
  73. 8.2.5 Multiplier Tradeoffs
  74. 8.2.6 Part 1 Wrap-up
  75. 9.2.1 Datapaths and FSMs
  76. 9.2.2 Programmable Datapaths
  77. 9.2.3 The von Neumann Model
  78. 9.2.4 Storage
  79. 9.2.5 ALU Instructions
  80. 9.2.6 Constant Operands
  81. 9.2.7 Memory Access
  82. 9.2.8 Branches
  83. 9.2.9 Jumps
  84. 9.2.10 Worked Examples: Programmable Architectures
  85. 10.2.1 Intro to Assembly Language
  86. 10.2.2 Symbols and Labels
  87. 10.2.3 Instruction Macros
  88. 10.2.4 Assembly Wrap-up
  89. 10.2.5 Models of Computation
  90. 10.2.6 Computability, Universality
  91. 10.2.7 Uncomputable Functions
  92. 10.2.8 Worked Examples: Beta Assembly
  93. 11.2.1 Interpretation and Compilation
  94. 11.2.2 Compiling Expressions
  95. 11.2.3 Compiling Statements
  96. 11.2.4 Compiler Frontend
  97. 11.2.5 Optimization and Code Generation
  98. 11.2.6 Worked Examples
  99. 12.2.1 Procedures
  100. 12.2.2 Activation Records and Stacks
  101. 12.2.3 Stack Frame Organization
  102. 12.2.4 Compiling a Procedure
  103. 12.2.5 Stack Detective
  104. 12.2.6 Worked Examples: Procedures and Stacks
  105. 13.2.1 Building Blocks
  106. 13.2.2 ALU Instructions
  107. 13.2.3 Load and Store
  108. 13.2.4 Jumps and Branches
  109. 13.2.5 Exceptions
  110. 13.2.6 Summary
  111. 13.2.7 Worked Examples: A Better Beta
  112. 13.2.7 Worked Examples: Beta Control Signals
  113. 14.2.1 Memory Technologies
  114. 14.2.2 SRAM
  115. 14.2.3 DRAM
  116. 14.2.4 Non-volatile Storage; Using the Hierarchy
  117. 14.2.5 The Locality Principle
  118. 14.2.6 Caches
  119. 14.2.7 Direct-mapped Caches
  120. 14.2.8 Block Size; Cache Conflicts
  121. 14.2.9 Associative Caches
  122. 14.2.10 Write Strategies
  123. 14.2.11 Worked Examples: Cache Benefits
  124. 14.2.11 Worked Examples: Caches
  125. 15.2.1 Improving Beta Performance
  126. 15.2.2 Basic 5-Stage Pipeline
  127. 15.2.3 Data Hazards
  128. 15.2.4 Control Hazards
  129. 15.2.5 Exceptions and Interrupts
  130. 15.2.6 Pipelining Summary
  131. 15.2.7 Worked Examples: Pipelined Beta
  132. 15.2.7 Worked Examples: Beta Junkyard
  133. 16.2.1 Even More Memory Hierarchy
  134. 16.2.2 Basics of Virtual Memory
  135. 16.2.3 Page Faults
  136. 16.2.4 Building the MMU
  137. 16.2.5 Contexts
  138. 16.2.6 MMU Improvements
  139. 16.2.7 Worked Examples: Virtual Memory
  140. 17.2.1 Recap: Virtual Memory
  141. 17.2.2 Processes
  142. 17.2.3 Timesharing
  143. 17.2.4 Handling Illegal Instructions
  144. 17.2.5 Supervisor Calls
  145. 17.2.6 Worked Examples: Operating Systems
  146. 18.2.1 OS Device Handlers
  147. 18.2.2 SVCs for Input/Output
  148. 18.2.3 Example: Match Handler with OS
  149. 18.2.4 Real Time
  150. 18.2.5 Weak Priorities
  151. 18.2.6 Strong Priorities
  152. 18.2.7 Example: Priorities in Action!
  153. 18.2.8 Worked Examples: Devices and Interrupts
  154. 19.2.1 Interprocess Communication
  155. 19.2.2 Semaphores
  156. 19.2.3 Atomic Transactions
  157. 19.2.4 Semaphore Implementation
  158. 19.2.5 Deadlock
  159. 19.2.6 Worked Examples: Semaphores
  160. 20.2.1 System-level Interfaces
  161. 20.2.2 Wires
  162. 20.2.3 Buses
  163. 20.2.4 Point-to-point Communication
  164. 20.2.5 System-level Interconnect
  165. 20.2.6 Communication Topologies
  166. 21.2.1 Instruction-level Parallelism
  167. 21.2.2 Data-level Parallelism
  168. 21.2.3 Thread-level Parallelism
  169. 21.2.4 Shared Memory & Caches
  170. 21.2.5 Cache Coherence
  171. 21.2.6 6.004 Wrap-up
  172. An Interview with Christopher Terman on Teaching Computation Structures
