Throughput is
Latency is
the rate at which instructions leave the pipeline
the total time it takes an instruction to be processed by a stage
the rate at which instructions move to the next register
the total time it takes an instruction to be processed by the entire pipeline
Pretend the pipeline is a cafeteria line in prison. Who is throughput and who is latency?
Throughput is the security guard concerned with how often people are leaving the line so nobody shanks anybody. Latency is BUBBA watching Skinny Pete make his way through the line so he can shank him after he gets his food.
Throughput is the warden watching the prisoners out in the field and latency is the secretary having an affair with the gardener *gasp*~
Throughput is the rate at which the soap leaves a prisoner's hand in the showers and latency is the total time it takes to pick it up
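Jokes aside, the distinction is easy to see in numbers. A minimal sketch with hypothetical figures (a 5-stage pipeline with a 1 ns clock — not necessarily this course's numbers):

```c
#include <assert.h>

/* Hypothetical numbers: a 5-stage pipeline with a 1 ns clock.
 * Latency: one instruction takes stages * clock = 5 ns end to end.
 * Throughput: once full, one instruction leaves per cycle = 1 per ns. */
double latency_ns(int stages, double clock_ns) {
    return stages * clock_ns;
}

double throughput_per_ns(double clock_ns) {
    return 1.0 / clock_ns;
}
```

Note that pipelining improves throughput but never latency: each instruction still passes through every stage.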
Pipeline registers are placed ❌; those registers store ❌; each stage executes ❌, working on a different instruction
An instruction is ❌ when it is currently being executed by the pipeline. Once the instruction completes the pipeline, it is ❌.
The pipeline instructions are executed in order
Instruction-level parallelism exists between a pair of instructions if
their execution order does not matter
their execution order matters
The pipeline requires some parallelism
Dependencies exist if execution order doesn't matter
Consider the variables a and b, with A < B meaning instruction A immediately precedes instruction B.
A causal dependency is defined as A < B if
An output dependency is defined as A < B if
An anti dependency is defined as A < B if
B reads a value written by A. Example: a = 1; b = a;
B writes to a visible location written by A. Example: a = 1; a = 2;
B writes to a location read by A. Example: b = a; a = 1;
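The three examples above, in runnable form (a minimal sketch; the initial value 3 in the anti-dependency case is arbitrary). In each pair, swapping the two statements changes the result, which is exactly why the pair cannot be reordered or run in parallel:

```c
#include <assert.h>

/* Each function contains one A < B instruction pair from the examples. */
int causal_dep(void) { int a, b; a = 1; b = a;        return b; } /* B reads A's write      */
int output_dep(void) { int a;    a = 1; a = 2;        return a; } /* B rewrites A's location */
int anti_dep(void)   { int a, b; a = 3; b = a; a = 1; return b; } /* B writes what A read    */
```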
❌ parallelism is how the programmer tells the system that two pieces of code can execute in parallel. ❌ parallelism is the system actually executing two pieces of code in parallel.
A pipeline hazard exists when
the processor's execution would violate a data or control dependency
the processor's execution would support a data or control dependency
the processor's execution would cause a data or control dependency
the processor's execution would execute a data or control dependency
We should detect pipeline hazards
Stalling is one way to handle pipeline hazards
A ❌ is when an instruction is held in its stage for an extra cycle. A ❌ is when a pipeline stage is forced to do nothing.
The only data hazards in the Y86 Pipeline are causal hazards on the register file
The only control hazards in the Y86 Pipeline are conditional jumps
To prevent a data hazard by stalling, we can
read registers in decode that are written by instructions in E, M, or W, and stall the instruction in decode until the writer is retired
read registers in fetch that are written by instructions in E, M, or W, and stall the instruction in fetch until the writer is retired
read registers in execute that are written by instructions in E, M, or W, and stall the instruction in execute until the writer is retired
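The stall-until-retired rule can be sketched as a predicate. Signal names here (d_srcA, e_dstE, RNONE, ...) are illustrative assumptions, not the course's actual HCL signals:

```c
#include <assert.h>
#include <stdbool.h>

/* Stall the instruction in decode while any instruction in E, M, or W
 * will write a register that the decode instruction reads. */
enum { RNONE = 0xF };   /* "no register" sentinel (assumed encoding) */

static bool conflicts(int src, int dst) {
    return src != RNONE && src == dst;
}

bool decode_must_stall(int d_srcA, int d_srcB,
                       int e_dstE, int m_dstM, int w_dstW) {
    return conflicts(d_srcA, e_dstE) || conflicts(d_srcB, e_dstE)
        || conflicts(d_srcA, m_dstM) || conflicts(d_srcB, m_dstM)
        || conflicts(d_srcA, w_dstW) || conflicts(d_srcB, w_dstW);
}
```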
How would we resolve a conditional jump control hazard by stalling?
stall fetch until jump exits execute
stall execute until jump exits decode
stall fetch and execute until jump exits decode
stall fetch, decode, and execute until jump exits memory
stall fetch, decode, execute, and memory until jump exits write back
just stall everything after fetch indefinitely and go finish off a bottle of wine in one go
How would we resolve a return control hazard by stalling?
stall fetch until return exits memory
stall decode until return exits memory
stall fetch and decode until return exits memory
stall fetch, decode, and execute until return exits memory
stall fetch, decode, execute, and memory until return exits memory
return to cpsc313 in the summer after you fail this midterm
Check all the statements that are true about the pipeline-control module
it's a hardware component separate from the 5 stages
examines values across every stage
decides whether a stage should stall or bubble
Data forwarding is a mechanism that forwards values from later pipeline stages to earlier ones
Where does data forwarding forward its data to?
D
W
M
E
F
Where does data forwarding forward its data from?
W - new value from memory or ALU
M - new value read from memory or from ALU
E - new value from ALU
D - new value from registers
F - new value from PC determined instruction
Which of these are data hazards?
register-register hazard
load-use hazard
register-memory hazard
memory-memory hazard
use-use hazard
load-load hazard
Which of these is a register-register hazard?
irmovl $1, %eax; addl %eax, %ebx
irmovl $1, %ecx; addl %eax, %ebx
How do we handle a register-register hazard with data forwarding?
forward to D from E, M, or W
forward to F from E, M, or W
stall one cycle, then forward to D from E, M, or W
stall one cycle, then forward to F from D, E, M, or W
stall one cycle, then forward to F from E, M, or W
forward to F from D, E, M, or W
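The priority among forwarding sources can be sketched as a mux: the stage closest to decode holds the newest value, so it wins. Signal names (e_dstE, m_valM, ...) are assumptions, and srcA is assumed to name a real register:

```c
#include <assert.h>

/* Forwarding into decode for one source register. */
int fwd_srcA(int srcA,
             int e_dstE, int e_valE,   /* ALU result in execute  */
             int m_dstM, int m_valM,   /* memory value in memory */
             int w_dstW, int w_valW,   /* value in write-back    */
             int regfile_val) {
    if (srcA == e_dstE) return e_valE; /* newest value wins */
    if (srcA == m_dstM) return m_valM;
    if (srcA == w_dstW) return w_valW;
    return regfile_val;                /* no hazard: use the register file */
}
```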
Which of these is a load-use hazard?
mrmovl (%esi), %eax; addl %eax, %ebx
rmmovl %eax, (%esi); addl %eax, %ebx
How would we handle a load-use hazard?
Stall use one cycle, forward to D from M or W
Stall use one cycle, forward to D from E or M
Stall use one cycle, forward to E from D, M, or W
Stall use one cycle, forward to E from M or W
Jump prediction is not suitable for resolving conditional-jump hazards
We know whether the jump is taken or not taken once the jump finishes in stage ❌.
valC is the address for the jump as if it were ❌ and valP is the address for the jump as if it were ❌.
When a mis-predicted jump is in M, what should we do?
shoot down D and E to prevent them from doing damage
shoot down F and D to prevent them from doing damage
shoot down M and W to prevent them from doing damage
The homework in this course is much too long
We could avoid stalling in a load-use hazard by forwarding m.valM to the beginning of E
We could have one fewer bubble on a misprediction if we compute the outcome while the conditional jump is in M, reading m.bch (the branch condition)
In regards to static jump prediction, what could the compiler know?
a jump's taken tendency
for loops, it can decide to use a continue condition or exit condition
for if statements, it might be able to spot error tests
what it sees in the program text
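The loop heuristic above in miniature. Written as a do-while (a "continue condition"), the loop's conditional jump is a backward jump taken on every iteration except the last, so "predict backward jumps taken" is right n-1 times out of n. This is illustrative C; the jump itself only appears in the generated assembly:

```c
#include <assert.h>

int sum_to(int n) {
    int s = 0, i = 1;
    do {
        s += i;
        i++;
    } while (i <= n);   /* backward conditional jump, usually taken */
    return s;
}
```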
The compiler cares about the ISA's jump predictions
How do we optimize handling the return hazard?
Keep a stack of return addresses for future use
Guess the return address based on the value in predPC
Guess the return address based on the value in PC
Guess the return address based on the valP in D
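The return-address-stack option can be sketched as follows. This is the common hardware optimization, not necessarily this course's exact design; the depth and the overflow/underflow handling here are arbitrary assumptions:

```c
#include <assert.h>

/* On a call, push valP (the address of the instruction after the call);
 * on a ret, pop it as the predicted PC, so fetch need not stall until
 * the ret reaches memory. */
enum { RAS_DEPTH = 16 };
static unsigned ras[RAS_DEPTH];
static int ras_top = 0;

void ras_on_call(unsigned valP) {
    if (ras_top < RAS_DEPTH)
        ras[ras_top++] = valP;        /* drop pushes past capacity */
}

unsigned ras_predict_ret(void) {
    return ras_top > 0 ? ras[--ras_top] : 0;   /* 0: no prediction */
}
```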
Y86 has indirect jumps
Indirect jumps are needed for polymorphic dispatch
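Why polymorphic dispatch needs indirect jumps: a call through a function pointer (the heart of a vtable) has a target that is only known at run time, so it compiles to an indirect call/jump. A minimal C sketch:

```c
#include <assert.h>

static int meow(void) { return 1; }
static int woof(void) { return 2; }

int speak(int (*method)(void)) {
    return method();    /* indirect call: target chosen at run time */
}
```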
CPI =
totalCycles / instructionsRetired
instructionsRetired / totalCycles
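A worked example with made-up numbers: if 1000 instructions retire over 1250 cycles (stalls and bubbles account for the extra 250), then CPI = 1250 / 1000 = 1.25.

```c
#include <assert.h>

/* CPI = cycles per instruction: total cycles over instructions retired. */
double cpi(double total_cycles, double instructions_retired) {
    return total_cycles / instructions_retired;
}
```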
What are the tendencies of deeper pipelines?
reduce clock period
increase CPI
makes stalling harder to avoid
Which of these are attributes of super-scalar?
multiple pipelines that run in parallel
issue multiple instructions on each cycle
instructions execute in parallel and can even bypass each other
if I shut my eyes tight enough, will the midterm disappear?
What does hyper-threading consist of? (Only one of the following is correct)
OS loads multiple runnable threads into CPU, usually from the same process
CPU does fast switching between threads to hide memory latency
What is multi-core?
multiple CPUs per chip, each pipelined, super-scalar, etc
CPUs execute independent threads from possibly different processes
How could Mike do this to us?
Sadism
Also sadism
And sadism
All of the above