CSA IITU PART 2 (235)

Description

Quiz on CSA IITU PART 2 (235), created by Hello World on 20/12/2017.

Resource summary

Question 1

Question
109. What is a “Kernel” in Cache Memory?
Answer
  • o Execution in the OS that is neither idle nor in synchronization access
  • o Execution or waiting for synchronization variables
  • o Execution in user code

Question 2

Question
108. What is a “Synchronization” in Cache Memory?
Answer
  • o Execution in the OS that is neither idle nor in synchronization access
  • o Execution in user code
  • o Execution or waiting for synchronization variables

Question 3

Question
107. How many main levels of Cache Memory are there?
Answer
  • 3
  • 2
  • 6
  • 8

Question 4

Question
106. Which size is approximately correct for an L3 cache?
Answer
  • o 3 MB
  • o 256 KB
  • o 256 MB

Question 5

Question
105. Which size is approximately correct for an L2 cache?
Answer
  • o 256 KB
  • o 4 KB
  • o 32 MB

Question 6

Question
104. Which size is approximately correct for an L1 cache?
Answer
  • o 8 KB
  • o 256 KB
  • o 2 MB

Question 7

Question
103. Little’s Law and a series of definitions lead to several useful equations for “Length queue” - :
Answer
  • o Average length of queue
  • o Average number of tasks in service

Question 8

Question
102. Little’s Law and a series of definitions lead to several useful equations for “Length server” - :
Answer
  • o Average number of tasks in service
  • o Average length of queue

Question 9

Question
101. Little’s Law and a series of definitions lead to several useful equations for “Time system” - :
Answer
  • o Average time/task in the system, or the response time, which is the sum of Time queue and Time server
  • o Average time to service a task; average service rate is 1/Time server traditionally represented by the symbol µ in many queuing texts
  • o Average time per task in the queue

Question 10

Question
100. Little’s Law and a series of definitions lead to several useful equations for “Time queue” - :
Answer
  • o Average time per task in the queue
  • o Average time to service a task; average service rate is 1/Time server traditionally represented by the symbol µ in many queuing texts
  • o Average time/task in the system, or the response time, which is the sum of Time queue and Time server

Question 11

Question
99. Little’s Law and a series of definitions lead to several useful equations for “Time server” - :
Answer
  • o Average time to service a task; average service rate is 1/Time server traditionally represented by the symbol µ in many queuing texts
  • o Average time per task in the queue
  • o Average time/task in the system, or the response time, which is the sum of Time queue and Time server
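
Note: these definitions tie together through Little’s Law, which says the average number of tasks in a system equals the arrival rate times the average time a task spends there. A minimal sketch in Python (the arrival rate and service time below are assumed illustrative values, and the queueing delay uses an assumed M/M/1 model):

    # Little's Law for a single-server queue (illustrative, assumed numbers).
    arrival_rate = 40.0   # tasks per second (assumed)
    time_server = 0.02    # average time to service a task, seconds (assumed)

    utilization = arrival_rate * time_server                      # fraction of time the server is busy
    time_queue = time_server * utilization / (1.0 - utilization)  # M/M/1 waiting time (assumed model)
    time_system = time_queue + time_server                        # response time = Time queue + Time server

    length_queue = arrival_rate * time_queue    # Little's Law: average tasks waiting in the queue
    length_server = arrival_rate * time_server  # average number of tasks in service
    print(time_system, length_queue, length_server)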

Question 12

Question
98. If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is “Think time”?
Answer
  • o The time from the reception of the response until the user begins to enter the next command
  • o The time for the user to enter the command
  • o The time between when the user enters the command and the complete response is displayed

Question 13

Question
97. If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is “System response time”?
Answer
  • o The time between when the user enters the command and the complete response is displayed
  • o The time for the user to enter the command
  • o The time from the reception of the response until the user begins to enter the next command

Question 14

Question
96. If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is “Entry time”?
Answer
  • o The time for the user to enter the command
  • o The time between when the user enters the command and the complete response is displayed
  • o The time from the reception of the response until the user begins to enter the next command
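
Note: per this breakdown, one full interaction is simply the sum of the three parts. A one-line check with assumed values:

    # Total interaction time = entry time + system response time + think time (assumed values).
    entry_time, system_response_time, think_time = 0.25, 1.0, 9.0  # seconds (assumed)
    total_interaction_time = entry_time + system_response_time + think_time
    print(total_interaction_time)  # 10.25 seconds in this illustrative case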

Question 15

Question
95. In storage systems, Gray and Siewiorek classify faults; what does “Environmental faults” mean?
Answer
  • o Fire, flood, earthquake, power failure, and sabotage
  • o Faults in software (usually) and hardware design (occasionally)
  • o Devices that fail, such as perhaps due to an alpha particle hitting a memory cell

Question 16

Question
94. In storage systems, Gray and Siewiorek classify faults; what does “Operation faults” mean?
Answer
  • o Mistakes by operations and maintenance personnel
  • o Devices that fail, such as perhaps due to an alpha particle hitting a memory cell
  • o Faults in software (usually) and hardware design (occasionally)

Question 17

Question
93. In storage systems, Gray and Siewiorek classify faults; what does “Design faults” mean?
Answer
  • o Faults in software (usually) and hardware design (occasionally)
  • o Devices that fail, such as perhaps due to an alpha particle hitting a memory cell
  • o Mistakes by operations and maintenance personnel

Question 18

Question
92. In storage systems, Gray and Siewiorek classify faults; what does “Hardware faults” mean?
Answer
  • o Faults in software (usually) and hardware design (occasionally)
  • o Mistakes by operations and maintenance personnel
  • o Devices that fail, such as perhaps due to an alpha particle hitting a memory cell

Question 19

Question
91. What is a RAID 4?
Answer
  • o Many applications are dominated by small accesses
  • o Since the higher-level disk interfaces understand the health of a disk, it’s easy to figure out which disk failed
  • o Also called mirroring or shadowing, there are two copies of every piece of data

Question 20

Question
90. What is a RAID 3?
Answer
  • o Since the higher-level disk interfaces understand the health of a disk, it’s easy to figure out which disk failed
  • o Many applications are dominated by small accesses
  • o Also called mirroring or shadowing, there are two copies of every piece of data

Question 21

Question
89. What is a RAID 2?
Answer
  • o This organization was inspired by applying memory-style error correcting codes to disks
  • o It has no redundancy and is sometimes nicknamed JBOD, for “just a bunch of disks,” although the data may be striped across the disks in the array
  • o Also called mirroring or shadowing, there are two copies of every piece of data

Question 22

Question
88. What is a RAID 1?
Answer
  • o Also called mirroring or shadowing, there are two copies of every piece of data
  • o It has no redundancy and is sometimes nicknamed JBOD, for “just a bunch of disks,” although the data may be striped across the disks in the array
  • o This organization was inspired by applying memory-style error correcting codes to disks

Question 23

Question
87. What is a RAID 0?
Answer
  • o It has no redundancy and is sometimes nicknamed JBOD, for “just a bunch of disks,” although the data may be striped across the disks in the array
  • o Also called mirroring or shadowing, there are two copies of every piece of data
  • o This organization was inspired by applying memory-style error correcting codes to disks

Question 24

Question
86. A virus classification by target includes the following categories; what is a file infector?
Answer
  • o Infects files that the operating system or shell consider to be executable
  • o A typical approach is as follows
  • o The key is stored with the virus
  • o Far more sophisticated techniques are possible

Question 25

Question
85. In Non-Blocking Caches, what does “Early restart” mean?
Answer
  • o Fetch the words in normal order, but as soon as the requested word of the block arrives, send it to the processor and let the processor continue execution
  • o Request the missed word first from memory and send it to the processor as soon as it arrives; let the processor continue execution while filling the rest of the words in the block

Question 26

Question
84. In Non-Blocking Caches, what does “Critical Word First” mean?
Answer
  • o Fetch the words in normal order, but as soon as the requested word of the block arrives, send it to the processor and let the processor continue execution
  • o Request the missed word first from memory and send it to the processor as soon as it arrives; let the processor continue execution while filling the rest of the words in the block

Question 27

Question
83. Storage Systems, “Higher associativity to reduce miss rate” -
Answer
  • o Obviously, increasing associativity reduces conflict misses
  • o The obvious way to reduce capacity misses is to increase cache capacity
  • o The simplest way to reduce the miss rate is to take advantage of spatial locality and increase the block size

Question 28

Question
82. Storage Systems, “Bigger caches to reduce miss rate” -
Answer
  • o The obvious way to reduce capacity misses is to increase cache capacity
  • o Obviously, increasing associativity reduces conflict misses
  • o The simplest way to reduce the miss rate is to take advantage of spatial locality and increase the block size

Question 29

Question
81. Storage Systems, “Larger block size to reduce miss rate” -
Answer
  • o The simplest way to reduce the miss rate is to take advantage of spatial locality and increase the block size
  • o The obvious way to reduce capacity misses is to increase cache capacity
  • o Obviously, increasing associativity reduces conflict misses

Question 30

Question
80. For Critical Word First to reduce Miss Penalty, choose the correct “order of fill” sequence for a Blocking Cache with Critical Word First:
Answer
  • o 3,4,5,6,7,0,1,2
  • o 0,1,2,3,4,5,6,7

Question 31

Question
79. For Critical Word First to reduce Miss Penalty, choose the correct “order of fill” sequence for a Basic Blocking Cache:
Answer
  • o 0,1,2,3,4,5,6,7
  • o 3,4,5,6,7,0,1,2
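
Note: the two fill orders above differ only in where the fill starts; the “3,4,5,6,7,0,1,2” option implies the missed word was word 3 of an 8-word block. A small sketch generating both orders under that assumption:

    # Order of fill for an 8-word block, assuming the requested (missed) word is word 3.
    BLOCK_WORDS = 8
    requested = 3  # assumed from the "3,4,5,6,7,0,1,2" option

    basic_blocking = list(range(BLOCK_WORDS))                                           # 0,1,2,3,4,5,6,7
    critical_word_first = [(requested + i) % BLOCK_WORDS for i in range(BLOCK_WORDS)]   # 3,4,5,6,7,0,1,2
    print(basic_blocking, critical_word_first)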

Question 32

Question
78. What does MAF stand for?
Answer
  • o Miss Address File
  • o Map Address File
  • o Memory Address File

Question 33

Question
77. What does MSHR stand for?
Answer
  • o Miss Status Handling Register
  • o Map Status Handling Reload
  • o Mips Status Hardware Register
  • o Memory Status Handling Register

Question 34

Question
76. In the Non-Blocking Cache Timeline, what is the sequence for “Miss Under Miss”?
Answer
  • o CPU time->Cache Miss->Miss->Stall on use->Miss Penalty->Miss Penalty->CPU time
  • o CPU time->Cache Miss->Hit->Stall on use->Miss Penalty->CPU time
  • o CPU time-Cache Miss-Miss Penalty-CPU time

Question 35

Question
75. In the Non-Blocking Cache Timeline, what is the sequence for “Hit Under Miss”?
Answer
  • o CPU time->Cache Miss->Hit->Stall on use->Miss Penalty->CPU time
  • o CPU time->Cache Miss->Miss->Stall on use->Miss Penalty->CPU time
  • o CPU time-Cache Miss-Miss Penalty-CPU time

Question 36

Question
74. In the Non-Blocking Cache Timeline, what is the sequence for a “Blocking Cache”?
Answer
  • o CPU time-Cache Miss-Miss Penalty-CPU time
  • o CPU time->Cache Miss->Hit->Stall on use->Miss Penalty->CPU time
  • o CPU time->Cache Miss->Miss->Stall on use->Miss Penalty->CPU time

Question 37

Question
73. In Multilevel Caches, “Misses per instruction” equals:
Answer
  • o misses in cache / number of instructions
  • o misses in cache / accesses to cache
  • o misses in cache / CPU memory accesses

Question 38

Question
72. In Multilevel Caches, “Global miss rate” equals:
Answer
  • o misses in cache / CPU memory accesses
  • o misses in cache / accesses to cache
  • o misses in cache / number of instructions

Question 39

Question
71. In Multilevel Caches, “Local miss rate” equals:
Answer
  • o misses in cache / accesses to cache
  • o misses in cache / number of instructions
  • o misses in cache / CPU memory accesses
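
Note: the three ratios above share the same numerator and differ only in the denominator. A minimal sketch for a second-level cache, with assumed counts:

    # Local miss rate, global miss rate, and misses per instruction for an L2 cache (assumed counts).
    instructions = 1_000_000
    cpu_memory_accesses = 1_300_000  # all CPU memory accesses (assumed)
    l2_accesses = 40_000             # accesses that missed in L1 and reached L2 (assumed)
    l2_misses = 10_000

    local_miss_rate = l2_misses / l2_accesses             # misses in cache / accesses to cache
    global_miss_rate = l2_misses / cpu_memory_accesses    # misses in cache / CPU memory accesses
    misses_per_instruction = l2_misses / instructions     # misses in cache / number of instructions
    print(local_miss_rate, global_miss_rate, misses_per_instruction)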

Question 40

Question
70. What is a Conflict?
Answer
  • o misses that occur because of collisions due to less than full associativity
  • o first-reference to a block, occur even with infinite cache
  • o cache is too small to hold all data needed by program, occur even under perfect replacement policy

Question 41

Question
67. For VLIW Multi-Way Branches, which solution addresses the problem that long instructions provide few opportunities for branches?
Answer
  • o Allow one instruction to branch multiple directions
  • o Speculative operations that don’t cause exceptions

Question 42

Question
66. What is an ALAT? :
Answer
  • o Advanced Load Address Table
  • o Allocated Link Address Table
  • o Allowing List Address Table
  • o Addition Long Accessibility Table

Question 43

Question
65. For VLIW Speculative Execution, which solution addresses the problem that possible memory hazards limit code scheduling?
Answer
  • o Hardware to check pointer hazards
  • o Speculative operations that don’t cause exceptions

Question 44

Question
64. For VLIW Speculative Execution, which solution addresses the problem that branches restrict compiler code motion?
Answer
  • o Speculative operations that don’t cause exceptions
  • o Hardware to check pointer hazards

Question 45

Question
63. In VLIW, comparing performance per loop iteration, which approach takes the shorter time?
Answer
  • o Software Pipelined
  • o Loop Unrolled

Question 46

Question
62. In VLIW, comparing performance per loop iteration, which approach takes the longer time?
Answer
  • o Loop Unrolled
  • o Software Pipelined

Question 47

Question
61. What is “VLIW”?
Answer
  • o Very Long Instruction Word
  • o Very Light Internal Word
  • o Very Less Interpreter Word
  • o Very Low Invalid Word

Question 48

Question
60. In the Out-of-Order Control Complexity of the MIPS R10000, which element is not part of the Control Logic?
Answer
  • o Integer Datapath
  • o CLK
  • o Free List
  • o Address Queue

Question 49

Question
59. In the Out-of-Order Control Complexity of the MIPS R10000, which element is part of the Control Logic?
Answer
  • o Register name
  • o Instruction cache
  • o Data tags
  • o Data cache

Question 50

Question
58. At VLIW “Superscalar Control Logic Scaling” which parameters are used?
Answer
  • o Width and Lifetime
  • o Width and Height
  • o Time and Cycle
  • o Length and Addition

Question 51

Question
57. What is an IQ?
Answer
  • o Issue Queue
  • o Internal Queue
  • o Interrupt Queue
  • o Instruction Queue

Question 52

Question
56. What is a FL?
Answer
  • o Free List
  • o Free Last
  • o Free Launch
  • o Free Leg

Question 53

Question
55. What is a RT?
Answer
  • o Rename Table
  • o Recall Table
  • o Relocate Table
  • o Remove Table

Question 54

Question
54. Speculating on Exceptions “Recovery mechanism” is -
Answer
  • o Only write architectural state at commit point, so can throw away partially executed instructions after exception
  • o Exceptions are rare, so simply predicting no exceptions is very accurate
  • o An entity capable of accessing objects
  • o None of them

Question 55

Question
53. Speculating on Exceptions “Check prediction mechanism” is -
Answer
  • o Exceptions detected at end of instruction execution pipeline, special hardware for various exception types
  • o Exceptions are rare, so simply predicting no exceptions is very accurate
  • o The way in which an object is accessed by a subject
  • o None of them

Question 56

Question
52. Speculating on Exceptions “Prediction mechanism” is -
Answer
  • o Exceptions are rare, so simply predicting no exceptions is very accurate
  • o Exceptions detected at end of instruction execution pipeline, special hardware for various exception types
  • o Only write architectural state at commit point, so can throw away partially executed instructions after exception
  • o None of them

Question 57

Question
51. In a Superscalar pipeline, what does “F-D-X-M-W” mean?
Answer
  • o Fetch, Decode, Execute, Memory, Writeback
  • o Fetch, Decode, Instruct, Map, Write
  • o Fetch, Decode, Excite, Memory, Write
  • o Fetch, Decode, Except, Map, Writeback

Question 58

Question
50. How many stages are used in a Superscalar pipeline?
Answer
  • 5
  • 4
  • 6
  • 7

Question 59

Question
49. What is a SB?
Answer
  • o Scoreboard
  • o Scorebased
  • o Scalebit
  • o Scaleboard

Question 60

Question
48. What is a PRF?
Answer
  • o Physical Register File
  • o Pending Register File
  • o Pipeline Register File
  • o Pure Register File

Question 61

Question
47. What is a FSB?
Answer
  • o Finished Store Buffer
  • o Finished Stack Buffer
  • o Finished Stall Buffer
  • o Finished Star Buffer

Question 62

Question
46. What is a ROB?
Answer
  • o Reorder Buffer
  • o Read Only Buffer
  • o Reload Buffer
  • o Recall Buffer

Question 63

Question
45. What is an ARF?
Answer
  • o Architectural Register File
  • o Architecture Relocation File
  • o Architecture Reload File
  • o Architectural Read File

Question 64

Question
44. Which of the following formulas is true for the Issue Queue “Instruction Ready” condition?
Answer
  • o Instruction Ready = (!Vsrc0 || !Psrc0)&&(!Vsrc1 || !Psrc1)&& no structural hazards
  • o Instruction Ready = (!Vsrc0 || !Psrc1)&&(!Vsrc1 || !Psrc0)&& no structural hazards
  • o Instruction Ready = (!Vsrc1 || !Psrc1)&&(!Vsrc0 || !Psrc1)&& no structural hazards
  • o Instruction Ready = (!Vsrc1 || !Psrc1)&&(!Vsrc0 || !Psrc0)&& no structural hazards
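
Note: a minimal sketch of a readiness check with the shape of the first option, where Vsrc and Psrc are per-source status bits tracked by the issue queue entry; the exact meaning of the bits is an assumption here, not something the question states:

    # Readiness predicate with the structure of the first option (bit semantics assumed).
    def instruction_ready(v_src0, p_src0, v_src1, p_src1, structural_hazard):
        return ((not v_src0) or (not p_src0)) and \
               ((not v_src1) or (not p_src1)) and \
               (not structural_hazard)

    # Example: neither source blocks issue and there is no structural hazard -> ready.
    print(instruction_ready(False, True, False, True, structural_hazard=False))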

Question 65

Question
43. How many instructions are used in Distributed Superscalar 2 and Exceptions?
Answer
  • 4
  • 3
  • 2
  • 1

Question 66

Question
42. How many issue queues are used in Distributed Superscalar 2 and Exceptions?
Answer
  • 4
  • 3
  • 2
  • 1

Question 67

Question
41. How many issue queues are used in Centralized Superscalar 2 and Exceptions?
Answer
  • 4
  • 3
  • 2
  • 1

Question 68

Question
40. Little’s Law and a series of definitions lead to several useful equations for “Length queue” -:
Answer
  • o Average length of queue
  • o Average number of tasks in service

Question 69

Question
39. Little’s Law and a series of definitions lead to several useful equations for “Length server” - :
Answer
  • o Average number of tasks in service
  • o Average length of queue

Question 70

Question
38. Little’s Law and a series of definitions lead to several useful equations for “Time system” - :
Answer
  • o Average time/task in the system, or the response time, which is the sum of Time queue and Time server
  • o Average time to service a task; average service rate is 1/Time server traditionally represented by the symbol µ in many queuing texts
  • o Average time per task in the queue

Question 71

Question
37. Little’s Law and a series of definitions lead to several useful equations for “Time queue” - :
Answer
  • o Average time per task in the queue
  • o Average time to service a task; average service rate is 1/Time server traditionally represented by the symbol µ in many queuing texts
  • o Average time/task in the system, or the response time, which is the sum of Time queue and Time server

Question 72

Question
36. Little’s Law and a series of definitions lead to several useful equations for “Time server” - :
Answer
  • o Average time to service a task; average service rate is 1/Time server traditionally represented by the symbol µ in many queuing texts
  • o Average time per task in the queue
  • o Average time/task in the system, or the response time, which is the sum of Time queue and Time server

Question 73

Question
35. If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is “Think time”?
Answer
  • o The time from the reception of the response until the user begins to enter the next command
  • o The time for the user to enter the command
  • o The time between when the user enters the command and the complete response is displayed

Question 74

Question
34. If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is “System response time”?
Answer
  • o The time between when the user enters the command and the complete response is displayed
  • o The time for the user to enter the command
  • o The time from the reception of the response until the user begins to enter the next command

Question 75

Question
33. What is a kernel process?
Answer
  • o Provide at least two modes, indicating whether the running process is a user process or an operating system process
  • o Provide at least five modes, indicating whether the running process is a user process or an operating system process
  • o Provide a portion of the processor state that a user process can use but not write
  • o None of them

Question 76

Question
32. What does DDR stand for?
Answer
  • o Double data rate
  • o Dual data rate
  • o Double data reaction
  • o None of them

Question 77

Question
31. What does DRAM stand for?
Answer
  • o Dynamic Random Access memory
  • o Dual Random Access memory
  • o Dataram Random Access memory

Question 78

Question
30. What does SRAM stand for?
Answer
  • o Static Random Access memory
  • o System Random Access memory
  • o Short Random Access memory
  • o None of them

Question 79

Question
29. What is the cycle time?
Answer
  • o The minimum time between requests to memory.
  • o Time between when a read is requested and when the desired word arrives
  • o The maximum time between requests to memory.
  • o None of them

Question 80

Question
28. What is the access time?
Answer
  • o Time between when a read is requested and when the desired word arrives
  • o The minimum time between requests to memory.
  • o Describes the technology inside the memory chips and those innovative, internal organizations
  • o None of them

Question 81

Question
27. Data Hazard:
Answer
  • o An instruction depends on a data value produced by an earlier instruction
  • o An instruction in the pipeline needs a resource being used by another instruction in the pipeline
  • o Whether or not an instruction should be executed depends on a control decision made by an earlier instruction

Question 82

Question
26. Structural Hazard:
Answer
  • o An instruction in the pipeline needs a resource being used by another instruction in the pipeline
  • o An instruction depends on a data value produced by an earlier instruction
  • o Whether or not an instruction should be executed depends on a control decision made by an earlier instruction

Question 83

Question
25. Exploit spatial locality:
Answer
  • o by fetching blocks of data around recently accessed locations
  • o by remembering the contents of recently accessed locations
  • o None of them

Question 84

Question
24. Exploit temporal locality:
Answer
  • o by remembering the contents of recently accessed locations
  • o None of them
  • o by fetching blocks of data around recently accessed locations
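
Note: both kinds of locality are visible in ordinary code: a sequential array walk touches neighbouring addresses (spatial), while a value reused on every iteration keeps hitting the same location (temporal). An illustrative loop:

    # data[i] walks consecutive addresses (spatial locality);
    # 'total' is re-read and re-written every iteration (temporal locality).
    data = list(range(1024))
    total = 0
    for i in range(len(data)):
        total += data[i]
    print(total)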

Question 85

Question
23. Reduce Miss Rate: High Associativity. Empirical Rule of Thumb:
Answer
  • o Direct-mapped cache of size N has about the same miss rate as a two-way set- associative cache of size N/2
  • o If cache size is doubled, miss rate usually drops by about √2
  • o None of them

Question 86

Question
22. Reduce Miss Rate: Large Cache Size. Empirical Rule of Thumb:
Answer
  • o If cache size is doubled, miss rate usually drops by about √2
  • o Direct-mapped cache of size N has about the same miss rate as a two-way set- associative cache of size N/2
  • o None of them
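
Note: reading the rule of thumb above as a formula, doubling the cache size divides the miss rate by roughly √2. A quick sketch with an assumed baseline miss rate:

    # Empirical rule of thumb: doubling cache size drops miss rate by about sqrt(2).
    import math

    baseline_miss_rate = 0.05  # assumed miss rate at the original cache size
    doubled_size_miss_rate = baseline_miss_rate / math.sqrt(2)
    print(round(doubled_size_miss_rate, 4))  # ~0.0354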

Question 87

Question
21. Cache Hit -
Answer
  • o Write Through – write both cache and memory, generally higher traffic but simpler to design
  • o write cache only, memory is written when evicted, dirty bit per block avoids unnecessary write backs, more complicated
  • o No Write Allocate – only write to main memory

Question 88

Question
20. Least Recently Used (LRU):
Answer
  • o cache state must be updated on every access
  • o Used in highly associative caches
  • o FIFO with exception for most recently used block(s)

Question 89

Question
19. What is Computer Architecture?
Answer
  • o is the design of the abstraction/implementation layers that allow us to execute information processing applications efficiently using manufacturing technologies
  • o is a group of computer systems and other computing hardware devices that are linked together through communication channels to facilitate communication and resource-sharing among a wide range of users
  • o the programs used to direct the operation of a computer, as well as documentation giving instructions on how to use them

Question 90

Question
18. What is the Bandwidth-Delay Product?
Answer
  • o is amount of data that can be in flight at the same time (Little’s Law)
  • o is time for a single access – Main memory latency is usually >> than processor cycle time
  • o is the number of accesses per unit time – If m instructions are loads/stores, 1 + m memory accesses per instruction, CPI = 1 requires at least 1 + m memory accesses per cycle

Question 91

Question
17. What is Bandwidth?
Answer
  • o is the number of accesses per unit time – If m instructions are loads/stores, 1 + m memory accesses per instruction, CPI = 1 requires at least 1 + m memory accesses per cycle
  • o is time for a single access – Main memory latency is usually >> than processor cycle time
  • o is amount of data that can be in flight at the same time (Little’s Law)
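
Note: combining the two definitions, the bandwidth-delay product is just bandwidth multiplied by latency (Little’s Law applied to the memory system). The figures below are assumptions for illustration:

    # Bandwidth-delay product = bandwidth * latency (assumed illustrative figures).
    bandwidth_bytes_per_s = 25.6e9  # assumed memory bandwidth, bytes/second
    latency_s = 80e-9               # assumed main-memory latency, seconds
    bytes_in_flight = bandwidth_bytes_per_s * latency_s
    print(bytes_in_flight)          # ~2048 bytes must be in flight to keep the memory system busy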

Question 92

Question
16. Control Hazard:
Answer
  • o Whether or not an instruction should be executed depends on a control decision made by an earlier instruction
  • o An instruction depends on a data value produced by an earlier instruction
  • o An instruction in the pipeline needs a resource being used by another instruction in the pipeline

Question 93

Question
15. Data Hazard:
Answer
  • o An instruction depends on a data value produced by an earlier instruction
  • o An instruction in the pipeline needs a resource being used by another instruction in the pipeline
  • o Whether or not an instruction should be executed depends on a control decision made by an earlier instruction

Question 94

Question
14. Structural Hazard:
Answer
  • o An instruction in the pipeline needs a resource being used by another instruction in the pipeline
  • o An instruction depends on a data value produced by an earlier instruction
  • o Whether or not an instruction should be executed depends on a control decision made by an earlier instruction

Question 95

Question
13. The formula of “Iron Law” of Processor Performance:
Answer
  • o time/program = instruction/program * cycles/instruction * time/cycle
  • o time/program = instruction/program * cycles/instruction + time/cycle
  • o time/program = instruction/program + cycles/instruction * time/cycle
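
Note: a quick numeric check of the multiplicative form; the instruction count, CPI, and clock rate below are assumed values:

    # Iron Law: time/program = (instructions/program) * (cycles/instruction) * (time/cycle).
    instructions_per_program = 2_000_000_000  # assumed
    cycles_per_instruction = 1.2              # assumed CPI
    time_per_cycle = 1 / 3.0e9                # assumed 3 GHz clock
    time_per_program = instructions_per_program * cycles_per_instruction * time_per_cycle
    print(time_per_program)  # 0.8 seconds in this illustrative case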

Question 96

Question
12. Algorithm for Cache MISS:
Answer
  • o Processor issues load request to cache -> Compare request address to cache tags and see if there is a match -> Read block of data from main memory -> Replace victim block in cache with new block -> return copy of data from cache
  • o Processor issues load request to cache -> Read block of data from main memory -> return copy of data from cache
  • o Processor issues load request to cache -> Replace victim block in cache with new block -> return copy of data from cache

Question 97

Question
11. Algorithm for Cache HIT:
Answer
  • o Processor issues load request to cache -> Replace victim block in cache with new block -> return copy of data from cache
  • o Processor issues load request to cache -> Read block of data from main memory -> return copy of data from cache
  • o Processor issues load request to cache -> Compare request address to cache tags and see if there is a match -> return copy of data from cache
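
Note: the HIT and MISS sequences above are the two paths through one lookup routine. A minimal direct-mapped sketch following them; the cache geometry and the dictionary-based backing store are simplifying assumptions:

    # Minimal direct-mapped cache lookup following the HIT and MISS sequences above.
    NUM_LINES = 64
    BLOCK_SIZE = 16  # bytes per block (assumed)
    cache = {}       # index -> (tag, block data)
    memory = {}      # stand-in for main memory (assumed)

    def load(addr):
        block_addr = addr // BLOCK_SIZE
        index, tag = block_addr % NUM_LINES, block_addr // NUM_LINES
        line = cache.get(index)
        if line is not None and line[0] == tag:              # compare request address to cache tags
            return line[1]                                   # HIT: return copy of data from cache
        block = memory.get(block_addr, bytes(BLOCK_SIZE))    # MISS: read block of data from main memory
        cache[index] = (tag, block)                          # replace victim block in cache with new block
        return block                                         # return copy of data from cache

    print(load(0x1A2B))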

Question 98

Question
9. Capacity -
Answer
  • o cache is too small to hold all data needed by program, occur even under perfect replacement policy (loop over 5 cache lines)
  • o misses that occur because of collisions due to less than full associativity (loop over 3 cache lines)
  • o first-reference to a block, occur even with infinite cache

Question 99

Question
8. Compulsory -
Answer
  • o cache is too small to hold all data needed by program, occur even under perfect replacement policy (loop over 5 cache lines)
  • o first-reference to a block, occur even with infinite cache
  • o misses that occur because of collisions due to less than full associativity (loop over 3 cache lines)

Question 100

Question
7. Average Memory Access Time is equal to:
Answer
  • o Hit Time * ( Miss Rate + Miss Penalty )
  • o Hit Time - ( Miss Rate + Miss Penalty )
  • o Hit Time / ( Miss Rate - Miss Penalty )
  • o Hit Time + ( Miss Rate * Miss Penalty )
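
Note: with the standard additive form (hit time plus miss rate times miss penalty), a quick check using assumed numbers:

    # AMAT = Hit Time + (Miss Rate * Miss Penalty), illustrative assumed figures.
    hit_time = 1.0        # cycles (assumed)
    miss_rate = 0.05      # 5% of accesses miss (assumed)
    miss_penalty = 100.0  # cycles to service a miss (assumed)
    amat = hit_time + miss_rate * miss_penalty
    print(amat)           # 6.0 cycles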

Question 101

Question
6. Cache MISS:
Answer
  • o No Write Allocate, Write Allocate
  • o Write Through, Write Back

Question 102

Question
5. Cache HIT:
Answer
  • o No Write Allocate, Write Allocate
  • o Write Through, Write Back

Question 103

Question
4. What occurs at Data access when we speak about Common And Predictable Memory Reference Patterns?
Answer
  • o subroutine call
  • o n loop iterations
  • o vector access

Question 104

Question
3. What occurs at Stack access when we speak about Common And Predictable Memory Reference Patterns?
Answer
  • o subroutine call
  • o n loop iterations
  • o vector access

Question 105

Question
2. What occurs at Instruction fetches when we speak about Common And Predictable Memory Reference Patterns?
Answer
  • o n loop iterations
  • o subroutine call
  • o vector access

Question 106

Question
1. What is Latency?
Answer
  • o is time for a single access – Main memory latency is usually >> than processor cycle time
  • o is the number of accesses per unit time – If m instructions are loads/stores, 1 + m memory accesses per instruction, CPI = 1 requires at least 1 + m memory accesses per cycle
  • o is amount of data that can be in flight at the same time (Little’s Law)