109. What is a “Kernel” in Cache Memory?
o Execution in the OS that is neither idle nor in synchronization access
o Execution or waiting for synchronization variables
o Execution in user code
108. What is a “Synchronization” in Cache Memory?
107. How many main levels of cache memory are there?
3
2
6
8
106. Which size is approximately correct for an L3 cache? :
o 3 MB
o 256 KB
o 256 MB
105. Which size is approximately correct for an L2 cache? :
o 4 KB
o 32 MB
104. Which size is approximately correct for an L1 cache? :
o 8 KB
o 2 MB
103. Little’s Law and a series of definitions lead to several useful equations for “Length queue” - :
o Average length of queue
o Average number of tasks in service
102. Little’s Law and a series of definitions lead to several useful equations for “Length server” - :
101. Little’s Law and a series of definitions lead to several useful equations for “Time system” - :
o Average time/task in the system, or the response time, which is the sum of Time queue and Time server
o Average time to service a task; average service rate is 1/Time server traditionally represented by the symbol µ in many queuing texts
o Average time per task in the queue
100. Little’s Law and a series of definitions lead to several useful equations for “Time queue” - :
99. Little’s Law and a series of definitions lead to several useful equations for “Time server” - :
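The relations behind questions 103–99 can be made concrete with a short sketch. This is a minimal illustration, not from the source: it assumes a single-server M/M/1 queue and made-up arrival-rate and service-time values.

```python
# Little's Law sketch for a single-server queue (hypothetical numbers).
# Length = Arrival rate x Time, applied to the server, the queue, and the whole system.

arrival_rate = 40.0    # tasks per second (assumed)
time_server = 0.020    # average time to service a task, in seconds (assumed)

server_utilization = arrival_rate * time_server             # also Length_server
# For an M/M/1 queue: Time_queue = Time_server x Utilization / (1 - Utilization)
time_queue = time_server * server_utilization / (1 - server_utilization)
time_system = time_queue + time_server                      # response time

length_queue = arrival_rate * time_queue                    # average tasks waiting
length_server = arrival_rate * time_server                  # average tasks in service

print(f"Time_system  = {time_system * 1000:.1f} ms")
print(f"Length_queue = {length_queue:.2f} tasks")
print(f"Length_server = {length_server:.2f} tasks")
```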
98. In storage systems, an interaction or transaction with a computer is divided into three parts; what is “Think time”?
o The time from the reception of the response until the user begins to enter the next command
o The time for the user to enter the command
o The time between when the user enters the command and the complete response is displayed
97. In storage systems, an interaction or transaction with a computer is divided into three parts; what is “System response time”?
o The time for the user to enter the command
96. In storage systems, an interaction or transaction with a computer is divided into three parts; what is “Entry time”?
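For questions 98–96, the three parts simply add up to the total transaction time. A tiny worked sketch with assumed values:

```python
# The three parts of an interaction with a computer (hypothetical values, in seconds).
entry_time = 0.25            # time for the user to enter the command
system_response_time = 1.0   # from entering the command until the complete response is displayed
think_time = 3.0             # from receiving the response until the next command is entered

transaction_time = entry_time + system_response_time + think_time
print(f"Total transaction time: {transaction_time:.2f} s")
```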
95. In storage systems, Gray and Siewiorek classify faults; what are “Environmental faults”?
o Fire, flood, earthquake, power failure, and sabotage
o Faults in software (usually) and hardware design (occasionally)
o Devices that fail, such as perhaps due to an alpha particle hitting a memory cell
94. In storage systems, Gray and Siewiorek classify faults; what are “Operation faults”?
o Mistakes by operations and maintenance personnel
93. In storage systems, Gray and Siewiorek classify faults; what are “Design faults”?
92. In storage systems, Gray and Siewiorek classify faults; what are “Hardware faults”?
91. What is RAID 4?
o Many applications are dominated by small accesses
o Since the higher-level disk interfaces understand the health of a disk, it’s easy to figure out which disk failed
o Also called mirroring or shadowing, there are two copies of every piece of data
90. What is RAID 3?
89. What is RAID 2?
o This organization was inspired by applying memory-style error correcting codes to disks
o It has no redundancy and is sometimes nicknamed JBOD, for “just a bunch of disks,” although the data may be striped across the disks in the array
88. What is RAID 1?
87. What is RAID 0?
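For the parity-based levels in questions 91 and 90 (RAID 4 and RAID 3), data on a failed disk is rebuilt from XOR parity kept on a dedicated parity disk. A minimal sketch with made-up block contents:

```python
from functools import reduce

# One stripe of data blocks across four data disks (hypothetical byte values).
data_blocks = [b"\x11\x22", b"\x33\x44", b"\x55\x66", b"\x77\x88"]

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Parity block stored on the dedicated parity disk.
parity = reduce(xor_blocks, data_blocks)

# If one data disk fails, its block is rebuilt by XOR-ing the parity with the survivors.
failed = 2
survivors = [blk for i, blk in enumerate(data_blocks) if i != failed]
rebuilt = reduce(xor_blocks, survivors + [parity])
assert rebuilt == data_blocks[failed]
print("Block on failed disk reconstructed:", rebuilt.hex())
```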
86. A virus classification by target includes the following categories; what is a File infector?
o Infects files that the operating system or shell consider to be executable
o A typical approach is as follows
o The key is stored with the virus
o Far more sophisticated techniques are possible
85. In Non-Blocking Caches, what does “Early restart” mean?
o Fetch the words in normal order, but as soon as the requested word of the block arrives, send it to the processor and let the processor continue execution
o Request the missed word first from memory and send it to the processor as soon as it arrives; let the processor continue execution while filling the rest of the words in the block
84. In Non-Blocking Caches, what does “Critical Word First” mean?
83. Storage Systems, “Higher associativity to reduce miss rate” -
o Obviously, increasing associativity reduces conflict misses
o The obvious way to reduce capacity misses is to increase cache capacity
o The simplest way to reduce the miss rate is to take advantage of spatial locality and increase the block size
82. Storage Systems, “Bigger caches to reduce miss rate” -
81. Storage Systems, “Larger block size to reduce miss rate” -
80. At Critical Word First for Miss Penalty, choose the correct “Order of fill” sequence for a Blocking Cache with Critical Word First:
o 3,4,5,6,7,0,1,2
o 0,1,2,3,4,5,6,7
79. At Critical Word First for Miss Penalty, choose the correct “Order of fill” sequence for a Basic Blocking Cache:
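A small sketch of where the two fill orders above come from, assuming an 8-word block and that word 3 is the word the processor requested (matching the 3,4,5,6,7,0,1,2 option):

```python
BLOCK_WORDS = 8

def basic_fill_order() -> list[int]:
    # Basic blocking cache: always fill the block starting at word 0.
    return list(range(BLOCK_WORDS))

def critical_word_first_order(requested_word: int) -> list[int]:
    # Critical word first: fetch the missed word first, then wrap around the block.
    return [(requested_word + i) % BLOCK_WORDS for i in range(BLOCK_WORDS)]

print(basic_fill_order())              # [0, 1, 2, 3, 4, 5, 6, 7]
print(critical_word_first_order(3))    # [3, 4, 5, 6, 7, 0, 1, 2]
```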
78. What does MAF stand for?
o Miss Address File
o Map Address File
o Memory Address File
77. What does MSHR stand for?
o Miss Status Handling Register
o Map Status Handling Reload
o Mips Status Hardware Register
o Memory Status Handling Register
76. In the Non-Blocking Cache Timeline, what is the sequence for “Miss Under Miss”?
o CPU time->Cache Miss->Miss->Stall on use->Miss Penalty->Miss Penalty->CPU time
o CPU time->Cache Miss->Hit->Stall on use->Miss Penalty->CPU time
o CPU time->Cache Miss->Miss Penalty->CPU time
75. In the Non-Blocking Cache Timeline, what is the sequence for “Hit Under Miss”?
o CPU time->Cache Miss->Miss->Stall on use->Miss Penalty->CPU time
74. In the Non-Blocking Cache Timeline, what is the sequence for “Blocking Cache”?
73. In Multilevel Caches, “Misses per instruction” equals:
o misses in cache / number of instructions
o misses in cache / accesses to cache
o misses in cache / CPU memory accesses
72. In Multilevel Caches, “Global miss rate” equals:
71. In Multilevel Caches, “Local miss rate” equals:
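A minimal sketch of the three ratios in questions 73–71 for a two-level hierarchy, using assumed miss and access counts:

```python
# Hypothetical counts for a two-level cache hierarchy.
instructions = 1_000_000
l1_accesses  = 1_300_000    # CPU memory accesses (all go to L1)
l1_misses    = 52_000
l2_accesses  = l1_misses    # L2 is only accessed on an L1 miss
l2_misses    = 13_000

# Local miss rate: misses in this cache / accesses to this cache.
l1_local_miss_rate = l1_misses / l1_accesses
l2_local_miss_rate = l2_misses / l2_accesses

# Global miss rate: misses in this cache / CPU memory accesses.
l2_global_miss_rate = l2_misses / l1_accesses   # = L1 miss rate x L2 local miss rate

# Misses per instruction: misses in the cache / number of instructions.
l2_misses_per_instruction = l2_misses / instructions

print(f"L2 local miss rate:  {l2_local_miss_rate:.2%}")
print(f"L2 global miss rate: {l2_global_miss_rate:.2%}")
print(f"L2 misses per 1000 instructions: {1000 * l2_misses_per_instruction:.1f}")
```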
70. What is a Conflict?
o misses that occur because of collisions due to less than full associativity
o first-reference to a block, occur even with infinite cache
o cache is too small to hold all data needed by program, occur even under perfect replacement policy
67. At VLIW Multi-Way Branches, which of these solutions is correct for the problem: Long instructions provide few opportunities for branches?
o Allow one instruction to branch multiple directions
o Speculative operations that don’t cause exceptions
66. What is an ALAT?
o Advanced Load Address Table
o Allocated Link Address Table
o Allowing List Address Table
o Addition Long Accessibility Table
65. At VLIW Speculative Execution, which of these solutions is correct for the problem: Possible memory hazards limit code scheduling?
o Hardware to check pointer hazards
64. At VLIW Speculative Execution, which of these solutions is correct for the problem: Branches restrict compiler code motion?
63. At VLIW, comparing performance per loop iteration, which schedule takes the shorter time?
o Software Pipelined
o Loop Unrolled
62. At VLIW, comparing performance per loop iteration, which schedule takes the longer time?
61. What is “VLIW”?
o Very Long Instruction Word
o Very Light Internal Word
o Very Less Interpreter Word
o Very Low Invalid Word
60. In the Out-of-Order Control Complexity of the MIPS R10000, which element is not in the Control Logic?
o Integer Datapath
o CLK
o Free List
o Address Queue
59. In the Out-of-Order Control Complexity of the MIPS R10000, which element is in the Control Logic?
o Register rename
o Instruction cache
o Data tags
o Data cache
58. At VLIW, which parameters are used in “Superscalar Control Logic Scaling”?
o Width and Lifetime
o Width and Height
o Time and Cycle
o Length and Addition
57. What is an IQ?
o Issue Queue
o Internal Queue
o Interrupt Queue
o Instruction Queue
56. What is an FL?
o Free List
o Free Last
o Free Launch
o Free Leg
55. What is an RT?
o Rename Table
o Recall Table
o Relocate Table
o Remove Table
54. Speculating on Exceptions “Recovery mechanism” is -
o Only write architectural state at commit point, so can throw away partially executed instructions after exception
o Exceptions are rare, so simply predicting no exceptions is very accurate
o An entity capable of accessing objects
o None of them
53. Speculating on Exceptions “Check prediction mechanism” is -
o Exceptions detected at end of instruction execution pipeline, special hardware for various exception types
o The way in which an object is accessed by a subject
52. Speculating on Exceptions “Prediction mechanism” is -
51. In a Superscalar pipeline, what does “F-D-X-M-W” mean?
o Fetch, Decode, Execute, Memory, Writeback
o Fetch, Decode, Instruct, Map, Write
o Fetch, Decode, Excite, Memory, Write
o Fetch, Decode, Except, Map, Writeback
50. How many stages are used in the Superscalar pipeline?
5
4
7
49. What is an SB?
o Scoreboard
o Scorebased
o Scalebit
o Scaleboard
48. What is a PRF?
o Physical Register File
o Pending Register File
o Pipeline Register File
o Pure Register File
47. What is an FSB?
o Finished Store Buffer
o Finished Stack Buffer
o Finished Stall Buffer
o Finished Star Buffer
46. What is a ROB?
o Reorder Buffer
o Read Only Buffer
o Reload Buffer
o Recall Buffer
45. What is an ARF?
o Architectural Register File
o Architecture Relocation File
o Architecture Reload File
o Architectural Read File
44. Which of the following formulas is correct for the Issue Queue “Instruction Ready” condition?
o Instruction Ready = (!Vsrc0 || !Psrc0) && (!Vsrc1 || !Psrc1) && no structural hazards
o Instruction Ready = (!Vsrc0 || !Psrc1) && (!Vsrc1 || !Psrc0) && no structural hazards
o Instruction Ready = (!Vsrc1 || !Psrc1) && (!Vsrc0 || !Psrc1) && no structural hazards
o Instruction Ready = (!Vsrc1 || !Psrc1) && (!Vsrc0 || !Psrc0) && no structural hazards
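A small sketch of the first condition listed above, assuming V marks a source operand as valid (actually used by the instruction) and P marks it as still pending (value not yet produced); the field names come from the formula, the rest is illustrative:

```python
from dataclasses import dataclass

@dataclass
class IssueQueueEntry:
    v_src0: bool   # source 0 is used by this instruction
    p_src0: bool   # source 0 is still pending (value not yet available)
    v_src1: bool
    p_src1: bool

def instruction_ready(e: IssueQueueEntry, structural_hazard: bool) -> bool:
    # Instruction Ready = (!Vsrc0 || !Psrc0) && (!Vsrc1 || !Psrc1) && no structural hazards
    src0_ok = (not e.v_src0) or (not e.p_src0)
    src1_ok = (not e.v_src1) or (not e.p_src1)
    return src0_ok and src1_ok and not structural_hazard

# Example: src0 already available, src1 unused, no structural hazard -> ready.
print(instruction_ready(IssueQueueEntry(True, False, False, False), structural_hazard=False))
```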
43. How many instructions are used in Distributed Superscalar 2 and Exceptions?
1
42. How many issue queues are used in Distributed Superscalar 2 and Exceptions?
41. How many issue queues are used in Centralized Superscalar 2 and Exceptions?
40. Little’s Law and a series of definitions lead to several useful equations for “Length queue” -:
39. Little’s Law and a series of definitions lead to several useful equations for “Length server” - :
38. Little’s Law and a series of definitions lead to several useful equations for “Time system” - :
37. Little’s Law and a series of definitions lead to several useful equations for “Time queue” - :
36. Little’s Law and a series of definitions lead to several useful equations for “Time server” - :
35. In storage systems, an interaction or transaction with a computer is divided into three parts; what is “Think time”?
34. In storage systems, an interaction or transaction with a computer is divided into three parts; what is “System response time”?
33. What is a kernel process?
o Provide at least two modes, indicating whether the running process is a user process or an operating system process
o Provide at least five modes, indicating whether the running process is a user process or an operating system process
o Provide a portion of the processor state that a user process can use but not write
32. What does DDR stand for?
o Double data rate
o Dual data rate
o Double data reaction
31. What does DRAM stand for?
o Dynamic Random Access memory
o Dual Random Access memory
o Dataram Random Access memory
30. What does SRAM stand for?
o Static Random Access memory
o System Random Access memory
o Short Random Access memory
29. What is the cycle time?
o The minimum time between requests to memory.
o Time between when a read is requested and when the desired word arrives
o The maximum time between requests to memory.
28. What is the access time?
o Describes the technology inside the memory chips and those innovative, internal organizations
27. Data Hazard:
o An instruction depends on a data value produced by an earlier instruction
o An instruction in the pipeline needs a resource being used by another instruction in the pipeline
o Whether or not an instruction should be executed depends on a control decision made by an earlier instruction
26. Structural Hazard:
25. Exploit spatial locality:
o by fetching blocks of data around recently accessed locations
o by remembering the contents of recently accessed locations
24. Exploit temporal locality:
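A brief, conceptual illustration of the two kinds of locality in questions 25 and 24, using an arbitrary summation loop (array contents and size are made up):

```python
data = list(range(1024))   # arbitrary array

total = 0
for x in data:             # spatial locality: neighbouring elements are touched in order,
    total += x             #   so fetching blocks around recently accessed locations pays off
                           # temporal locality: 'total' is re-read and re-written on every
                           #   iteration, so remembering recently accessed locations pays off
print(total)
```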
23. Reduce Miss Rate: High Associativity. Empirical Rule of Thumb:
o Direct-mapped cache of size N has about the same miss rate as a two-way set- associative cache of size N/2
o If cache size is doubled, miss rate usually drops by about √2
22. Reduce Miss Rate: Large Cache Size. Empirical Rule of Thumb:
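A tiny numeric sketch of both rules of thumb, starting from an assumed 32 KB cache with a 4% miss rate:

```python
import math

base_miss_rate = 0.04      # assumed miss rate for a 32 KB cache

# Large cache rule of thumb: doubling the cache size drops the miss rate by about sqrt(2).
doubled_cache_miss_rate = base_miss_rate / math.sqrt(2)
print(f"64 KB estimated miss rate: {doubled_cache_miss_rate:.3%}")

# High associativity (2:1 cache rule): a direct-mapped cache of size N has about the
# same miss rate as a two-way set-associative cache of size N/2.
print(f"Direct-mapped 32 KB ~= two-way 16 KB: about {base_miss_rate:.1%} miss rate for both")
```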
21. Cache Hit -
o Write Through – write both cache and memory, generally higher traffic but simpler to design
o Write Back – write cache only; memory is written when the block is evicted; a dirty bit per block avoids unnecessary write backs; more complicated
o No Write Allocate – only write to main memory
20. Least Recently Used (LRU):
o cache state must be updated on every access
o Used in highly associative caches
o FIFO with exception for most recently used block(s)
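A minimal LRU sketch for one set of a hypothetical 4-way set-associative cache, tracking tags only; it shows the state update on every access and eviction of the least recently used tag:

```python
from collections import OrderedDict

class LRUSet:
    """One set of a 4-way set-associative cache; tracks tags only."""
    def __init__(self, ways: int = 4):
        self.ways = ways
        self.tags = OrderedDict()          # oldest (least recently used) first

    def access(self, tag: int) -> bool:
        if tag in self.tags:               # hit: update LRU state on every access
            self.tags.move_to_end(tag)
            return True
        if len(self.tags) == self.ways:    # miss with a full set: evict the LRU tag
            self.tags.popitem(last=False)
        self.tags[tag] = None              # fill the new block as most recently used
        return False

s = LRUSet()
print([s.access(t) for t in [1, 2, 3, 4, 1, 5, 2]])  # the miss on 5 evicts 2 (the LRU tag)
```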
19. What is Computer Architecture?
o is the design of the abstraction/implementation layers that allow us to execute information processing applications efficiently using manufacturing technologies
o is a group of computer systems and other computing hardware devices that are linked together through communication channels to facilitate communication and resource-sharing among a wide range of users
o the programs used to direct the operation of a computer, as well as documentation giving instructions on how to use them
18. What is a Bandwidth-Delay Product:
o is amount of data that can be in flight at the same time (Little’s Law)
o is the time for a single access – main memory latency is usually much greater than the processor cycle time
o is the number of accesses per unit time – If m instructions are loads/stores, 1 + m memory accesses per instruction, CPI = 1 requires at least 1 + m memory accesses per cycle
17. What is a Bandwidth:
o is the number of accesses per unit time – If m instructions are loads/stores, 1 + m memory accesses per instruction, CPI = 1 requires at least 1 + m memory accesses per cycle
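The bandwidth-delay product in question 18 is just bandwidth multiplied by latency; a tiny sketch with assumed memory-system parameters:

```python
# Assumed memory system parameters.
bandwidth_bytes_per_cycle = 16      # bytes delivered per cycle
latency_cycles = 100                # cycles from request to first data

# Bandwidth-delay product: how much data must be in flight to keep the pipe full.
bytes_in_flight = bandwidth_bytes_per_cycle * latency_cycles
print(f"Bandwidth-delay product: {bytes_in_flight} bytes in flight")
```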
16. Control Hazard:
15. Data Hazard:
14. Structural Hazard:
13. The formula of “Iron Law” of Processor Performance:
o time/program = instruction/program * cycles/instruction * time/cycle
o time/program = instruction/program * cycles/instruction + time/cycle
o time/program = instruction/program + cycles/instruction * time/cycle
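A quick worked instance of the first (correct) formula, with assumed instruction count, CPI, and clock frequency:

```python
# Iron Law: time/program = (instructions/program) x (cycles/instruction) x (time/cycle)
instructions_per_program = 2_000_000_000      # assumed dynamic instruction count
cycles_per_instruction   = 1.2                # assumed CPI
clock_period_seconds     = 1 / 3.0e9          # assumed 3 GHz clock

time_per_program = instructions_per_program * cycles_per_instruction * clock_period_seconds
print(f"Execution time: {time_per_program:.3f} s")   # 2e9 * 1.2 / 3e9 = 0.8 s
```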
12. Algorithm for Cache MISS:
o Processor issues load request to cache -> Compare request address to cache tags and see if there is a match -> Read block of data from main memory -> Replace victim block in cache with new block -> return copy of data from cache
o Processor issues load request to cache -> Read block of data from main memory -> return copy of data from cache
o Processor issues load request to cache -> Replace victim block in cache with new block -> return copy of data from cache
11. Algorithm for Cache HIT:
o Processor issues load request to cache -> Compare request address to cache tags and see if there is a match -> return copy of data from cache
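A compact sketch of the load paths in questions 12 and 11, modelling a direct-mapped cache with made-up geometry; main memory is a plain dictionary stand-in:

```python
BLOCK = 64          # bytes per block (assumed)
SETS  = 256         # number of sets in a direct-mapped cache (assumed)

cache = {}                                   # set index -> (tag, data)
main_memory = {}                             # block address -> data (stand-in)

def load(addr: int) -> bytes:
    block_addr = addr // BLOCK
    index, tag = block_addr % SETS, block_addr // SETS
    line = cache.get(index)
    # Compare the request address to the cache tag and see if there is a match.
    if line is not None and line[0] == tag:
        return line[1]                       # HIT: return copy of data from cache
    data = main_memory.get(block_addr, b"\x00" * BLOCK)   # MISS: read block from main memory
    cache[index] = (tag, data)               # replace the victim block with the new block
    return data                              # return copy of data from cache

print(len(load(0x1234)))                     # first access misses, then the block is cached
```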
9. Capacity -
o cache is too small to hold all data needed by program, occur even under perfect replacement policy (loop over 5 cache lines)
o misses that occur because of collisions due to less than full associativity (loop over 3 cache lines)
8. Compulsory -
7. Average Memory Access Time is equal:
o Hit Time * ( Miss Rate + Miss Penalty )
o Hit Time - ( Miss Rate + Miss Penalty )
o Hit Time / ( Miss Rate - Miss Penalty )
o Hit Time + ( Miss Rate * Miss Penalty )
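A worked instance of the correct option (Hit Time + Miss Rate × Miss Penalty) with assumed parameters:

```python
hit_time_cycles     = 1       # assumed L1 hit time
miss_rate           = 0.05    # assumed miss rate
miss_penalty_cycles = 100     # assumed penalty to fetch the block from the next level

amat = hit_time_cycles + miss_rate * miss_penalty_cycles
print(f"Average Memory Access Time: {amat} cycles")   # 1 + 0.05 * 100 = 6 cycles
```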
6. Cache MISS:
o No Write Allocate, Write Allocate
o Write Through, Write Back
5. Cache HIT:
4. In Common and Predictable Memory Reference Patterns, what occurs at data accesses?
o subroutine call
o n loop iterations
o vector access
3. In Common and Predictable Memory Reference Patterns, what occurs at stack accesses?
2. In Common and Predictable Memory Reference Patterns, what occurs at instruction fetches?
1. What is a Latency: