
(a) The Central Processing Unit (CPU) is the chip, made up of millions of transistors, that is considered the 'brain' of a computer because it is the component that handles program instructions and data. The Control Unit (CU) and Arithmetic Logic Unit (ALU) are two components of the CPU. The ALU carries out arithmetic and logical operations, whilst the CU fetches and decodes instructions from memory and directs their execution. For example, a microcontroller (found in gadgets such as home appliances) is not built around a separate microprocessor chip, but does have a CPU, a certain amount of random access memory, read-only memory and other components all implemented on a single chip.

A microprocessor is the circuitry that contains the CPU and sometimes other processors such as the Graphics Processing Unit (GPU). All of the CPU's functions are incorporated on an integrated circuit (IC), which can contain more than a single CPU (two CPUs for dual-core technology). The function of the microprocessor is to control the logic of nearly every digital device. Data is carried between units by buses, which interconnect the processor with memory and I/O.



(b) Computer components have to send data to and from the processor, and require the processor's attention when they do so. The processor needs to balance the data transfers it receives from many parts of the computer to ensure that they are managed in an organised manner. One way the processor can do this is by letting devices request attention when they need a task to be handled, which is the basis of the concept of interrupts.

An interrupt is a signal to the processor, from the computer's hardware or from a program, indicating an event that requires urgent attention. The interrupt alerts the processor to stop performing its current task and perform the interrupt-handling task instead. Once this task is finished, the processor resumes what it was doing before.
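A software analogy (an illustration, not part of the original answer) can make the idea concrete: Python's signal module lets a registered handler run in response to a signal, much as an interrupt service routine runs in response to a hardware interrupt, after which the interrupted flow resumes.

```python
import signal

# Sketch: a handler that "interrupts" the main flow, analogous to an
# interrupt service routine (ISR). Assumes a POSIX system with SIGUSR1.
events = []

def handler(signum, frame):
    # The "interrupt task": record the event, then return so that the
    # interrupted work can resume.
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)  # register the handler

signal.raise_signal(signal.SIGUSR1)     # the "device" raises an interrupt
print(len(events))                      # the handler ran before control returned here
```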


(c) Dynamic random access memory (DRAM) is a form of storage widely used as a computer system's main memory. It is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit; a capacitor can be either charged or discharged, and these two states represent the values of a bit (1 and 0). A DRAM memory cell is dynamic in the sense that it must be refreshed with a new electric charge every few milliseconds to compensate for charge leaking from the capacitors. A diagram of a DRAM memory cell is shown below.


The advantage of DRAM is the simplicity of its memory cell structure: only one capacitor and one transistor are needed per bit, in comparison to the multiple transistors per cell in SRAM. This enables DRAM to achieve very high densities, which makes it cheaper per bit.

Static random access memory (SRAM) retains the bits stored in its memory for as long as power is supplied. Unlike DRAM, which holds bits in cells made of a transistor and a capacitor, SRAM does not need to be refreshed periodically, as the transistors within each cell continue to hold the data. A diagram of an SRAM memory cell is shown below.


Because SRAM is static, with no need for the continual refreshing that DRAM requires, it performs faster and is used within the processor, but it is costlier.


(d) The CPU fetches instructions and data directly from cache memory, which is situated on the processor chip. Cache memory is loaded in from main memory (RAM). However, RAM is volatile, so contents that must be recoverable after the computer's power is off need to be kept on permanent storage. These levels of memory make up the memory hierarchy; an example of this representation is shown below.

The memory hierarchy affects performance in computer algorithm predictions, in low-level programming constructs that involve locality of reference, and in architectural design. Computer storage is separated into levels ordered from the largest, slowest and cheapest storage devices to the smallest, fastest and costliest. As shown in the diagram, level 5 (L5) is the slowest and largest type of memory (remote secondary storage) and level 0 (L0) is the fastest and smallest.
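The locality of reference mentioned above can be sketched in code (an illustration, not from the original answer): traversing a 2-D array row by row touches neighbouring memory addresses in order, which is cache-friendly, whereas column-by-column traversal jumps between rows. Both orders compute the same total; on real hardware the row-major order is usually faster.

```python
# Sketch: two traversal orders over the same 2-D data. The results are
# identical; only the memory access pattern (and thus cache behaviour
# on real hardware) differs.
ROWS, COLS = 500, 500
matrix = [[1] * COLS for _ in range(ROWS)]

row_major = sum(matrix[r][c] for r in range(ROWS) for c in range(COLS))
col_major = sum(matrix[r][c] for c in range(COLS) for r in range(ROWS))

print(row_major == col_major)  # True: same total, different access order
```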


(e) A bus is a data connection that links computer components; for instance, a bus allows the central processing unit to communicate with main memory. Buses are found internally on the processor, where they carry program instructions and data around the central processing unit. They also connect the processor to main memory (RAM), to storage devices (for example over SATA) and to many I/O devices.

A simple, early bus structure is constructed from sets of parallel wires, providing a shared channel that transfers data over many wires at the same time. For instance, a 32-bit bus structure contains 32 wires (a common example of a parallel bus is PCI).



(i)   1110101

       111 111          carries


(ii)  The multiplication 110111 × 10011 (partial products for the multiplier's 0 bits omitted):

     110111
×     10011
-----------
     110111
    1101110
 1101110000
-----------
10000010101
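The binary multiplication in (ii) can be checked programmatically (an illustrative aside); Python parses base-2 strings with int() and formats integers back to binary with format():

```python
# Check the binary multiplication 110111 × 10011 by converting to integers.
a = int("110111", 2)   # 55
b = int("10011", 2)    # 19
product = a * b        # 1045
print(format(product, "b"))  # 10000010101
```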










 (i)  Conversion of the hexadecimal numbers C7 and 1A to decimal and 8-bit binary

Hexadecimal to decimal conversion:

 Since C₁₆ = 12₁₀ = 1100₂ and 7₁₆ = 7₁₀ = 0111₂,
 then C7₁₆ = 11000111₂, which is 199₁₀.

 Since 1₁₆ = 1₁₀ = 0001₂ and A₁₆ = 10₁₀ = 1010₂,
 then 1A₁₆ = 00011010₂, which is 26₁₀.

 Hexadecimal to binary conversion:

 Since C₁₆ = 1100₂ and 7₁₆ = 0111₂, then C7₁₆ = 11000111₂.

 Since 1₁₆ = 0001₂ and A₁₆ = 1010₂, then 1A₁₆ = 00011010₂.
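These conversions can be verified with Python's base parsing and formatting (an illustrative aside, not part of the original working):

```python
# Verify the hexadecimal-to-decimal and hexadecimal-to-binary conversions.
c7 = int("C7", 16)
x1a = int("1A", 16)
print(c7, format(c7, "08b"))    # 199 11000111
print(x1a, format(x1a, "08b"))  # 26 00011010
```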


 (ii) Conversion
of decimal numbers 103 and 245 to hexadecimal and 8-bit binary

Decimal to hexadecimal conversion:

128 64 32 16 8 4 2 1

  0  1  1  0 0 1 1 1

64 + 32 + 4 + 2 + 1 = 103, therefore 103₁₀ = 01100111₂

Each hexadecimal digit is equivalent to 4 bits, so since 0110₂ = 6₁₆ and 0111₂ = 7₁₆, then 103₁₀ = 67₁₆.

128 64 32 16 8 4 2 1

  1  1  1  1 0 1 0 1

128 + 64 + 32 + 16 + 4 + 1 = 245, therefore 245₁₀ = 11110101₂

Each hexadecimal digit is equivalent to 4 bits, so since 1111₂ = F₁₆ and 0101₂ = 5₁₆, then 245₁₀ = F5₁₆.


Decimal to binary conversion:

128 64 32 16 8 4 2 1

  0  1  1  0 0 1 1 1

64 + 32 + 4 + 2 + 1 = 103, therefore 103₁₀ = 01100111₂.

128 64 32 16 8 4 2 1

  1  1  1  1 0 1 0 1

128 + 64 + 32 + 16 + 4 + 1 = 245, therefore 245₁₀ = 11110101₂
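The decimal-to-binary and decimal-to-hexadecimal results can likewise be checked with Python's format() built-in (an illustrative aside):

```python
# Verify the conversions of 103 and 245 to 8-bit binary and hexadecimal.
for n in (103, 245):
    print(n, format(n, "08b"), format(n, "X"))
# 103 01100111 67
# 245 11110101 F5
```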





 128 64 32 16 8 4 2 1

   0  1  0  0 1 0 0 1

64 + 8 + 1 = 73, therefore 73₁₀ = 01001001₂

The value -73 can be represented in 8-bit 2's complement by inverting every bit and adding 1; equivalently, since the least significant bit here is 1, by flipping every bit while keeping that final 1 unchanged. Flipping the bits of 01001001 and keeping the least significant bit gives 10110111, which is the 8-bit 2's complement binary representation of -73.

 (ii) A binary value representation that begins with 0 is positive; if it begins with 1 it is negative. Therefore 11100011 represents a negative value in 8-bit 2's complement.

If every bit except the trailing least significant 1 is flipped, the outcome is 00011101, which represents the positive number 29; thus 11100011 represents -29.
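Both directions of the 2's complement working above can be sketched in a few lines of Python (an illustrative aside; the helper names are my own):

```python
# Sketch: 8-bit two's complement encoding and decoding.
def to_twos_complement(value, bits=8):
    # Masking with 2^bits - 1 keeps the low `bits` bits of the
    # (conceptually infinite) two's complement representation.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

def from_twos_complement(pattern, bits=8):
    n = int(pattern, 2)
    # A leading 1 marks a negative value: subtract 2^bits to decode it.
    return n - (1 << bits) if pattern[0] == "1" else n

print(to_twos_complement(-73))           # 10110111
print(from_twos_complement("11100011"))  # -29
```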



(i) ASCII encoding works well for the English language, as Western characters are represented by specific numbers, with the values 0-127 assigned to letters and control codes. However, it does not assign numbers to characters that are not used in English text, such as the different characters used in Chinese or Arabic.

To resolve this issue, 'code pages' were defined to use the space of values 128-255, which ASCII left unspecified. Each value from 128-255 was mapped to the various characters required for other languages. However, 128 extra characters are not enough for the whole world, so the code page used depended on the country (a Thai code page, a French code page, and so on).

There was no problem with data exchange as long as both sender and receiver used the same code page (character number 135 on the sender's machine was identical to character number 135 on the receiver's machine). This broke down when the code pages differed (a Hebrew sender and a Russian receiver, say), because the character assigned to the number 135 differed between the Hebrew and Russian code pages. This method of global data exchange was prone to errors, which led to the development of Unicode.

The Unicode standard assigns each character a code point; for instance, 'A' has the code point U+0041 (the hexadecimal code point; in decimal it is 65). Individual characters of every language were mapped to code points, and space was left for approximately 1 million code points, which was sufficient for every known character.

 (ii) The high-order bits in UTF-8 are important because the starting bits of the first byte show the number of bytes used to encode a value. The Arabic Alef, for example, has the Unicode code point U+0627 and uses a 2-byte encoding. In 2-byte encodings the first 3 bits of the first byte are '110' and the first 2 bits of the second byte are '10', which means that 11 bits remain to encode the character. The starting bits ('110' and '10') are part of the UTF-8 encoding scheme itself rather than of the character's code point.

(iii) The binary sequence 11100001 has the starting bits '1110', meaning that the character uses 3 bytes for its encoding.
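The leading-bit patterns described in (ii) can be inspected directly in Python (an illustrative aside):

```python
# Inspect the UTF-8 bytes of the Arabic Alef, code point U+0627.
alef = "\u0627"
encoded = alef.encode("utf-8")
print(len(encoded))               # 2 (a 2-byte encoding)
print(format(encoded[0], "08b"))  # 11011000 (starts with '110')
print(format(encoded[1], "08b"))  # 10100111 (continuation byte, starts with '10')
```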



(i) Rounding error is the difference between an exact value and its rounded-off value. The rounded quantity is represented with a fixed number of permitted digits, with the last digit set to the numerical value that creates the smallest difference between the actual and the rounded quantity.


(iii) Floating point arithmetic is not exact: even simple values such as 0.2 cannot be represented exactly as binary floating point numbers. Furthermore, the limited precision of floating point values means that minor changes in the order of operations can change the outcome. Different central processing unit architectures and compilers store intermediate results at different precisions. If you carry out a calculation and compare the outcome against an expected value, it is very likely that you will not get exactly the outcome you intended.












(a) An interpreter is similar to a compiler in that it is a tool used to translate code; however, an interpreter reads code and immediately executes it. High-level instructions are translated into an intermediate form that is then executed.

(b) Pipelining is the overlapped, continual movement of instructions through the central processing unit. Without pipelining, a processor acquires an instruction from memory, executes the required task, fetches the next instruction from memory, and so on. Throughout the instruction-fetching process, the arithmetic section of the processor sits idle, as it has to wait until it obtains the next instruction. Pipelining solves this issue: while the processor is carrying out arithmetic tasks, the architecture enables upcoming instructions to be fetched and kept in a buffer near the processor until each instruction can be executed. The process of fetching instructions is continual, and the outcome is a rise in the number of instructions that can be executed within a given period of time.

With pipelining, when there are multiple instructions in a MIPS program, the processor can start performing the next instruction before the previous one has completed, in order to maximise efficiency.

There are five stages in the MIPS pipeline:

 1. IF (Instruction Fetch): obtains the instruction from memory.

 2. ID (Instruction Decode): the opcode is translated into control signals.

 3. EX (Execute): carries out the ALU operation.

 4. MEM (Memory): accesses memory if required.

 5. WB (Writeback): the register file is updated.
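The throughput gain from overlapping these stages can be sketched with a simple cycle count (an illustration, not from the original answer; hazards are ignored): without pipelining each instruction occupies all five stages in turn, while with pipelining a new instruction completes every cycle once the pipeline is full.

```python
# Idealised cycle counts for n instructions on a 5-stage pipeline.
STAGES = 5

def cycles_unpipelined(n):
    # Each instruction runs all five stages before the next one starts.
    return STAGES * n

def cycles_pipelined(n):
    # The first instruction takes STAGES cycles; after that, one
    # instruction completes per cycle.
    return STAGES + (n - 1)

print(cycles_unpipelined(10))  # 50
print(cycles_pipelined(10))    # 14
```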


 (c) The function of a processor involves processing data, which is stored in and retrieved from memory. Storing data in memory and reading it back slows down the processor, because of the complex operations involved in sending data requests to the memory unit and retrieving the results. Therefore, to improve the performance of processor operations, processors have on-chip storage locations known as registers.

Registers provide storage for the data elements that are about to be processed, without the need to access main memory. A finite number of registers is implemented on the central processing unit chip.

The MIPS instruction bne $t0, $t1, loop (branch if not equal) is a conditional statement meaning that if the values in the registers $t0 and $t1 are not equal, execution jumps to the instructions under the loop label; otherwise it continues with the next instruction.

The branch-if-not-equal instruction affects the effectiveness of pipelining because two cycles are wasted waiting for the branch decision. One strategy to overcome this problem is to continue fetching under the presumption that the branch is not taken, and to cancel the unwanted instructions if the branch turns out to be taken.
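The cost of this predict-not-taken strategy can be sketched numerically (an illustration with assumed numbers, based on the 2-cycle figure above): each taken branch flushes the wrongly fetched instructions at a 2-cycle penalty, while not-taken branches cost nothing extra.

```python
# Sketch: total misprediction penalty under predict-not-taken.
def branch_penalty(outcomes, penalty=2):
    # outcomes: True where the branch was actually taken (a misprediction
    # for predict-not-taken), False where it fell through.
    return sum(penalty for taken in outcomes if taken)

print(branch_penalty([True, False, True, True, False]))  # 6
```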
