

Chapter 12: Memory Organization 2 (External Memory)
Tuesday, 6 December 2016 • 19:51




Types of External Memory
• Magnetic Disk
  - RAID
  - Removable
• Optical
  - CD-ROM
  - CD-Recordable (CD-R)
  - CD-RW
  - DVD
• Magnetic Tape

Magnetic Disk
Definition
A magnetic disk is a storage device that uses a magnetization process to write, rewrite and access data. It is covered with a magnetic coating and stores data in the form of tracks, spots and sectors. Hard disks, zip disks and floppy disks are common examples of magnetic disks.
• Disk substrate coated with magnetizable material (iron oxide... rust)
• Substrate used to be aluminium; now glass, which gives:
  - Improved surface uniformity, which increases reliability
  - Reduction in surface defects, which reduces read/write errors
  - Lower flight heights (see later)
  - Better stiffness
  - Better shock/damage resistance





Read and Write Mechanisms
• Recording and retrieval via a conductive coil called a head
• May be a single read/write head or separate ones
• During read/write, the head is stationary while the platter rotates
• Write
  - Current through the coil produces a magnetic field
  - Pulses sent to the head
  - Magnetic pattern recorded on the surface below
• Read (traditional)
  - A magnetic field moving relative to a coil produces a current
  - The same coil is used for read and write
• Read (contemporary)
  - Separate read head, close to the write head
  - Partially shielded magnetoresistive (MR) sensor
  - Electrical resistance depends on the direction of the magnetic field
  - High-frequency operation
• Result: higher storage density and speed

Data Organization and Formatting
• Concentric rings, or tracks
  - Gaps between tracks
  - Reduce gap to increase capacity
  - Same number of bits per track (variable packing density)
  - Constant angular velocity
• Tracks divided into sectors
• Minimum block size is one sector
• May have more than one sector per block




[Figure: Disk Data Layout]

Disk Velocity
• A bit near the centre of a rotating disk passes a fixed point more slowly than a bit on the outside of the disk
• Increase the spacing between bits in different tracks
• Rotate the disk at constant angular velocity (CAV)
  - Gives pie-shaped sectors and concentric tracks
  - Individual tracks and sectors are addressable
  - Move the head to a given track and wait for the given sector
  - Wastes space on outer tracks (lower data density)
• Can use zones to increase capacity (multiple zone recording), as sketched below
  - Each zone has a fixed number of bits per track
  - More complex circuitry
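To make the capacity gain from zoning concrete, here is a minimal Python sketch with hypothetical disk parameters; the zone sizes and sectors-per-track figures are invented for illustration, not taken from a real drive.

# Compare total capacity under constant angular velocity (CAV)
# with multiple zone recording, for the same 10,000 tracks.
SECTOR_BYTES = 512
TRACKS = 10_000

# CAV: every track holds only what the innermost track can hold.
cav_capacity = TRACKS * 400 * SECTOR_BYTES

# Zoned: tracks are grouped into zones; outer zones pack more sectors per track.
zones = [
    (4_000, 400),   # innermost 4,000 tracks: 400 sectors/track
    (3_000, 550),   # middle zone: 550 sectors/track
    (3_000, 700),   # outermost zone: 700 sectors/track
]
zoned_capacity = sum(tracks * spt * SECTOR_BYTES for tracks, spt in zones)

print(f"CAV:   {cav_capacity / 1e9:.2f} GB")    # 2.05 GB
print(f"Zoned: {zoned_capacity / 1e9:.2f} GB")  # 2.74 GB from the same platter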


[Figure: Disk Layout Methods (CAV vs multiple zone recording)]



[Figure: Tracks and Cylinders]


Memory Access Methods
• Sequential
  - Start at the beginning and read through in order
  - Access time depends on location of data and previous location
  - e.g. tape
• Direct
  - Individual blocks have a unique address
  - Access is by jumping to the vicinity plus a sequential search
  - Access time depends on location and previous location
  - e.g. disk

Media Access Methods
• An access method is a set of rules governing how the nodes on a network share the transmission medium. The rules for sharing among computers are similar to the rules for sharing among humans, in that both boil down to a few fundamental philosophies:
  - Contention (CSMA/CD, Carrier Sense Multiple Access with Collision Detection; CSMA/CA, Carrier Sense Multiple Access with Collision Avoidance)
  - Token passing
  - Demand priority

Access Methods (2)
• Random
  - Individual addresses identify locations exactly
  - Access time is independent of location or previous access
  - e.g. RAM
• Associative
  - Data is located by comparing the desired contents with a portion of the store
  - Access time is independent of location or previous access
  - e.g. cache
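A minimal Python sketch of the contrast: random access reaches a location directly through its address (here, a list index), while associative access finds data by matching part of the stored contents (a tag), as a cache does. The tags and data values are arbitrary illustrations.

# Random access: the address alone locates the data, in constant time.
ram = ["data0", "data1", "data2", "data3"]
print(ram[2])                             # 'data2', regardless of prior accesses

# Associative access: entries are (tag, data); hardware compares all tags
# in parallel, which a linear scan stands in for here.
cache = [(0x1A, "block A"), (0x2B, "block B"), (0x3C, "block C")]

def associative_lookup(tag):
    for stored_tag, data in cache:
        if stored_tag == tag:             # hit: contents matched, not an address
            return data
    return None                           # miss

print(associative_lookup(0x2B))           # 'block B'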

Memory Hierarchy
Definition
In computer architecture, the memory hierarchy is a concept used to discuss performance issues in computer architectural design, algorithm prediction, and lower-level programming constructs involving locality of reference. The memory hierarchy in computer storage separates each of its levels based on response time.
• Registers
  - In the CPU
• Internal or main memory
  - May include one or more levels of cache
  - "RAM"
• External memory
  - Backing store

[Figure: Memory Hierarchy Diagram]




Performance
Access Time
• Time between presenting the address and getting the valid data
• The time interval between the instant at which the instruction control unit initiates a call for data (or a request to store data) and the instant at which delivery of the data is completed (or the storage is started)
Configuration
  Formatted capacity     250 GB
  Sector size            1024 bytes
  Data heads             10
  Data disks             5
Performance
  Rotational speed       5400 RPM
  Disk transfer rate     100 MB/s
  Controller overhead    30 μs
  Seek time              20 ms
Figure: Hard Disk Specification
Example:
Average disk access time is the total of average seek time + average rotational delay + transfer time + controller overhead + queuing delay (taken as zero here)
= 20 ms + (0.5 revolution / (5400/60 rev/s)) + (1 MB / 100 MB/s) + 30 μs
= 20 ms + 5.6 ms + 10 ms + 0.03 ms = 35.63 ms
However, the manufacturer's advertised average seek time is not the actual average seek time.
Say the measured seek time is 50% of the advertised average seek time; the average access time is then:
= 10 ms + 5.6 ms + 10 ms + 0.03 ms = 25.63 ms
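The same calculation as a small Python function, using the specification figures above (20 ms advertised seek, 5400 RPM, a 1 MB transfer at 100 MB/s, and 30 μs controller overhead):

# Average access time = seek + rotational delay + transfer + overhead.
def avg_access_time_ms(seek_ms, rpm, transfer_mb, rate_mb_s, overhead_us):
    rotational_delay_ms = 0.5 / (rpm / 60) * 1000   # half a revolution on average
    transfer_ms = transfer_mb / rate_mb_s * 1000
    return seek_ms + rotational_delay_ms + transfer_ms + overhead_us / 1000

print(avg_access_time_ms(20, 5400, 1, 100, 30))     # ~35.6 ms, advertised seek
print(avg_access_time_ms(10, 5400, 1, 100, 30))     # ~25.6 ms, measured seek (50%)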



Two factors make the manufacturer's advertised seek time misleading:
• It is based on all possible seeks
• Locality and OS scheduling lead to smaller actual average seek times

• Memory cycle time
  - Time may be required for the memory to "recover" before the next access
  - Cycle time = access time + recovery time
• Transfer rate
  - The rate at which data can be moved

RAID
• Redundant Array of Independent Disks
• Originally: Redundant Array of Inexpensive Disks
• Seven levels (RAID 0-6), several in common use
• Not a hierarchy
• A set of physical disks viewed as a single logical drive by the O/S
• Data distributed across the physical drives
• Can use redundant capacity to store parity information



RAID 0 - Striping
• No redundancy
• Data striped across all disks
• Round-robin striping
• Increases speed
  - Multiple data requests are probably not on the same disk
  - Disks seek in parallel
  - A set of data is likely to be striped across multiple disks

Advantages
• RAID 0 offers great performance, in both read and write operations. There is no overhead caused by parity controls.
• All storage capacity is used; there is no overhead.
• The technology is easy to implement.
Disadvantages
• RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array is lost. It should not be used for mission-critical systems.
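A minimal Python sketch of the round-robin striping idea; the disks here are plain Python lists standing in for physical drives.

# RAID 0: stripe consecutive blocks across N disks, round robin, no parity.
def stripe(data_blocks, num_disks):
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(data_blocks):
        disks[i % num_disks].append(block)   # block i lands on disk i mod N
    return disks

blocks = [f"block{i}" for i in range(8)]
for d, contents in enumerate(stripe(blocks, 4)):
    print(f"disk {d}: {contents}")
# disk 0: ['block0', 'block4']; disk 1: ['block1', 'block5']; and so on.
# A large file is spread over all four disks, so they can seek in parallel,
# but losing any one disk loses part of every file.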

RAID 1 - Mirroring
• Mirrored disks
• Data is striped across the disks
• Two copies of each stripe, on separate disks
• Read from either
• Write to both
• Recovery is simple
  - Swap the faulty disk and re-mirror
  - No down time
• Expensive

Advantages
• RAID 1 offers excellent read speed and a write speed comparable to that of a single drive.
• If a drive fails, data does not have to be rebuilt; it just has to be copied to the replacement drive.
• RAID 1 is a very simple technology.
Disadvantages
• The main disadvantage is that the effective storage capacity is only half of the total drive capacity, because all data is written twice.
• Software RAID 1 solutions do not always allow a hot swap of a failed drive; the failed drive can then only be replaced after powering down the computer it is attached to. For servers that are used simultaneously by many people, this may not be acceptable. Such systems typically use hardware controllers that do support hot swapping.
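A minimal Python sketch of the mirroring idea: every write goes to both members of the pair, a read can be served by either, and recovery is a plain copy. The dictionaries stand in for physical drives.

# RAID 1: two copies of every block, on separate disks.
primary, mirror = {}, {}

def write(block_no, data):
    primary[block_no] = data          # write to both disks
    mirror[block_no] = data

def read(block_no):
    # Either copy will do; prefer the primary, fall back to the mirror.
    return primary.get(block_no, mirror.get(block_no))

write(0, b"boot sector")
primary.clear()                       # simulate failure of one drive
print(read(0))                        # data survives on the mirror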

RAID 2
• Disks are synchronized
• Very small stripes
  - Often a single byte/word
• Error correction calculated across corresponding bits on the disks
• Multiple parity disks store Hamming-code error correction in the corresponding positions
• Lots of redundancy
  - Expensive
  - Not used
• RAID 2 uses bit-level striping: instead of striping blocks across the disks, it stripes bits across the disks. In the usual layout diagram, b1, b2, b3 are bits and E1, E2, E3 are error correction codes.
• You need two groups of disks: one group is used to write the data, the other is used to write the error correction codes.
• It uses a Hamming error correction code (ECC) and stores this information on the redundancy disks (a minimal encode/correct sketch follows this list).
• When data is written to the disks, the ECC code is calculated on the fly; the data bits are striped to the data disks and the ECC code is written to the redundancy disks.
• When data is read from the disks, the corresponding ECC code is also read from the redundancy disks and checked against the data for consistency. If required, corrections are made on the fly.
• This uses a lot of disks and can be set up in different configurations. Some valid configurations are: 1) 10 disks for data and 4 disks for ECC; 2) 4 disks for data and 3 disks for ECC.
• RAID 2 is not used anymore: it is expensive, implementing it in a RAID controller is complex, and the ECC is redundant nowadays, as hard disks can do this themselves.
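To make the ECC idea concrete, here is a minimal Hamming(7,4) sketch in Python: 4 data bits plus 3 check bits, matching the "4 disks for data and 3 disks for ECC" configuration above, with each codeword bit standing in for one disk. Any single-bit error (one failed "disk") is located and corrected.

# Hamming(7,4): codeword positions 1..7 hold p1 p2 d1 p3 d2 d3 d4.
def hamming_encode(d):                    # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]               # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]               # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]               # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_correct(c):                   # c = 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # position of the bad bit; 0 = no error
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]       # the recovered data bits

word = hamming_encode([1, 0, 1, 1])
word[4] ^= 1                              # corrupt one bit (one "disk")
print(hamming_correct(word))              # -> [1, 0, 1, 1]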

Advantages
• The rate at which data is transferred is very high.
• Single-bit errors can be detected and corrected very easily.
• Multiple-bit errors can be detected.
Disadvantages
• Multiple-bit errors may also occur, and these cannot be corrected.
• The logic behind the error bits is complex and tiresome to compute.
• RAID 3 gives better performance at a lower price, making RAID 2 unattractive.

RAID 3
Similar to RAID 2:
• Only one redundant disk, no matter how large the array
• A simple parity bit for each set of corresponding bits
• Data on a failed drive can be reconstructed from the surviving data and the parity information (see the sketch below)
• Very high transfer rates
• RAID 3 uses byte-level striping: instead of striping blocks across the disks, it stripes bytes across the disks. In the usual layout diagram, B1, B2, B3 are bytes and p1, p2, p3 are parities.
• Uses multiple data disks and a dedicated disk to store parity.
• The disks have to spin in sync to get to the data.
• Sequential reads and writes have good performance.
• Random reads and writes have the worst performance.
• RAID 3 is not commonly used.
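A minimal Python sketch of the parity mechanism: the parity disk holds the byte-wise XOR of the data disks, so the contents of any one failed disk can be rebuilt by XORing the survivors with the parity. The byte values are arbitrary.

# Three data "disks", two bytes each; parity = XOR of all data disks.
data_disks = [b"\x10\x20", b"\x0a\x0b", b"\x31\x42"]

def xor_bytes(chunks):
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            out[i] ^= byte
    return bytes(out)

parity = xor_bytes(data_disks)

# Disk 1 fails: XOR the surviving data disks with the parity to rebuild it.
rebuilt = xor_bytes([data_disks[0], data_disks[2], parity])
print(rebuilt == data_disks[1])           # True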

Advantages
• High throughput for transferring large amounts of data.
• Resistant to disk failure and breakdown, which leads to RAID 3's main disadvantages (below).

Disadvantages
• The configuration may be excessive if small file transfers are the only requirement.
• Disk failures may significantly decrease throughput.



RAID 4
• Each disk operates independently
• Good for a high I/O request rate
• Large stripes
• Bit-by-bit parity calculated across the stripes on each disk
• Parity stored on a dedicated parity disk
• RAID 4 uses block-level striping. In the usual layout diagram, B1, B2, B3 are blocks and p1, p2, p3 are parities.
• Uses multiple data disks and a dedicated disk to store parity.
• Minimum of 3 disks (2 disks for data and 1 for parity).
• Good random reads, as the data blocks are striped.
• Bad random writes, as every write has to update the single parity disk (see the sketch below).
• It is somewhat similar to RAID 3 and RAID 5, but a little different:
  - Like RAID 3, it has a dedicated parity disk, but it stripes blocks.
  - Like RAID 5, it stripes blocks across the data disks, but it has only one parity disk.
• RAID 4 is not commonly used.
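A minimal Python sketch of the small-write penalty mentioned above: a single-block write needs only the old data and old parity, since new parity = old parity XOR old data XOR new data; the other data disks are untouched, but every write still lands on the one parity disk.

# Read-modify-write parity update for a single-block write.
def update_parity(old_parity, old_data, new_data):
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

old_data   = b"\x10\x20"                  # block being overwritten
other_data = b"\x0a\x0b"                  # a block on another data disk
old_parity = bytes(a ^ b for a, b in zip(old_data, other_data))

new_data   = b"\xff\x00"
new_parity = update_parity(old_parity, old_data, new_data)

# Same result as recomputing parity from scratch, without reading other disks.
assert new_parity == bytes(a ^ b for a, b in zip(new_data, other_data))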


RAID 5
• Block-level striping with parity, as in RAID 4, but the parity blocks are distributed across all of the disks (round robin), as sketched below
• This removes the RAID 4 bottleneck of a single dedicated parity disk
• A user requirement of N disks needs N+1
• High data availability
  - One disk can fail without data loss
  - There is still a significant write penalty (parity must be updated on every write)
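A minimal Python sketch of the rotating parity placement; a left-symmetric round-robin layout is assumed here for illustration, and real controllers vary.

# For each stripe, one disk holds the parity block; the role rotates.
NUM_DISKS = 4

def parity_disk(stripe_no):
    return (NUM_DISKS - 1 - stripe_no) % NUM_DISKS

for stripe in range(4):
    layout = ["P" if d == parity_disk(stripe) else "D" for d in range(NUM_DISKS)]
    print(f"stripe {stripe}: {layout}")
# stripe 0: ['D', 'D', 'D', 'P']
# stripe 1: ['D', 'D', 'P', 'D']  ... so parity writes never pile up on one disk.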

Advantages
• Read data transactions are very fast, while write data transactions are somewhat slower (due to the parity that has to be calculated).
• If a drive fails, you still have access to all data, even while the failed drive is being replaced and the storage controller rebuilds the data on the new drive.
Disadvantages
• Drive failures have an effect on throughput, although this is still acceptable.
• This is complex technology. If one of the disks in an array using 4 TB disks fails and is replaced, restoring the data (the rebuild time) may take a day or longer, depending on the load on the array and the speed of the controller. If another disk goes bad during that time, the data is lost forever.

RAID 6
• Two parity calculations
• The parities are stored in separate blocks on different disks
• A user requirement of N disks needs N+2
• High data availability
  - Three disks would need to fail for data to be lost
  - Significant write penalty
• Just like RAID 5, RAID 6 does block-level striping; however, it uses dual parity. In the usual layout diagram, A, B, C are blocks and p1, p2, p3 are parities.
• It creates two parity blocks for each stripe of data blocks.
• It can handle two disk failures.
• This RAID configuration is complex to implement in a RAID controller, as it has to calculate two parity values for each data block.

Advantages
• As with RAID 5, read data transactions are very fast.
• If two drives fail, you still have access to all data, even while the failed drives are being replaced. So RAID 6 is more secure than RAID 5.
Disadvantages
• Write data transactions are slowed down due to the parity that has to be calculated.
• Drive failures have an effect on throughput, although this is still acceptable.
• This is complex technology. Rebuilding an array in which one drive has failed can take a long time.












