SEB 070021 - TUTORIAL 5

Monday, September 15, 2008


MAGNETIC DISK




IBM introduced the first magnetic disk, the RAMAC, in 1956; it held 5 megabytes and rented for $3,200 per month. Magnetic disks are platters coated with iron oxide, like tape and drums. An arm carrying a tiny wire coil, the read/write (R/W) head, moves radially over the disk, which is divided into concentric tracks; each track is composed of small arcs, or sectors, of data.
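To give a feel for how this geometry turns into capacity and access time, here is a minimal sketch in Python; the surface count, track count, sectors per track, RPM and seek time are invented example figures, not the specs of any particular drive.

```python
# Rough capacity and average access time for a hypothetical drive.
# All figures below are illustrative assumptions, not real drive specs.

surfaces = 4             # recording surfaces (2 platters, both sides)
tracks_per_surface = 50_000
sectors_per_track = 500  # simplification: real drives vary this by zone
bytes_per_sector = 512

capacity_bytes = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(f"Capacity: {capacity_bytes / 1e9:.1f} GB")

rpm = 7200
avg_seek_ms = 9.0                       # assumed average seek time
avg_rotational_ms = 0.5 * 60_000 / rpm  # on average, wait half a revolution
print(f"Average access time: {avg_seek_ms + avg_rotational_ms:.1f} ms")
```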

DESCRIPTION

MEDIA MECHANICS – Multiple fixed disks
DRIVE MECHANICS – Excellent
POSITION ERROR SIGNALS – Mediocre SNR / multiplexed with data
SAMPLE RATE – Low/medium
VERTICAL POSITION – Air bearings: near field, no focus loop, multiple small heads
TRACKING LOOP – Single medium-high frequency
SPINDLE LOOP – Low frequency
SPINDLE MODE – Constant Angular Velocity (CAV)
TRACKS – Circular
APPLICATIONS – Mostly random access


ADVANTAGES

~ Hard disk space is relatively cheap, as low as 13p per GB.
~ Hard disks store data without the need for a constant electricity supply.
~ Hard disks allow data to be stored in one place, which is more convenient than using DVDs, for example.
~ Hard drives have a high read/write speed compared with optical media (CDs), although much lower than RAM.


OPTICAL DISK






An optical disk is a disc that is read and written with light; the compact disc (CD) and DVD are the most familiar examples. The formatting of the disk dictates whether it is a CD or a DVD, read-only or rewritable. The optical disk is so named because its technology is based on light. As the disk spins, a laser beam follows a spiralling trail of pits and lands in the plastic material of the disk. The pits reflect light differently from the lands, and a detector translates the difference in reflection into bits: "on/off", or 1 and 0. The bits form bytes that carry the digital code of the data stored on the optical disk.
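The paragraph above can be made concrete with a toy sketch: read back a pattern of pit/land reflectivity, treat each change in reflection as a 1 and no change as a 0, and pack the bits into bytes. Real CDs use a much more involved channel code (EFM), so this only illustrates the general idea; the surface pattern is made up.

```python
# Toy illustration: turn a pattern of reflectivity changes into bits and bytes.
# Real CDs use a transition-based code (EFM) that is far more involved; this
# only shows the basic idea of reading light levels back as 1s and 0s.

surface = "LLLPPLLPPPLLLLPP"  # L = land, P = pit (made-up pattern)

# Treat a change between pit and land as a 1, no change as a 0.
bits = [1 if a != b else 0 for a, b in zip(surface, surface[1:])]
print("bits:", bits)

# Pack bits into bytes, most significant bit first.
byte_values = []
for i in range(0, len(bits) - len(bits) % 8, 8):
    value = 0
    for bit in bits[i:i + 8]:
        value = (value << 1) | bit
    byte_values.append(value)
print("bytes:", byte_values)
```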

DESCRIPTION

MEDIA MECHANICS – Single removable disk
DRIVE MECHANICS – Mediocre
POSITION ERROR SIGNALS – Excellent SNR / continuously available
SAMPLE RATE – High
VERTICAL POSITION – No air bearings: far field, focus loop, single large head
TRACKING LOOP – Coarse (low frequency) and fine (high frequency)
SPINDLE LOOP – Low frequency
SPINDLE MODE – Constant Angular Velocity (CAV) and Constant Linear Velocity (CLV); CLV is illustrated in the sketch after this list
TRACKS – Predominantly spiral, some circular
APPLICATIONS – Mostly streaming files
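The spindle-mode entry above distinguishes CAV from CLV. Here is a minimal sketch of what CLV implies for spindle speed, assuming the nominal 1x CD linear speed of about 1.3 m/s and approximate inner/outer radii:

```python
# Under Constant Linear Velocity (CLV) the disk slows down as the head moves
# outward, so the track passes under the laser at the same speed everywhere.
# The 1.3 m/s figure is the nominal 1x CD read speed; radii are approximate.

import math

linear_speed = 1.3                    # metres per second
for radius_mm in (25, 40, 58):        # inner to outer edge of the program area
    radius_m = radius_mm / 1000
    rpm = linear_speed / (2 * math.pi * radius_m) * 60
    print(f"radius {radius_mm} mm -> about {rpm:.0f} RPM")
```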


ADVANTAGES

~ Compact, lightweight, durable and digital.
~ Mass storage capacity (on the order of gigabytes). A double-layered, double-sided DVD holds up to 15.9 gigabytes (GB) of data, while even a basic CD provides at least 650 megabytes (MB) of storage.
~ Mountable/unmountable storage units
~ Long media life
~ High data stability

FLASH MEMORY


DESCRIPTION
Flash memory refers to a particular type of EEPROM, or Electrically Erasable Programmable Read-Only Memory. It is a memory chip that maintains stored information without requiring a power source. Flash memory erases its data in entire blocks, making it a preferable technology for applications that require frequent updating of large amounts of data, as in the case of a memory stick. Inside the flash chip, information is stored in cells. A floating gate protects the data written in each cell. Tunneling electrons pass through a thin, poorly conductive oxide layer to change the electronic charge of the gate "in a flash", clearing the cell of its contents so that it can be rewritten. This is how flash memory gets its name.
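A toy model of the block-erase behaviour described above may help: cells can only be programmed from the erased state, so rewriting data means erasing the whole block first. The class, block size and values below are invented for illustration and do not model any real flash chip.

```python
# Toy model of flash block-erase semantics: cells can only be cleared in whole
# blocks, so overwriting a value means erasing (and rewriting) its block.
# The block size is a made-up small number for readability.

BLOCK_SIZE = 8

class ToyFlash:
    def __init__(self, n_blocks):
        # Erased flash reads as all 1s (0xFF).
        self.blocks = [[0xFF] * BLOCK_SIZE for _ in range(n_blocks)]

    def erase_block(self, b):
        self.blocks[b] = [0xFF] * BLOCK_SIZE

    def program(self, b, offset, data):
        # Programming can only change bits from 1 to 0; anything else
        # requires an erase first, which wipes the whole block.
        block = self.blocks[b]
        for i, value in enumerate(data):
            if block[offset + i] & value != value:
                raise ValueError("cell not erased: erase the whole block first")
            block[offset + i] &= value

flash = ToyFlash(n_blocks=2)
flash.program(0, 0, [0x12, 0x34])
flash.erase_block(0)          # rewriting means erasing the entire block
flash.program(0, 0, [0x56])
print(flash.blocks[0])
```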

ADVANTAGES

Flash memory used as a hard drive has many advantages over a traditional hard drive. It is solid state, meaning there are no moving parts, and it is non-volatile, so it retains data without power. It is also silent, much smaller than a traditional hard drive, and highly portable, with a much faster access time. Meanwhile, the price of flash memory continues to drop as capacity continues to rise, making it a prime candidate for an ever-broadening set of applications.


MAGNETO-OPTICAL DISK



DESCRIPTION

Magneto-optical (MO) drives are generally understood to be one of the more popular methods of data storage for many businesses. Using cartridges that are very similar in appearance to the older 3.5-inch diskettes that were popular in the 1990s, the magneto-optical drive provides a means of storing huge amounts of data in a very small storage unit. Unlike the older diskettes, the cartridges that fit into an MO drive typically hold up to several gigabytes of information with no problem.
Part of what helps to make the magneto-optical drive so efficient is the configuration of the device. MO drives use a dual method to read and write data to the cartridge: both a laser and a head with read/write capability scan and save the data onto the cartridge. The amount of time required to scan and process information with a magneto-optical drive is actually much shorter than with some older methods of backing up files, such as tape backups or even CD-R backups. Because the capacity of a single cartridge is so great, there is no need to use multiple units to capture and store the data, even when copying huge data files.


ADVANTAGES

One advantage of the cartridges used with a magneto-optical drive is that they can be erased and reused multiple times, just like most other types of storage devices. For companies that choose to copy key data files on a daily basis and keep them in storage for only a limited amount of time, this means the older cartridges can be wiped and used again, cutting down on the expenses associated with archiving important data.

SEB070021 - TUTORIAL 4

Wednesday, August 27, 2008

WINDOWS vs LINUX – THE DIFFERENCES

The notes below contrast how Windows and Linux handle virtual memory: page faults, page/swap file sizing, page size, and thrashing.

Only those parts of the program and data that are currently in active use need to be held in physical RAM. Other parts are then held in a swap file (as it is called in Windows 95/98/ME: Win386.swp) or page file (in Windows NT versions, including Windows 2000 and XP: pagefile.sys). When a program tries to access some address that is not currently in physical RAM, it generates an interrupt called a Page Fault. This asks the system to retrieve the 4 KB page containing the address from the page file (or, in the case of code, possibly from the original program file). This — a valid page fault — normally happens quite invisibly. Sometimes, through program or hardware error, the page is not there either. The system then reports an ‘Invalid Page Fault’ error.


If there is pressure on space in RAM, then
parts of code and data that are not currently needed can be ‘paged out’ in
order to make room — the page file can thus be seen as an overflow area to
make the RAM behave as if it were larger than it is.
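A minimal sketch of this demand-paging idea: pages live either in RAM or in a backing page file, touching a non-resident page raises a page fault that brings it in, and if RAM is full the least recently used page is evicted to make room. This is a toy model, not how Windows (or Linux) actually implements it; the frame count and addresses are arbitrary.

```python
# Toy demand paging: a fixed number of RAM frames backed by a "page file".
# Touching a non-resident page causes a fault; if RAM is full, the least
# recently used resident page is written back and evicted.

from collections import OrderedDict

PAGE_SIZE = 4096          # 4 KB pages, as in the text
RAM_FRAMES = 3            # deliberately tiny to force faults

resident = OrderedDict()  # page number -> frame contents (LRU order)
page_file = {}            # evicted pages live here

def touch(address):
    page = address // PAGE_SIZE
    if page in resident:
        resident.move_to_end(page)          # recently used
        return "hit"
    # Page fault: bring the page in from the page file (or zero-fill it).
    if len(resident) >= RAM_FRAMES:
        victim, data = resident.popitem(last=False)
        page_file[victim] = data            # page out
    resident[page] = page_file.pop(page, bytes(PAGE_SIZE))
    return "page fault"

for addr in (0x0000, 0x1000, 0x2000, 0x3000, 0x0000):
    print(hex(addr), touch(addr))
```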


 


PAGE FAULTS

Once an executable image has been memory mapped
into a process' virtual memory it can start to execute. As only the very
start of the image is physically pulled into memory it will soon access an
area of virtual memory that is not yet in physical memory. When a process
accesses a virtual address that does not have a valid page table entry, the
processor will report a page fault to Linux.

The page fault describes the virtual address where the page fault occurred and the type of memory access that caused the fault. Linux must find the area of memory in which the page fault occurred. This is done through the vm_area_struct kernel data structure. As searching through the vm_area_struct data structures is critical to the efficient handling of page faults, these are linked together in an AVL (Adelson-Velskii and Landis) tree structure. (An AVL tree is a balanced binary search tree where the heights of the two subtrees (children) of a node differ by at most one, thus optimizing searches.) If there is no vm_area_struct data structure for this faulting virtual address, the process has accessed an illegal virtual address. Linux will signal the process, sending it a SIGSEGV signal, and if the process does not have a handler for that signal it will be terminated.


Linux next checks the type of page fault that occurred against the types
of accesses allowed for this area of virtual memory. If the process is
accessing the memory in an illegal way, say writing to an area that it is
only allowed to read from, it is also signalled with a memory error.


Now that Linux has determined that the page fault is legal, it must deal
with it.


Linux must differentiate between pages that are in the swap file and
those that are part of an executable image on a disk somewhere. It does this
by using the page table entry for this faulting virtual address.


If the page's page table entry is invalid but not empty, the page fault
is for a page currently being held in the swap file. For Alpha AXP page
table entries, these are entries which do not have their valid bit set but
which have a non-zero value in their PFN field. In this case the PFN field
holds information about where in the swap (and which swap file) the page is
being held. How pages in the swap file are handled is described later in
this chapter.


Not all vm_area_struct data structures have a set of virtual memory operations, and even those that do may not have a nopage operation. This is because by default Linux will fix up the access by allocating a new physical page and creating a valid page table entry for it. If there is a nopage operation for this area of virtual memory, Linux will use it.


The generic Linux nopage operation is used for memory mapped
executable images and it uses the page cache to bring the required image
page into physical memory.


However the required page is brought into physical memory, the process's page tables are updated. Hardware-specific actions may be needed to update those entries, particularly if the processor uses translation look-aside buffers. Now that the page fault has been handled it can be dismissed, and the process is restarted at the instruction that made the faulting virtual memory access.
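A rough sketch of the decision sequence just described: find the memory area for the faulting address, check the access type, then decide whether the page comes from swap, from the backing file via the page cache, or is simply a fresh page. The VMArea class and the string results are simplified stand-ins for the kernel's structures, not real kernel code.

```python
# Simplified model of the Linux page-fault decision path described above.
# "VMArea" stands in for vm_area_struct; the real kernel code is in C and
# far more involved.

from dataclasses import dataclass

@dataclass
class VMArea:
    start: int
    end: int
    writable: bool
    file_backed: bool     # has a "nopage"-style backing store

def handle_fault(vmas, page_table, address, is_write):
    area = next((a for a in vmas if a.start <= address < a.end), None)
    if area is None:
        return "SIGSEGV"                       # illegal virtual address
    if is_write and not area.writable:
        return "SIGSEGV"                       # illegal access type
    entry = page_table.get(address // 4096)
    if entry == "swapped":
        return "bring page in from swap"
    if area.file_backed:
        return "read page via the page cache"  # generic nopage path
    return "allocate a fresh zeroed page"      # anonymous memory

vmas = [VMArea(0x0000, 0x8000, writable=False, file_backed=True),
        VMArea(0x8000, 0x9000, writable=True, file_backed=False)]
page_table = {2: "swapped"}

print(handle_fault(vmas, page_table, 0x2100, is_write=False))  # swap
print(handle_fault(vmas, page_table, 0x8100, is_write=True))   # fresh page
print(handle_fault(vmas, page_table, 0xF000, is_write=False))  # SIGSEGV
```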


 


There is a great deal of myth surrounding the question of how big the Windows page file should be. Two big fallacies are:


  • The file should be a fixed size so that it does not get fragmented,
    with minimum and maximum set the same

  • The file should be 2.5 times the size of RAM (or some other multiple)



Both are wrong in a modern, single-user system. A machine using Fast User Switching is a special case.


Windows will expand a file that starts out too small and may shrink it
again if it is larger than necessary, so it pays to set the initial size
large enough to handle the normal needs of your system, to avoid
constant changes of size. This will give all the benefits claimed for a
‘fixed’ page file. But no restriction should be placed on its further
growth. As well as providing for contingencies, like unexpectedly opening a
very large file, in XP this potential file space can be used as a
place to assign those virtual memory pages that programs have asked for, but
never brought into use. Until they get used — probably never — the file need
not come into being. There is no downside in having potential space
available.


For any given workload, the total need for virtual addresses will not
depend on the size of RAM alone. It will be met by the sum of RAM and
the page file. Therefore in a machine with small RAM, the extra amount
represented by page file will need to be larger — not smaller — than that
needed in a machine with big RAM. Unfortunately the default settings for
system management of the file have not caught up with this: it will assign
an initial amount that may be quite excessive for a large machine, while at
the same time leaving too little for contingencies on a small one.


How big a file will turn out to be needed depends very much on your
work-load. Simple word processing and e-mail may need very little — large
graphics and movie making may need a great deal. For a general workload,
with only small memory dumps provided for, a sensible starting point for the
initial size would be the greater of (a) 100 MB or (b) enough to bring RAM
plus page file to about 500 MB. EXAMPLE: set the initial page file size to
400 MB on a computer with 128 MB RAM; 250 MB on a 256 MB computer; or 100 MB
for larger RAM sizes.
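The rule of thumb in that example can be written out as a tiny helper; it simply encodes the 'greater of 100 MB, or enough to bring RAM plus page file to about 500 MB' guideline from the text (the article's own worked examples round the results up a little).

```python
# The page-file sizing rule of thumb from the text: initial size is the
# greater of 100 MB or whatever tops RAM up to about 500 MB.
# (The article's worked examples round these figures up slightly.)

def initial_pagefile_mb(ram_mb):
    return max(100, 500 - ram_mb)

for ram in (128, 256, 512, 1024):
    print(f"{ram} MB RAM -> initial page file about {initial_pagefile_mb(ram)} MB")
```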


But have a high Maximum size — 700 or 800 MB, or even more if there is plenty of disk space. Setting it this high will do no harm. Then, if you find the actual pagefile.sys gets larger (as seen in Explorer), adjust the initial size up accordingly. Such a need for more than a minimal initial page file is the best indicator of benefit from adding RAM: if an initial size set, for a trial, at 50 MB never grows, then more RAM will do nothing for the machine's performance.


Bill James MS MVP has a convenient tool, ‘WinXP-2K_Pagefile’, for
monitoring the actual usage of the page file, which can be downloaded. The value
seen for ‘Peak Usage’ over several days makes a good guide for setting the
Initial size economically.


Note that these aspects of Windows XP have changed significantly from
earlier Windows NT versions, and practices that have been common there may
no longer be appropriate. Also, the ‘PF Usage’ (Page File in Use) measurement in Task Manager | Performance includes those potential uses by pages that have not been taken up. It makes a good indicator of the adequacy of the ‘Maximum’ size setting, but not of the ‘Initial’ one, let alone of any need for more RAM.


PAGE SIZE

Most modern operating systems have their main memory divided into pages, which allows better utilization of memory. A page is a fixed-length block of main memory that is contiguous in both physical memory addressing and virtual memory addressing. The kernel swaps and allocates memory in units of pages.

Whether getpagesize() is present as a
Linux system call depends on the
architecture. If it is, it returns the kernel symbol PAGE_SIZE, which is
architecture and machine model dependent. Generally, one uses binaries that
are architecture but not machine model dependent, in order to have a single
binary distribution per architecture. This means that a user program should
not find PAGE_SIZE at compile time from a header file, but use an actual
system call, at least for those architectures (like sun4) where this
dependency exists. Here libc4, libc5, glibc 2.0 fail because their
getpagesize() returns a statically derived value, and does not use a
system call. Things are OK in glibc 2.1.
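From a user program the page size should be asked for at run time rather than hard-coded, as the paragraph above advises. As an illustration in Python (rather than C), the standard library exposes the same value in a couple of places:

```python
# Query the page size at run time instead of assuming a compile-time constant.
# The resource module is available on Unix-like systems.
import mmap
import resource

print(resource.getpagesize())   # wraps the underlying getpagesize()/sysconf value
print(mmap.PAGESIZE)            # the same value, exposed by the mmap module
```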


 


To maintain peak performance, Windows XP is much like its
predecessors in that it pays to slam in the RAM. Indeed, running low on
physical RAM is one of the most common reasons why Windows computers crawl
rather than operate to their full potential.


When a Windows XP computer runs low on RAM, it begins a
process called paging - or memory swapping if you’re used to using a Windows
9x/Me-based operating system. The paging process involves moving blocks (or
pages) of data out of physical memory and onto disk.


A small amount of paging is perfectly normal on most
computers, but unnecessary paging should be avoided at all cost. Excessive
paging, sometimes called thrashing, becomes a problem when the hard disk
goes into overdrive as it tries to shuffle data to and from RAM.


Hence, the best way to avoid thrashing of the disk is to
install plenty of RAM. Fine-tuning your virtual memory settings is also a
good way to boost system performance and we’ll show you how to do just that
in a moment. First, though, let’s cover the basics in a little more detail.


There are only a few ways to deal with
thrashing when it occurs:




  • Increase the
    amount of RAM in the system to eliminate the cause of thrashing




  • Reduce the amount
    of RAM needed by reconfiguring the applications, removing unneeded system
    services (like network protocols that aren't being used), or running fewer
    applications at a time


  • Try to optimize
    the paging file's activity


 


 


THRASHING

"In a
multiprogramming environment, allocated memory pages of a program will
become replacement candidates if they have not been accessed for a certain
period of time under two conditions: (1) the program does not need to access
these pages; and (2) the program is conducting page faults (as a sleeping
process) so that it is not able to access the pages although it might have
done so without the page faults. We call the LRU pages generated by
condition (1) true LRU pages, and those by condition (2) false LRU pages.
These false LRU pages are produced by the time delay of page faults, not by
the access delay of the program. Thus, the LRU principle is not held in this
case.


Whenever page faults
occur due to memory shortage in a multiprogramming environment, false LRU
pages of a program can be generated, which will weaken the ability of the
program to achieve its working set. For example, if a program does not
access the already obtained memory pages on the false LRU condition, these
pages may become replacement candidates (LRU pages) when the memory space is
being demanded by other interacting programs. When the program is ready to
use these pages in its execution turn, these LRU pages may have been
replaced to satisfy requested allocations of other programs. The program
then has to ask the virtual memory system to retrieve these pages by
replacing LRU pages of others, possibly generating false LRU pages for other
programs. The false LRU pages may be cascaded among the interacting
programs, eventually causing system thrashing."
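To make the idea of least-recently-used pages becoming replacement candidates concrete, here is a minimal LRU replacement simulation over a made-up reference string; it is a textbook toy, not a model of any real kernel's replacement policy.

```python
# Minimal LRU page replacement over a reference string of page numbers.
# Pages that have not been touched for the longest time are evicted first.

from collections import OrderedDict

def simulate_lru(reference_string, frames):
    resident = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in resident:
            resident.move_to_end(page)        # refreshed: no longer LRU
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict the LRU page
            resident[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for frames in (3, 4):
    print(f"{frames} frames -> {simulate_lru(refs, frames)} page faults")
```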


The factors related to thrashing are:

- the size of the memory system

- the number of processes

- the dynamic memory demands

- the page replacement scheme


 


A program instruction on an Intel 386 or later
CPU can address up to 4GB of memory, using its full 32 bits. This is
normally far more than the RAM of the machine. (2 to the 32nd power is exactly 4,294,967,296, or 4 GB; 32 binary digits allow the representation of 4,294,967,296 numbers, counting 0.) So the hardware provides for
programs to operate in terms of as much as they wish of this full 4GB space
as Virtual Memory, those parts of the program and data which are
currently active being loaded into Physical Random Access Memory
(RAM). The processor itself then translates (‘maps’) the virtual addresses
from an instruction into the correct physical equivalents, doing this on the
fly as the instruction is executed. The processor manages the mapping in
terms of pages of 4 Kilobytes each - a size that has implications for
managing virtual memory by the system.
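A quick illustration of that arithmetic: with 32-bit addresses and 4 KB pages, the low 12 bits of an address are the offset within the page and the remaining 20 bits select the page. The specific address below is just an example value.

```python
# Splitting a 32-bit virtual address into page number and page offset,
# assuming the 4 KB page size mentioned in the text.

PAGE_SIZE = 4 * 1024                # 4096 bytes -> 12 offset bits
OFFSET_BITS = PAGE_SIZE.bit_length() - 1

address = 0x12345ABC                # arbitrary example address
page_number = address >> OFFSET_BITS
offset = address & (PAGE_SIZE - 1)

print(f"2**32 bytes = {2**32:,} = 4 GB of addressable space")
print(f"page number = {page_number:#x}, offset = {offset:#x}")
```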

 


DEFINITION

The term "Virtual Memory" is used to describe a method by
which the physical RAM of a computer is not directly addressed, but is
instead accessed via an indirect "lookup". On the Intel platform, paging is
used to accomplish this task.


Paging, in CPU specific terms, should not be confused
with swap. These terms are related, but paging is used to refer to virtual
to physical address translation. The author encourages readers to find the
Intel Manuals online or order them in print for a deeper understanding of
the Intel paging system. (Note - In Intel documents, the term virtual
address, as used in the kernel code, is replaced with linear address).


To accomplish address translation (paging) the CPU needs
to be told:


a) where to find the address translation information. This is accomplished by pointing the CPU to a lookup table called a 'page table'.

b) to activate paging mode. This is accomplished by setting a specific flag in a control register.
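As a toy model of the lookup the hardware performs once it has been pointed at a page table, the sketch below splits a 32-bit linear address, walks a two-level table and combines the resulting frame with the page offset. The table contents are invented, and on real hardware this walk is done by the MMU, not by software like this.

```python
# Toy two-level page-table walk for a 32-bit address with 4 KB pages:
# 10 bits of directory index, 10 bits of table index, 12 bits of offset.
# The tables below are invented example data, not a real in-memory layout.

page_directory = {0x048: {0x345: 0x0007A}}   # dir index -> table; table -> frame

def translate(virtual_address):
    dir_index = (virtual_address >> 22) & 0x3FF
    table_index = (virtual_address >> 12) & 0x3FF
    offset = virtual_address & 0xFFF
    table = page_directory.get(dir_index)
    if table is None or table_index not in table:
        raise LookupError("page fault: no valid translation")
    frame = table[table_index]
    return (frame << 12) | offset

va = 0x12345ABC
print(hex(translate(va)))        # physical address 0x7aabc
```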


Kernel use of virtual memory begins very early on in the
boot process. head.S contains code to create provisional page tables and get the kernel up and running; however, that is beyond this overview.


Every physical page of memory up to 896MB is mapped
directly into the kernel space. Memory greater than 896MB (High Mem) is not
permanently mapped, but is instead temporarily mapped using kmap and
kmap_atomic.


The descriptions of virtual memory will be broken into
two distinct sections; kernel paging and user process paging.


 




REFERENCES

http://www.compulink.co.uk/~davedorn/computing/windows/xpvirtualmemory.htm
http://people.richland.edu/dkirby/172vmo.htm
http://www.aumha.org/win5/a/xpvm.php
http://linux-mm.org/VirtualMemory
http://linux.die.net/man/2/getpagesize
http://www.cyberciti.biz/faq/linux-check-the-size-of-pagesize/
http://linux-mm.org/SystemThrashing


 





SEB070021 - TUTORIAL 3

Tuesday, August 5, 2008






An operating system is a set of programs that lies between application software and the computer hardware. Conceptually, the operating system is an intermediary between the hardware and the application software. The operating system is the core software component of your computer; it performs many functions and serves as an interface between your computer and the outside world.


There are many functions of an operating system:

1. Provide a platform on which programs work properly, with high resource utilization and at high speed, together with system tools (programs) used to monitor computer performance or maintain parts of the system.

2. Provide a set of libraries or functions which programs may use to perform specific tasks, especially those relating to interfacing with computer system components.

3. Manage the computer's resources, such as the central processing unit, memory, disk drives, and printers.

4. Establish a user interface. The operating system makes these interfacing functions, along with its other functions, operate smoothly, and these functions are mostly transparent to the user.

5. Execute and provide services for application software. This includes input/output management, that is, coordination and assignment of the different input and output devices while one or more programs are being executed.

6. Processor management, that is, assignment of the processor to the different tasks being performed by the computer system.

7. Memory management, that is, allocation of main memory and other storage areas to the system programs as well as user programs and data.

8. File management, that is, the storage and transfer of files across the various storage devices. It also allows all files to be easily changed and modified through the use of text editors or other file-manipulation routines.

9. Establishment and enforcement of a priority system. That is, it determines and maintains the order in which jobs are to be executed in the computer system.

10. Automatic transition from job to job as directed by special control statements.

11. Coordination and assignment of compilers, assemblers, utility programs, and other software to the various users of the computer system.

12. Facilitate easy communication between the computer system and the computer operator (human). It also establishes data security and integrity.

13. Give the user a GUI (graphical user interface). Basically this means that instead of using DOS commands to move from folder to folder, users have a program that lets them see an icon of the folder so they can click on it.

14. Time-sharing.
~ With multiprogramming, the overall system is quite efficient. However, a problem remains: jobs that come late in the batch job list will not get a chance to run until the jobs before them have completed, so their users have to wait a long time to obtain results. Some programs may even need interaction with users, which requires the processor to switch to these programs frequently. In a time-sharing system, multiple users simultaneously access the system through terminals, with the operating system interleaving the execution of each user program in short bursts of computation (a round-robin sketch of this interleaving appears after this list).

15. Act as a traffic controller, managing the data that is coming into the computer (input by way of the keyboard or mouse) and going out of the computer (output by way of the printer or screen display). It directs the flow of data to and from the external devices and also takes care of routing control information along the bus to be processed by the processor.

16. Act as the maintenance mechanic of the system. The operating system checks the system for failures that will cause problems in processing. Messages appear on the screen when there is a problem. Sometimes operating systems have built-in messages for quick fixes to the problem, or will refer you to a resource to get more information. A typical message is "System Failure" or "Your computer has performed an illegal operation". When the computer is turned on, it checks all of the storage devices; you can see the system being checked by the lights going off and on at the various drive locations. All of the electronic parts are checked as well. If the computer cannot fix the problem itself, it will not let you continue working.



17. Control the computer hardware. The operating system sits between the programs and the Basic Input Output System (BIOS). The BIOS controls the hardware. All programs that need hardware resources must go through the operating system. The operating system can access the hardware either through the BIOS or through device drivers.

18. Support built-in utility programs. The operating system uses utility programs for maintenance and repairs. Utility programs help identify problems, locate lost files, repair damaged files, and back up data. Disk Defragmenter, found in Programs > Accessories > System Tools, is one example.

19. Run applications. The operating system loads and runs applications such as word processors and spreadsheets. When a user requests a program, the operating system locates the application and loads it into the primary memory, or RAM, of the computer. As more programs are loaded, the operating system must allocate the computer's resources.

20. Provide network capability. Each user has a single processor but can share data and resources with other users connected to the network. The OS must be able to detect if more than one user is trying to send messages at the same time. Networks may make use of spooling techniques to share peripherals such as printers. Networks may be Local Area or Wide Area, LAN or WAN. A LAN is wired together within the same site and can take a number of different forms (topologies): bus, star, ring. One machine on the network often acts as a file server looking after a high-capacity storage device. WANs are computers linked via gateways to other computers which are geographically distant. They may make use of dedicated phone lines or go through the normal BT lines using modems. They usually use packet-switching techniques to transfer data between nodes.
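As promised under item 14, here is a minimal round-robin sketch of time-sharing: each ready program gets a short burst (quantum) of processor time in turn. The job names and burst lengths are invented for illustration.

```python
# Toy round-robin time-sharing: the OS gives each ready program a short burst
# (quantum) of CPU time in turn, so interactive users all see progress.

from collections import deque

QUANTUM = 2                      # time units per burst
jobs = deque([("editor", 5), ("compiler", 7), ("mail", 3)])

clock = 0
while jobs:
    name, remaining = jobs.popleft()
    run = min(QUANTUM, remaining)
    clock += run
    remaining -= run
    print(f"t={clock:2}: ran {name} for {run} unit(s), {remaining} left")
    if remaining:
        jobs.append((name, remaining))   # back of the queue for its next turn
```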









new life new sem

Friday, July 25, 2008


smile .....smile



hmmm.......new sem, new roommate, new friends and everything is new......hahaha, not including a BoyfriEND, ok.......no more amour in my life....focus on study and achieve what I want.....what I really want in my life......my new roommate is Farhana and also one junior....she is quite nice to us....so many things have happened in the 1st week of my new sem....being ditched by friends.....I don't know why people still aren't satisfied with me......I never bother them anymore, so why do they have to bother me.....oh yeah....I bought my new bear....hahaha, my mom doesn't know yet...I'll be dead if she finds out.....so 2nd year is going to be tougher...so be strong!!!! go shamin go!!! remember ALLAH is always with you....and this year I'm going to be 20....I'm getting old

my new bear


me with my new roommate....our 1st outing

SEB070021 - TUTORIAL 2

Tuesday, July 15, 2008


APPLICATION SOFTWARE
~ a computer program designed to carry out a specialized task for the user such as database management

COMMUNICATION DEVICE
~ a device used in the act or process of communicating, such as sending messages, orders, etc.; examples include the telephone and television

COMPUTER
~ also called a processor, an electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations

DESKTOP COMPUTER
~a personal computer small enough to fit conveniently in an individual workspace

EMBEDDED COMPUTER
~ an electronic device fixed into a surrounding device, performing one or a few functions; an example is the controller in a traffic light

GRAPHICAL USER INTERFACE
~ a software interface designed to standardize and simplify the use of computer programs, as by using a mouse to manipulate text and images on a display screen featuring icons, windows and menus

HARDWARE
~ mechanical or physical equipment necessary for conducting an activity, usually distinguished from the theory and design that make the activity possible

INTERNET
~ an international information network linking computers, accessible to the public via modem links

INSTALLING
~to connect or set in position and prepare for use

NETWORK
~ any netlike combination of filaments, lines, veins, or passages that distributes widely
~ a collection of devices and computers connected together, often wirelessly, via communication and transmission media