
Monday 16 July 2012

CS2251 -DESIGN AND ANALYSIS OF ALGORITHMS UNIT I QUESTIONS


CS2251 -DESIGN AND ANALYSIS OF ALGORITHMS

UNIT –I

  1.                What is an Algorithm?  May/June 2006, Nov/Dec 2008
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

2.                State the Euclid’s algorithm for finding GCD of two given numbers.
ALGORITHM Euclid (m, n)
// Computes gcd(m,n) by Euclid’s algorithm
//Input   : Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
                                    r ← m mod n
                                    m ← n
                                    n ← r
return m
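
A minimal, runnable Python sketch of the same algorithm (the function name gcd_euclid is illustrative, not part of the prescribed answer):

# Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), and gcd(m, 0) = m
def gcd_euclid(m, n):
    while n != 0:
        m, n = n, m % n
    return m

print(gcd_euclid(60, 24))   # prints 12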

3.                What are Sequential Algorithms?
The central assumption of the RAM model is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called Sequential algorithms.

4.                What are Parallel Algorithms?
The central assumption of the RAM model does not hold for some newer computers that can execute operations concurrently, i.e., in parallel. Algorithms that take advantage of this capability are called Parallel algorithms.

5.                What is Exact and Approximation algorithm?
An algorithm that solves the problem exactly is called an exact algorithm.
An algorithm that solves the problem only approximately is called an approximation algorithm.

6.                What is Algorithm Design Technique?  Nov/Dec 2005
            An algorithm design technique is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

7.                Define Pseudo code.

A pseudo code is a mixture of a natural language and programming-language-like constructs. A pseudo code is usually more precise than a natural language, and its usage often yields more succinct algorithm descriptions.





8.                Define Flowchart.
A method of expressing an algorithm by a collection of connected geometric shapes containing descriptions of the algorithm’s steps.

9.                Explain Algorithm’s Correctness
An algorithm is correct if it yields the required result for every legitimate input in a finite amount of time; proving correctness means establishing this property.
Example: Correctness of Euclid’s algorithm for computing the greatest common divisor stems from correctness of the equality gcd (m, n) = gcd (n, m mod n).

10.            What is Efficiency of algorithm?
            Efficiency of an algorithm can be precisely defined and investigated with mathematical rigor. There are two kinds of algorithm efficiency
1)            Time Efficiency – Indicates how fast the algorithm runs
2)            Space Efficiency – Indicates how much extra memory the algorithm needs.

11.            What is generality of an algorithm?
It is a desirable characteristic of an algorithm. Generality refers to the range of inputs the algorithm accepts and to the generality of the problem it solves; it is sometimes easier to design an algorithm for a problem posed in more general terms.

12.            What is algorithm’s Optimality?
Optimality is about the complexity of the problem that the algorithm solves: what is the minimum amount of effort any algorithm will need to exert to solve the problem in question? This lower bound on the required effort is referred to as the algorithm's optimality.

13.            What do you mean by “Sorting” problem?
                 The sorting problem asks us to rearrange the items of a given list in ascending order (or descending order).

14.            What do you mean by “Searching” problem?
                The searching problem deals with finding a given value, called a search key, in a given set.

15.            What do you mean by “Worst case-Efficiency” of an algorithm?
The “Worst case-Efficiency” of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.
Ex: sorting a list of numbers into ascending order when the numbers are given in descending order; in this case the running time will be the longest.

16.            What do you mean by “Best case-Efficiency” of an algorithm?
The “Best case-Efficiency” of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.
    Ex: sorting a list of numbers into ascending order when the numbers are already given in ascending order; in this case the running time will be the smallest.
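
The difference is easy to see on sequential search. The Python sketch below (the function name is illustrative, not from the syllabus) counts comparisons for the best-case and worst-case inputs:

# Sequential search: scan the list until the search key is found.
def sequential_search(lst, key):
    comparisons = 0
    for item in lst:
        comparisons += 1
        if item == key:
            break
    return comparisons

data = [10, 20, 30, 40, 50]
print(sequential_search(data, 10))   # best case: key in the first position -> 1 comparison
print(sequential_search(data, 50))   # worst case: key in the last position -> 5 comparisons

For a key equally likely to be in any position, the expected count is about (n+1)/2 comparisons, which is the average-case behaviour discussed next.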




17.            Define the “Average-case efficiency” of an algorithm?
The “Average-case efficiency” of an algorithm is its efficiency for a typical or random input of size n; it is obtained by averaging the running times over all possible inputs of that size (under an assumed probability distribution of inputs) and generally lies between the best-case and the worst-case efficiencies.

18.            What do you mean by “Amortized efficiency”?
The “Amortized efficiency” applies not to a single run of an algorithm but rather to a sequence of operations performed on the same data structure. It turns out that in some situations a single operation can be expensive, but the total time for an entire sequence of n such operations is always significantly better than the worst-case efficiency of that single operation multiplied by n. This is known as “Amortized efficiency”.
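
A classic illustration (a sketch only; the class and counter names are assumed for this example) is appending to a dynamic array that doubles its capacity: a single append can trigger copying of the whole array, yet n appends cost O(n) in total, i.e., amortized O(1) per append.

# Dynamic array: an individual append is occasionally expensive (copies everything),
# but the total cost of n appends is O(n).
class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0                      # total elements copied during resizes

    def append(self, value):
        if self.size == self.capacity:       # expensive step: double the capacity and copy
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = value
        self.size += 1

arr = DynamicArray()
for i in range(1000):
    arr.append(i)
print(arr.copies)   # prints 1023: total copy work stays O(n) even though some appends are costly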

19.            How to measure the algorithm’s efficiency?
It is logical to investigate the algorithm’s efficiency as a function of some parameter n indicating the algorithm’s input size.
Example: It will be the size of the list for problems of sorting, searching, finding the list’s smallest element, and most other problems dealing with lists.

20.            What is called the basic operation of an algorithm?
            The most important operation of the algorithm, i.e., the operation contributing the most to the total running time, is called the basic operation of the algorithm.

21.            How to measure an algorithm’s running time?
Let c_op be the time of execution of an algorithm’s basic operation on a particular computer and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula
                                    T(n)   ≈  c_op C(n)
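
For example (the figures below are assumed purely for illustration), if the basic operation takes about one nanosecond and is executed C(n) = n(n-1)/2 times, the estimate can be computed directly:

# Assumed figures: c_op = 1 nanosecond per basic operation,
# C(n) = n*(n-1)/2 executions of that operation.
c_op = 1e-9                       # seconds per basic operation (assumed)
def C(n):
    return n * (n - 1) // 2

n = 10_000
print(c_op * C(n))                # roughly 0.05 seconds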

22.            Define order of growth.
            The efficiency analysis framework concentrates on the order of growth of an algorithm’s basic operation count as the principal indicator of the algorithm’s efficiency.  To compare and rank such orders of growth we use three notations
1)                                                      O (Big oh) notation
2)                                                      Ω (Big Omega) notation &
3)                                                      Θ (Big Theta) notation

23.            Define Big oh notation May/June 2006, April/May 2008
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n0 such that
                                    t(n) ≤ c g(n) for all n ≥ n0

24.            Prove that 100n + 5 ∈ O(n²).
Clearly 100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n².
By choosing n0 = 5 and c = 101 we find that 100n + 5 ∈ O(n²).
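
A quick numerical check of this bound (a sanity test only, not a substitute for the proof):

# Verify 100n + 5 <= 101 * n**2 for n in [5, 10000).
c, n0 = 101, 5
assert all(100*n + 5 <= c * n**2 for n in range(n0, 10_000))
print("bound holds on the tested range")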




25.            Define Ω notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n0 such that
                                    t(n) ≥ c g(n) for all n ≥ n0

26.            Prove that n³ ∈ Ω(n²).
            Clearly n³ ≥ n² for all n ≥ 0, i.e., we can select c = 1 and n0 = 0.

27.            Define Θ notation
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some non-negative integer n0 such that
                        c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0

28.            Prove that ½ n(n-1) ∈ Θ(n²).
              ½ n(n-1) = ½ n² - ½ n ≤ ½ n² for all n ≥ 0 (this proves the upper bound). Also ½ n(n-1) = ½ n² - ½ n ≥ ½ n² - ½ n · ½ n (for all n ≥ 2) = ¼ n², which proves the lower bound. Hence we can select c2 = ¼, c1 = ½ and n0 = 2.

29.            What is the use of Asymptotic Notations?
The notations O, Ω and Θ are used to indicate and compare the asymptotic orders of growth of functions expressing algorithm efficiencies.


