2019

Job Sequencing with Deadline


We are given a set of n jobs. Associated with each job i is an integer deadline di>=0 and a profit pi>=0. For any job i, the profit pi is earned iff the job is completed by its deadline. Completing a job requires processing it for one unit of time. Our aim is to find a feasible subset of jobs whose total profit is maximum.

Example

n=4, (p1,p2,p3,p4)=(100,10,15,27), (d1,d2,d3,d4)=(2,1,2,1)
Sorted by nonincreasing profit: (p1,p4,p3,p2)=(100,27,15,10), (d1,d4,d3,d2)=(2,1,2,1)



Trying all possible subsets takes O(n!) time. Instead, apply a greedy strategy with total profit as the optimization function to the example above. Begin with J = { }:
  • Job 1 considered, and added to J ->J={1}
  • Job 4 considered, and added to J -> J={1,4}
  • Job 3 considered, but discarded because not feasible  ->J={1,4}
  • Job 2 considered, but discarded because not feasible  -> J={1,4}

The final solution is J={1,4} with total profit 127, and it is optimal.

Algorithm:

Assume the n jobs are ordered such that p[1]>=p[2]>=…>=p[n], and that d[i]>=1 for 1<=i<=n, n>=1. J[i] is the ith job in the solution, 1<=i<=k. At termination the selected jobs are ordered by deadline: d[J[i]]<=d[J[i+1]], 1<=i<k.


JobSequencing(int d[], int J[], int n)
{
    d[0] = J[0] = 0;   // initialize sentinel
    J[1] = 1;          // include job 1
    int k = 1;
    for (int i = 2; i <= n; i++)
    {   // Consider jobs in nonincreasing order of p[i].
        // Find position for i and check feasibility of insertion.
        int r = k;
        while ((d[J[r]] > d[i]) && (d[J[r]] != r))
            r--;
        if ((d[J[r]] <= d[i]) && (d[i] > r))
        {   // Insert i into J[].
            for (int q = k; q >= (r + 1); q--)
                J[q + 1] = J[q];
            J[r + 1] = i;
            k++;
        }
    }
    return (k);
}


Analysis

The for loop executes O(n) times. The while loop inside it executes at most k times, and when the if condition holds the inner for loop executes O(k-r) times. Hence the total time for each iteration of the outer for loop is O(k). Thus the time complexity is O(n^2).



Huffman coding

Huffman coding is an algorithm for the lossless compression of files based on the frequency of occurrence of a symbol in the file that is being compressed. In any file, certain characters are used more than others. Using binary representation, the number of bits required to represent each character depends upon the number of characters that have to be represented. Using one bit we
can represent two characters, i.e., 0 represents the first character and 1 represents the second character. Using two bits we can represent four characters, and so on. Unlike ASCII code, which is a fixed-length code using seven bits per character, Huffman compression is a variable-length coding system that assigns smaller codes for more frequently used characters and larger codes
for less frequently used characters in order to reduce the size of files being compressed and transferred.


For example, consider a file with the following data: 'XXXXXXYYYYZZ'. The frequency of "X" is 6, the frequency of "Y" is 4, and the frequency of "Z" is 2. If each character is represented using a fixed-length code of two bits, then the number of bits required to store this file would be 24, i.e., (2 x 6) + (2 x 4) + (2 x 2) = 24. If the above data were compressed using Huffman compression, the more frequently occurring characters would be represented by shorter codes, such as X by the code 0 (1 bit), Y by the code 10 (2 bits) and Z by the code 11 (2 bits); the size of the file then becomes 18, i.e., (1 x 6) + (2 x 4) + (2 x 2) = 18. In this example, more frequently occurring characters are
assigned smaller codes, resulting in a smaller number of bits in the final compressed file.
Huffman compression was named after its discoverer, David Huffman.


To generate Huffman codes, we build a binary tree of nodes. Initially, all nodes are leaf nodes, each containing a symbol and the weight (frequency of appearance) of that symbol. As a common convention, bit '0' represents following the left child and bit '1' represents following the right child. A finished tree has up to n leaf nodes and n - 1 internal nodes. A Huffman tree that omits unused symbols produces the optimal code lengths. The process begins with the leaf nodes holding the probabilities of the symbols they represent; then a new node whose children are the two nodes with smallest probability is created, such that the new node's probability equals the sum of its children's probabilities. With the previous two nodes merged into one node and the new node now under consideration, the procedure is repeated until only one node remains: the Huffman tree. The simplest construction algorithm uses a priority queue in which the node with the lowest probability is given the highest priority.


Example

The following example is based on a data source using a set of five different symbols. The symbol frequencies are:

Symbol      Frequency
A           24
B           12
C           10
D           5
E           8

----> total 177 bits (59 symbols, with 3 bits per fixed-length code word)



Algorithm

A greedy algorithm can construct a Huffman code, which is an optimal prefix code. The tree corresponding to the optimal code is constructed bottom-up, starting from the |C| leaves and performing |C|-1 merging operations. A priority queue Q keeps the nodes ordered by frequency; here the priority queue is implemented as a binary heap.


HuffmanAlgo(C)
{
    n = |C|; Q = C;
    for (i = 1; i <= n - 1; i++)
    {
        z = Allocate-Node();
        x = Extract-Min(Q);
        y = Extract-Min(Q);
        left(z) = x; right(z) = y;
        f(z) = f(x) + f(y);
        Insert(Q, z);
    }
    return Extract-Min(Q);   // root of the Huffman tree
}

Analysis

We can use BuildHeap(C) to create the priority queue in O(n) time. Inside the for loop, the expensive operations (Extract-Min and Insert) each take O(log n) time. Since the loop body executes n-1 times, the total running time of the Huffman algorithm is O(n log n).






Fractional Knapsack Problem

Statement: 
A thief has a bag or knapsack that can contain a maximum weight W of his loot. There are n items; the weight of the ith item is wi and its value is vi. Any fraction of an item can be put into the bag, i.e., a fraction xi of item i can be collected, where 0<=xi<=1. The objective is to collect the items that maximize the total profit earned.

Algorithm:
Take as much of the item with the highest value-per-weight ratio (vi/wi) as you can. When that item runs out, move on to the item with the next highest vi/wi, and continue until the knapsack is full. v[1 … n] and w[1 … n] contain the values and weights, respectively, of the n objects sorted in nonincreasing order of v[i]/w[i]. W is the capacity of the knapsack, x[1 … n] is the solution vector holding the fractional amounts of items, and n is the number of items.

GreedyFracKnapsack(W, n)
{
    for (i = 1; i <= n; i++)
        x[i] = 0.0;
    tempW = W;
    for (i = 1; i <= n; i++)
    {
        if (w[i] > tempW) break;
        x[i] = 1.0; tempW -= w[i];
    }
    if (i <= n)
        x[i] = tempW / w[i];
}

Analysis:
The algorithm above contains only a single non-nested loop, so its running time is O(n). However, it requires v[1 … n] and w[1 … n] to be sorted by v[i]/w[i]; sorting takes O(n log n) time, so the overall complexity of the algorithm including sorting is O(n log n).


CLIENT SERVER ARCHITECTURE 

Client/server architecture is a computing model in which the server hosts, delivers and manages most of the resources and services to be consumed by the client. This type of architecture has one or more client computers connected to a central server over a network or internet connection. 
A client-server application consists of multiple application components, combined or divided but interlinked, which make up the various layers or tiers:

The application logic tier. 
The application logic tier is where all the “thinking” happens, and it knows what is allowed by your application and what is possible, and it makes other decisions.  This logic tier is also the one that writes and reads data into the data tier.

The data tier.
The data tier is where all the data used in your application are stored.  You can securely store data on this tier, do transaction, and even search through volumes and volumes of data in a matter of seconds.

The presentation tier. 
The presentation tier is the user interface.  This is what the software user sees and interacts with.  This is where they enter the needed information.  This tier also acts as a go-between for the data tier and the user, passing on the user’s different actions to the logic tier.

1-tier architecture
The simplest architecture is 1-tier, where the client, server, and database all reside on the same machine. Any time you install a DB on your system and access it to practice SQL queries, that is 1-tier architecture. Such architecture is rarely used in production.

2-tier architecture 
2-tier architecture is used to describe client/server systems in which the client requests resources and the server responds directly to the request, using its own resources. This means that the server does not call on another application in order to provide part of the service

3-Tier Architecture
In 3-tier architecture, there is an intermediary level, meaning that the architecture is generally split between: a client; the application server (also called middleware), whose task is to provide the requested resources, but by calling on another server; and the data server, which provides the application server with the data it requires.

N-tier architecture
N-tier architecture is also called multi-tier architecture because the software is engineered to have the processing, data management, and presentation functions physically and logically separated.  That means that these different functions are hosted on several machines or clusters, ensuring that services are provided without resources being shared and, as such, these services are delivered at top capacity.  




Dynamic Programming:

Dynamic Programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions using a memory-based data structure (array, map,etc). Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup. So the next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time. This technique of storing solutions to subproblems instead of recomputing them is called memoization.

Technique is among the most powerful for designing algorithms for optimization problems. Dynamic programming problems are typically optimization problems (find the minimum or maximum cost solution, subject to various constraints). The technique is related to divide-and-conquer, in the sense that it breaks problems down into smaller problems that it solves recursively. However, because of the somewhat different nature of dynamic programming problems, standard divide-and-conquer solutions are not usually efficient. The basic elements that characterize a dynamic programming algorithm are:

·        Substructure:
Decompose your problem into smaller (and hopefully simpler) subproblems. Express the solution of the original problem in terms of solutions for smaller problems.

·        Table-structure:
Store the answers to the sub-problems in a table. This is done because subproblem solutions are reused many times.

·        Bottom-up computation:
Combine solutions of smaller subproblems to solve larger subproblems. (We also discuss a top-down alternative, called memoization.)


The most important question in designing a DP solution to a problem is how to set up the subproblem structure. This is called the formulation of the problem. Dynamic programming is not applicable to all optimization problems. There are two important elements that a problem must have in order for DP to be applicable.


Optimal substructure:
(Sometimes called the principle of optimality.) It states that for the global problem to be solved optimally, each subproblem should be solved optimally. (Not all optimization problems satisfy this. Sometimes it is better to lose a little on one subproblem in order to make a big gain on another.)


Polynomially many subproblems:
An important aspect to the efficiency of DP is that the total number of subproblems to be solved should be at most a polynomial number.

   

Max and Min Finding

Here our problem is to find the minimum and maximum items in a set of n elements. We will see two methods here first one is iterative version and the next one uses divide and conquer strategy to solve the problem.

Iterative Algorithm:


MinMax(A, n)
{
    max = min = A[0];
    for (i = 1; i < n; i++)
    {
        if (A[i] > max)
            max = A[i];
        if (A[i] < min)
            min = A[i];
    }
}


The above algorithm requires 2(n-1) comparisons in the worst, best, and average cases. The comparison A[i] < min is needed only when A[i] > max is false. If we replace the body of the for loop by
if(A[i] > max)
    max = A[i];
else if(A[i] < min)
    min = A[i];
then the best case occurs when the elements are in increasing order, with (n-1) comparisons, and the worst case occurs when the elements are in decreasing order, with 2(n-1) comparisons. In the average case, A[i] > max holds about half the time, so the number of comparisons is 3n/2 - 1.
We can clearly conclude that the time complexity is O(n).

C Program For Min-Max Search


#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SIZE 1000

void MinMax(int *arr, int l, int h, int *max, int *min)
{
    int max1, min1;
    if (l == h) { // case I - only one item
        *max = *min = arr[l];
    }
    else if (l == h - 1) { // case II - two elements
        if (arr[l] > arr[h]) {
            *max = arr[l];
            *min = arr[h];
        }
        else {
            *max = arr[h];
            *min = arr[l];
        }
    }
    else { // case III
        int mid = (l + h) / 2;                  // divide
        MinMax(arr, l, mid, max, min);          // conquer
        MinMax(arr, mid + 1, h, &max1, &min1);  // conquer
        if (*max < max1) *max = max1;           // combine
        if (*min > min1) *min = min1;           // combine
    }
}

int main(int argc, char const *argv[])
{
    int arr[SIZE], n;
    clock_t start, end;
    double total_time = 0.00;
    int min, max;
    printf("Enter number of elements : ");
    scanf("%d", &n);
    for (int i = 0; i < n; i++)
        arr[i] = rand() % 1000;
    start = clock();
    MinMax(arr, 0, n - 1, &max, &min);
    end = clock();
    printf("Min element :  %d\nMax element : %d\n", min, max);
    total_time += (double)(end - start) / CLOCKS_PER_SEC;
    printf("Execution time : %f seconds", total_time);
    return 0;
}



Heap Sort  

A heap is a nearly complete binary tree with the following two properties:
  • Structural property: all levels are full, except possibly the last one, which is filled from left to right
  • Order (heap) property: for any node x, Parent(x) ≥ x (the max-heap convention)

Array Representation of Heaps

  • A heap can be stored as an array A.
  • Root of tree is A[1]
  • Left child of A[i] = A[2i]
  • Right child of A[i] = A[2i + 1]
  • Parent of A[i] = A[ i/2 ]
  • Heapsize[A] ≤ length[A]
The elements in the subarray A[(⌊n/2⌋+1) .. n] are leaves

Max-heaps (largest element at root), have the max-heap property:  
  • for all nodes i, excluding the root:    
A[PARENT(i)] ≥ A[i]
Min-heaps (smallest element at root), have the min-heap property:
  • for all nodes i, excluding the root:  
A[PARENT(i)] ≤ A[i]


Adding/Deleting Nodes

New nodes are always inserted at the bottom level (left to right) and nodes are removed from the bottom level (right to left).

Operations on Heaps

  • Maintain/Restore the max-heap property
    1. MAX-HEAPIFY
  • Create a max-heap from an unordered array
    1. BUILD-MAX-HEAP
  • Sort an array in place
    1. HEAPSORT
  • Priority queues


Heapify Property

Suppose node i is smaller than one of its children, while the left and right subtrees of i are max-heaps. To eliminate the violation:
  • Exchange with larger child
  • Move down the tree
  • Continue until node is not smaller than children


Algorithm


Max-Heapify(A, i, n)
{
    l = Left(i)
    r = Right(i)
    largest = i
    if l ≤ n and A[l] > A[largest]
        largest = l
    if r ≤ n and A[r] > A[largest]
        largest = r
    if largest ≠ i
        exchange(A[i], A[largest])
        Max-Heapify(A, largest, n)
}



Analysis:  

In the worst case Max-Heapify is called recursively h times, where h is the height of the heap. Since each call, excluding the recursion, takes constant time:
Time complexity = O(h) = O(log n)


Building a Heap

Convert an array A[1 … n] into a max-heap (n = length[A]). The elements in the subarray A[(⌊n/2⌋+1) .. n] are leaves. Apply MAX-HEAPIFY on elements from ⌊n/2⌋ down to 1.

Algorithm:  


Build-Max-Heap(A)
    n = length[A]
    for i ← ⌊n/2⌋ down to 1
        do MAX-HEAPIFY(A, i, n)



Time Complexity:

Running time: the loop executes O(n) times and each call to Heapify costs O(log n), therefore the complexity of Build-Max-Heap is O(n log n).
This is not an asymptotically tight upper bound; a more careful analysis, summing the work per level, gives O(n).



Heap

A heap is a complete tree with an ordering relation R holding between each node and its descendants. Note that a complete tree here means the tree can miss only the rightmost part of the bottom level. R can be smaller-than or bigger-than, e.g., a heap with degree 2 where R is "bigger than".

Heap

Heap Sort 

Build a heap from the given set in O(n) time, then repeatedly remove the elements from the heap, in O(n log n) total time. Implementation: heaps are implemented using arrays. Insertion and deletion of an element take O(log n) time. More on this later.


Heap Sort Pseudo Code


Heapsort(A) {
   BuildHeap(A)
   for i <- length(A) downto 2 {
      exchange A[1] <-> A[i]
      heapsize <- heapsize - 1
      Heapify(A, 1)
   }
}

BuildHeap(A) {
   heapsize <- length(A)
   for i <- floor( length/2 ) downto 1
      Heapify(A, i)
}

Heapify(A, i) {
   le <- left(i)
   ri <- right(i)
   if (le <= heapsize) and (A[le] > A[i])
      largest <- le
   else
      largest <- i
   if (ri <= heapsize) and (A[ri] > A[largest])
      largest <- ri
   if (largest != i) {
      exchange A[i] <-> A[largest]
      Heapify(A, largest)
   }
}

C Program For Heap Sort By Generating Random Numbers


#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void Adjust(int Heap_of_Numbers[], int i) /* Sift element i down the heap */
{
    int j, copy;
    int Number = Heap_of_Numbers[0];
    int Reference = 1;
    while (2 * i <= Number && Reference == 1)
    {
        j = 2 * i;
        if (j + 1 <= Number && Heap_of_Numbers[j + 1] > Heap_of_Numbers[j])
            j = j + 1;
        if (Heap_of_Numbers[j] < Heap_of_Numbers[i])
            Reference = 0;
        else
        {
            copy = Heap_of_Numbers[i];
            Heap_of_Numbers[i] = Heap_of_Numbers[j];
            Heap_of_Numbers[j] = copy;
            i = j;
        }
    }
}

void Make_Heap(int heap[])
{
    int i;
    int Number_of_Elements = heap[0];
    for (i = Number_of_Elements / 2; i >= 1; i--)
        Adjust(heap, i);
}

int main()
{
    int heap[1001]; /* heap[0] holds the count; elements live in heap[1..n] */
    int NumberofElements, i, LastElement, CopyVariable;
    time_t t;
    printf("Enter the max number \n");
    scanf("%d", &NumberofElements);
    srand((unsigned) time(&t));
    for (i = 1; i <= NumberofElements; i++)
        heap[i] = rand() % 100 + 1;
    heap[0] = NumberofElements;
    Make_Heap(heap);
    while (heap[0] > 1) /* Loop for the sorting process */
    {
        LastElement = heap[0];
        CopyVariable = heap[1];
        heap[1] = heap[LastElement];
        heap[LastElement] = CopyVariable;
        heap[0]--;
        Adjust(heap, 1);
    }
    printf("\nTime taken to complete the heapsort %f seconds\n",
           (double)clock() / CLOCKS_PER_SEC);
    printf("\nSorted Array:\n"); /* Printing the sorted array */
    for (i = 1; i <= NumberofElements; i++)
        printf("%d ", heap[i]);
    return 0;
}


Tree Data Structures

A tree is a collection of nodes. If the collection is empty the tree is empty; otherwise it contains a distinguished node called the root (r) and zero or more sub-trees whose roots are directly connected to r by edges. The root of each sub-tree is called a child of r, and r is their parent. Any node without a child is called a leaf. A tree can also be viewed as a connected graph without a cycle, so there is a path from any node to any other node in the tree. The appeal of this data structure is that most of its operations run in O(log n) time. We can represent a tree using an array or a linked list.

Some of the definitions
  • Level h of a full tree of degree d has d^(h-1) nodes.
  • The first h levels of a full tree have 1 + d + d^2 + … + d^(h-1) = (d^h - 1)/(d - 1) nodes.

Binary Search Trees
A BST has at most two children for each parent. In a BST, the key at each vertex must be greater than all the keys held by its left descendants and smaller than or equal to all the keys held by its right descendants. Searching and insertion both take O(h) worst-case time, where h is the height of the tree, and the relation between height and number of nodes n is given by log n < h+1 <= n. For example, the height of a binary tree with 16 nodes may be anywhere between 4 and 15: height 4 when the tree is perfectly balanced, and height 15 when the nodes form a chain. So if we are sure that the tree is height balanced then we can say that search and insertion run in O(log n) time; otherwise we have to be content with O(n).
 

AVL Trees
Balanced trees named after Adelson-Velskii and Landis. An AVL tree is a special case in which the heights of the two sub-trees of each node differ by at most 1. Insertion and deletion may unbalance the tree, so rebalancing must be done using a left rotation, right rotation, or double rotation.


Priority Queues  

A priority queue is a queue in which the elements are prioritized. The least element in the priority queue is always removed first. Priority queues are used in many computing applications. For example, many operating systems use a scheduling algorithm where the next process executed is the one with the shortest execution time or the highest priority. Priority queues can be implemented using arrays, linked lists, or a special kind of tree (i.e., a heap).
  • boolean isEmpty ();
    1. Return true if and only if this priority queue is empty.
  • int size ();
    1. Return the length of this priority queue.  
  • int getLeast ();  
    1. Return the least element of this priority queue. If there are several least elements, return any of them.
  • void clear ();
    1. Make this priority queue empty.
  • void add (int elem);
    1. Add elem to this priority queue.
  • int delete();  
    1. Remove and return the least element from this priority queue. (If there are several least elements, remove the same element that would be returned by getLeast.)


Caesar Cipher

The Caesar cipher is one of the earliest known and simplest ciphers. It is a type of substitution cipher in which each letter in the plaintext is 'shifted' a certain number of places down the alphabet. For example, with a shift of 1, A would be replaced by B, B would become C, and so on. The method is named after Julius Caesar, who apparently used it to communicate with his generals.
More complex encryption schemes such as the Vigenère cipher employ the Caesar cipher as one element of the encryption process. The widely known ROT13 'encryption' is simply a Caesar cipher with an offset of 13. The Caesar cipher offers essentially no communication security, and it will be shown that it can be easily broken even by hand.

C Program For Caesar Cipher [Encryption]


#include <stdio.h>
#include <string.h>

int main()
{
    char msg[100], ch;
    int i, key;
    printf("Enter a plaintext \n");
    fgets(msg, sizeof msg, stdin);      /* gets() is unsafe; use fgets */
    msg[strcspn(msg, "\n")] = '\0';
    printf("Enter key");
    scanf("%d", &key);
    for (i = 0; msg[i] != '\0'; i++)
    {
        ch = msg[i];
        if (ch >= 'a' && ch <= 'z')
        {
            ch = ch + key;
            if (ch > 'z')
                ch = ch - 'z' + 'a' - 1;   /* wrap around past 'z' */
            msg[i] = ch;
        }
        else if (ch >= 'A' && ch <= 'Z')
        {
            ch = ch + key;
            if (ch > 'Z')
                ch = ch - 'Z' + 'A' - 1;   /* wrap around past 'Z' */
            msg[i] = ch;
        }
    }
    printf("Encrypted message: %s", msg);
    return 0;
}

C Program For Caesar Cipher [Decryption]


#include <stdio.h>
#include <string.h>

int main()
{
    char msg[100], ch;
    int i, key;
    printf("Enter an encrypted text \t");
    fgets(msg, sizeof msg, stdin);      /* gets() is unsafe; use fgets */
    msg[strcspn(msg, "\n")] = '\0';
    printf("Enter key \t");
    scanf("%d", &key);
    for (i = 0; msg[i] != '\0'; i++)
    {
        ch = msg[i];
        if (ch >= 'a' && ch <= 'z')
        {
            ch = ch - key;
            if (ch < 'a')
                ch = ch + 'z' - 'a' + 1;   /* wrap around below 'a' */
            msg[i] = ch;
        }
        else if (ch >= 'A' && ch <= 'Z')
        {
            ch = ch - key;
            if (ch < 'A')
                ch = ch + 'Z' - 'A' + 1;   /* wrap around below 'A' */
            msg[i] = ch;
        }
    }
    printf("Decrypted message: %s", msg);
    return 0;
}


Chi-Square Test

The chi-square test formalizes this notion of distribution fit:
  • Oi represents the number of observed data values in the i-th interval.
  • pi is the probability of a data value falling in the i-th interval under the hypothesized distribution.
  • So, given n observations, we would expect to observe Ei = n·pi values in the i-th interval.

So the chi-square statistic is

    χ² = Σ (from i = 1 to k) (Oi − Ei)² / Ei

So the hypotheses are:
  • H0: the random variable, X, conforms to the distributional assumption with parameters given by the parameter estimates.
  • H1: the random variable does not conform.


C Program For Chi-Square Test

#include <stdio.h>

int main()
{
    int n, i, e, calc = 0;
    double z = 16.9; /* chi-square critical value for 9 degrees of freedom at alpha = 0.05 */
    printf("Total number : ");
    scanf("%d", &n);
    int arr[n], o[10] = {0}, oo[10], ooo[10];
    printf("Enter %d numbers : ", n);
    for (i = 0; i < n; i++)
        scanf("%d", &arr[i]);
    for (i = 0; i < n; i++) /* tally observed counts per interval of width 10 */
    {
        if (arr[i] < 10)
            o[0]++;
        else if (arr[i] < 20)
            o[1]++;
        else if (arr[i] < 30)
            o[2]++;
        else if (arr[i] < 40)
            o[3]++;
        else if (arr[i] < 50)
            o[4]++;
        else if (arr[i] < 60)
            o[5]++;
        else if (arr[i] < 70)
            o[6]++;
        else if (arr[i] < 80)
            o[7]++;
        else if (arr[i] < 90)
            o[8]++;
        else
            o[9]++;
    }
    e = n / 10; /* expected count per interval under the uniform hypothesis */
    for (i = 0; i < 10; i++)
    {
        oo[i] = (o[i] - e) * (o[i] - e);
        ooo[i] = oo[i] / e;
        calc = calc + ooo[i];
    }
    if (calc <= z)
        printf("Null hypothesis accepted. The values are uniformly distributed.");
    else
        printf("Alternative hypothesis accepted. The values are not uniformly distributed.");
    return 0;
}