Computer Networks Lab July-Dec 2014


Objective
In this course, practical experience and problems on the various topics of Computer Networks (mainly as listed below) will be covered. The course will include a mini-project with the theme 'Usage of Parallel and Distributed Computing'.
A total of 10 laboratory assignments will be covered in the semester. Each assignment will consist of 2-4 laboratory problems. Discussion of, and progress on, the mini-project will be considered during the lab hours.
 
Course Contents


Lecture Topics

Introduction: Use of computer networks, network hardware and software
Layering, reference models and their comparison
Client-server model with discussion on PDC systems
Peer-to-peer networks
Peer-to-peer computing
Physical Layer: Theoretical basis for data communication
Transmission media and impairments
Switching systems, bandwidth
Data Link Layer: Design issues, framing, error detection and correction
Elementary and sliding window protocols
Examples of data link layer protocols
Medium Access Control Sublayer: Channel allocation problem
Multiple access protocols
Ethernet, data link layer switching
Circuit switching, packet switching
Network Layer: Design issues
Routing algorithms with PDC-related concepts
Congestion control, QoS
Internetworking, IP and IP addressing
Network performance modeling
Algorithmic problems of communication
Broadcast, multicast w.r.t. PDC
Transport Layer: Transport service
Elements of transport protocols
TCP and UDP
Application Layer Overview: Email
DNS
WWW
Peer-to-peer computing with the example of Skype
Web services
Web search engines
Social networks
Cloud-based systems
Programming projects using various tools such as IBM BlueMix, MapReduce, Amazon EC2 or MS Azure

List of Some of the Mini-Projects:

Project No.1: Generation of Mandelbrot set on a GPU

Design an application that can generate the Mandelbrot set, taking as inputs the coordinates of the top-left corner and the side length. Various colouring methods, such as histogram colouring, smooth colouring, and colouring using distance estimates, will be implemented.
This module will then be used to generate videos based on the Mandelbrot set, with support for zooming and panning into different areas of the set. A video will be specified as a set of instructions in a simple programming language that we will design.
Parallelisation of the code on the GPU will be done using Aparapi. Proper software engineering practices will be used in designing the application.
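The per-pixel escape-time loop that such a renderer parallelises can be sketched as below. This is a minimal serial illustration in Python; the actual project would run this kernel on the GPU via Aparapi, and the sample coordinates are arbitrary. Each pixel's iteration count is independent of every other pixel's, which is exactly why the computation maps well to a GPU.

```python
# Escape-time iteration at the heart of Mandelbrot rendering: each pixel's
# count is independent of all others, which makes the loop GPU-friendly.

def mandelbrot_count(c: complex, max_iter: int = 100) -> int:
    """Return the iteration at which z escapes |z| > 2, or max_iter."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def render(top_left: complex, side: float, pixels: int, max_iter: int = 100):
    """Map a pixels x pixels grid over the square region and count escapes."""
    step = side / pixels
    return [[mandelbrot_count(top_left + complex(x * step, -y * step), max_iter)
             for x in range(pixels)]
            for y in range(pixels)]

grid = render(top_left=-2 + 1.25j, side=2.5, pixels=8)
```

The iteration counts in `grid` are what the colouring methods (histogram, smooth, distance-estimate) would then map to pixel colours.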

Team: 11114036 Ranjith, 11114035 Rahul, 11114038 Sanket

Project No.2: Cloud-based file storage system

Many of us use Google Drive, Dropbox, or other similar applications. These systems provide an abstraction of cloud-based storage: a user can store files in a file system somewhere in the network and retrieve them in a device-independent and location-independent fashion, which provides seamless access to the user's files anywhere in the world and across different devices: desktops, laptops, smartphones, tablets, etc.
Team: Anshul Singhal

Project No.3: The travelling salesman problem (TSP) is an NP-hard problem in combinatorial optimization studied in operations research and theoretical computer science. Given a list of cities and their pairwise distances, the task is to find the shortest possible route that visits each city exactly once and returns to the origin city. It is a special case of the travelling purchaser problem. A genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EAs), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. We aim to solve TSP using a genetic algorithm.
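A serial sketch of the GA loop described above, using the operators the description names (selection, crossover, mutation), is given below. The city coordinates, operator choices (order crossover, swap mutation), and population parameters are illustrative assumptions, not the team's actual design.

```python
import random

# Toy genetic algorithm for TSP: tours are permutations of city indices,
# fitness is (negative) tour length, offspring are produced by order
# crossover plus swap mutation, and the fittest half survives each round.

CITIES = [(0, 0), (0, 3), (4, 3), (4, 0), (2, 5)]   # made-up coordinates

def tour_length(tour):
    total = 0.0
    for i in range(len(tour)):
        (x1, y1), (x2, y2) = CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def crossover(a, b):
    """Order crossover: copy a slice from parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = a[i:j]
    child += [c for c in b if c not in child]
    return child

def mutate(tour, rate=0.2):
    """Swap mutation: exchange two cities with probability `rate`."""
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def solve(generations=200, pop_size=30):
    pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)                  # selection: keep the fittest
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=tour_length)

random.seed(0)
best = solve()
```

Because each individual's fitness evaluation is independent, the `tour_length` calls over the population are a natural unit for parallelisation.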

Group Members: Jitin Singla 11114019, Devendra Pratap Singh 11114013, Mayank Chaudhary 11114023, Shivam Mangla 11114040

Project 4: Parallel Computing of Monte Carlo Algorithms

Parallel computing includes such subspecialties as parallel randomized algorithms and parallel simulation. A major issue in parallel computing is how to coordinate communication between the various processors; indeed, some parallel computing environments (such as “vector computing”) require specialized programming to allow the processors to work together in parallel. Monte Carlo algorithms, on the other hand, often proceed by averaging large numbers of computed values. It is sometimes straightforward to have different processors compute different values, and then use an appropriate (weighted) average of these values to produce a final answer.

Although Monte Carlo is well suited to parallel computation, there are a number of potential problems in this context. The available computers might run at different speeds; they might have different user loads on them; one or more of them might be down; etc. Handling these issues correctly is crucial to the success of parallel Monte Carlo. In addition, Markov chain Monte Carlo algorithms are now very common, and parallelizing them presents additional difficulties, such as determining an appropriate burn-in time.

First, we will implement the basics of a parallel Monte Carlo algorithm. Then, we will deal with issues related to the possible unreliability of some of the computers being used. After that, we will try to solve additional issues (especially burn-in time questions) that arise specifically for parallel Markov chain Monte Carlo algorithms.
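The "different processors compute different values, then combine with a weighted average" idea can be sketched as follows. This is an illustrative Python sketch, not the project's code: a thread pool stands in for separate processors, the example estimates pi, and the unequal worker loads mimic machines of different speeds.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Monte Carlo estimate of pi by averaging independent workers' results.
# Each worker draws its own number of samples (simulating machines of
# different speeds); the final answer is an average weighted by sample
# count, which is exactly a weighted average of the workers' estimates.

def estimate_pi(n_samples: int, seed: int):
    """Return (hits inside the unit quarter circle, samples drawn)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return hits, n_samples

def parallel_pi(worker_loads):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda args: estimate_pi(*args),
                                [(n, seed) for seed, n in enumerate(worker_loads)]))
    total_hits = sum(h for h, _ in results)
    total_samples = sum(n for _, n in results)
    return 4.0 * total_hits / total_samples   # weighted average over workers

pi_hat = parallel_pi([20_000, 50_000, 30_000])
```

Weighting by sample count is what keeps a slow worker (fewer samples) from dragging the estimate off: its contribution shrinks in proportion to the data it actually produced.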

Group Members:Dileep, Archit, Nitin


Project 5: Parallel Architecture for Multiple Face Detection

Background: Face detection is a very important biometric application in the field of image analysis and computer vision. The basic face detection method is the AdaBoost algorithm with a cascade of Haar-like feature classifiers, based on the framework proposed by Viola and Jones. The same will be implemented on a parallel architecture to speed up the face detection process.
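A key reason the Viola-Jones framework is fast is the integral image, which lets any Haar-like rectangle feature be evaluated in constant time. A minimal sketch of that data structure (not the team's implementation) is:

```python
# Integral image: the core data structure behind fast Haar-like feature
# evaluation in the Viola-Jones detector. ii[y][x] holds the sum of all
# pixels above and to the left of (x, y); any rectangle sum then costs
# just four table lookups, independent of the rectangle's size.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]   # extra zero row/column
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
```

A Haar-like feature is then just a signed combination of a few `rect_sum` calls, and since the classifier windows are independent, scanning them is an obvious target for the parallel architecture.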

Group Members:Raghavendra Bazari 11114032 , Rahul Meena 11114034, Rahul Kumar 11114033, K. Shiva shankar 11114020


Project 6: CMUSphinx (https://cmusphinx.sourceforge.net) is a speech recognition toolkit that provides various libraries for speech recognition. Two of those libraries are PocketSphinx and Sphinx 4. PocketSphinx is a small speech recognition toolkit written in C that has various utilities related to speech recognition. CUDA is an application development platform from Nvidia for developing applications using its graphics cards. The basic aim of the project is to use CUDA to develop a speech recognition application using the principles of parallel programming and software development.

Team: Suyash 11211023

Project 7: Implementing the following standard problems using parallel programming.

1. N-Queens problem

2. Strassen matrix multiplication

3. Fox parallel matrix multiplication
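As a reference point for item 2, a serial Strassen multiplication for square matrices whose size is a power of two can be sketched as below; the parallel version would dispatch the seven recursive products to different workers. This is an illustrative Python sketch, not the team's code.

```python
# Strassen matrix multiplication for n x n matrices with n a power of two:
# 7 recursive multiplications instead of 8, giving O(n^2.807) overall.

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def split(M):
    """Split M into four quadrant submatrices."""
    n = len(M) // 2
    return ([r[:n] for r in M[:n]], [r[n:] for r in M[:n]],
            [r[:n] for r in M[n:]], [r[n:] for r in M[n:]])

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    # The seven Strassen products -- these are the independent subproblems
    # a parallel implementation would hand to separate workers.
    p1 = strassen(a, sub(f, h))
    p2 = strassen(add(a, b), h)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, h))
    p6 = strassen(sub(b, d), add(g, h))
    p7 = strassen(sub(a, c), add(e, f))
    # Reassemble the quadrants: C11 = p5+p4-p2+p6, C12 = p1+p2,
    # C21 = p3+p4, C22 = p1+p5-p3-p7.
    top = [l + r for l, r in zip(add(sub(add(p5, p4), p2), p6), add(p1, p2))]
    bot = [l + r for l, r in zip(add(p3, p4), sub(sub(add(p1, p5), p3), p7))]
    return top + bot

C = strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]])   # [[19, 22], [43, 50]]
```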

Team members: Ashish Kumar Singh 11114009, Arun Kumar Ram 11114008, Arpit Agrawal 11114006, Gunjan Singh 11114016

Project 8: Basic web server handling concurrent requests

The project involves replicating a client-server model that serves only HTTP requests, acting as a basic web server. GET and POST requests will render the requested pages, and an error page will be served if the resource is unavailable. Initially we will create the server with minimal request-handling features (serving pages and logs), and as time permits we will incrementally add more features, such as parallelizing request handling and security (file-access permissions), etc.
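A minimal sketch of the first milestone, using Python's standard-library `http.server`, is shown below. The page contents and paths are placeholders, not part of the actual project, and the real server would implement request parsing itself rather than reuse a library.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

# Minimal basic web server: GET requests are answered from an in-memory
# "site", and unknown paths receive a 404 error page.

PAGES = {"/": b"<html><body>Home</body></html>",
         "/about": b"<html><body>About</body></html>"}

class BasicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGES.get(self.path)
        if body is None:
            body = b"<html><body>404 Not Found</body></html>"
            self.send_response(404)
        else:
            self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass   # keep the sketch quiet; the real server would write a log

def start_server(port=0):
    """Serve on a background thread; port=0 lets the OS pick a free port."""
    server = HTTPServer(("127.0.0.1", port), BasicHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = start_server()
```

As written this handles one request at a time; swapping `HTTPServer` for `ThreadingHTTPServer` is the smallest step toward the parallel request handling mentioned above.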

Group Members: Avinash Wilson Tirkey 11114011, Ravishankar 11114037, Umang Ganvir 11114046, Vikas Verma 11114047


Project 9: Calculating Word Count in several documents using MapReduce paradigm.

Outline: WordCount is a simple application that counts the number of occurrences of each word in a given input set, which can be composed of a large amount of text. Considering the size of a large input set, it is essential to use parallel programming for efficient execution of the task. We will be using the Hadoop MapReduce framework instead of traditional parallel computing practices (using libraries like MPI, OpenMP, CUDA, or pthreads). We will first run our Hadoop MapReduce application in standalone mode and then in pseudo-distributed mode. This will create a full-fledged Hadoop MapReduce system with multiple processes on a single Sugar node, and we will demonstrate the utilization of multiple cores.
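The three MapReduce stages behind WordCount can be mocked in a few lines of plain Python; this sketch only illustrates the dataflow that Hadoop distributes across nodes, and is unrelated to the Hadoop API itself.

```python
from collections import defaultdict
from itertools import chain

# Pure-Python mock of the MapReduce flow used by WordCount: map emits
# (word, 1) pairs, the shuffle groups pairs by key, and reduce sums the
# counts per key. Hadoop runs exactly these stages across many nodes.

def map_phase(document: str):
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

def word_count(documents):
    pairs = chain.from_iterable(map_phase(d) for d in documents)  # map
    return reduce_phase(shuffle(pairs))                           # shuffle+reduce

counts = word_count(["the quick brown fox", "the lazy dog and the fox"])
```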
Team: Diksha Bhatti 11121005, Nitish Sharma 11114026, Prateek Thakur 11114029, Shubham Kansal 11114043

Project 10: Parallel K-Means Clustering (Machine Learning) Based on MapReduce (Hadoop)

Data clustering has received considerable attention in many applications, such as data mining, document retrieval, image segmentation and pattern classification. The enlarging volume of information emerging from the progress of technology makes clustering of very large-scale data a challenging task. To deal with this problem, many researchers have tried to design efficient parallel clustering algorithms. We propose a parallel k-means clustering algorithm based on MapReduce, which is a simple yet powerful parallel programming technique. The proposed algorithm should scale well and efficiently process large datasets on commodity hardware.
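The single-node k-means the project parallelises alternates between two steps that line up naturally with MapReduce: assigning points to their nearest centroid (map) and recomputing each centroid as a cluster mean (reduce). An illustrative serial sketch with made-up data:

```python
import random

# Plain (single-node) k-means. The assignment step maps each point to its
# nearest centroid; the update step reduces each cluster to its mean --
# exactly the map and reduce phases in the MapReduce formulation.

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                                  # "map": assign
            clusters[min(range(k),
                         key=lambda i: dist2(p, centroids[i]))].append(p)
        for i, cluster in enumerate(clusters):            # "reduce": new means
            if cluster:
                centroids[i] = tuple(sum(c) / len(cluster)
                                     for c in zip(*cluster))
    return centroids

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids = kmeans(points, k=2)
```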
Team: Rohan Kabra (11211012), Sarvesh Gupta (11211015), Vijay Patidar (11211026), Somnath Asati (11211018)

Project 11: Implementation of Monte Carlo Algorithms for Eigenvalue Problem Using MPI

The problem of evaluating the dominant eigenvalue of real matrices using Monte Carlo numerical methods is considered. Three almost optimal Monte Carlo algorithms are presented:
1) Direct Monte Carlo algorithm (DMC) for calculating the largest eigenvalue of a matrix A. The algorithm uses iterations with the given matrix;
2) Resolvent Monte Carlo algorithm (RMC) for calculating the smallest or the largest eigenvalue. The algorithm uses Monte Carlo iterations with the resolvent matrix and includes a parameter controlling the rate of convergence;
3) Inverse Monte Carlo algorithm (IMC) for calculating the smallest eigenvalue. The algorithm uses iterations with the inverse matrix.
Numerical tests are performed for a number of large sparse test matrices using MPI on a cluster of workstations.
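The deterministic scheme that DMC estimates stochastically is power iteration with the given matrix: repeated multiplication by A aligns the iterate with the dominant eigenvector, and the Rayleigh quotient converges to the largest eigenvalue. A serial Python sketch on a toy matrix (not the Monte Carlo or MPI version):

```python
# Deterministic power iteration, the scheme the Direct Monte Carlo (DMC)
# algorithm approximates with random walks: repeatedly multiply by A and
# read off the Rayleigh quotient as the dominant-eigenvalue estimate.

def mat_vec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dominant_eigenvalue(A, iters=100):
    x = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        y = mat_vec(A, x)
        # Rayleigh quotient (y.x)/(x.x) converges to the largest eigenvalue.
        lam = sum(yi * xi for yi, xi in zip(y, x)) / sum(xi * xi for xi in x)
        norm = max(abs(v) for v in y) or 1.0     # rescale to avoid overflow
        x = [v / norm for v in y]
    return lam

A = [[4.0, 1.0],
     [2.0, 3.0]]     # eigenvalues 5 and 2, so the dominant one is 5
lam = dominant_eigenvalue(A)
```

In the MPI setting, each rank would run independent random-walk estimates of this quantity and the results would be averaged at the root.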

Team: Sanju Meena 11211014, Tarun Bansal 11211024, Utkarsh Agrwal 11211025

Project 12: A cross-site (programming) problem recommendation system.
Description: We will be building a cross-site recommendation system that suggests problems to a user based on their activity on different coding platforms. The idea is that the next time the user submits a solution to a problem, we should recommend some problems based on their submission history, the current problem, and whether their solution was correct or not. The difficult task here is to come up with the right set of problems to recommend, as well as to generate the recommendations fast, since the user will be delivered these problems in real time. We will be targeting Spoj and Codeforces for now. Both sites have a huge collection of problems, and a non-parallel approach would suffer due to the sheer size of the dataset. Also, if we consider n different coding platforms, there will be C(n,2) possible recommendation sets, thereby making an iterative approach very slow. Our idea is to run all these possible combinations, as well as the data collection/processing steps, in parallel threads on a GPU so as to generate recommendations within practical time bounds.
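One simple way such a recommender could score candidate problems is by similarity of solver sets; the sketch below is purely illustrative (problem names, solver histories, and the Jaccard scoring are assumptions, not the team's chosen method).

```python
# Toy recommendation step: problems are ranked by Jaccard similarity of
# their solver sets to the problem the user just solved. All data here is
# made up for illustration.

SOLVERS = {
    "spoj/PRIME1": {"alice", "bob", "carol"},
    "cf/4A":       {"alice", "bob", "dave"},
    "cf/1A":       {"bob", "carol", "dave"},
    "spoj/TEST":   {"erin"},
}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(solved_problem: str, top_n: int = 2):
    """Rank other problems by solver-set similarity to the one just solved."""
    base = SOLVERS[solved_problem]
    scored = [(jaccard(base, solvers), name)
              for name, solvers in SOLVERS.items() if name != solved_problem]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

recs = recommend("spoj/PRIME1")
```

Each pairwise similarity is independent of the others, which is what makes the per-pair scoring amenable to the GPU-thread parallelisation described above.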

Team members: Pranay Choudhary 11114028, Shagun Sodhani 11114039, Surendra Gadwal 11114045


 


Evaluation (Grading)

    30% Continuous evaluation in every lab every week
    25% Mini-Project
    15% Mid Term
    30% End Term