Assignment 3

In the Data Structures and Algorithms class we learned to analyze the complexity of an algorithm by counting the number of instructions and using big-oh notation to capture its order of growth. We talked about $O(n)$, $O(n \lg n)$, $O(n^2)$ running times, and so on. This is a pretty good model in most cases. In this class we will see that, in practice, the running time depends on the data access pattern of the algorithm and on the memory hierarchy. When the problem size is small, the running time depends on the caches present in the system; when the problem size is large, it depends on the hard disk and the I/O between main memory and the disk. We'll see that in some cases this dependency can be ignored, because it does not affect the running time that much, while in others it matters a lot.

In this assignment you will design an experimental analysis to investigate the performance of the following programs:

  1. built-in system quicksort (test_qsort)
  2. built-in system mergesort (test_msort)
  3. accessing an array sequentially (test_seq)
  4. accessing an array randomly (test_random)
  5. traversing a linked list (test_llist)
More precisely, for each program you will measure the average running time over many runs, across a range of input sizes; the sketch below illustrates the access patterns behind the last three tests.
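
For the array and list tests, the measured kernels might look like the following (a sketch only; the exact loop bodies are my assumptions, not a required implementation):

#include <stdlib.h>

/* test_seq: sequential access, front to back. */
long seq_sum(int *a, long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += a[i];
    return sum;   /* returning the sum keeps the compiler from deleting the loop */
}

/* test_random: jump to a random index on every step.
   Caveat: RAND_MAX is often 2^31 - 1, so for very large n
   rand() alone cannot reach every index. */
long random_sum(int *a, long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += a[rand() % n];
    return sum;
}

/* test_llist: pure pointer chasing. */
typedef struct node { int val; struct node *next; } node;

long list_sum(node *head) {
    long sum = 0;
    for (node *p = head; p != NULL; p = p->next)
        sum += p->val;
    return sum;
}

Each kernel does the same amount of arithmetic per element, so any difference you measure comes from the access pattern and the memory hierarchy.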

Some more details

Let's start with quicksort, and let's assume you are trying to sort an array of $N$ integers. The outcome of the experimental analysis of quicksort is a table summarizing various input sizes and the average running time of quicksort for inputs of that size. Especially for small running times, you need to run many samples and take an average; for longer runs, the number of samples can go down. It's probably a good idea to give the number of samples as an input to your code on the command line. The size of the input should also be given on the command line. That is, I should be able to run your code as:

test_qsort 1000 100
meaning: do 100 runs of quicksort on an array of 1000 elements and print out the average running time. For each run, initialize the array with different random numbers.
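Schematically, test_qsort might look like this (a sketch with positional arguments, as above; error checking is omitted, and helper names such as wall_seconds and cmp_int are mine):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <time.h>

/* Comparator for qsort: ascending order of ints. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Wall-clock time in seconds. */
static double wall_seconds(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(int argc, char **argv) {
    long n = atol(argv[1]);                  /* array size     */
    int  r = atoi(argv[2]);                  /* number of runs */
    int *a = malloc(n * sizeof(int));
    double total = 0;

    srand(time(NULL));                       /* different inputs on each invocation */
    for (int run = 0; run < r; run++) {
        for (long i = 0; i < n; i++)         /* fresh random array for each run */
            a[i] = rand();
        double t0 = wall_seconds();
        qsort(a, n, sizeof(int), cmp_int);
        total += wall_seconds() - t0;
    }
    printf("n=%ld r=%d avg=%g s\n", n, r, total / r);
    free(a);
    return 0;
}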
Or, more elegantly, look into getopt:
test_qsort -n 1000 -r 100

The timer will give you not only wall time (start to finish), but also the user + system time spent in CPU mode. From these you can infer the time that the process spent waiting, for example on I/O. Include that in your table as well.
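
One way to get both numbers is getrusage() next to a wall-clock timer (a sketch; times() or clock_gettime() would work just as well):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

static double tv_seconds(struct timeval tv) {
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void) {
    struct timeval w0, w1;
    struct rusage ru;

    gettimeofday(&w0, NULL);
    /* ... the work being timed goes here ... */
    gettimeofday(&w1, NULL);
    getrusage(RUSAGE_SELF, &ru);

    double wall = tv_seconds(w1) - tv_seconds(w0);
    double cpu  = tv_seconds(ru.ru_utime) + tv_seconds(ru.ru_stime);
    /* wall - cpu approximates time spent off the CPU, e.g. waiting on I/O */
    printf("wall=%g s  cpu=%g s  wait=%g s\n", wall, cpu, wall - cpu);
    return 0;
}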

As part of the assignment, you'll need to find out what values of the input size $N$ are interesting. Especially when $N$ is small, you want to test with enough values of $N$ so that you can see the effect of the caches. As $N$ gets larger, you want to be careful, because the runs will start to take a very long time.

Each one of the test programs takes an option -v, which stands for verbose. If this flag is present, the program should briefly report on its status as it runs. Otherwise, it should print out only the necessary info, something like this:

$ test_qsort -n 1000 -r 2 -v
test_qsort n=1000 r=2 -v
creating array of 1000 x 4B = 4000B
RUN1: initializing.. sorting..done.
RUN2: initializing.. sorting..done.
$
$ test_qsort -n 1000 -r 2
test_qsort n=1000 r=2
creating array of 1000 x 4B = 4000B
$
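To process the command line options, use getopt (from unistd.h; see man 3 getopt). A minimal loop covering the three options might look like this sketch:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    long n = 0;        /* array size   (-n) */
    int  r = 1;        /* runs         (-r) */
    int  verbose = 0;  /* verbose mode (-v) */
    int  c;

    while ((c = getopt(argc, argv, "n:r:v")) != -1) {
        switch (c) {
        case 'n': n = atol(optarg); break;
        case 'r': r = atoi(optarg); break;
        case 'v': verbose = 1;      break;
        default:
            fprintf(stderr, "usage: %s -n size -r runs [-v]\n", argv[0]);
            return 1;
        }
    }
    printf("%s n=%ld r=%d%s\n", argv[0], n, r, verbose ? " -v" : "");
    return 0;
}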

Compile your code with -m64. As $N$ gets large, don't be surprised if your code gets very slow. It may be useful to keep an eye on it with 'top'.
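For example (gcc assumed; adjust to your compiler):

gcc -m64 -Wall -o test_qsort test_qsort.c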

Timing: when you run experiments for which timing is important, you need to minimize interference from other users/processes on that machine. Be careful before you run your tests: they may take a very long time, hang the machines, and generally interfere with what other users are doing. Do not use dover; it's a public machine, and you don't want to cause any trouble. For small and quick tests use any machine you want, but for the time-consuming tests use only the GIS machines: tuna (512MB), bass (1GB) and the grid (I'll send you a message with how to send jobs to the grid). Of course, once you start recording timings, use the same machine for consistency.

Common pitfalls: number overflow. Don't forget that an int can store numbers only in the range [~-2 billion, ~2 billion]. That is, when you try to allocate an array of 4 billion bytes, you cannot store this size in an int; you need to use longs.
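
For example (a sketch; it assumes 4-byte ints and 8-byte longs, which is what -m64 gives you on these machines):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int  n    = 1000 * 1000 * 1000;     /* one billion elements: fits in an int  */
    int  bad  = n * 4;                  /* 4 billion does not: this overflows    */
    long good = (long)n * sizeof(int);  /* promote to 64 bits before multiplying */
    printf("bad=%d good=%ld\n", bad, good);

    int *a = malloc((size_t)good);      /* a ~4GB request; needs a 64-bit build  */
    if (a == NULL)
        fprintf(stderr, "malloc failed\n");
    free(a);
    return 0;
}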

Recording the output: As you run more and more experiments, you will start to feel the need to automate the process. One thing that may help is to record all the outputs in a separate file:

$ test_qsort 1000 100 >& timings.txt
The >& redirects both standard output and standard error into the file, overwriting it; to append across experiments, use >>& (csh) or >> timings.txt 2>&1 (bash). When all experiments are done, you'll have all the output recorded in the file, for your perusal.

Scripting: Once you are sure the code runs correctly, you are ready to write a shell script to run all the experiments. Here is a script to use as an example.
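The gist of it, as a sketch in sh (the sizes and repetition counts below are placeholders, not the values you should use):

#!/bin/sh
# Sweep input sizes; use fewer repetitions as the runs get longer.
for n in 1000 10000 100000; do
    ./test_qsort -n $n -r 100 >> timings.txt 2>&1
done
for n in 1000000 10000000; do
    ./test_qsort -n $n -r 10 >> timings.txt 2>&1
done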

Accessing the grid: to come. The grid nodes can be accessed only through scripts, very much like the one above. Before running on the grid, use tuna and bass to make sure your scripts run as expected.

Report: Type your report in LaTeX. You can use the following template.

Hand in: Email me the code so that I can test it. Bring to class a hardcopy of the code and a report summarizing the tests that you wrote and the conclusions that you reached.


Last modified: Tue Feb 8 15:16:03 EST 2011