The following are some of the smaller, older projects I worked on during my undergrad and early grad years. Mostly C++ (some Pascal as well).

Download Windows executable

Bird animation, one of my first OpenGL projects. Bezier surfaces are built on top of a skeleton. I developed a Bezier patch combiner that automatically preserves continuity between adjacent patches and handles most of the overhead in building the model and scripting the animation. Thirteen control vectors drive the motion; once you move them, the Bezier patches are adjusted automatically. It took me three full days to finish once I had learned OpenGL. You can click the image to download a Win9x/NT executable along with the source code.
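
Below is a minimal sketch (not the original combiner) of how a bicubic Bezier patch can be evaluated from its 4x4 control grid and how C1 continuity can be enforced across a shared edge; the Vec3 type and function names are illustrative.

#include <array>
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 operator*(double s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Cubic Bernstein basis function B_i(t), i = 0..3.
double bernstein(int i, double t) {
    const double c[4] = {1, 3, 3, 1};
    return c[i] * std::pow(t, i) * std::pow(1 - t, 3 - i);
}

// Evaluate a bicubic Bezier patch from its 4x4 control grid at (u, v).
Vec3 evalPatch(const std::array<std::array<Vec3, 4>, 4>& p, double u, double v) {
    Vec3 sum{0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            sum = sum + bernstein(i, u) * bernstein(j, v) * p[i][j];
    return sum;
}

// C1 continuity across a shared edge: the boundary rows must coincide (C0),
// and the interior rows next to the boundary must be collinear with it,
// i.e. right[1][j] = 2 * right[0][j] - left[2][j] for each column j.
void enforceC1(const std::array<std::array<Vec3, 4>, 4>& left,
               std::array<std::array<Vec3, 4>, 4>& right) {
    for (int j = 0; j < 4; ++j) {
        right[0][j] = left[3][j];                         // shared boundary (C0)
        right[1][j] = 2.0 * right[0][j] - left[2][j];     // mirror interior row (C1)
    }
}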

AVHRR channel 6 (green band, i.e. mostly vegetation activity) visualization covering the whole world from April 1992 to September 1992. There are six datasets in total; images are interpolated to fill the missing gaps. The original visualization is 1024x444 pixels, 70 frames. I processed about 4 GB of satellite data to build it.

Serial and parallel volume renderer using the ray-casting algorithm, with interpolation and antialiasing. The image on the left is from the head dataset: I tried to isolate brain tissue by coloring the brain and some of the soft tissue green with increased opacity relative to the other parts (mostly reddish colors). This version requires regular voxels (brick elements of the same size). I implemented the parallel version using the MPI C libraries with my C++ code. There is also a version using unstructured tetrahedral elements; it uses an octree to speed up ray intersection calculations, though that is not necessarily the best approach. Parallelizing the structured version is trivial, much like ray tracing, but the unstructured tetrahedral version was trickier to parallelize.
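
A hedged sketch of the general technique follows: ray casting through a regular voxel grid with trilinear interpolation and front-to-back compositing. The grid layout, the transfer function thresholds, and all names are illustrative, not the original code.

#include <algorithm>
#include <cmath>
#include <vector>

struct RGBA { float r, g, b, a; };

struct Volume {
    int nx, ny, nz;
    std::vector<float> data;                 // scalar values, size nx*ny*nz
    float at(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1); y = std::clamp(y, 0, ny - 1); z = std::clamp(z, 0, nz - 1);
        return data[(z * ny + y) * nx + x];
    }
    // Trilinear interpolation at an arbitrary point inside the grid.
    float sample(float x, float y, float z) const {
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
        float fx = x - x0, fy = y - y0, fz = z - z0;
        float c00 = at(x0, y0,   z0) * (1 - fx) + at(x0 + 1, y0,   z0) * fx;
        float c10 = at(x0, y0+1, z0) * (1 - fx) + at(x0 + 1, y0+1, z0) * fx;
        float c01 = at(x0, y0,   z0+1) * (1 - fx) + at(x0 + 1, y0,   z0+1) * fx;
        float c11 = at(x0, y0+1, z0+1) * (1 - fx) + at(x0 + 1, y0+1, z0+1) * fx;
        return (c00 * (1 - fy) + c10 * fy) * (1 - fz) + (c01 * (1 - fy) + c11 * fy) * fz;
    }
};

// Example transfer function: map one density range to green ("brain tissue")
// and everything else to a reddish color; the thresholds are made up.
RGBA transfer(float v) {
    if (v > 0.4f && v < 0.6f) return {0.1f, 0.8f, 0.1f, 0.08f};
    return {0.6f, 0.2f, 0.1f, 0.02f};
}

// March one ray front to back, compositing until the sample is nearly opaque.
RGBA castRay(const Volume& vol, float ox, float oy, float oz,
             float dx, float dy, float dz, float step, int maxSteps) {
    RGBA out{0, 0, 0, 0};
    for (int i = 0; i < maxSteps && out.a < 0.98f; ++i) {
        float t = i * step;
        RGBA s = transfer(vol.sample(ox + t * dx, oy + t * dy, oz + t * dz));
        float w = (1 - out.a) * s.a;        // front-to-back compositing
        out.r += w * s.r; out.g += w * s.g; out.b += w * s.b; out.a += w;
    }
    return out;
}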

Robot arm simulator. You move the mouse around and draw some shapes, and the robot arm tries to follow you and do the same. A neural network constantly learns how to adjust the two angles of the arm so that it points to the location of your mouse. At the beginning it shakes considerably; after some time it stabilizes and does a good job of following you. It behaves as a plotter too: type your text and it will try to plot it as well as it can. This shot was taken after a few minutes of training; it gets better with time. I used my own user interface, with keyboard and mouse interrupt handlers hooked to the actual interrupts, so you can record all keystrokes and mouse actions. Once you record a sequence, you can clear the screen and replay it at different stages of learning to see how it improves over time. I also implemented kernel smoothing and the analytical solution to compare performance.
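
For context, here is a small sketch of a two-angle planar arm: the forward kinematics plus the closed-form inverse solution mentioned above as a comparison baseline. Link lengths and names are illustrative; the neural-network learner itself is not shown.

#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Forward kinematics: joint angles -> end-effector position.
Point forward(double l1, double l2, double a1, double a2) {
    return { l1 * std::cos(a1) + l2 * std::cos(a1 + a2),
             l1 * std::sin(a1) + l2 * std::sin(a1 + a2) };
}

// Analytical inverse kinematics (one of the two solutions) via the law of
// cosines. Returns false if the target is out of reach.
bool inverse(double l1, double l2, Point target, double& a1, double& a2) {
    double d2 = target.x * target.x + target.y * target.y;
    double c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2);
    if (c2 < -1 || c2 > 1) return false;          // unreachable
    a2 = std::acos(c2);
    a1 = std::atan2(target.y, target.x) - std::atan2(l2 * std::sin(a2), l1 + l2 * std::cos(a2));
    return true;
}

int main() {
    double a1, a2;
    if (inverse(1.0, 1.0, {1.2, 0.8}, a1, a2)) {
        Point p = forward(1.0, 1.0, a1, a2);
        std::printf("angles %.3f %.3f reach (%.3f, %.3f)\n", a1, a2, p.x, p.y);
    }
}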

Ray tracer. When I was writing this program, I had no idea what OpenGL was and I did not have access to a good library supporting truecolor modes, so I dug in and wrote my own graphics library in assembly using the VESA BIOS extensions. It turned out that my library was about 4 to 5 times faster at putting a pixel in truecolor mode than Borland's library, which only supported up to 16 colors at 640x480 resolution. I like this project not only for the graphics library but also for giving me the chance to make good use of inheritance and class abstraction in C++ for the first time. I later implemented a parallel version using MPI; parallelization was trivial apart from slight load-balancing issues, and it was fun to be able to generate an image in seconds instead of minutes.
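
A minimal sketch of the kind of class abstraction a ray tracer invites: an abstract Shape with a virtual intersection test and a Sphere subclass. The names and intersection math here are illustrative, not the original code.

#include <cmath>
#include <memory>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

struct Ray { Vec3 origin, dir; };   // dir assumed normalized

class Shape {
public:
    virtual ~Shape() = default;
    // Return true and the nearest hit distance t if the ray hits the shape.
    virtual bool intersect(const Ray& r, double& t) const = 0;
};

class Sphere : public Shape {
    Vec3 center; double radius;
public:
    Sphere(Vec3 c, double r) : center(c), radius(r) {}
    bool intersect(const Ray& r, double& t) const override {
        Vec3 oc = r.origin - center;
        double b = oc.dot(r.dir);
        double c = oc.dot(oc) - radius * radius;
        double disc = b * b - c;                 // quadratic discriminant
        if (disc < 0) return false;
        t = -b - std::sqrt(disc);                // nearest intersection
        return t > 0;
    }
};

// The scene is just a polymorphic list of shapes; the tracer walks it and
// keeps the closest hit.
using Scene = std::vector<std::unique_ptr<Shape>>;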

Quantization and dithering. I implemented and compared a number of color quantization and dithering methods. The input is a truecolor image with possibly tens of thousands of unique colors, and you want to reduce the number of colors considerably to compress the image or to match a particular output device's capabilities; sometimes your palette becomes as small as 4 or 16 colors. The first step is quantization, which requires building a histogram (I implemented a hash table keyed on RGB values to speed up histogram operations) and selecting palette entries while minimizing the quantization error. After that, dithering is used to build a final image as close to the original as possible with minimal visible artifacts (patterns, aliasing, etc.). For quantization I implemented k-means, first k modes, valley descending, and simulated annealing, and compared them to my own methods: recursive splitting and particle simulation. For dithering I compared nearest neighbor and error diffusion against my own method (a fuzzy dithering using probabilities and kernel functions). The image shows recursive splitting applied to Lena, with the number of colors reduced from 256 to 8. I developed a few useful libraries for this project: an image class with all sorts of operations (filtering, convolution, thresholding, histogram functions, etc.), a Targa class sitting on top of the image class that reads and writes uncompressed truecolor Targa files, and some other classes for hashing and set operations.
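
As a reference point, here is a hedged sketch of standard Floyd-Steinberg error diffusion, one of the dithering baselines mentioned above (not the fuzzy kernel method described in the text). Grayscale quantization is used to keep it short.

#include <algorithm>
#include <cmath>
#include <vector>

// Quantize a grayscale image (values 0..255) to `levels` gray levels,
// diffusing the quantization error to not-yet-visited neighbors.
void floydSteinberg(std::vector<float>& img, int w, int h, int levels) {
    float step = 255.0f / (levels - 1);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float old = img[y * w + x];
            float quantized = std::round(old / step) * step;   // nearest palette entry
            img[y * w + x] = quantized;
            float err = old - quantized;
            auto spread = [&](int dx, int dy, float wgt) {
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && nx < w && ny < h)
                    img[ny * w + nx] = std::clamp(img[ny * w + nx] + err * wgt, 0.0f, 255.0f);
            };
            spread( 1, 0, 7.0f / 16);   // classic Floyd-Steinberg weights
            spread(-1, 1, 3.0f / 16);
            spread( 0, 1, 5.0f / 16);
            spread( 1, 1, 1.0f / 16);
        }
    }
}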

Related document in PDF format

Function approximation demo that uses different methods to approximate a fourth-degree polynomial (regression, genetic algorithms, simulated annealing, neural nets).
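
A minimal sketch of the regression baseline: a least-squares fit of a polynomial of a given degree by solving the normal equations with Gaussian elimination. All names here are illustrative.

#include <cmath>
#include <utility>
#include <vector>

// Fit y ~ c0 + c1 x + ... + c_deg x^deg and return the coefficients.
std::vector<double> polyFit(const std::vector<double>& xs,
                            const std::vector<double>& ys, int deg) {
    int n = deg + 1;
    std::vector<std::vector<double>> a(n, std::vector<double>(n + 1, 0.0));
    // Build the normal equations A c = b from sums of powers of x.
    for (std::size_t k = 0; k < xs.size(); ++k)
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) a[i][j] += std::pow(xs[k], i + j);
            a[i][n] += ys[k] * std::pow(xs[k], i);
        }
    // Gauss-Jordan elimination with partial pivoting.
    for (int col = 0; col < n; ++col) {
        int pivot = col;
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[pivot][col])) pivot = r;
        std::swap(a[col], a[pivot]);
        for (int r = 0; r < n; ++r) {
            if (r == col) continue;
            double f = a[r][col] / a[col][col];
            for (int c = col; c <= n; ++c) a[r][c] -= f * a[col][c];
        }
    }
    std::vector<double> coeff(n);
    for (int i = 0; i < n; ++i) coeff[i] = a[i][n] / a[i][i];
    return coeff;
}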

I developed my own neural network class for this project: a template class with one hidden layer, a variable number of input and hidden units, and adjustable momentum and learning factors; basically an easy-to-use backpropagation library for C++.
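
The sketch below shows a one-hidden-layer backpropagation network with momentum along the lines described above (sigmoid units, adjustable learning rate and momentum). The sizes and names are illustrative, not the original template class.

#include <cmath>
#include <cstdlib>
#include <vector>

class BackpropNet {
    int nIn, nHid, nOut;
    std::vector<std::vector<double>> w1, w2;   // weights, last column is the bias
    std::vector<std::vector<double>> dw1, dw2; // previous updates, for momentum
    std::vector<double> hid, out;

    static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

public:
    double eta = 0.2, momentum = 0.8;

    BackpropNet(int in, int hidN, int outN)
        : nIn(in), nHid(hidN), nOut(outN),
          w1(hidN, std::vector<double>(in + 1)), w2(outN, std::vector<double>(hidN + 1)),
          dw1(hidN, std::vector<double>(in + 1, 0)), dw2(outN, std::vector<double>(hidN + 1, 0)),
          hid(hidN), out(outN) {
        for (auto& row : w1) for (double& w : row) w = std::rand() / (double)RAND_MAX - 0.5;
        for (auto& row : w2) for (double& w : row) w = std::rand() / (double)RAND_MAX - 0.5;
    }

    const std::vector<double>& forward(const std::vector<double>& x) {
        for (int h = 0; h < nHid; ++h) {
            double s = w1[h][nIn];                       // bias
            for (int i = 0; i < nIn; ++i) s += w1[h][i] * x[i];
            hid[h] = sigmoid(s);
        }
        for (int o = 0; o < nOut; ++o) {
            double s = w2[o][nHid];                      // bias
            for (int h = 0; h < nHid; ++h) s += w2[o][h] * hid[h];
            out[o] = sigmoid(s);
        }
        return out;
    }

    // One online backpropagation step; returns the squared error for this sample.
    double train(const std::vector<double>& x, const std::vector<double>& target) {
        forward(x);
        std::vector<double> dOut(nOut), dHid(nHid, 0.0);
        double err = 0;
        for (int o = 0; o < nOut; ++o) {
            double e = target[o] - out[o];
            err += e * e;
            dOut[o] = e * out[o] * (1 - out[o]);         // sigmoid derivative
        }
        for (int h = 0; h < nHid; ++h) {
            for (int o = 0; o < nOut; ++o) dHid[h] += dOut[o] * w2[o][h];
            dHid[h] *= hid[h] * (1 - hid[h]);
        }
        for (int o = 0; o < nOut; ++o)
            for (int h = 0; h <= nHid; ++h) {
                double in = (h == nHid) ? 1.0 : hid[h];
                dw2[o][h] = eta * dOut[o] * in + momentum * dw2[o][h];
                w2[o][h] += dw2[o][h];
            }
        for (int h = 0; h < nHid; ++h)
            for (int i = 0; i <= nIn; ++i) {
                double in = (i == nIn) ? 1.0 : x[i];
                dw1[h][i] = eta * dHid[h] * in + momentum * dw1[h][i];
                w1[h][i] += dw1[h][i];
            }
        return err;
    }
};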

I used a dynamic learning rate for the neural network training: if the error decreases, the learning rate is increased by a small factor, so while you are doing well in terms of error you keep increasing the step size. Once the error gets worse a few times, the learning rate is scaled down and you settle around the minimum. This turned out to work well for this problem.
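
A tiny sketch of that adaptive rule; the growth and decay factors and the patience threshold here are illustrative, not the original values.

double eta = 0.2, prevError = 1e9;
int badSteps = 0;

void adjustLearningRate(double error) {
    if (error < prevError) {
        eta *= 1.05;            // doing well: take slightly bigger steps
        badSteps = 0;
    } else if (++badSteps >= 3) {
        eta *= 0.5;             // error got worse a few times: settle down
        badSteps = 0;
    }
    prevError = error;
}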

Pisti, a simple card game. It is not particularly smart, but it uses probabilities and keeps track of the cards it has seen to decide on its next move, obviously without cheating. It has built-in rules and the rest is calculated with probabilities; it plays well on most occasions but does not learn from previous mistakes.

FMG, a simple board game. White has three pieces and black has one (the blue piece in the center is actually white, selected for movement); white wants to trap black in a corner. If white moves backward or a board configuration is repeated, black wins. Either side can be played by the computer: white played by the computer never loses, and black played by the computer always wins if white makes a bad move. I had not taken any computer graphics courses before this, so it took me some time to figure out the 3D equations. Although the search tree was not huge like chess, the search was time restricted. This was the only program with a very good win/loss ratio in the competition: it won every time it played white and lost only a few times when it played black against other programs. A graphical user interface was optional, and I had fun building a 3D one.
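
One generic way to handle a time-restricted move search is iterative deepening with a wall-clock cutoff, sketched below; the GameState here is a stand-in, not the FMG rules or the original program.

#include <algorithm>
#include <chrono>
#include <limits>
#include <vector>

struct GameState {
    // Stand-in interface: real rules (moves, repetition, win checks) go here.
    std::vector<GameState> children() const { return {}; }
    int evaluate() const { return 0; }               // heuristic score for white
    bool terminal() const { return children().empty(); }
};

using Clock = std::chrono::steady_clock;

int minimax(const GameState& s, int depth, bool maximizing,
            Clock::time_point deadline, bool& timedOut) {
    if (Clock::now() >= deadline) { timedOut = true; return s.evaluate(); }
    if (depth == 0 || s.terminal()) return s.evaluate();
    int best = maximizing ? std::numeric_limits<int>::min()
                          : std::numeric_limits<int>::max();
    for (const GameState& child : s.children()) {
        int v = minimax(child, depth - 1, !maximizing, deadline, timedOut);
        best = maximizing ? std::max(best, v) : std::min(best, v);
        if (timedOut) break;
    }
    return best;
}

// Deepen one ply at a time and keep the last fully searched result, so an
// answer is always available when the clock runs out.
int searchWithTimeLimit(const GameState& root, std::chrono::milliseconds budget) {
    auto deadline = Clock::now() + budget;
    int lastComplete = root.evaluate();
    for (int depth = 1; ; ++depth) {
        bool timedOut = false;
        int v = minimax(root, depth, true, deadline, timedOut);
        if (timedOut) break;
        lastComplete = v;
    }
    return lastComplete;
}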

Sort, a demonstration of the major sorting algorithms. It times various methods with varying numbers of records and record sizes, using random, sorted, or reverse-ordered input. It includes insertion sort, quicksort, merge sort, heap sort, and radix sort. For example, the graph on the left shows insertion sort with different numbers of elements (blue: random order, red: reverse order, green: already sorted).
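
A small sketch of the kind of timing harness this implies: run a sort on random, sorted, and reverse-ordered input of a given size and report the wall-clock time. The use of std::chrono and std::sort here is illustrative.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

template <typename Sorter>
double timeSort(Sorter sorter, std::vector<int> data) {   // data copied on purpose
    auto t0 = std::chrono::steady_clock::now();
    sorter(data);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    const int n = 100000;
    std::vector<int> sorted(n);
    std::iota(sorted.begin(), sorted.end(), 0);
    std::vector<int> reversed(sorted.rbegin(), sorted.rend());
    std::vector<int> random = sorted;
    std::shuffle(random.begin(), random.end(), std::mt19937{42});

    auto quick = [](std::vector<int>& v) { std::sort(v.begin(), v.end()); };
    std::printf("random:  %.2f ms\n", timeSort(quick, random));
    std::printf("sorted:  %.2f ms\n", timeSort(quick, sorted));
    std::printf("reverse: %.2f ms\n", timeSort(quick, reversed));
}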

GRP, a graph package: an interactive demonstration of various graph algorithms with a nice user interface (I developed the interface once and reused it in a good number of the programs above); you can add, remove, move, and connect nodes using just the mouse. It supports directed/undirected and weighted/unweighted graphs, and includes minimum spanning tree, DFS and BFS traversal, connected components, transitive closure, and more. It is handy for studying graph operations visually, as it shows every step in slow motion. I have similar programs for binary tree and balanced tree operations as well, but they are in text mode.
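
A brief sketch of two of the listed operations on an undirected, unweighted graph: breadth-first traversal and connected components. The adjacency-list representation and names are illustrative.

#include <queue>
#include <vector>

struct Graph {
    int n;
    std::vector<std::vector<int>> adj;
    explicit Graph(int nodes) : n(nodes), adj(nodes) {}
    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }
};

// BFS from `start`; returns the visit order.
std::vector<int> bfs(const Graph& g, int start) {
    std::vector<bool> seen(g.n, false);
    std::vector<int> order;
    std::queue<int> q;
    q.push(start); seen[start] = true;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (int v : g.adj[u])
            if (!seen[v]) { seen[v] = true; q.push(v); }
    }
    return order;
}

// Label each node with the id of its connected component.
std::vector<int> connectedComponents(const Graph& g) {
    std::vector<int> comp(g.n, -1);
    int id = 0;
    for (int s = 0; s < g.n; ++s)
        if (comp[s] == -1) {
            for (int u : bfs(g, s)) comp[u] = id;
            ++id;
        }
    return comp;
}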

Copyright © Bilgehan Uygar Oztekin