Lab03: Superposimetric Image Processing

Quantigraphic sensing is based on the linearity and superposition properties of light, i.e. comparametrics and superposimetrics. Comparametric equations in image processing relate g(f(q(x,y))) to f(q(x,y)), i.e. we consider f(q) together with g = f(kq): the response to a scaled photoquantity expressed as a function of the response to the original. In superposimetrics we consider f(q1(x,y) + q2(x,y)), i.e. additivity of lightspace, and more generally, f(k1 q1(x,y) + k2 q2(x,y)).

Superposimetrics is useful for both image analysis and image synthesis, e.g. for both sensing and display (e.g. overlay of augmented reality content).

In this lab you will learn the fundamentals of superposimetrics, including the basis upon which superposimetric equations are built.

Let us begin with Part A of the lab by understanding how multiple-exposure photography works, and more generally, the concept of exposures in images.

In Lab 2 you already learned what a camera measures, i.e. the photoquantity, q.

Indeed, the photocell experiment can be done superposimetrically, i.e. by exposure to one light source, then to the other light source, and then to both light sources together at the same time. You might have done this in Lab 2, but with the two light sources contributing exactly equally. Here that equal contribution is no longer a requirement, i.e. we generalize what we learned in Lab 2!

The exposure to the first light source is f1 = f(k1 q1).
The exposure to the second light source is f2 = f(k2 q2).
The exposure to both light sources we will call f3 = f(k1 q1 + k2 q2), while recognizing that these are typically estimates of a true process subject to various sources of noise, uncertainty, etc.
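As a sketch of the above (assuming, purely for illustration, a simple power-law camera response f(q) = q^(1/2.2); real response curves differ), note that superposition holds for the photoquantities, not for the responses:

```python
import numpy as np

def f(q, gamma=2.2):
    # Hypothetical camera response: a simple power law.
    # (An assumption for illustration; real response curves differ.)
    return np.clip(q, 0.0, None) ** (1.0 / gamma)

# Made-up photoquantities q1, q2 and sensitivities k1, k2
q1, q2 = 0.30, 0.50
k1, k2 = 1.0, 0.8

f1 = f(k1 * q1)              # exposure to lamp 1 alone
f2 = f(k2 * q2)              # exposure to lamp 2 alone
f3 = f(k1 * q1 + k2 * q2)    # exposure to both lamps together

# Superposition holds in lightspace (the q's add), not in image space:
print(f1 + f2 > f3)  # → True: the responses do not simply add
```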

In Part A of the lab, we will understand how to combine differently illuminated pictures of the same subject matter using the CEMENT = Computer Enhanced Multiple Exposure Numerical Technique;
you can browse wearcam.org/cement/ or download all of it from wearcam.org/cement.tgz

You will post your Part A results to the following Instructable:
https://www.instructables.com/Shooting-for-a-Homepage-Feature-Timelapse-and-Mult/

For Part A, simply find a dark space where there are only 2 sources of light that you have control over. I'll call them lamp 1 and lamp 2. One might be a desk lamp and the other might be a ceiling lamp. For best results they should overlap nicely but still be distinct. Turn on only lamp 1 and take a picture; that picture is called f1. Turn lamp 1 off, turn on lamp 2, and take a second picture with only lamp 2 on; that second picture is called f2. Then turn on both lamps and take a third picture with both lamps on; that third picture is called f3. Set the camera to manual exposure if you can, as this will give the best results. If there is no way to do that, try to get a well-balanced lighting configuration so the exposures are somewhat "reasonable".

Capturing the set of pictures, as outlined above, 1/5

Now assume a simple law-of-composition on the two pictures as follows: f12 = (f1^p + f2^p)^(1/p). You can combine them using the CEMENT program, adjusting powLookup22.txt (for which the default power is 2.2), or you can combine them using code you write yourself. Try a variety of different powers, p. For example, when p=2 you will square each of the pictures, add them, and then take the square root. f12 should look similar to f3 because it synthesizes the process of combining the effects of the two lights.
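If you write the combination yourself rather than using CEMENT, a minimal numpy sketch of the law of composition might look like this (image values assumed normalized to [0, 1]; the toy 2x2 arrays stand in for your photographs):

```python
import numpy as np

def combine(f1, f2, p):
    # Law of composition f12 = (f1^p + f2^p)^(1/p), applied per pixel.
    # Images are assumed normalized to [0, 1]; clip back into range.
    return np.clip((f1 ** p + f2 ** p) ** (1.0 / p), 0.0, 1.0)

# Toy 2x2 "images" standing in for your photographs f1 and f2
f1 = np.array([[0.2, 0.5], [0.7, 0.1]])
f2 = np.array([[0.3, 0.4], [0.6, 0.9]])

f12 = combine(f1, f2, 2.2)   # 2.2 is the CEMENT default power
```

At p = 1 this is plain (clipped) addition; at p = 2 it is the square / add / square-root recipe described above.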

Show some examples of combined images (variations of f12) for various values of p, 2/5

Compute an error such as MSE (mean square error) between f12 and f3 for each value of p that you try. The MSE is just the average of the squared difference between the images, i.e. a single number, computed over the entire picture, that shows on average how different f12 is from f3. What value of p gives you the lowest error?
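A minimal sketch of the MSE computation in numpy (assuming the two images are arrays of the same size):

```python
import numpy as np

def mse(a, b):
    # Mean square error: the average of the squared per-pixel
    # difference -- one number summarizing how different a and b are.
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

print(mse([0.0, 1.0], [0.0, 0.5]))  # → 0.125
```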

Plot a graph of error as a function of p, 1/5.

Determine the p value that results in the minimum, 1/5.
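Putting the pieces together, here is a self-contained sketch of the sweep over p (definitions are restated so the snippet runs on its own; f3 is synthesized with a known power purely so this toy example has a recoverable minimum — with your real photographs you would load f1, f2, and f3 from files instead):

```python
import numpy as np

def combine(f1, f2, p):
    # Law of composition f12 = (f1^p + f2^p)^(1/p), clipped to [0, 1]
    return np.clip((f1 ** p + f2 ** p) ** (1.0 / p), 0.0, 1.0)

def mse(a, b):
    # Average of the squared per-pixel difference
    return float(np.mean((a - b) ** 2))

# Toy stand-ins: f3 is synthesized with a known power (2.2) purely so
# the sweep has a recoverable minimum; use real photographs in the lab.
rng = np.random.default_rng(0)
f1 = rng.uniform(0.05, 0.6, size=(64, 64))
f2 = rng.uniform(0.05, 0.6, size=(64, 64))
f3 = combine(f1, f2, 2.2)

ps = np.linspace(0.5, 4.0, 36)           # candidate powers, step 0.1
errors = [mse(combine(f1, f2, p), f3) for p in ps]
best_p = float(ps[int(np.argmin(errors))])
print(best_p)  # lands at 2.2 for this synthetic f3
```

Plotting errors against ps (e.g. with matplotlib) gives the graph asked for above.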

Part B:

Part B is an opportunity to understand wearable computing and augmented reality overlays, dashboards, etc.

Use CEMENT to overlay an electrocardiogram and heart rate information onto a video feed. You can record your own ECG waveform together with video, or you can use data captured from the WearCam™/WearComp™ eyeglass and wearable ECG. Here is a single frame of 4k 60p video captured from the WearCam™/WearComp™ eyeglass:

and here is an oscillograph picture (HP1200A cathode ray oscilloscope) and a numerals picture (Nixie tubes, which are miniature neon-filled gas-discharge tubes containing ten stacked numeral-shaped cathodes in each tube, one shaped like each numeral from 0 to 9):

Use CEMENT to combine these three images together while trying different exponents, p, and this time, try to find visually (artistically) which value of p works best in terms of creating the best overall picture.
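The two-image law of composition extends naturally to three (or more) images. A sketch, with toy constant-valued frames standing in for the video, oscillograph, and Nixie pictures:

```python
import numpy as np

def cement_overlay(images, p):
    # Same law of composition, extended to any number of images:
    # f = (sum_i f_i^p)^(1/p), applied per pixel, clipped to [0, 1].
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return np.clip(np.sum(stack ** p, axis=0) ** (1.0 / p), 0.0, 1.0)

# Toy frames standing in for the video, oscillograph, and Nixie images
video = np.full((4, 4), 0.4)
scope = np.full((4, 4), 0.2)
nixie = np.full((4, 4), 0.1)

overlay = cement_overlay([video, scope, nixie], p=2.2)
```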

Your result might look something like this:

What value of p resulted in the best or most natural-looking image? 1/5.

Show some example images for various values of p. Feel free to get creative in using CEMENT. 1/5.

Design an augmented reality eyeglass "dashboard", e.g. for real-life physical fitness. Show an electrocardiogram as well as a heart rate derived from the electrocardiogram, overlaid onto the video stream. You can use your own electrocardiographic heart monitor and your own video feed, or you can use recorded data from elsewhere here, such as from a February 17th run and swim. 3/5.

To keep processing simple, choose a relatively short video sequence in which the ECG data is good. In ECG the electrodes sometimes fall off during intense physical activity, so it is useful to detect data quality and choose a segment of data that is good.
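One simple (and admittedly crude) screen for data quality is to flag windows whose peak-to-peak amplitude is implausible; the thresholds below are illustrative assumptions, not clinical values:

```python
import numpy as np

def window_quality(ecg, fs, win_s=2.0, lo=0.05, hi=3.0):
    # Crude quality screen: a window is "good" if its peak-to-peak
    # amplitude is neither implausibly small (flatline / electrode
    # fell off) nor implausibly large (motion artifact).
    # lo and hi are illustrative thresholds, not clinical values.
    n = int(win_s * fs)
    flags = []
    for i in range(len(ecg) // n):
        w = ecg[i * n:(i + 1) * n]
        flags.append(lo < float(np.ptp(w)) < hi)
    return flags

# Synthetic signal: 2 s of a clean-ish waveform, then 2 s of flatline
fs = 250
t = np.arange(0, 2, 1.0 / fs)
ecg = np.concatenate([0.5 * np.sin(2 * np.pi * t), np.zeros(len(t))])
print(window_quality(ecg, fs))  # → [True, False]
```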

Dataset for fun and learning

Here's an approximate instance of a Wyckoff set as used in one of the video lectures:
wyckoffsetHDRexample
which you might have fun with, e.g. by constructing pairwise comparagrams... an excellent way to learn more about comparametrics.

You can browse that data in the directory:
wyckoffsetHDRexample
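A pairwise comparagram is just the joint histogram of corresponding pixel values in two differently exposed images of the same scene. A sketch for 8-bit images (the gamma-curved toy pair here is purely illustrative; use two exposures from the Wyckoff set instead):

```python
import numpy as np

def comparagram(f_a, f_b, bins=256):
    # Joint histogram J[a, b]: how often a pixel has value a in one
    # exposure and value b in the other. 8-bit value range assumed.
    J, _, _ = np.histogram2d(np.ravel(f_a), np.ravel(f_b),
                             bins=bins, range=[[0, 256], [0, 256]])
    return J.astype(int)

# Toy pair: the second "exposure" is a gamma-curved copy of the first
# (purely illustrative -- use two images from the Wyckoff set instead)
a = np.arange(256, dtype=float).reshape(16, 16)
b = np.clip(255.0 * (a / 255.0) ** (1.0 / 2.2), 0.0, 255.0)
J = comparagram(a, b)
```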

Reference citations:

https://www.instructables.com/Shooting-for-a-Homepage-Feature-Timelapse-and-Mult/
https://projet.liris.cnrs.fr/imagine/pub/proceedings/ICIP-2007/pdfs/0400233.pdf