# Lab03: Superposimetric Image Processing

Lab 3 for 2023: do the following Instructable and post your results there: https://www.instructables.com/3D-Metavision-Using-a-2D-Computer-Screen-by-Way-of/ on or before 3 pm Wednesday, March 1st. We'll review the postings in class to prepare you for the next day's presentations.

• 1/10 Take one or more pairs of images that differ only in exposure. To do this, fixture the camera securely, e.g. on a tripod or in a firm holding position. Set the camera to manual settings, or download an app that allows manual settings. Shoot at the same ISO and the same aperture, but vary the shutter speed. Choose subject matter that is stationary (not moving) and lighting that does not vary or flicker over time. Any reasonable photographic subject will usually work, as long as it has a good range of light and tone. Hint: try k=2 to begin with, but feel free to experiment. If you shoot more than one pair of images, make sure they all have the same k value. Include the pictures in your submission. Hint: check out some image pairs from last year: http://wearcam.org/ece516/wyckoffsetHDRexample/vindex.htm where they are actually just a sequence of images of varying exposure, so you can compute pairwise comparagrams and average them all together. Check out the YouTube video from a previous year's lecture showing this procedure.
• 2/10 Compute the comparagram(s) of the image pair(s) and also include these in your submission. If the comparagram(s) is (are) not well-populated, repeat with different subject matter or use more pairs. If you shoot multiple pairs with the same k value, you can average the comparagrams together to fill out the space better.
• 2/10 Compute the comparagraph and solve it to determine your camera's response function.
• BONUS/10 (optional) Look up or determine your camera's response function if you can find it, and compare to what you computed.
• 1/10 Capture at least two metaveillance photographs of veillance squares as per the Instructable, each one with the screen a different distance from your 1-pixel test camera (see the Instructable).
• 2/10 Combine the two pictures using the law of composition shown in the Instructable. Include the combined result in your submission.
• 2/10 Repeat for multiple exposures and combine the multiple exposures to generate a densely populated metaveillograph.
• BONUS/10 (optional) Animate the veillance flux, like in this example. There's an 8-step process to doing multiple exposure photography here.
• BONUS/10 (optional) Capture one or more pairs of pictures that differ only in illumination and compute the superposigram. From this superposigram, determine your response function and compare the superposimetric analysis with comparametric analysis.
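The comparagram in the steps above is just the joint histogram of corresponding pixel values in the two exposures. Here is a minimal sketch in Python (numpy assumed); the synthetic gamma-2.2 response and k=2 exposure ratio are illustrative stand-ins for your real image pair, not the course's reference implementation:

```python
import numpy as np

def comparagram(f, g, bins=256):
    """Joint histogram of pixel values: C[m, n] counts pixels where
    the first exposure has value m and the second has value n."""
    f = np.asarray(f).ravel()
    g = np.asarray(g).ravel()
    C, _, _ = np.histogram2d(f, g, bins=bins, range=[[0, bins], [0, bins]])
    return C

# Toy stand-in: a synthetic camera with response f(q) = q**(1/2.2),
# photographing the same scene at exposures differing by k = 2.
q = np.linspace(0.01, 0.5, 10000)                       # photoquantities
f1 = np.clip(255 * q ** (1 / 2.2), 0, 255).astype(np.uint8)
f2 = np.clip(255 * (2 * q) ** (1 / 2.2), 0, 255).astype(np.uint8)
C = comparagram(f1, f2)
print(C.shape)  # (256, 256)
```

To average comparagrams from several pairs with the same k, simply sum the C arrays element-wise before plotting.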

## For your reference, here's last year's (2022) lab which is very similar:

Quantigraphic sensing is based on the linearity and superposition properties of light, i.e. comparametrics and superposimetrics. Comparametric equations in image processing express g(q(x,y)) as a function of f(q(x,y)), where we consider f(q) and g = f(kq). In superposimetrics we consider f(q1(x,y) + q2(x,y)), i.e. additivity of lightspace, and more generally f(k1 q1(x,y) + k2 q2(x,y)).

Superposimetrics is useful for both image analysis and image synthesis, e.g. for both sensing and display (e.g. overlay of augmented reality content).

In this lab you will understand the fundamentals of superposimetrics, including the basis upon which superposimetric equations are built.

Let us begin with Part A of the lab by understanding how multiple-exposure photography works, and more generally, the concept of exposures in images.

In Lab 2 you already learned what a camera measures, i.e. the photoquantity, q.

Indeed, the photocell experiment can be done superposimetrically, i.e. by exposure to one light source, then to another light source, and then to both light sources together at the same time. You might have done this in Lab 2, but with the two light sources contributing exactly equally. Here that equal contribution is no longer a requirement, i.e. we generalize what we learned in Lab 2!

The exposure to the first light source is f1 = f(k1 q1).
The exposure to the second light source is f2 = f(k2 q2).
The exposure to both light sources, we will call f3 = f(k1 q1 + k2 q2), while recognizing that these are typically estimates of a true process to which there are various sources of noise, uncertainty, etc..
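The additivity is easy to check numerically. In the toy sketch below, an idealized gamma-2.2 response f(q) = q^(1/2.2) stands in for your real camera's response (an assumption for illustration): the exposures f1 and f2 do not add in image space, but the recovered photoquantities do.

```python
gamma = 2.2
f = lambda q: q ** (1 / gamma)       # assumed toy response function
f_inv = lambda v: v ** gamma         # its inverse, recovering q

q1, q2 = 0.10, 0.25                  # photoquantity from each lamp alone
f1, f2 = f(q1), f(q2)
f3 = f(q1 + q2)                      # both lamps on together

# Image values do NOT add (f1 + f2 != f3)...
print(abs((f1 + f2) - f3))
# ...but the recovered photoquantities do: q1 + q2 == q3
print(abs((f_inv(f1) + f_inv(f2)) - f_inv(f3)))
```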

In Part A of the lab, we will understand how to combine differently illuminated pictures of the same subject matter using CEMENT (Computer Enhanced Multiple Exposure Numerical Technique).

You will post your Part A results to the following Instructable:
https://www.instructables.com/Shooting-for-a-Homepage-Feature-Timelapse-and-Mult/

For Part A, simply find a dark space with only 2 sources of light that you have control over. I'll call them lamp 1 and lamp 2. One might be a desk lamp and the other might be a ceiling lamp. For best results they should overlap nicely but still be distinct. Turn on only lamp 1 and take a picture; that picture is called f1. Turn lamp 1 off, turn on lamp 2, and take a second picture with only lamp 2 on; that second picture is called f2. Then turn on both lamps and take a picture with both lamps on; that third picture is called f3. Set the camera to manual exposure if you can, as this gives the best results. If there is no way to do that, try to get a well-balanced lighting configuration so the exposures are somewhat "reasonable".

Capturing the set of pictures, as outlined above, 1/5

Now assume a simple law of composition on the two pictures as follows: f12 = (f1^p + f2^p)^(1/p). You can combine them using the CEMENT program (adjusting powLookup22.txt, for which the default power is 2.2), or using code you write yourself. Try a variety of different powers, p. For example, when p=2 you square each of the pictures, add them, and then take the square root. f12 should look similar to f3 because it is a synthesis of the process of combining the effects of the two lights.
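If you write your own code instead of using CEMENT, the law of composition is essentially one line. A minimal sketch (numpy, with pixel values assumed scaled to [0, 1]):

```python
import numpy as np

def compose(f1, f2, p):
    """Law of composition f12 = (f1^p + f2^p)^(1/p),
    on pixel values scaled to [0, 1]."""
    f1 = np.asarray(f1, dtype=np.float64)
    f2 = np.asarray(f2, dtype=np.float64)
    return np.clip((f1 ** p + f2 ** p) ** (1.0 / p), 0.0, 1.0)

# p = 2: square each picture, add, then take the square root
a = np.array([[0.3, 0.4]])
b = np.array([[0.4, 0.3]])
print(compose(a, b, 2.0))  # [[0.5 0.5]]
```

The clip guards against overflow past white; in a real pipeline you would load f1 and f2 from your photographs and divide by 255 first.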

Show some examples of combined images (variations of f12) for various values of p, 2/5

Compute an error such as MSE (mean square error) between f12 and f3 for each value of p that you try. The MSE is just the average of the squared difference between the images, i.e. a single number over the entire picture that shows, on average, how different f12 is from f3. What value of p gives you the lowest error?

Plot a graph of error as a function of p, 1/5.

Determine the p value that results in the minimum, 1/5.
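The error sweep and minimum can be sketched as follows. The synthetic gamma-2.2 images here are placeholder assumptions standing in for your three photographs (loaded as float arrays in [0, 1]); with this idealized response the minimum lands near p = 2.2.

```python
import numpy as np

def mse(a, b):
    """Mean square error: average of the squared pixel difference."""
    return float(np.mean((a - b) ** 2))

# Toy stand-ins for f1, f2, f3; replace with your own photographs,
# loaded as float arrays scaled to [0, 1].
rng = np.random.default_rng(0)
q1 = rng.uniform(0.05, 0.4, size=(64, 64))
q2 = rng.uniform(0.05, 0.4, size=(64, 64))
f = lambda q: q ** (1 / 2.2)          # assumed synthetic response
f1, f2, f3 = f(q1), f(q2), f(q1 + q2)

ps = np.arange(1.0, 4.01, 0.1)
errors = [mse((f1 ** p + f2 ** p) ** (1 / p), f3) for p in ps]
best_p = ps[int(np.argmin(errors))]
print(best_p)
```

Plotting `errors` against `ps` (e.g. with matplotlib) gives the graph asked for above, and `best_p` is the minimizing value.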

# Part B:

Part B is an opportunity to understand wearable computing and augmented reality overlays, dashboards, etc.

Use CEMENT to overlay an electrocardiogram and heart rate information onto a video feed. You can record your own ECG waveform together with video, or you can use data captured from the WearCam™/WearComp™ eyeglass and wearable ECG. Here is a single frame of 4k 60p video captured from the WearCam™/WearComp™ eyeglass:

and here is an oscillograph picture (HP1200A cathode ray oscilloscope) and a numerals picture (Nixie tubes, which are special miniature gas-discharge tubes in which there are 10 numeral-shaped cathodes in each tube, one shaped like each numeral from 0 to 9):

Use CEMENT to combine these three images together while trying different exponents, p, and this time, try to find visually (artistically) which value of p works best in terms of creating the best overall picture.

Your result might look something like this:

What value of p resulted in the best or most natural-looking image? 1/5.

Show some example images for various values of p. Feel free to get creative in using CEMENT. 1/5.

Design an augmented reality eyeglass "dashboard", e.g. for real-life physical fitness. Show an electrocardiogram as well as a heart rate derived from the electrocardiogram, overlaid onto the video stream. You can use your own electrocardiographic heart monitor and your own video feed, or you can use recorded data from elsewhere here, such as from a February 17th run and swim. 3/5.

To keep processing simple, choose a relatively short video sequence in which the ECG data is good. In ECG the electrodes sometimes fall off during intense physical activity, so it is useful to detect data quality and choose a segment of data that is good.
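One simple way to flag good data, offered as a heuristic rather than the method used in the course: when an electrode falls off, the trace typically goes flat or rails, so windows whose peak-to-peak amplitude is implausible can be discarded. A sketch (numpy; the thresholds, window length, and the synthetic sine-wave stand-in are all assumptions):

```python
import numpy as np

def good_segments(ecg, fs, win_s=2.0, lo=0.05, hi=3.0):
    """Flag fixed-length windows whose peak-to-peak amplitude is
    plausible; flat-line dropouts and rail-to-rail noise fail."""
    n = int(win_s * fs)
    flags = []
    for start in range(0, len(ecg) - n + 1, n):
        ptp = float(np.ptp(ecg[start:start + n]))  # peak-to-peak
        flags.append(lo < ptp < hi)
    return flags

# Synthetic example: 10 s of plausible signal, then a flat dropout.
fs = 250
good = np.sin(2 * np.pi * 1.2 * np.arange(10 * fs) / fs)  # ~72 bpm proxy
bad = np.zeros(10 * fs)                                   # lead-off: flat
flags = good_segments(np.concatenate([good, bad]), fs)
print(flags)  # first half True, second half False
```

Once the flags are computed, pick the longest run of True windows as your segment.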

## Dataset for fun and learning

Here's an approximate instance of a Wyckoff set as used in one of the video lectures:
wyckoffsetHDRexample
which you might have fun with, e.g. constructing pairwise comparagrams; an excellent way to learn more about comparametrics.


## Reference citations:

https://www.instructables.com/Shooting-for-a-Homepage-Feature-Timelapse-and-Mult/
https://projet.liris.cnrs.fr/imagine/pub/proceedings/ICIP-2007/pdfs/0400233.pdf