Woodworking: MTG Land Station

I've been playing around with the idea of woodworking for pretty much the whole year now, watching videos on YouTube and tearing through a few books on the topic. I did a few simple projects leaning heavily on the laser cutter to do all the operations, but that isn't really woodworking. I finally decided to put together a land station, that is, a box for people to grab basic lands on those rare times a draft comes together!

The design is fairly simple: four planks of wood for the sides of the box, with 45 degree mitered edges and dados (slots) cut into the front and back panels to hold four dividers, plus a flat plank as a bottom. The dimensions of a card are (roughly) 3.5 inches tall and 2.5 inches wide, and I ran with those for my first attempt. Trying to put a 45 degree miter along a 3.5 inch edge with a hand saw was a losing battle, and the prospect of cutting eight dados with a router plane (something like this) sounded frustrating. After quite a bit of hemming and hawing, I eventually bit the bullet and bought a router, some bits, and a table for it. While there was a sale going on at the time, I certainly had to convince myself that I'd be excited to use it for more than just this one project.

The dimensions were driven in part by the cheap wood I had access to, namely long planks of quarter-inch thick, 3.5 inch wide pine. I kept the height and stuck with a single thickness to simplify the sawing operations, which are harder than they look. I ended up using a cheap clamping miter box to establish a perpendicular cut line, and then clamped the piece to a heavy piece of scrap for the remainder of the cut to prevent tear-out (an issue that frustrated me enormously at first).

My initial design had tolerances that ended up being too tight; it would have been impossible to get the cards in and out. The final dimensions of each piece are laid out below. The critical number turned out to be 2.75 inches: the width of the empty space (measured from divider edge to divider edge, not centers) for each card "lane". If that sounds a little too big, it's because it is, but it means a small error one way or the other won't prevent cards from getting into or out of the box. In the future I'll probably shave 1/8th of an inch off that value to reduce card "jiggle" in something like a deck box.

  • 2x sides (L/R) - 3.5" x 5.5" x 0.25"
  • 2x sides (F/B) - 3.5" x 15.25" x 0.25"
  • 1x bottom - 5.5" x 15.25" x 0.25"
  • 4x dividers - 3.5" x 5.25" x 0.25"

The dados to retain the dividers are 1/8th (0.125) inch deep on the front and the back, and were cut with the fence fixed to ensure they ended up aligned. I had to make up a 90 degree jig by clamping some heavy blocks to cut the furthest-in dados, as the fence can only move about 5 inches back from the bit, but it worked well enough. The miters were put on with a 45 degree router bit, over many passes to carefully creep up on the proper depth. Once all the cuts were done, the front panel was off to the laser.

The pattern was generated using the vector mana symbols generously posted by Goblin Hero over at Slightly Magic; they just had to be scaled and moved around to fit the panel. I put down some masking tape to prevent resin deposition on the wood, but it also seems to have caused some line artifacts in the final cut, likely due to "thick" overlaps attenuating the beam. In the future I'll probably avoid using tape and just sand the surface clean afterward, as the residual adhesive also appeared to interfere with the dye and oil in a few places. For reference, it was cut on an Epilog Ext36 150W in raster mode at 600 DPI, 100% speed, 70% power, in a single pass.

Cut pieces laid out for dry-fitting

After sanding all the sides with a ~250 grit sanding sponge, the sides and panels were glued together using Titebond and a 90 degree clamp, something I didn't even know existed before needing one. I was able to snugly fit the dividers into the back panel without glue, and press the front panel on for gluing.

Gluing the front panel (bottom is not attached)

After letting it dry overnight, I was ready to dye and finish it. I'd experimented with some scrap wood from the same boards to see how the dye and oil finishes would look, and settled on TransTint golden brown diluted in water, and a Danish oil finish. I applied the dye carefully, given all the warnings it comes with, and gave it plenty of time to dry. Then the oil finish went on and took all night to set; I opted for a single coat as I wasn't looking for a shiny or silky appearance, just sealed. At this point I finally glued on the bottom and gave it a few hours to set.

Nest of clamps

Finished land station

It definitely took a lot more time, effort, and learning to finish this than I anticipated, but I am happy with how it came out. I've already gotten a lot of good suggestions for improving it (e.g. cutting semicircular access holes at the front of each row so you can always get at the cards, and adding a lid), but I will probably move on to other projects for the time being. Next on my list is a commander deck box, and after that, a substantially more intricate box for my cube to live in. I've got to spread out that tooling cost somehow!

Bleached Shirts with a Laser Cutter

With a whole week off around Thanksgiving, Ouliana and I finally had time to test out a method for templating bleached shirts we'd seen online. It needs freezer paper, which as far as I can tell is butcher paper with wax on one side only. The plan: cut out your pattern, iron the waxed side onto the shirt, apply your bleach-water solution, and peel off the mask. The twist is that cutting precise patterns by hand is a pain, but a laser should be able to make quick work of it!

For the pattern, I came across this image of Samus from the Metroid games, posted by terrorsmile on DeviantArt. I wanted the pattern to be a true stencil, meaning it needed at least one totally contiguous region to act as the "mask". That took some doing, about an hour of work in GIMP, but I ended up with an inverted stencil that could be cut without producing "islands". I'm a bit reluctant to share the file, as it's based so closely on someone else's work, but the process is fairly straightforward (the magic-wand selection tool will immediately show you any "islands" left in your image).

The other issue that came up was that freezer paper tends to curl (it does come on a roll), so it had to be taped down at the edges to a rigid substrate, scrap acrylic in this case. The second pattern, which Ouliana made by hand, was a stencil of the "doom guy" dolls from the 2016 Doom game, seen getting ironed on below.

Ironing the freezer paper

The positive stencil of the doom guy did end up having "islands", meaning a few pieces of freezer paper had to be carefully placed and individually ironed on. Also, having a positive pattern meant needing to block off the rest of the shirt with extra paper to prevent any stray bleaching. The negative stencil, shown below just after ironing, needed no additional masking.

Samus stencil applied

I opted for a slow and regular application of a 50/50 bleach-and-water solution: spraying a few times, giving it 5-10 minutes to act and dry, and repeating that roughly three times. The shirts were then rinsed out in the shower and immediately washed. We noticed that setting the iron too low resulted in poor bonding, so the paper would "pop" off the shirt, while ironing too hot left small beads of wax around the edges of the stencil that remained after peeling away the paper. They can be picked off by hand, but it is a pain. The final result of my shirt attempt is below!

Samus shirt!

Dimir EDH Box with Nacre Inlay

It's been a while since I've put up an update, but that doesn't mean I've been idle. I found out that I have access to a beefy laser cutter through my work, and have been throwing various projects at it over the last few weeks. One of the first things I wanted to try was precision inlay and making a rudimentary deck box. My first target was my Lazav EDH deck, and it provided a good opportunity to get my feet wet with laser cutting, inlay, and wood staining. The material itself comes from a 1/4" thick oak project board. I had pondered putting on a few layers of shellac or other finish to smooth it out, but decided that I'd already learned what I'd wanted to, and kinda preferred the woodgrain look.

Showing off the inlay colors

The lid fits, but not perfectly

The ebony stain looks almost exactly like I wanted it to

I didn't bother to finish the interior on this one; a final version would definitely have a lining.

The pattern for the box itself was generated using MakerCase, but I think for later projects I'm going to opt for making the joints myself to ensure they fit. There seemed to be some inconsistency in the laser kerf (the width of material removed by the beam, analogous to a saw blade's kerf) that prevented perfect mating, and I haven't dialed in the compensation for that.

Overall this served as a proof of method and tooling for doing nacre inlay on later projects, and bonus points: it still works as a box.

Recognizing Cards - Finding the Match

In the previous two posts in this series (1, 2) I talked about capturing a reliable image of the art on a Magic card from a webcam, and how we can hash those images for an effective and mostly efficient comparison. Now we're left with a hash for the card we'd like to match, and an enormous table of hashes for all the possible matches (something like 18,000 cards). The first question is how to compare the hashes in a meaningful way, but thankfully this is made easy by the nature of phash and its documentation. The Hamming distance, or the number of dissimilar characters between the strings, is the metric of choice. This method is demonstrated step-wise below, first for the correct target, then for a random non-target image.

When compared to the actual match, the Hamming distance is 9. Mismatched characters are highlighted in red. While the cutoff value might take some fine tuning, for our purposes (and my camera), 9 is a relatively strong match.

Compared to an incorrect image, the Hamming distance is 15. The single matched character is effectively within the noise.

Hamming distance is a reliable metric for this hashing approach, and is relatively easy to compute once the hashes are all calculated. That being said, we don't really want to do 18,000 x 16 character-by-character comparisons to determine which card we're looking at. Instead we can use a binary search tree. There is a lot already written on binary trees, but the short version is this: by splitting space up, one can iteratively narrow down the possibilities rather than look at each candidate individually. It does require that you take the time to build the tree, but individual searches become substantially faster.
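
For concreteness, here's a minimal sketch of that string-wise Hamming distance; it assumes the hashes are equal-length ASCII strings, like the hex strings produced by imagehash.

```python
def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count the positions at which two equal-length hash strings differ."""
    assert len(hash_a) == len(hash_b), "hashes must be the same length"
    return sum(a != b for a, b in zip(hash_a, hash_b))

# hamming_distance("1111aaaabbbbcccc", "1111aaaabbbbccxx") -> 2
```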

But wait: the Hamming distance doesn't provide coordinates in some searchable space, it only provides a measure of how closely two data are related (or not). Fortunately, the hashes together with that distance can be thought of as a 16-dimensional metric space. The approach I've gone with is a vantage-point, or VP, tree, chosen primarily since Paul Harrison was kind enough to post his Python implementation. The idea behind VP trees is to pick a point in your space, e.g. the hash "1111aaaabbbbcccc", and then break your member set into two parts: those "nearer" than some cut-off Hamming distance, and those further out. By repeating this process a tree of relations can be built up, with adjacent 'branches' having smaller Hamming distances than 'far' branches. This means that a hash you're trying to match can rapidly traverse the tree and only run direct comparisons with one or two actual set members. The paper by Kumar et al. has an excellent explanation of how this compares with other binary-tree approaches, and while they were doing image-patch analysis, the content is still incredibly relevant and well presented. Figure 2 in that paper, not reproduced here, is perfect for visualizing the structure of VP trees!
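
To make the idea concrete, here's a stripped-down toy sketch of a VP tree (my own, not Paul Harrison's implementation), using the hamming_distance helper from above. The vantage point is simply the first hash in the list, and the median distance serves as the cutoff; the search uses the triangle inequality to skip branches that can't possibly hold a better match.

```python
def build_vp_tree(hashes):
    """Recursively partition hash strings into (vantage, cutoff, inner, outer)."""
    if not hashes:
        return None
    vantage, rest = hashes[0], hashes[1:]
    if not rest:
        return (vantage, 0, None, None)
    distances = [hamming_distance(vantage, h) for h in rest]
    mu = sorted(distances)[len(distances) // 2]  # median distance as the cutoff
    inner = [h for h, d in zip(rest, distances) if d < mu]
    outer = [h for h, d in zip(rest, distances) if d >= mu]
    return (vantage, mu, build_vp_tree(inner), build_vp_tree(outer))

def nearest(node, query, best=(None, float("inf"))):
    """Return (hash, distance) of the stored hash closest to `query`."""
    if node is None:
        return best
    vantage, mu, inner, outer = node
    d = hamming_distance(query, vantage)
    if d < best[1]:
        best = (vantage, d)
    if d < mu:  # query lands inside the cutoff: search the inner branch first
        best = nearest(inner, query, best)
        if d + best[1] >= mu:  # outer branch could still hold a closer match
            best = nearest(outer, query, best)
    else:       # query lands outside: search the outer branch first
        best = nearest(outer, query, best)
        if d - best[1] < mu:   # inner branch could still hold a closer match
            best = nearest(inner, query, best)
    return best
```

Build the tree once over the full ~18,000-hash table; each lookup then only touches a handful of members, exactly as described above.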

I'm still in the process of cleaning up code, but plan to shortly follow up with a video demonstration of the code in action, as well as a few snippets of particular interest.

Recognizing Cards - Effective Comparisons with Hashing

In the previous post we got as far as isolating and pre-processing the art from a card placed in front of the camera; now we come to the problem of effectively comparing it with all the possible matches. Given the possible "attacks" against the image we're trying to match, e.g. rotation, color balance, and blur, it's important to choose a comparison method that will be insensitive to the ones we can't control without losing the ability to clearly identify the correct match among thousands of impostors. A bit of googling led me to phash, a perceptual hashing algorithm that seemed ideal for my application. A good explanation of how the algorithm works can be found here, and illustrates how small attacks on the image can be neglected. I've illustrated the algorithm steps below using one of the cards from my testing group, Snowfall.

Illustration of the phash algorithm from left to right. DCT is the discrete cosine transform. Click for full-size.

The basic identification scheme is simple: calculate the hash for each possible card, then calculate the hash for the art we're identifying. These hashes are converted to ASCII strings and stored. For each hash in the collection, calculate the Hamming distance (essentially how many characters in the hash strings are dissimilar); that number describes how different the two images are. The process of searching through a collection of hashes to find the best match in a reasonable amount of time will be the subject of the next post in this series (hint: it involves VP trees.) Obtaining hashes for all the possible card arts is an exercise in web scraping and loops, and isn't something I need to dive into here.
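
As a sketch of that scheme, here's roughly what building the lookup table looks like with the imagehash package; the directory layout and function name here are just illustrative, not my actual scraper.

```python
from pathlib import Path

from PIL import Image
import imagehash

def build_hash_table(art_dir):
    """Map card name -> phash (as an ASCII hex string) for every saved art image."""
    table = {}
    for path in Path(art_dir).glob("*.jpg"):
        with Image.open(path) as img:
            table[path.stem] = str(imagehash.phash(img))
    return table

# Identification is then a nearest-Hamming-distance search over table.values().
```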

One of my first concerns upon seeing the algorithm spelled out was the discarding of color. The fantasy art we're dealing with is, in general, more colorful than most test image sets, so we might be discarding more information for less of a performance gain than usual. To that end, I decided to try a very simple approach, referred to as phash_color below: get a phash from each of the color channels and simply append them end-to-end. While it takes proportionally longer to calculate, I felt it should provide better discrimination. This expanded algorithm is illustrated below. While it is true that the results (far right column) appear highly similar across color channels, distinct improvements to identification were found across the entire corpus of images compared to the simpler (and faster) approach.

The color-aware extension of the phash algorithm. The rows correspond to individual color channels. Click for full-size.

I decided to make a systematic test of it, choosing four cards from my old box and grabbing images, shown below. Some small attempt was made to vary the color content and level of detail across the test images.

The four captured arts for testing the hashing algorithms. The art itself is the property of Wizards of the Coast.

For several combinations of hash and pre-processing I found what I'm calling the SNR, after 'signal-to-noise ratio'. This SNR is essentially how well the hash matches the image it should, divided by the quality of the next best match. The ideal hash size was found to be 16 by a good deal of trial and error. A gallery showing the matching strength for the four combinations (original phash, the color version, with equalized histograms, and without pre-processing) is shown below, but the general take-away is that histogram equalization makes matching easier, and including color provides additional protection against false positives.
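
In code, that figure of merit might look something like the sketch below (my own bookkeeping, nothing standard): since smaller Hamming distances mean better matches, the ratio of the runner-up's distance to the true match's distance gives a number where bigger is better.

```python
def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def match_snr(capture_hash, true_hash, all_hashes):
    """Ratio of the runner-up's distance to the true match's distance."""
    d_true = hamming_distance(capture_hash, true_hash)
    d_next = min(hamming_distance(capture_hash, h)
                 for h in all_hashes if h != true_hash)
    return d_next / max(d_true, 1)  # guard against a perfect (zero-distance) match
```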

If there is interest I can post the code for the color-aware phash function, but it really is as simple as breaking the image into three greyscale layers and using the phash function provided by the imagehash package. Up next: VP trees and quickly determining which card it is we're looking at!
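
Since it's so short, here's a minimal sketch of it; the name phash_color and the hash size of 16 come from the discussion above.

```python
from PIL import Image
import imagehash

def phash_color(img, hash_size=16):
    """Compute a phash per color channel and append them end-to-end."""
    # .split() on an RGB image yields three single-band (greyscale) layers.
    return "".join(str(imagehash.phash(channel, hash_size=hash_size))
                   for channel in img.convert("RGB").split())
```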

Recognizing Cards - Image Capture

Back in October I posted a short blurb on my first attempts at recognizing Magic cards through webcam imagery. A handful of factors have brought me back around to it, not the least of which is a still-unsorted collection. Also, it happened to be a good excuse to dig into image processing and search trees, things I've heard a lot about but never really dug into. Probably the biggest push to get back on this project was a snippet of Python I found for live display of the pre- and post-processed webcam frames in real time, here. There is real novelty in seeing your code in action in a very immediate way, and it also eliminated all of the frustration I was having with convincing the camera to stay in focus between captures. At present, the program appears to behave well and recognize cards reliably!

I plan to break my thoughts on this project into a few smaller posts focusing on the specific tasks and problems that came up along the way, so I can devote enough space to the topics I found most interesting.

  • Image Pre-Processing
  • Recognizing Blurry Images: Hashing and Performance
  • Finding Matches: Fancy Binary Trees

I should note here: a lot of the ideas used in this project were taken from code others posted online. Any time I directly used (or was heavily inspired by) a chunk of code, I’ll link out to the original source as well as include a listing at the bottom of each post in this series.

Pre-Processing

The goal here was to take the camera imagery and produce an image that was most likely to be recognized as "similar" by our hashing algorithm. First and foremost, we need to deal with two facts: (1) our camera is not perfect, so the white balance, saturation, and focus of our acquired image may all differ from the image we're comparing with, and (2) the camera captures a lot more than the card alone. Let's focus on the latter problem first, isolating the card from the background.

The method I described in the previous post works sometimes, but not particularly well; it required ideal lighting and a perfectly flat background. The algorithm I ended up settling on is listed here, with a code sketch after the list:

  1. Convert a copy of the frame to grey-scale
  2. Store the absolute difference between that frame, and the background (more on that later)
  3. Threshold that difference-image to a binary image
  4. Find the contours present using cv2.findContours()
  5. Only look at the contours with a bounded area greater than 10k pixels (based on my camera)
  6. Find a bounding box for each of these contours and compute the aspect ratio.
  7. Throw out contours with a bounding box aspect ratio less than 0.65 or greater than 1.0
  8. If we've got exactly one contour left in the set, that's our card!
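
Here's a minimal sketch of those steps, assuming the OpenCV 4.x return signature for findContours; the threshold value and function name are my own, while the tuning numbers come straight from the list above.

```python
import cv2

MIN_AREA = 10_000           # step 5: based on my camera
ASPECT_RANGE = (0.65, 1.0)  # step 7: plausible card bounding boxes

def find_card_contour(frame, background_grey):
    """Return the single card-like contour in `frame`, or None."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(grey, background_grey)                    # step 2
    _, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # step 3
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)      # step 4
    candidates = []
    for contour in contours:
        if cv2.contourArea(contour) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(contour)                   # step 6
        if ASPECT_RANGE[0] <= w / h <= ASPECT_RANGE[1]:
            candidates.append(contour)
    return candidates[0] if len(candidates) == 1 else None       # step 8
```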

The next problem to tackle is that of perspective and rotation, which thankfully we can tackle simultaneously. In the previous steps we found the contour of the card and the bounding rectangle for that contour, and we can put both to use (see the sketch after this list).

  • Find the approximate bounding polygon for our contour using cv2.approxPolyDP().
  • If the result has more than four corners, we need to trim out the spurious corners by finding the ones closest to any other corner. These might result from a hand holding the card, for example.
  • Using the width of the bounding box, known aspect ratio of a real card, and the corners of the trapezoid bounding the card, we can construct the perspective transformation matrix.
  • Apply the perspective transform.
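
Assuming the four corners have already been found and ordered (top-left, top-right, bottom-right, bottom-left), the last two bullets might look like this sketch; the function name is mine.

```python
import cv2
import numpy as np

CARD_ASPECT = 3.5 / 2.5  # height over width of a real card

def flatten_card(frame, corners):
    """Perspective-correct the card given its four ordered corners."""
    corners = np.asarray(corners, dtype=np.float32)
    # Use the longer of the two horizontal edges as the output width.
    width = int(max(np.linalg.norm(corners[0] - corners[1]),
                    np.linalg.norm(corners[3] - corners[2])))
    height = int(width * CARD_ASPECT)
    target = np.array([[0, 0], [width, 0],
                       [width, height], [0, height]], dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(corners, target)
    return cv2.warpPerspective(frame, matrix, (width, height))
```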

Camera input image. Card contour is shown in red, bounding rectangle is shown in green. The text labels are the result of the look-up process I'll explain in the coming posts.

The isolated and perspective-corrected card image.

Lastly, to isolate the art we simply rely on the consistency of the printed cards. By measuring the cards it was fairly easy to pick out the fractional width and height bounds for the art, and simply crop to those fractions. Now we're left with the first problem: the imperfect camera. Due to the way we're hashing images, which will be discussed in the next post in this series, we're not terribly worried about image sharpness, as the method does not preserve high frequencies. Contrast, however, is a big concern. After much experimentation I settled on a very simple histogram equalization: essentially, modifying the image such that the brightest color is white and the darkest color is black, without disrupting how the bits in the middle correspond. An example of this is given below.
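
Both operations are only a couple of lines with OpenCV; the fractional bounds below are placeholders rather than the values I actually measured.

```python
import cv2

# Fractional bounds of the art box on a card face (illustrative values only).
ART_TOP, ART_BOTTOM = 0.11, 0.55
ART_LEFT, ART_RIGHT = 0.08, 0.92

def extract_art(card):
    """Crop the art region from a flattened card image and equalize it."""
    h, w = card.shape[:2]
    art = card[int(h * ART_TOP):int(h * ART_BOTTOM),
               int(w * ART_LEFT):int(w * ART_RIGHT)]
    # Equalize each color channel's histogram independently.
    return cv2.merge([cv2.equalizeHist(c) for c in cv2.split(art)])
```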

Sample image showing (clockwise) the camera capture, the target image, the result of histogram equalizing the input, and the result of equalizing the target.

So now we're at the point where we can capture convincing versions of the card art reliably from the webcam. In the next post I'll go over how I chose the hashing algorithm to compare each captured image against all the potential candidates, so we can tell which card we've actually got!

Fungiculture: Oyster Mushrooms

It has been a while since I’ve been able to post an actual update, having gotten a job, moved, and settled in during the interim. Having met with some small success growing a few basil plants indoors, I decided to branch out into mycoculture, or mushroom growing, as the requirements are a bit stricter, supplying a bit of an engineering challenge in getting it right. It’s totally true that, given my choice of oyster mushrooms (Pleurotus ostreatus) for my first attempt, I could just as well have used a plastic bag and a spray bottle, but that would not have scratched my data-gathering/total-automation itch. Now let us get to the admittedly overkill list of parts I ended up using.

During this first week-long time lapse we kept the aquarium light on continuously to provide consistent illumination for the camera, and realized that consistent (and blue) light strongly inhibits mushroom growth, which turns out to be a well-established fact. So for a week we saw very little happen aside from a moderate whitening of the surface of the mycelium log. After a week of watching, we folded and decided to shut it down for the evening and go to bed. Of course, finally given a break from the light, mushrooms immediately appeared overnight, so we excitedly resumed the time-lapse capture the following morning, resulting in the second video.

First week, under constant illumination (not much change)


Second week, without night illumination

By the end of that week we had a full flush of mushrooms to harvest, as shown in the photos below. I probably waited about a half-day too long, given that the cap edges were just slightly beginning to droop. After cutting them from the base with a kitchen knife, the heights and cap widths were measured for later comparison, and the rinsed mushrooms were stir-fried with green onion and garlic! They were pretty tasty! Given that I don’t usually eat mushrooms I was surprised by how much I liked them, but the inevitable bias towards something I made myself can only help. I also managed to log the conditions over the first 12 days (missing the last bit of the fruiting stage before harvesting); those data are shown below. The placement of the two sensors probably accounts for the controller reporting ~70% RH while the logger reported values around ~60%, but there are some relatively easy calibration tests that can be done.


Plots of the temperature and humidity as a function of time over the first 12 days.

Without further delay, I should show off the fruits (kinda) of my labor: the mushrooms!

Rather than use this as a one-off experiment, I’ve already got the tank back under humidity control and monitoring, and the first beginnings of a new flush of mushrooms are just now showing themselves. Moving forward I might try to simplify the setup, as I’ve read that environmental monitoring and power control are super easy with cheap single-board computers (e.g. the Raspberry Pi). It’s only coincidence that I’ve been playing with those lately for other projects, just 4 years behind the curve!

Publication: Designing spectrum-splitting dichroic filters to optimize current-matched photovoltaics

After an incredibly long wait, and with an incredibly long title, the paper covering my work on thin film filters for solar energy collection is finally in print! You can find it here as part of OSA's journal Applied Optics.

If there's interest in a plain-English explanation of what we did and why, I'd definitely take the time to write one up! Just drop a comment below.

Wrapping up

It's been a while since I've updated here, and rather than ambiguously stating that "life's been crazy" I can concretely sum it up as "was busy graduating". Writing the dissertation, handing off projects that are still active, and generally cleaning up the path behind me have taken a sizable chunk of time. None of this is to say that I was particularly itching to dive into new projects after passing my defense. I'm finally moving out to the Bay Area, joining a handful of friends already out there. Hopefully this new chapter will present just as many (if not more) opportunities for projects worth writing about. For the time being I'll be focusing on the logistics of uprooting myself from Tucson, and minimizing the transplantation shock of a new city and job.

Recognizing Cards - First Attempts

Sorting images has been a problem on my mind for years, but I never had a really good reason to sink time into it. Just recently, I finally found a reason: it occurred to me that it would be useful to have my webcam recognize Magic cards by their art and add them to a database for collection tracking. I've since learned that this wasn't as new an idea as I'd thought, and several complete software packages for this specific application have become available in the last 6 months. Nevertheless, it was an interesting excursion into image processing, admittedly not my home turf. It also worked much better than I expected.

To my mind, the problem had three main tasks that I'd not undertaken before, in order of drastically increasing difficulty:

  1. Grabbing camera input from Python
  2. Isolating the card from the background
  3. Usefully comparing images

The first part was mostly just an exercise in googling and adapting the code to my webcam. Short version: use OpenCV's VideoCapture method, and be sure to chuck out a few frames while the camera is auto-focusing and tuning the white balance; a sketch of this is below. The second bit proved to be slightly more interesting. Given a photo of a card on a plain white background, Canny edge detection can reliably pull out the edge (though I haven't tried this with white-bordered cards yet). The card-background contour is easily picked out; it has the largest area. Once we've cropped the image down to that, we can isolate the art by knowing the ratios used to lay out the cards. This process is shown below.
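
For the capture step, a minimal sketch looks like this (the warm-up frame count is arbitrary; tune it for your camera):

```python
import cv2

def grab_frame(device_index=0, warmup_frames=10):
    """Open the webcam, let it settle, and return a single frame."""
    cap = cv2.VideoCapture(device_index)
    try:
        for _ in range(warmup_frames):  # discard frames while autofocus and
            cap.read()                  # white balance are still settling
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("could not read a frame from the webcam")
        return frame
    finally:
        cap.release()
```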

Original image

Contours

Isolated card

Isolated art

The third and most difficult step, effectively comparing the images, will have to wait until I've got more time to write. Soon!