A quick update: I cracked out another bleached shirt, this one based on the moogle-in-magitek-armor image by DeviantArt user Camac, straight out of FF6 for the SNES. It took some tweaking to make the pattern contiguous for the laser cutter, but not too much. A quick observation: the brand of the shirt seems to have a big impact on the minimum realizable feature size. We couldn't find the soft fancy v-necks we'd been buying, so we settled on a four-pack of Hanes crew-neck shirts. They're comfy enough, but noticeably thinner, and they bleached substantially faster (think 10-20 seconds rather than 2-4 minutes to transition from black to maroon-pink). I stuck with the same freezer-paper and ironing method I've described previously.
For future patterns on these shirts I'll probably aim for thicker features and a finer spray at a larger distance, as I'm not totally happy with the blurring on this one. Still, more SNES nostalgia is always good.
This past weekend I was able to take a class on jewelry and metalworking at The Crucible in Oakland; the tuition was covered as a Christmas gift. I thoroughly enjoyed the class, and learned enough to produce the rings shown below with minimal guidance. We covered stamping, sawing, filing, rolling, soldering, and finishing; the metals we worked with were silver, copper, and brass (primarily for cost reasons, as gold is even more insanely expensive than I remembered). The first ring in the gallery involved a lot of sawing and drilling, which I still need to improve at, as well as eight hard solder joints to hold the copper bits inside the silver voids. There were additional pieces generated during the stamping and embossing tutorials, but I'll keep the post here to the rings. My only frustration with the class was how quickly I ran out of ideas. I really wish I had walked in with a pile of concepts rather than just one or two! Definitely looking to obtain some of the tooling to continue this sort of work at home, and combine it with my lapidary / faceting aspirations.
Silver and copper ring - Roughly a DNA double-helix
My second real project in wood, a commander-sized deckbox, represented a step up in joint complexity and wood quality. I also had to deal with an unexpected issue, namely wood movement. The board of curly maple was purchased months ahead of time, with a couple different ideas in mind, and during that interval it went from a beautifully flat and square board to a Pringles-chip-shaped board with slightly off-true edges. Had I been more motivated, I suppose I could have used a hand plane and pared it down flat. With time at a premium, and a deep conviction that clamping and gluing can do amazing things, I did my best to roll with it.
I also tried to remember to take process photos as I went, not only to share here, but also for my own benefit the next time I kick off a project. The basic design was pretty simple: rabbet joints, rabbet joints everywhere. The four vertical walls of the box all get an eighth-inch-deep, 3/4-inch-tall rabbet along their bottom edge to accommodate a beefy cherry base, and additional rabbets along the vertical edges for the left and right sides. One concern from the outset was the stability of cutting an eighth inch of material away from panels only a quarter inch thick, but going slowly, it did work out. Additionally, an eighth-inch-deep, quarter-inch-tall slot was cut into the back and side panels to allow for a slide-in lid. It's worth noting here that the lid had to be subtly tapered by sanding the edges meant to mate with the slots to allow for easy movement.
The raw materials were a 24" x 5" x 0.25" board of curly maple, and a 3/4" thick board of cherry (which I'd previously been using as a backstop to prevent tear-out when sawing). Both boards were wider than my miter box would allow, so I ended up using clamps and the straight edge of other boards to establish the cuts. In the photo below, the saw is neatly guided by straight-edged stock on both sides.
All the pieces
With all the pieces cut, the front panel (notably 0.25" shorter than the others, to permit the lid to slide out) was off to the laser cutter. I should also note that the design is not mine; it was found here, and was simply too cool to pass up. Maple takes laser engraving very well, and even grey-scale depth features rendered nicely.
Unfortunately I forgot to take any photos during the routing step, but the cuts were all executed with a 0.25" flat router bit at medium-low speed. For the finish, I wanted the grain to really pop, so I used the remainder of the board as a test piece (seen far left in the photo below). The top portion of the test piece got two coats of diluted aniline dye, then two coats of Danish oil, while the bottom simply got the oil. I settled on dye+oil again, but in retrospect should probably have gone darker (less dilute) on the dye. The blue-taped areas, aside from the test piece, were to keep oil off the gluing surfaces.
After a brief dry-fit, gluing and clamping went on for two days. I did have to make a second pass, as a small gap opened up in one of the corners, but after that it looked good.
Gluing and clamping
Finally, here are some photos of the final product!
I probably won't rely so heavily on rabbet joints in the future, but this was super instructive in the difficulties and details of executing them. Also, this came together more quickly than the first project! As I get my basic skills in line, things go a bit faster and smoother, but there's still seemingly infinite room to grow.
I've been playing around with the idea of woodworking for pretty much the whole year now, watching videos on YouTube and tearing through a few books on the topic. I did a few simple projects leaning heavily on the laser cutter to do all the operations, but that isn't really woodworking. I finally decided to put together a land station, that is, a box for people to grab basic lands on those rare times a draft comes together!
The design is fairly simple: four planks of wood for the sides of the box, with 45 degree mitred edges, and four dados (slots) for dividers, plus a flat plank as a bottom. The dimensions of a card are (roughly) 3.5 inches tall by 2.5 inches wide, and I ran with those for my first attempt. Trying to put a 45 degree miter along a 3.5 inch edge with a hand saw was a losing battle, and the prospect of putting in eight dados with a router plane (something like this) sounded frustrating. After quite a bit of hemming and hawing, I eventually bit the bullet and bought a router, some bits, and a table for it. While there was a sale going on at the time, I certainly had to convince myself that I'm excited for more than this one project.
The dimensions were driven in part by the cheap wood I had access to, namely long planks of quarter-inch thick, 3.5 inch wide pine. I kept the height and stuck with a single thickness to simplify the sawing operations, which are harder than they look. I ended up using a cheap clamping mitre box to establish a perpendicular cut line, and then clamped the piece to a heavy piece of scrap for the remainder of the cut to prevent tear-out (an issue that frustrated me enormously at first.)
My initial design had tolerances that ended up being too tight, and it was going to be impossible to get the cards in and out. The final dimensions of each piece are laid out below. The critical number turned out to be 2.75 inches - the width of the empty space (measured from divider edge to divider edge, not centers) for each card "lane". If that sounds a little too big, it's because it is, but a small error one way or the other won't prevent cards from getting into or out of the box. In the future I'll probably shave 1/8th inch off that value to reduce card "jiggle" in something like a deck box.
2x sides (L/R) - 3.5" x 5.5" x 0.25"
2x sides (F/B) - 3.5" x 15.25" x 0.25"
1x bottom - 5.5" x 15.25" x 0.25"
4x dividers - 3.5" x 5.25" x 0.25"
The dados to retain the dividers are 1/8th (0.125) inch deep on the front and the back, and were cut with the fence fixed to ensure they ended up aligned. I had to make up a 90 degree jig by clamping some heavy blocks to cut the furthest-in dados, as the fence can only move about 5 inches back from the bit, but it worked well enough. The mitres were put on with a 45 degree router bit, over many passes to carefully creep up on the proper depth. Once all the cuts were done, the front panel was off to the laser.
The pattern was generated using the vector mana symbols generously posted by Goblin Hero over at Slightly Magic; they just had to be scaled and moved around to fit the panel. I put down some masking tape to prevent resin deposition on the wood, but it also seems to have caused some line artifacts in the final cut, likely due to "thick" overlaps attenuating the beam. In the future I'll probably avoid using tape and just sand the surface clean afterward, as the residual adhesive also looked to interfere with the dye and oil in a few places. For reference, it was cut on an Epilog Ext36 150W in raster mode at 600 DPI, 100% speed, 70% power, in a single pass.
Cut pieces laid out for dry-fitting
After sanding all the sides with a ~250 grit sanding sponge, the sides and panels were glued together using Titebond and a 90 degree clamp, something I didn't even know existed before needing one. I was able to snugly fit the dividers into the back without glue, and press the front panel on for gluing.
Gluing the front panel (bottom is not attached)
After letting it dry overnight, I was ready to dye and finish it. I'd experimented with some scrap wood from the same boards to see how the dye and oil finishes would look, and settled on Transtint golden brown diluted in water, and a Danish oil finish. I applied the dye carefully, given all the warnings it comes with, and gave it plenty of time to dry. Then the oil finish went on and took all night to set; I opted for a single coat as I wasn't looking for a shiny or silky appearance, just sealed. At this point I finally glued on the bottom and gave it a few hours to set.
Nest of clamps
Finished land station
It definitely took a lot more time, effort, and learning to finish this than I anticipated, but I am happy with how it came out. I've already gotten a lot of good suggestions for improving it (e.g. cutting semi-circular access holes at the front of each row so you can always get at the cards, also adding a lid isn't a bad idea), but will probably move on to other projects for the time being. The next on my list is a commander deck box, and after that, a substantially more intricate box for my cube to live in. I've got to spread out that tooling cost somehow!
With a whole week off around Thanksgiving, Ouliana and I finally had time to test out a method for templating bleached shirts we'd seen online. It needs freezer paper, which as far as I can tell is butcher paper with wax on one side only. The plan consists of cutting out your pattern, ironing the waxed side onto the shirt, applying your bleach-water solution, and peeling off the mask. The twist is that cutting precise patterns is a pain, but a laser should be able to make quick work of it!
For the pattern, I came across this image of Samus from the Metroid games, posted by terrorsmile on DeviantArt. I wanted the pattern to be a true stencil, meaning having at least one totally contiguous region to be the "mask". That took some doing, about an hour of work in GIMP, but I ended up with an inverted stencil that could be cut without producing "islands". I'm a bit reluctant to share the file, as it's based so closely on someone else's work, but the process is fairly straightforward (the magic-wand selection tool will immediately show you any "islands" left in your image).
The other issue that came up was that the freezer paper tends to curl (it does come on a roll), so it had to be taped down at the edges to a rigid substrate, scrap acrylic in this case. The second pattern, seen getting ironed on below, was a stencil Ouliana made by hand of the "doom guy" dolls from the 2016 Doom game.
Ironing the freezer paper
The positive stencil of the doom guy did end up having "islands", meaning a few pieces of freezer paper had to be carefully placed and individually ironed on. Also, having a positive pattern meant needing to block off the rest of the shirt with extra paper to prevent any stray bleaching. The negative stencil, shown below just after ironing, needed no additional masking.
Samus stencil applied
I opted for a slow and regular application of 50/50 bleach in water solution, spraying a few times, and giving it 5-10 minutes to act and dry, and repeated that roughly three times. The shirts were then rinsed out in the shower, and immediately washed. We noticed that setting the iron too low resulted in poor bonding, so the paper would "pop" off the shirt, and ironing too hot resulted in small beads of wax around the edges of the stencil that remained after peeling away the paper. They can be picked off by hand, but it is a pain. The final result of my shirt attempt is below!
It's been a while since I've put up an update, but I haven't been up to nothing. I found out that I have access to a beefy laser cutter through my work, and have been throwing various projects at it over the last few weeks. One of the first things I wanted to try was precision inlay and making a rudimentary deck box. My first target was my Lazav EDH deck, and it provided a good opportunity to get my feet wet with laser cutting, inlay, and wood staining. The material itself comes from a 1/4" thick oak project board. I had pondered putting on a few layers of shellac or other finish to smooth it out, but decided that I'd already learned what I'd wanted to, and kinda preferred the woodgrain look.
Showing off the inlay colors
The lid fits, but not perfectly
The ebony stain looks almost exactly like I wanted it to
I didn't bother to finish the interior on this one; a final version would definitely have a lining.
The pattern for the box itself was generated using MakerCase, but I think for later projects I'm going to opt for actually making the joints myself to ensure they fit. There seemed to be some inconsistencies with the laser kerf (similar to the saw-depth for conventional cuts) that prevented perfect mating, and I haven't dialed in the compensation for that.
Overall, this served as a proof of method and tools for doing nacre inlay on later projects, and bonus points: it still works as a box.
In the previous two posts in this series (1, 2) I talked about capturing a reliable image of the art on a Magic card from a webcam, and how we can hash those images for an effective and mostly efficient comparison. Now we're left with a hash for the card we'd like to match, and an enormous table of hashes for all the possible matches (something like 18,000 cards). The first question is how to compare the hashes in a meaningful way, but thankfully this is made easy by the nature of phash and its documentation. The Hamming distance, or number of dissimilar characters in the string, is the metric of choice. This method is demonstrated step-wise below, first for the correct target, then for a random non-target image.
When compared to the actual match, the Hamming distance is 9. Mismatched characters are highlighted in red. While the cutoff value might take some fine tuning, for our purposes (and my camera), 9 is a relatively strong match.
Compared to an incorrect image, the Hamming distance is 15. The single matched character is effectively within the noise.
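For concreteness, here is a minimal character-wise Hamming distance; the hash strings in the example are made up for illustration, not the actual hashes pictured above.

```python
def hamming_distance(h1: str, h2: str) -> int:
    """Number of positions at which two equal-length hash strings differ."""
    if len(h1) != len(h2):
        raise ValueError("hashes must be the same length")
    return sum(a != b for a, b in zip(h1, h2))

# Illustrative (made-up) 16-character hashes:
target = "d1c8b0e0f0e8c8d8"
close  = "d1c8b4e0f0e8c0d8"   # small distance: likely the same art
far    = "3a17f2901b6e45cc"   # large distance: a different card
```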
Hamming distance is a reliable metric for this hashing approach, and is relatively easy to compute if the hashes are all already calculated. That being said, we don't really want to do 18,000 x 16-character string comparisons in order to determine which card we're looking at. Instead we can use a binary search tree. There is a lot already written on binary trees, but the short version is this: by splitting space up, one can iteratively narrow down the possibilities rather than look at each candidate individually. It does require that you take the time to build the tree, but individual searches become substantially faster.
But wait: the Hamming distance doesn't provide coordinates in some searchable space; it provides a measure describing how two data points are related (or not). This, however, can be thought of as a 16-dimensional metric space. The approach I've gone with is a vantage-point tree, or VP tree, chosen primarily because Paul Harrison was kind enough to post his Python implementation. The idea behind VP trees is to pick a point in your space, e.g. the hash "1111aaaabbbbcccc", and then break your member-set into two parts: those "nearer" than some cut-off Hamming distance, and those further out. By repeating this process a tree of relations can be built up, with adjacent 'branches' having smaller Hamming distances than 'far' branches. This means that a hash you're trying to match can rapidly traverse the tree and only run direct comparisons with one or two actual set members. The paper by Kumar et al. has an excellent explanation of how this compares with other binary-tree approaches, and while they were doing image-patch analysis, the content is still incredibly relevant and well presented. Figure 2 in that paper, not reproduced here, is perfect for visualizing the structure of VP trees!
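To make the idea concrete, here's a from-scratch sketch of a VP tree over hash strings. This is my own minimal version, not Paul Harrison's implementation; the median split and the triangle-inequality pruning rule are the standard textbook choices.

```python
def hamming(a, b):
    """Character-wise Hamming distance between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

class VPTree:
    """Minimal vantage-point tree over hash strings with Hamming distance."""

    def __init__(self, points):
        self.vp = points[0]        # a real implementation might pick randomly
        rest = points[1:]
        if not rest:
            self.mu = self.near = self.far = None
            return
        dists = [hamming(self.vp, p) for p in rest]
        self.mu = sorted(dists)[len(dists) // 2]   # median split distance
        near = [p for p, d in zip(rest, dists) if d < self.mu]
        far = [p for p, d in zip(rest, dists) if d >= self.mu]
        self.near = VPTree(near) if near else None
        self.far = VPTree(far) if far else None

    def nearest(self, q, best=None):
        """Return (distance, point) of the nearest stored hash to q."""
        d = hamming(q, self.vp)
        if best is None or d < best[0]:
            best = (d, self.vp)
        if self.mu is None:
            return best
        # Search the side the query falls on first; only cross the
        # boundary if a closer point could still exist on the other side.
        first, second = (self.near, self.far) if d < self.mu else (self.far, self.near)
        if first:
            best = first.nearest(q, best)
        if second and abs(d - self.mu) <= best[0]:
            best = second.nearest(q, best)
        return best
```

In practice you build the tree once over all ~18,000 card hashes, then each lookup only computes a handful of direct Hamming distances.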
I'm still in the process of cleaning up code, but plan to shortly follow up with a video demonstration of the code in action, as well as a few snippets of particular interest.
In the previous post we got as far as isolating and pre-processing the art from a card placed in front of the camera; now we come to the problem of effectively comparing it with all the possible matches. Given the possible "attacks" against the image we're trying to match, e.g. rotation, color balance, and blur, it's important to choose a comparison method that will be insensitive to the ones we can't control without losing the ability to clearly identify the correct match among thousands of impostors. A bit of googling led me to phash, a perceptual hashing algorithm that seemed ideal for my application. A good explanation of how the algorithm works can be found here, and illustrates how small attacks on the image can be neglected. I've illustrated the algorithm steps below using one of the cards from my testing group, Snowfall.
Illustration of the phash algorithm from left to right. DCT is the discrete cosine transform. Click for full-size.
The basic identification scheme is simple: calculate the hash for each possible card, then calculate the hash for the art we're identifying. These hashes are converted to ASCII strings and stored. For each hash in the collection, calculate the Hamming distance (essentially how many characters in the hash string are dissimilar); that number describes how different the two images are. The process of searching through a collection of hashes to find the best match in a reasonable amount of time will be the subject of the next post in this series (hint: it involves VP trees.) Obtaining hashes for all the possible card-arts is an exercise in web scraping and loops, and isn't something I need to dive into here.
One of my first concerns upon seeing the algorithm spelled out was the discarding of color. The fantasy art we're dealing with is, in general, more colorful than most test image sets, so we might be discarding more information for less of a performance gain than usual. To that end, I decided to try a very simple approach, referred to as phash_color below: get a phash from each of the color channels and simply append them end-to-end. While it takes proportionally longer to calculate, I felt it should provide better discrimination. This expanded algorithm is illustrated below. While it is true that the results (far right column) appear highly similar across color channels, distinct improvements to identification were found across the entire corpus of images compared to the simpler (and faster) approach.
The color-aware extension of the phash algorithm. The rows correspond to individual color channels. Click for full-size.
I decided to make a systematic test of it, and chose four cards from my old box and grabbed images, shown below. Some small attempt was made to vary the color content and level of detail across the test images.
The four captured arts for testing the hashing algorithms. The art itself is the property of Wizards of the Coast.
For several combinations of hash and pre-processing I found what I'm calling the SNR, after 'signal-to-noise ratio'. This SNR is essentially how well the hash matches the image it should, divided by the quality of the next best match. The ideal hash size was found to be 16 by a good deal of trial and error. A gallery showing the matching strength for the four combinations (original phash and the color version, each with and without histogram equalization) is shown below, but the general take-away is that histogram equalization makes matching easier, and including color provides additional protection against false positives.
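In case the metric is unclear, here's one plausible reading of it in code. Distances here are Hamming distances, so smaller is better, and the exact formula I used may differ:

```python
def match_snr(distances: dict, correct: str) -> float:
    """Next-best match distance divided by the correct match distance.

    `distances` maps candidate card names to Hamming distances.
    Values well above 1 mean a confident identification.
    """
    d_correct = distances[correct]
    d_next = min(d for name, d in distances.items() if name != correct)
    return d_next / d_correct if d_correct else float("inf")
```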
If there is interest I can post the code for the color-aware phash function, but it really is as simple as breaking the image into three greyscale layers and using the phash function provided by the imagehash package. Up next: VP trees and quickly determining which card it is we're looking at!
Back in October I posted a short blurb on my first attempts at recognizing Magic cards through webcam imagery. A handful of factors have brought me back around to it, not the least of which is a still un-sorted collection. Also, it happened to be a good excuse to dig into image processing and search trees, things I've heard a lot about but never really dug into. Probably the biggest push to get back on this project was a snippet of python I found for live display of the pre- and post-processed webcam frames in real time, here. There is real novelty in seeing your code in action in a very immediate way, and it also eliminated all of the frustration I was having with convincing the camera to stay in focus between captures. At present, the program appears to behave well and recognize cards reliably!
I plan to break my thoughts on this project into a few smaller posts focusing on the specific tasks and problems that came up along the way, so I can devote enough space to the topics I found most interesting.
Recognizing Blurry Images: Hashing and Performance
Finding Matches: Fancy Binary Trees
I should note here: a lot of the ideas used in this project were taken from code others posted online. Any time I directly used (or was heavily inspired by) a chunk of code, I’ll link out to the original source as well as include a listing at the bottom of each post in this series.
The goal here was to take the camera imagery and produce an image that was most likely to be recognized as "similar" by our hashing algorithm. First and foremost, we need to deal with two facts: (1) our camera is not perfect, so the white-balance, saturation, and focus of our acquired image may all differ from the image we're comparing with, and (2) the camera captures a lot more than the card alone. Let's focus on the latter problem first, isolating the card from the background.
The method I described in the previous post works sometimes, but not particularly well: it required ideal lighting and a perfectly flat background. The algorithm I ended up settling on is:
Convert a copy of the frame to grey-scale
Store the absolute difference between that frame, and the background (more on that later)
Threshold that difference-image to a binary image
Find the contours present using cv2.findContours()
Only look at the contours with an enclosed area greater than 10k pixels (a threshold based on my camera)
Find a bounding box for each of these contours and compute the aspect ratio.
Throw out contours with a bounding box aspect ratio less than 0.65 or greater than 1.0
If we've got exactly one contour left in the set, that's our card!
The next problem to tackle is that of perspective and rotation, which thankfully we can tackle simultaneously. In the previous steps we were able to find the contour of the card and the bounding rectangle for that contour, and we can use these.
Find the approximate bounding polygon for our contour using cv2.approxPolyDP().
If the result has more than four corners, we need to trim out the spurious corners by finding the ones closest to any other corner. These might result from a hand holding the card, for example.
Using the width of the bounding box, known aspect ratio of a real card, and the corners of the trapezoid bounding the card, we can construct the perspective transformation matrix.
Apply the perspective transform.
Camera input image. Card contour is shown in red, bounding rectangle is shown in green. The text labels are the result of the look-up process I'll explain in the coming posts.
The isolated and perspective-corrected card image.
Lastly, to isolate the art we simply rely on the consistency of the printed cards. By measuring the cards it was fairly easy to pick out the fractional width and height bounds for the art, and simply crop to those fractions. Now we're left with the first problem: the imperfect camera. Due to the way we're hashing images, which will be discussed in the next post in this series, we're not terribly worried about image sharpness, as the method does not preserve high frequencies. Contrast, however, is a big concern. After much experimentation I settled on a very simple histogram equalization: essentially, modifying the image such that the brightest color is white and the darkest color is black, without disrupting how the values in the middle correspond. An example of this is given below.
Sample image showing the camera capture, the target image, the result of histogram equalizing the input, and the result of equalizing the target.
So now we're at the point where we can capture convincing versions of the card art reliably from the webcam. In the next post I'll go over how I chose the hashing algorithm to compare each captured image against all the potential candidates, so we can tell which card we've actually got!
It has been a while since I've been able to post an actual update, having gotten a job, moved, and settled in during the interim. Having met with some small success growing a few basil plants indoors, I decided to branch out into mycoculture, or mushroom growing, as the requirements are a bit stricter, supplying a bit of an engineering challenge in getting it right. While it's true that, given my choice of oyster mushrooms (Pleurotus ostreatus) for my first attempt, I could just as well have used a plastic bag and a spray bottle, that would not have scratched my data-gathering/total-automation itch. Now let us get to the admittedly overkill list of parts I ended up using.
During this first week-long time lapse we kept the aquarium light on continuously to provide consistent illumination for the camera, and realized that constant (and blue) light strongly inhibits mushroom growth, which turns out to be a well-established fact. So for a week we saw very little happen aside from a moderate whitening of the surface of the mycelium log. After a week of watching, we folded and decided to shut it down for the evening and go to bed. Of course, finally given a break from the light, mushrooms immediately appeared overnight, so we excitedly resumed the time-lapse capture the following morning, resulting in the second video.
First week, under constant illumination (not much change)
Second week, without night illumination
By the end of that week we had a full flush of mushrooms to harvest, as shown in the photos below. I probably waited about a half-day too long, given that the cap edges were just slightly beginning to droop. After cutting them from the base with a kitchen knife, the heights and cap widths were measured for later comparison, and the rinsed mushrooms were stir-fried with green onion and garlic! They were pretty tasty! Given that I don't usually eat mushrooms, I was surprised by how much I liked them, but the inevitable bias towards something I made myself can only help. I also managed to log the conditions over the first 12 days (missing the last bit of the fruiting stage before harvesting); those data are shown below. The placement of the two sensors probably explains why the controller was reporting ~70% RH while the logger reported values around ~60%, but there are some relatively easy calibration tests that can be done.
Plots of the temperature and humidity as a function of time over the first 12 days.
Without further delay, I should show off the fruits (kinda) of my labor, the mushrooms!
Rather than use this as a one-off experiment, I've already got the tank back under humidity control and monitoring, and the first beginnings of a new flush of mushrooms are just now showing themselves. Moving forward I might try to simplify the setup, as I've read that environmental monitoring and power control are super-easy with cheap single-board computers (e.g. Raspberry Pi). It's only coincidence that I've been playing with those lately for other projects, just 4 years behind the curve!