What is Freeside?

Freeside is a Georgia nonprofit corporation, organized to develop a community of coders, makers, artists, and researchers in order to promote collaboration and community outreach.

Using gaze-tracking to map how surgeons look at diagnostic images

A few years ago, a Freeside collaboration resulted in some published medical research on using 3D printing in pre-surgery planning.

In our second collaboration, we used gaze tracking to gather data on how surgeons with different levels of experience look at radiographs when diagnosing hallux valgus deformities. The new paper got published in the current issue of the Journal of Foot and Ankle Surgery.



Interestingly enough, we actually came up with the concept for this project during a meetup about interactive art installations. The idea of eye tracking came up and we discussed what we could discover with the technology. So we started figuring out how to run a study with the free and open-source tools available. We ended up needing:

  • A webcam to look at the user's eyes.
  • ITU Gaze Tracker to calibrate and interpret that data. (However, their website is now down, so I'm not sure how viable it still is as part of the toolchain.)
  • OGAMA - Open Gaze and Mouse Analysis to conduct the study, display and record the data.
  • OpenCV and Pandas in Python to do a bit more image correction
  • Matlab to do more statistical analysis on the data
  • A custom chin rest that we fabricated and used a mouse pad for cushion

We threw together a workstation for about $400 (the laptop + webcam were the main costs) to run the study and started collecting data:




We showed surgical residents and surgeons with over 7 years of experience a series of 30 radiograph images and asked them to rate the deformity from 0 to 3 in severity. Experts tended to lock onto areas for longer and use their peripheral vision more for diagnosis. Novices would search the image by moving their focus around more and tended to rank the deformity as less severe. 
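Dwell time and fixation count are the standard way to quantify "locking onto areas" versus "moving focus around". As a rough illustration - not our actual OGAMA/Matlab pipeline, and the thresholds and data format here are assumptions - here's a minimal dispersion-threshold (I-DT) fixation detector in Python:

```python
# Minimal I-DT fixation detection: a run of gaze samples counts as a fixation
# when it stays within a small spatial dispersion for long enough.
# Thresholds below are illustrative, not the values used in the study.

def _dispersion(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_samples=5):
    """samples: (x, y) gaze points in pixels at a fixed sample rate.
    Returns (start_index, end_index, centroid) triples."""
    fixations = []
    i = 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if _dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the gaze stays tightly clustered.
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            centroid = (sum(p[0] for p in window) / len(window),
                        sum(p[1] for p in window) / len(window))
            fixations.append((i, j - 1, centroid))
            i = j
        else:
            i += 1
    return fixations
```

In output like this, the experts show up as fewer, longer fixations; the novices as many short ones scattered across the radiograph.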

Our main goal was to demonstrate, as a proof of concept, that this kind of data collection can be done cost-effectively and that there's a lot to learn from it. We put together a video to further explain the setup, process, and findings if you'd like to learn more!



It was a fun project despite a huge number of roadblocks and setbacks with the setup, calibration, and data manipulation. Still, we came out of it with some really interesting research that demonstrates yet again how awesome it is to have a diverse community of experts and all the tools they need in one place. Support your local hackerspace/makerspace!

Build-Out Recap!

A bunch of great stuff got done at the build-out yesterday. A huge thanks to everyone that came out to pitch in!

Here are some pictures to recap the projects... Randy's team hung the curtain to the workshop to create more of a barrier between the front of the house and back of the house and to control dust levels a bit more. We'll be finishing the top of the wall soon, but the hard part's already done. Karen, Donald, Tom, Violet, and James framed the doorway to the Media Lab and Bio Lab and hung the door for that area. Next step is AC!


Michelle and Mary's team cleaned out project storage and moved the shelves over so that Neils could put the flammables cabinets in that area. That allowed all of us, with the help of Adam and Nathan, to clean up the workshop and really tidy up. They also sorted out all of the laser cutter raw materials and cut them down to a usable size on the table saw.






For the portal clouds, JW, Nathan, and Kat rolled an awesome $1 solution for controlling the WS2812 clouds with an ATtiny and a programming header. The schematics and board layout are included too. We used highlowtech's guide to programming the ATtiny85s, with the help of an Instructables guide on driving LEDs with them that provided some supplemental information. There was an issue with setting a fuse in the ATtiny to get the timing right, which we ended up having to change manually with avrdude. Maybe that had something to do with us using the internal clock, or the ATtiny-10... Anyway, more clouds coming soon :)






Thanks again to everyone who came and I'm looking forward to the next one!

Motobrain: Interesting Investigation Concludes

I've had a problem with the way Motobrain calculated current flows for quite some time. Basically, it always read a little higher than I expected it to, if the textbooks are to be believed. Furthermore, one half of the board always read about 10% higher than the other half. It is not very unexpected that the "textbook" calculation and real life are a bit out of sync. Still, I wanted to know why the error was inconsistent between the two halves of the unit. That part was a bit unusual.

Normally, the way you go about solving an issue like this is to exclude things until the problem is gone. First, I excluded the Power board, the PCB with all the high-current flow, heavy copper, and power transistors (shown right). I did this using the test jig (right, below) I designed to test all the Motobrains that come out of the factory. The MCU board (the board with the sensors, microcontroller, and Bluetooth radio) plugs into the jig and is given a series of test signals to confirm it is working. These test signals showed a similar error, where the same half of the board reported a higher current flow than the other. I concluded that the problem was clearly with the MCU board.

Since then, I've spent a couple of months looking over the schematic and PCBs for flaws that would explain the issue. Countless measurements placed the "error" somewhere earlier in the signal path than I could test, though. As luck would have it, the Motobrain design for the signal path in question has a series of amplifiers, which means the signals get smaller the earlier in the signal path we go. It got to the point where I don't own test equipment sensitive enough to do a useful measurement; the best voltage-measurement instrument I own can't resolve the signal. Well, the Motobrain MCU board is sensitive enough, but I was trying to exclude it, so obviously I couldn't use it.

Rather than beat my head against the same old wall today, I decided to focus on more practical concerns and calibrate the Motobrain to output accurate results. This means I was going to "fix" the firmware to correct for the nominal read error from the sensor. To do this, I needed to take some precise measurements at a series of calibrated current flows and graph them out. Conveniently, both sides of the board show a linear deviation from the true reading, which makes it very easy to null out in the firmware.
Fixing it was as easy as taking the current measurement and dividing it by 1.11 or 1.22, depending on the side of the Motobrain. I did this and updated the Motobrain on the bench. The readings were all accurate and I was pleased. Case closed... then again, while I'm here, why not beat my head against the wall some more?!?
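For the curious, nulling out a linear error like this amounts to a one-parameter least-squares fit. Here's a sketch, with made-up numbers standing in for the calibrated current flows (only the 1.11 and 1.22 gains come from the actual boards):

```python
# Fit measured = gain * true by least squares (no offset term), then divide
# the gain back out in "firmware". The data here is synthetic.

def fit_gain(true_amps, measured_amps):
    """Least-squares gain for a zero-offset linear model."""
    num = sum(t * m for t, m in zip(true_amps, measured_amps))
    den = sum(t * t for t in true_amps)
    return num / den

true_amps = [1.0, 2.0, 5.0, 10.0, 20.0]
half_a = [1.11 * t for t in true_amps]   # one half of the board reads ~11% high
half_b = [1.22 * t for t in true_amps]   # the other half reads ~22% high

gain_a = fit_gain(true_amps, half_a)     # recovers ~1.11
gain_b = fit_gain(true_amps, half_b)     # recovers ~1.22

corrected = [m / gain_a for m in half_a] # matches true_amps
```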

Always a glutton for punishment, I decided to compare the fresh readings to the historical data I had collected some weeks ago when I first built this Motobrain. Every Motobrain is run through a battery of tests, and the results are cataloged and stored electronically in case they may be informative in the future. I figured that if I see some failures down the road, I may find a pattern in these data to help explain things. So, I pulled out the file for the Motobrain I had just collected these calibrated current flows from and compared them.

What I found surprised me. Like I had observed before, the errors were certainly similar, and the same half of the board was high relative to the other. What surprised me was the magnitude of the error: the errors were much larger on the test jig than they were with the actual Motobrain Power board. It suddenly occurred to me that I had done a poor job excluding the Power board from a role in this issue. The simplest thing to do was to indict the test jig in the error I was seeing. About 3 minutes and a couple of quick measurements later, I was able to confirm the test jig was responsible for introducing all the errors I was seeing on the Motobrain MCU board when in the test jig. This means the MCU board was actually excluded from guilt in the error reading after all. By pure coincidence, both the Power board and the test jig were introducing the same type of error on the same sensors of the MCU board. Flaws on both the test jig and the Power board that affected the MCU board in the same way were beyond my simplistic expectations. I did that first measurement on the test jig, got the reading I was expecting, and stopped looking. In science, this is called "confirmation bias", and I fell for it hook, line, and sinker!

So, what was the actual problem? I still don't have sensitive enough instruments to be absolutely certain if I don't use the Motobrain MCU board to do the measurement, but now that I trust those sensors again, I have identified the likely issue. It is minor differences in the way I designed the Power board itself. The path the electrons take is about 900µm (900 micrometers) longer on the half that reads higher. That distance increases the resistance of the flow path by about 66µΩ, or 0.000066Ω (also known as half a bee's dick of resistance). The signals we are measuring are extremely small, though, and every little bit of resistance matters. Normally I don't need to worry about such differences because I give myself much larger margins of error, but Motobrain's high current capacity obligates me to an extremely low output impedance and means that I do need to be a bit more thoughtful. Oops, my bad.
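That 66µΩ figure is easy to sanity-check with R = ρL/A. The copper resistivity below is the standard textbook value; the trace cross-section is my assumption of a plausible heavy-copper power trace, not a measurement from the actual board:

```python
# Back-of-the-envelope check of the extra trace resistance: R = rho * L / A.
RHO_CU = 1.68e-8          # ohm-meters, resistivity of copper at room temp
delta_len = 900e-6        # m, extra path length on the high-reading half
cross_section = 0.23e-6   # m^2, ASSUMED cross-section (~0.23 mm^2 of heavy copper)

delta_r = RHO_CU * delta_len / cross_section
print(f"extra resistance: {delta_r * 1e6:.0f} uOhm")  # lands right around 66 uOhm
```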

Props and costuming - Building an Ultron helmet

Hello, Freeside readers, and welcome to my first blog post!

My name is Michelle Sleeper, and I am a prop and costume builder in Atlanta, working primarily out of Freeside's space. I have been building costumes and plastic space guns since 2001, and have been a member of Freeside since 2013.

My most recent major project was to upgrade a costume I built last year of the Marvel Comics character Ultron. The costume owner wanted a new and improved helmet, made of cast resin and full of all sorts of lights. It was a big and ambitious project, and I was very excited to get started.



Here's how we got there.

From the outset we decided that we wanted the master sculpt to be 3D printed - but for those of you familiar with 3D printing, you know that extremely large prints are difficult, if not impossible, to produce. Most often, you will have to break your model up into many different segments, which you then assemble like a 3D jigsaw puzzle. We opted not to do that, and instead outsourced to a professional 3D printing company based in Florida called TheObjectShop. They have a Zcorp 650, which is a very large printer that prints in a plaster-like material, which is then hardened with cyanoacrylate, AKA super glue.

The resulting print, while expensive, was absolutely phenomenal.


Like all 3D prints, the surface had a texture that was unsuitable for our needs. I set about cleaning up the surface until it was as smooth as I could get it, a process which took about two and a half weeks. The process is simple - spray the piece with filler primer, fill any large problem areas with Bondo or spot filler, and use increasingly finer grits of sandpaper - but extremely tedious and time consuming. I started at 80 grit to knock down some of the bigger problem areas, and worked my way up to 800 grit wet sanding. The result was a helmet that was nearly flawless.


Now that our master sculpt was completed, we had to create a 2 part jacket mold out of silicone. This would allow us to produce many different copies in urethane resin later down the line. Urethane resin is lighter weight and more sturdy than the brittle plaster 3D print. These are important factors, considering it would be worn for 6-8 hours a day (if not more) and require a bunch of electronics glued and bolted inside of it.

To create the 2 part mold, first we have to make a parting wall all the way around the helmet, which will be the interfacing layer where the 2 sides of the silicone molds touch. We use the end of our Xacto knife to create little bumps all along the edge, which are registration keys that help the two halves line up properly.


Once the first half of the silicone mold is applied, we flip the whole thing over, remove the parting wall, and apply a coat of releasing agent before we apply the second half of silicone. The releasing agent is absolutely critical - silicone will not stick to anything except other silicone. Without the releasing agent, we would essentially create a big silicone bowl which would be next to impossible to use for our purposes.



Once both halves of the silicone mold were created and fully cured, we created an outer rigid mother mold. This is used to keep the silicone mold held together, once the master is removed and the mold is hollow. It is also applied in two halves, and like the silicone we use a releasing agent when creating the second half.



To make the hollow casting, we use a technique called rotocasting or slush casting. This is where you pour a bit of your urethane resin into the hollow mold and rotate it around so that it evenly coats all of the surfaces with a thin layer. This is done 4-5 times using several small batches of urethane resin, so that we ensure every surface has an even thickness. Because the mold weighs around 10 to 15 pounds before we put a drop of resin into it, and because each layer requires about 5 minutes of tossing it around, I decided to build handles into the mother mold. This makes the mother mold much easier to hold onto during the already strenuous rotocasting process.



After you are finished casting, it's time to remove the mother and the silicone mold. What you are left with is a perfect reproduction of your master sculpt in a much lighter material. The casting process itself has a bit of a learning curve, as every mold will be different. Certain areas will come out thinner than others, and the exact amount of material you need for each batch will depend on a lot of factors. What this means is that the first few castings will tend to be "duds", meaning they are unsuitable for your ultimate purposes - in our case, a wearable costume.


 


However, you can still dress up one of these bad casts and stick it on a mannequin to live in the space!



While we were working on sculpting the master and producing the molds, we were also working on the electronic guts that would go into the helmet. Specifically, there would be a set of LEDs set into laser-cut acrylic for the eyes, and a custom-made 8 x 24 LED matrix for the mouth.

The eye LEDs are rather simple - I drew up a 2D design to bridge the width of the helmet's eyes, and then cut that out of 2 layers of opaque white acrylic. The inner layer was made of 6mm acrylic, into which the LEDs were set and glued in place; the outer 3mm layer was flat. The result is a set of menacing, glowing red eyes.


The mouth LED matrix, on the other hand, is worthy of its own blog post, which I will be putting up later. The short version is that we used an Arduino Micro connected to three MAX7219 chips, which are designed to control an 8 x 8 matrix. The matrix had to be designed and wired up by hand, a process which took about 3 weeks of work. After some trial and error with the MAX7219 board kits we used, the whole thing was put together and worked flawlessly. Here is a test video of the center matrix in our temporary holder.
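For anyone curious how the MAX7219 is driven before the full matrix post goes up: each chip takes 16-bit packets over SPI (a register-address byte, then a data byte), and cascaded chips just shift packets through one another. Here's a hardware-free Python sketch that composes those packets per the MAX7219 datasheet's register map - illustrative only, since the actual build used an Arduino Micro and its libraries:

```python
# MAX7219 configuration registers, per the datasheet. Registers 0x01-0x08
# are the eight digit/row registers.
DECODE_MODE, INTENSITY, SCAN_LIMIT, SHUTDOWN = 0x09, 0x0A, 0x0B, 0x0C

def packet(register, data):
    """One 16-bit MAX7219 packet: address byte, then data byte."""
    return bytes([register & 0x0F, data & 0xFF])

def chain_write(register, values):
    """Bytes to shift out for a daisy chain, one data value per chip.
    The packet for the last chip in the chain is shifted out first."""
    return b"".join(packet(register, v) for v in reversed(values))

# Init for three cascaded chips: raw (no BCD) decoding, scan all 8 rows,
# mid brightness, then leave shutdown mode.
init = [chain_write(DECODE_MODE, [0x00] * 3),
        chain_write(SCAN_LIMIT, [0x07] * 3),
        chain_write(INTENSITY, [0x08] * 3),
        chain_write(SHUTDOWN, [0x01] * 3)]

# Light row 1 of each chip with three different 8-pixel patterns.
row1 = chain_write(0x01, [0b10101010, 0b11110000, 0b00001111])
```

On the Arduino, a library handles exactly this packet composition; each `chain_write` result corresponds to one latch of the LOAD/CS line.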



After the matrix was finished, a cover was laser cut out of 1mm clear acrylic and installed into the mouth. The LEDs were transferred into a similar housing for their permanent installation, and all of the boards were put into craft foam holders for protection and installed into the helmet. The results were nothing short of perfect!



At this point the project was finished and ready to be worn, but like any good project it has sparked a whole host of new ideas and "how to do it better"s.

Until next time!

Want to see more photos? Check out the complete build process on my Facebook page.

The JAM: Joy's Art Machine (First Build Recap)

Background

The JAM (Joy's Art Machine) is a machine that distributes art. This project was fully funded by the Alchemy community. We are on track to collect somewhere between 200-300 pieces of art to distribute, including works by Catlanta and Evereman. We are actively collecting works of art, so if you're interested in contributing, you can email Joy at joyogozelec@gmail.com.

The JAM explores two of the 10 core Burning Man principles: Decommodification and Gifting. We express Gifting by distributing art through the machine. Gifting trees are a familiar sight at Burns, but suffer from accumulating trash or trinkets. By gifting art (a gift in itself) we create a sort of on-demand gifting tree. We express Decommodification by not allowing the JAM to accept money. Instead, art is distributed by the machine on a timer. The machine lights up, and you push a button to receive art.


If you're interested in learning more about the project or want to get involved, check out our Meetup calendar for the next meeting or build, or email me at emptyset@freesideatlanta.org.

Last Saturday I worked with Brian on the frame of the front and back 4x8-ft sheets.  We're using pressure-treated wood, so the frame is going to hold the structure firm, since the sheets have a tendency to warp a little.  Pressure-treated wood is also going to require a little more research to figure out what we need for painting.

We decided to start by marking exactly where we wanted to drill holes for the nuts and bolts to go.  We went with a staggered pattern.  We first created a template using a scrap piece of 8-ft long wood, and then scored the angle iron to make our markings.  Every hole is marked, so for the next Build one or two volunteers can simply drill out all the holes.  Essentially, we're creating a kit to assemble the JAM.

After marking all the steel, we made sure to label each piece with "top", "bottom" and whether it was a left or right part (if you face the "front" or "back" side of the machine).  This will help us keep track of where everything needs to go so nothing gets mixed up in assembly.

Then, we put down the left and right frame steel, and put down a 4x8-ft sheet.  We held everything together and then marked off some angle iron to cut for the top and bottom parts of the frame.  Brian also cut out some tabs so that the pieces would sit flush against the sides.

After this, Brian was ready to weld.  He did a few spot welds with the 4x8-ft sheet in place, and then we removed the sheet and completed the rest of the welding.  We now have two frame parts that look like a bed frame!  The JAM is going to be huge.

In other updates, we've sent out an order from Adafruit for the button, Arduino, and LED light strips.  That should arrive in the next week or so, and then prototyping the controller and timer can begin.  At the next build (tonight!), we have holes to drill in the steel frames, the carousel to prototype, an inch to shave off the "side" panels, and research to do on the paint.  For those that are artistically inclined, we need some help with vector graphics and creating stencils that we'll be using to paint the sides.  There's a little something for everyone!






JAM: Joy's Art Machine - Design Meetup (Recap)

Background

The JAM (Joy's Art Machine) is a machine that distributes art. This project was fully funded by the Alchemy community. We are on track to collect somewhere between 200-300 pieces of art to distribute, including works by Catlanta and Evereman. We are actively collecting works of art, so if you're interested in contributing, you can email Joy at joyogozelec@gmail.com.

The JAM explores two of the 10 core Burning Man principles: Decommodification and Gifting. We express Gifting by distributing art through the machine. Gifting trees are a familiar sight at Burns, but suffer from accumulating trash or trinkets. By gifting art (a gift in itself) we create a sort of on-demand gifting tree. We express Decommodification by not allowing the JAM to accept money. Instead, art is distributed by the machine on a timer. The machine lights up, and you push a button to receive art.


If you're interested in learning more about the project or want to get involved, check out our Meetup calendar for the next meeting or build, or email me at emptyset@freesideatlanta.org.

First off, thanks to everyone who attended!  We had an impressive turnout of both Freeside members and folks from beyond, and there was much information exchanged and discussed.  Rob brought some pizza, and Joy and I brought some chocolate chip cookie dough hummus, spicy black bean hummus, and beer (of course!).


All apologies if I forget exactly who contributed what to the discussion!  Everyone had excellent ideas and really helped us commit to a workable design.

We spent a little time discussing a couple of key components of the project.  First, we discussed the housing itself.  Since we learned that the JAM won't be consumed in flames at the end of the event like we had planned, we had to rethink the materials.  On the upside, this means that we have a lot more liberty to decorate and paint the walls of the JAM (no worries about fumes released from burning paint).  Eventually, we settled on using 4x8-ft sheets of lauan with some kind of frame backing it (somebody suggested using 3/4-in steel square tubing).  Zach suggested that door hinges have been used on similar projects (like the Tardis) at past burns with good success, as it's just a matter of dropping the pins back in to secure two walls.  I'm following up with a few other folks via email to see if we can make a final design.

After much discussion and brainstorming about the use of boxes, we concluded that a carousel-style design would be best, both in terms of loading the machine (cutting down on reloading frequency, due to a greater number of slots or "wedges") and in terms of being able to avoid using boxes altogether.  One of our members, Don, suggested the carousel concept.  For art at risk of getting tangled in the machine (e.g., felt or knit items), we would simply put it in a plastic bag.  The carousel would rotate and the slot would move over to an opening that would allow the art to drop out.

Since the base of the machine is 4x4 ft, if we use something like a wheel with a diameter of 3.8 ft, then this works out to about 23-24 wedges (if each wedge takes up about 6 in. of the carousel's circumference).  If we maintain a dispense rate of about one wedge per hour, and slow it down in the early morning hours, we can probably look to reload the machine once per day, which is just about ideal for the Alchemy environment.
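Here's the back-of-the-napkin math, for anyone who wants to tweak the wedge size or wheel diameter:

```python
import math

# Carousel sizing: a 3.8 ft diameter wheel, with each wedge taking up
# about 6 in. of circumference at the rim.
diameter_in = 3.8 * 12                    # 45.6 in
circumference_in = math.pi * diameter_in  # ~143 in
wedges = circumference_in / 6.0           # ~23.9, i.e. 23-24 wedges

# At roughly one wedge per hour, one full load lasts about a day.
hours_per_reload = wedges
```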
I spent some time tonight and added a few things to our Adafruit order: LED strips, Arduino, a big red LED button, power supply and connectors.  This should be enough to independently program the timer and button/lighting mechanism.
We're still not completely sure how exactly to drive the carousel, but we have enough to build a prototype that can be operated manually.  Kevin and Edward, who helped build the prototype for the Infinity Portal, threw their hats in to help construct the carousel.

Stay tuned for an announcement about the next meeting!

Steganography 101

Disclaimer: This is a blog post about a CryptoParty presentation, the contents of which should not be construed as official Freeside statements.  Any opinions presented in this blog post by the author do not in any way represent an official endorsement of these opinions by Freeside Technology Spaces, Inc., nor are they intended to reflect the views of Freeside and its membership.

Recently, Freeside hosted a CryptoParty where I gave an introductory presentation on steganography.  Like all my CryptoParty presentations, this wasn't very technical, but I did introduce some (very) basic techniques.

The first tool that everyone should know about is exiftool.  exiftool reads and writes the metadata sections of a variety of image formats.  I showed an excellent illustrated example of Exif metadata in the JPEG format, with some great diagrams showing how a JPEG file's bytes are laid out.  There's also C# .NET code included to extract and modify this data, if perl's not your thing (Note: perl should not be your thing).
There are many uses for Exif metadata.  The most common use is by camera manufacturers.  You may have heard that digital photography can record data and store it in the photo itself.  This is how and where it happens.  It's not just a timestamp, either.  Your camera, especially a smartphone camera, can store information like GPS coordinates, your phone firmware version, the OS it's running, model number, IMEI, and other information that can uniquely identify your camera as the source of the photo.

Facebook, Google, and other social media use this feature to conveniently tag the location where the photo was taken when you upload it to their service.  This is great when you want to let your friends know that the picture of you standing in front of the Grand Canyon was taken at the Grand Canyon (for those friends of yours that don't know what the Grand Canyon looks like).  It's less awesome when you've called in sick to work on Thursday and post a picture of a cool-looking bird on Saturday, especially if you work in Atlanta and that bird was on the outskirts of Panama City.  Your employer can put two and two together.

Thankfully, there are tools to strip out metadata from images.  Consider using some before posting to social media!  There's always opt-out, too (you don't have to post everything to Facebook).
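exiftool can do the stripping itself (`exiftool -all= photo.jpg`).  If you'd rather see what that involves, here's a stdlib-only Python sketch that walks a JPEG's segment structure and drops the APP1 (Exif) segments.  It's a simplification - real strippers also handle XMP, IPTC, and other metadata blocks:

```python
import struct

# A JPEG is a sequence of segments: a 2-byte marker, a 2-byte big-endian
# length (which includes itself), then the payload. Exif lives in APP1
# segments (marker 0xFFE1). We copy everything else through untouched.

def strip_exif(jpeg_bytes):
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        marker = jpeg_bytes[i:i + 2]
        if marker == b"\xff\xda":            # start of scan: copy the rest verbatim
            out += jpeg_bytes[i:]
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != b"\xff\xe1":            # keep everything except APP1/Exif
            out += segment
        i += 2 + length
    return bytes(out)
```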

You can use exiftool to extract the information from some of the images in this blog post.  For example, with the "Snakes are Awesome" image, we can run the following command at the terminal:

$ exiftool -l snake.jpg
...
User Comment

...

Note: "$2" was removed when I wrote the value to the image, because $2 is a variable in Bash shell and the command was looking to substitute a value for it (which was nothing).

In this way, you can "hide" a URL in a picture.  It's not very well hidden, but a person or software tuned to detect this sort of thing can fish it out.  Still, it's a great way to communicate a "secret" with others that's not immediately obvious.  There's also no reason the data you store in metadata can't be encrypted.

Text steganography is the next step up in hiding information in plain sight.  For the presentation, I demo'd spammimic, an online tool that takes a string and hides it within spam, a fake PGP signature, or even characters that make it look Russian!  Let's say I want to send the message, "The only limit is yourself" - spammimic can make this look like a spam email:
Dear Friend ; Thank-you for your interest in our publication . If you no longer wish to receive our publications simply reply with a Subject: of "REMOVE" and you will immediately be removed from our club ! This mail is being sent in compliance with Senate bill 1627 ; Title 6 , Section 303 ! This is NOT unsolicited bulk mail [...]
The way this generally works is by taking the characters and mapping them to known snippets of spam.  Note how the punctuation is always space-punctuation-space.  If you know about spammimic, it's not difficult to write some software to detect and test for this sort of thing.  Now, go through your spam folder and see which ones have hidden messages!
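As a concrete (and very naive) example of such detection software - this keys only on the floating punctuation, which is my own heuristic, not spammimic's actual encoding:

```python
import re

# Flag text where most punctuation "floats" between spaces, e.g.
# "our publication . If you no longer wish" - a telltale of spammimic output.

def looks_like_spammimic(text, threshold=0.5):
    floating = re.findall(r" [.,;:!?](?=\s|$)", text)
    total = re.findall(r"[.,;:!?]", text)
    return bool(total) and len(floating) / len(total) >= threshold

sample = ('Dear Friend ; Thank-you for your interest in our publication . '
          'If you no longer wish to receive our publications simply reply '
          'with a Subject: of "REMOVE" and you will immediately be removed '
          'from our club !')
```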

Computers are basically machines that process strings, so anything you do with text is probably well suited to reverse engineering and, therefore, easily detected by three-letter government agencies.

What about images within images, man?

There's a very simple technique to hide a zip file within a JPEG or GIF file.  The reason this works is that JPEG/GIF files are interpreted and identified by the header, whereas zip files are read from the end of the file.  So, in browsers and operating systems, the image will be rendered while the zip file remains obscure.

This technique is not without its drawbacks.  For starters, depending on the data, you can really blow up the size of a JPEG or GIF (typically less than 500K in size, and that's being generous!).  A single PDF file could be 1-2MB.  So, a naive software detector can simply scrape social media sites like Tumblr and Twitter and put aside images in excess of a certain size threshold.  Still, you have to know to look for that.  Most casual human observers will see a picture and think nothing of it.

Here's how to execute the technique:

$ cat taxiderpy_original.jpg >> taxiderpy.jpg
$ zip secret.zip microsoft-spy.pdf
$ cat secret.zip >> taxiderpy.jpg

$ ls -sh1 taxiderpy*
1.6M taxiderpy.jpg
40K taxiderpy_original.jpg

This does nothing more than use the *nix command cat to append the zip file to the end of the image.  In this case, we have appended a PDF file with Microsoft's menu of services to law enforcement to the back of an image of a taxiderpy polar bear.  As you can see from the output of ls, the file size has increased from 40K to 1.6M.
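On the detection side, you don't even need the size heuristic: a clean JPEG normally ends at its end-of-image marker (FF D9), while a zip archive ends with an End Of Central Directory record whose signature is "PK\x05\x06".  A short Python sketch of a tail check:

```python
# Detect the cat-append trick: a JPEG that does not end cleanly at FF D9
# but does carry a zip End Of Central Directory signature near its tail.

ZIP_EOCD = b"PK\x05\x06"

def has_appended_zip(data):
    """True if the file looks like a JPEG with a zip stuck on the end."""
    if not data.startswith(b"\xff\xd8"):
        return False                  # not a JPEG at all
    if data.endswith(b"\xff\xd9"):
        return False                  # ends cleanly at EOI: nothing appended
    # EOCD is 22 bytes plus an optional comment of up to 65535 bytes.
    return ZIP_EOCD in data[-(22 + 65535):]
```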

Note: Blogger was able to detect that something was off about the taxiderpy image when attempting to upload it to this post.  To fetch the actual file, download the original presentation.

Extraction is easy - you simply attempt to unzip the JPEG or GIF.  Note that unzip warns about some extraneous data at the start of the file, which is the image, of course:

$ unzip taxiderpy.jpg
Archive: taxiderpy.jpg
warning [taxiderpy.jpg]: 37425 extra bytes at beginning or within zipfile
(attempting to process anyway)
inflating: microsoft-spy.pdf
$ open microsoft-spy.pdf

There are more advanced techniques that hold up better to closer scrutiny.  For example, the same technique that professional photographers use to include a watermark can be used to hide a URL or other piece of data in a photo.  Video is another great medium for hiding information.  In a complex animation or sequence, you could flash some secret text to the screen in a subtle way.  The "key" the recipient needs to read the data is the exact frame number.
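To make the watermark-style idea concrete, the classic version is least-significant-bit (LSB) embedding: every byte of pixel data gives up its lowest bit, which changes a color channel by at most 1 but carries one payload bit.  Here's a sketch over a raw bytearray of pixel values (the message below is a made-up example, and a real tool would read and write a lossless format like PNG, since LSBs don't survive JPEG recompression):

```python
# LSB steganography over raw pixel bytes: one payload bit per pixel byte.

def embed(pixels, message):
    bits = "".join(f"{b:08b}" for b in message)
    assert len(bits) <= len(pixels), "image too small for message"
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(bit)   # overwrite the lowest bit
    return bytes(out)

def extract(pixels, length):
    bits = "".join(str(p & 1) for p in pixels[:length * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

pixels = bytes(range(256)) * 4        # stand-in for image data
stego = embed(pixels, b"hidden-url.example")
```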

For more good times, come to the next CryptoParty!  We also archive all the past presentations and information discussed at CryptoParty on our wiki.  I'll be trying to get these into blog post format, to fill in the blanks between the slides, as it were.