Towards an inexpensive open-source desktop CT scanner

[Image: rsz_dsc_0803]

A bit of a story, and then a lot of pictures — by far the most interesting class I’ve ever taken was Advanced Brain Imaging in grad school. As a hands-on lab class, each week we’d have a bit of a lecture on a new imaging technique, and then head off to the imaging lab, where one of the grad students would often end up in the Magnetic Resonance Imager (MRI) and we’d see the technique we’d just learned about demonstrated. Before the class I was only aware of the structural images that most folks think of when they think of an MRI, as well as the functional MRI (or fMRI) scans that measure blood oxygenation levels correlated with brain activity and are often used in cognitive neuroscience experiments. But after learning about Diffusion Tensor Imaging, spin-labeling, and half a dozen other techniques, I decided that the MRI is probably one of the most amazing machines that humans have ever built. And I really wanted to build one.

MRI is a spatial extension of nuclear magnetic resonance spectroscopy (NMR), and requires an extremely homogeneous, high-intensity magnetic field to function — far more uniform than you can achieve with permanent magnets or electromagnets. For MRI, this uniformity is usually accomplished using a superconducting magnet that’s cooled to near absolute zero using liquid helium. This, of course, makes it extremely difficult to build your own system. While folks have been able to use large electromagnets for NMR (averaging out field inhomogeneities by spinning the sample very rapidly while it’s inside the magnet), I haven’t seen anyone demonstrate an imaging system built around an electromagnet. There are some experimental systems that try to use the Earth’s magnetic field, but the few systems I’m aware of are very low resolution, and very slow.

Volumetric biological imaging has two commonly used tools — MRI and Computed Tomography (or CT), sometimes also called Computed Axial Tomography (or “CAT”) scanning — although ultrasound, EEG, and a bunch of other techniques are also available. Fast-forward about two years from my brain imaging class (to about three years ago): I had just started my first postdoc, and happened to be sitting in on a computational sensing / compressed sensing course.

[Image: rsz_1dsc_0692]

About the same time I happened to be a little under the weather, and stopped into a clinic. I thought I’d torn a muscle rock climbing, but after examining me the doctor at the clinic thought that I might have a serious stomach issue, and urged me to visit an emergency room right away. As a Canadian living abroad, this was my first real contact with the US health care system, and as exciting as getting a CT was (from the perspective of being a scientist interested in medical imaging), from a social perspective it was a very uncomfortable experience. Without really going into details or belaboring the point, universal health care is very important to me, and is (what many consider) a basic human right that most folks in the developed world have access to. My mom was diagnosed with cancer when I was young, and we spent an awful lot of time in hospitals. She and my dad still do, after 15 years and more surgeries than anyone can count. It’s frightening to think of where we’d all be if her medical care hadn’t been free. And so when a bill showed up a month or so after my emergency room visit for nearly $5,000 (most of which was covered by a health insurance company), I nearly needed a second trip to the emergency room, and I thought a lot about the many folks I knew, including my girlfriend at the time, who didn’t have any form of health insurance and basically couldn’t go to the doctor when they were ill for fear of massive financial damage.

With all of this in mind, knowing the basics of medical imaging, and having just discussed computed tomography and the Radon transform in the class I was sitting in on, I decided that I wanted to try and build an open source CT scanner, and to do it for a lot less than the cost of me getting scanned, by using rapid prototyping methods like laser cutting and 3D printing.

It’s been a few years since I’ve had access to a laser cutter, and they’re one of my favorite and most productive rapid prototyping tools. In the spirit of efforts like the Reprap project, I enjoy exploring non-traditional approaches to design, and designing machines that can be almost entirely 3D printed or laser cut. Fast-forward almost two and a half years to last month, and the local hackerspace happened to have a beautiful laser cutter generously donated. This is the first cutter I’ve had real access to since grad school, and with the CT scanner project waiting for a laser cutter and a rainy day for nearly two years, I immediately knew what I wanted to have a go at designing. On to the details.

[Image: rsz_dsc_0732]

From a high-level technical standpoint, a computed tomography or CT scanner takes a bunch of absorption images of an object (for example, x-ray images) from a variety of different angles, and then backs out 3D volumetric data from this collection of 2D images. In practice, this is usually done one 2D “slice” at a time: first by rotating an x-ray source and detector around the object, taking a bunch of 1D projections at tens or hundreds of angles, and then using the inverse Radon transform (filtered back-projection) to reconstruct a given 2D slice from this collection of 1D projections. One can then inspect the 2D slices directly to see what’s inside something, or stack the slices to view the object in 3D.
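
To make the reconstruction step concrete, here’s a minimal sketch in Python using scikit-image (illustrative only, not the code driving this scanner). It simulates the whole pipeline on a test image: the forward Radon transform produces the sinogram that a scanner would physically measure, and filtered back-projection recovers the slice from it.

```python
# Minimal reconstruction sketch using scikit-image (illustrative only; a real
# scan builds the sinogram from detector counts rather than a test phantom).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Stand-in for a slice of an object: the classic Shepp-Logan head phantom.
slice_image = rescale(shepp_logan_phantom(), scale=0.5)

# Forward step (what the scanner measures): 1D absorption profiles taken at
# many gantry angles, stacked into a 2D sinogram.
angles = np.linspace(0.0, 180.0, 100, endpoint=False)
sinogram = radon(slice_image, theta=angles)

# Inverse step (what the software computes): filtered back-projection
# (inverse Radon transform) recovers the 2D slice from the 1D projections.
reconstruction = iradon(sinogram, theta=angles)

print("RMS reconstruction error:",
      np.sqrt(np.mean((reconstruction - slice_image) ** 2)))
```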

[Image: rsz_dsc_0795]

Mechanically, this prototype scanner is very similar to the first generation of CT scanners. An object is placed on a moving table that passes through the center of a rotating ring (or “gantry”). On one side of the ring there’s an x-ray source, and on the other side a detector, both mounted on linear stages that can move up and down in unison. To scan an object, the table moves the object to the slice of interest, the gantry rotates to a given angle, and the source and detector are then scanned across the object to produce a 1D x-ray image. The gantry then rotates to another angle, and the process repeats, generating another 1D image from a slightly different angle. After generating tens or hundreds of these 1D images from different angles, one backs out the 2D image of that slice using the inverse Radon transform. The table then moves the object slightly, and the process is repeated for the next slice, and for the hundreds of other slices that are often taken in a medical scan. Modern scanners parallelize this task by using a fan-shaped beam of x-rays and hundreds of simultaneous detectors to scan someone in about a minute, but the first generation of scanners could take several minutes per slice, meaning a scan with even tens of slices could take an hour or more.
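
That translate-and-rotate sequence maps naturally onto a few nested loops. Here’s a purely illustrative sketch of the acquisition loop in Python; the hardware functions (move_table, rotate_gantry, move_source_and_detector, read_counts) are hypothetical stand-ins, not the scanner’s actual firmware.

```python
import numpy as np

# Hypothetical hardware stubs, standing in for the real motion and detector code.
def move_table(slice_index): pass
def rotate_gantry(angle_deg): pass
def move_source_and_detector(step): pass
def read_counts(dwell_time_s): return 0  # would return detector counts

def scan(num_slices, num_angles, num_steps, dwell_time_s):
    """Illustrative first-generation (translate/rotate) CT scan loop."""
    sinograms = np.zeros((num_slices, num_angles, num_steps))
    for s in range(num_slices):
        move_table(s)                              # index the object to this slice
        for a in range(num_angles):
            rotate_gantry(180.0 * a / num_angles)  # rotate the gantry to the next angle
            for x in range(num_steps):
                move_source_and_detector(x)        # translate source + detector in unison
                sinograms[s, a, x] = read_counts(dwell_time_s)
    return sinograms  # one sinogram per slice, ready for filtered back-projection
```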

[Image: rsz_dsc_0703]

Designing an almost entirely laser-cuttable CT scanner with four axes of motion, one being a large rotary gantry, was a lot of fun and an interesting design challenge. I decided that a good way to rotate the gantry would be to design it as a giant cog that sits atop a system of drive and idler cogs that can slowly index it to any angle.

[Image: rsz_dsc_0762]

One of the issues with laser cutting a giant cog is finding something to mate with it that can transfer motion. I’ve press-fit laser-cut gears onto motor shafts before (like with the laser-cut linear CNC axis), but in my experience they can slip or wear rather quickly, and I like being able to disassemble and reassemble things with ease. I decided to try something new, and designed a laser-cuttable 2.5D timing pulley that mates with the main rotary cog, and securely mounts on a rotary shaft using a captive nut and set screw. On either side of the shaft there’s space for a bushing that connects to the base, and inside the base there’s a NEMA17 stepper from Adafruit that transfers motion to the drive shaft using a belt and timing pulleys.

[Image: rsz_dsc_0723]

A small lip on the base acts as the other edge of the timing pulley, and helps keep the main rotary axis aligned.

[Image: rsz_dsc_0747]

Inside the rotary gantry are two linear axes 180 degrees apart — one for the source and the other for the detector. The gantry is about 32cm in diameter with a bore of about 15cm, and the gantry itself is about 8cm thick to contain the linear axes.

[Image: rsz_dsc_0751]

Each linear axis has a small carriage that contains mounts for either the source or detector, some snap bushings for two aluminum rails, and a compression mount for the timing belt. Each axis also has an inexpensive NEMA14 stepper and an idler pulley. Here, I’m using a very small solid-state high-energy particle detector called the Type-5 from Radiation Watch, which can be easily connected to an external microcontroller. It’s really easy to work with, and saves me from having to use the photomultiplier tube and scintillation crystal from an old decommissioned PET/CT scanner that I found on eBay.

[Image: rsz_dsc_0752]

I’m certain if the symmetry were any more perfect, it would move one to tears. The rotary gantry has to be symmetric to ensure proper balance and smooth rotation. After rotating the gantry 180 degrees, here you can see the other linear axis, intended for the source. It currently just contains a mounting pattern with four bolts that a source will eventually attach to.

Safety is very important to me. In medical diagnostic imaging it’s often important to have an image as soon as possible, but that’s not the case for scanning non-living objects purely for scientific or educational interest. This chart from xkcd shows the radiation dose that folks typically absorb from everyday activities, from eating a banana or sleeping beside someone, to hopping on a plane or having a diagnostic x-ray. I’ve designed this scanner to operate at levels slightly above the natural background level, well into the blue (least intense) section of the xkcd chart, and to make use of a “check source”, which is an extremely low intensity source used to verify the functionality of a high-energy particle detector. The trade-off for this safety is acquisition time, and it will likely take a day or more to acquire data for even a small object. This aspect of the design is scalable, such that if the scanner were used in a research environment in a shielded room, folks braver than I should be able to acquire an image a good deal faster.
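
To give a sense of where the “day or more” figure comes from, here’s a rough back-of-envelope calculation. The numbers below are purely illustrative assumptions about angles, detector positions, and dwell time, not measured figures for this scanner.

```python
# Rough acquisition-time estimate. All numbers are illustrative assumptions,
# not measurements from this scanner.
num_angles = 100    # gantry angles per slice
num_steps = 100     # detector positions per angle
dwell_time_s = 10   # seconds per sample, to collect usable counts from a weak check source

seconds_per_slice = num_angles * num_steps * dwell_time_s
print(seconds_per_slice / 3600.0, "hours per slice")  # ~28 hours: a day or more per slice
```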

[Image: rsz_dsc_0758]

The sandwich of four plates on either end of the linear axes contains precision mounts for the aluminum shafts, as well as a setscrew with a captive nut to hold the shafts in place.

[Image: rsz_dsc_0719]

The table itself is about 40cm long, and offers nearly 30cm of travel. It uses a lightweight nylon lead screw to index the table, with a NEMA14 drive motor located in the base.

[Image: rsz_dsc_0779]

To test out the motion and detector, I put together an Arduino shield with a few Pololu stepper controllers and a connector for the detector. The Seeed Studio prototype board I had on hand only had space for three stepper controllers, but it was more than enough to test the motion. Each axis runs beautifully — I was sure the rotational axis was going to have trouble moving smoothly given that most of the moving parts were laser cut, but it worked wonderfully on the first try, and moves so fast I had to turn down the speed lest the neighbours fear that I was building a miniature Stargate…

When I solidify all the bits that have to be in the controller, I’ll endeavor to lay out a proper board much like this prototype, but with four stepper controllers, and an SD card slot to store the image data for long scans.

[Image: rsz_dsc_0772]

For size, here you can see the Arduino and shield together on the scanning table. I’m hoping to start by scanning a carrot, move up to a bell pepper (which has more non-symmetric structure), and work up to an apple. Since time on commercial machines is very expensive, I think one of the niche applications for a tiny desktop CT scanner might be in time-lapse scans of slowly changing systems. If the resolution and scan speed end up being up to the task, I think it’d be beautiful to plant a fast-sprouting seed in a tiny pot and continually scan it over a week or two to build a 3D volumetric movie of the plant growing, from watching the roots in the pot grow, to the stalk shooting up and unfurling its first leaves. I’m sure the cost of generating that kind of data on a medical system would be astronomical, whereas the material cost of this prototype is in the ballpark of about $200, although I’m expecting that a source will add about $100 to that figure.

[Image: 1264585_4716534411740_673623925_o]

And finally, here’s a quarter-size acrylic prototype that I designed and cut in an afternoon a few weekends ago, that started the build and brainstorm process. My recently adopted rescue cat ironically loves to hang around the “cat” scanner, and has claimed nearly all of the open mini spectrometers I’ve built as toys to bat around…

Laser cutters are really amazing machines, and it’s really incredible to be able to dream up a machine one morning, spend an afternoon designing it, and have a moving functional prototype cut out and assembled later that evening that you can rapidly iterate from. Since laser cutters are still very expensive, this work wouldn’t have been possible without kind folks making very generous donations to my local hackerspace, and I’m extremely thankful for their community-minded spirit of giving.

thanks for reading!

Sneak Peek: Science Tricorder Mark 5 development pictures

I thought I’d take a moment to snap and share some pictures of the Science Tricorder Mark 5 prototype in its mid-development state. I’ve recently hit a snag with the WiFi, and have a little downtime while I’m waiting for a reply to a support e-mail.

[Image: rsz_1dsc_0647]

The form factor of the Mark 5 looks much like a smartphone. In fact, it happens to be about the same size as my BlackBerry, though ultimately it’ll be a little thicker to accommodate the size of some of the larger sensors, like the distance sensor, the open mini spectrometer, and a few others. Ultimately I think this form factor adds a lot in terms of usability over the folded design — with the Mark 1 and 2 you’d often have to hold the device at an odd angle, with the angle for scanning something usually being much different than the angle for seeing the screen. Here, I’ve moved many of the omnidirectional sensors (that happen to be thin) to the top of the device, and placed the directional sensors (which also tend to be much larger) on the bottom — the idea being that you could make use of the omnidirectional sensors in any position, and use the directional sensors much like you’d take a picture with your smartphone. This also effectively doubles the amount of exterior-facing sensor space, which is fantastic.

[Image: rsz_1dsc_0648]

Keeping things tractable is one of my central design philosophies; otherwise most of this wouldn’t be possible. This was a lesson that I learned very well with the Mark 2 — designing your own ARM-based motherboard is a lot of fun and you learn a great deal, but it’s also time consuming (even with reference designs), and as a one-person project you have to pick your battles. So in this respect, choosing the computational bits of the Mark 5 was one of the most challenging decisions, in that it has to balance capability, ease of modification, and implementation time. In terms of capability, it’s important that the Mark 5 have advanced visualization capabilities like the Mark 2, and WiFi both to move data out of the device and (eventually) to upload that data to a website that would allow folks to share their sensing data. In terms of ease of modification, I’d like folks to be able to modify and reprogram the device as easily as possible, and use it as a vehicle to explore electronics, science, and math as much as to visualize the world. In addition to all this, there are a bunch of pragmatic concerns — power consumption, development tools, product end-of-life, and so forth.

This was a very difficult choice to make, and given that there’s no perfect option, I bounced back and forth quite a bit. On one hand I thought about moving to something Arduino or Chipkit compatible, which would be very easy to program and fast to develop, but would sacrifice computational capability. On the other hand, an ARM-based system-on-a-chip would have the computational capability, and could run a piece of middleware that would make it easy for folks new to programming to modify, but the development time and development cost would be very high. The Mark 5 would likely have to move to a 4-layer or 6-layer design, which would add a barrier for folks in the open hardware community who might want to contribute or make derivatives.

In the end, I went back to an idea that I’d considered for the Mark 2, which is to use a small system-on-a-module that contains the time-consuming bits — processor, memory, WiFi, etc. — so that I can focus my attention on the project-specific bits like the sensors. There are currently not a lot of options for an extremely small system-on-a-module that includes WiFi. For the Mark 2 I had considered using a Verdex by Gumstix, and for the Mark 5 I settled on trying their Overo FireSTORM modules, which include a TI OMAP3730 processor running at 800MHz, 512MB of RAM, 512MB of flash, and onboard WiFi and Bluetooth. The modules are also very, very small, and run Linux.

After weeks of tinkering I’m still having issues connecting to the WiFi (it’s been very spotty for me: it’s only worked a few times, and most of the time the WiFi hardware isn’t even detected), and while it’s not clear whether it’s a hardware or software issue, from the Overo mailing list it appears as though this is an issue a bunch of other folks have run into. I sent off an e-mail to the Gumstix folks early last week, and hopefully I’ll hear back from them soon with some help. Hopefully after that’s sorted out I can work on the display driver, and start populating the sensors.

[Image: rsz_dsc_0683]

In addition to the touch display and a bunch of level translators, the top of the board contains an ultra low power PIC microcontroller to act as an interface between the sensors and the Gumstix, much as in the Mark 2.

[Image: rsz_dsc_0635]

Because I’m still tinkering with the Gumstix, I haven’t yet populated many of the sensors on the Mark 5, so that I can better diagnose any issues that come up. To help prototype the Mark 5’s sensor suite, and also for when I was considering making an Arduino-powered Mark 5, I designed a breakout board that’s essentially just the upper sensor section of the Mark 5. Here only the top sensors are populated, including the magnetometer, the inertial measurement unit (consisting of a 3-axis gyro, accelerometer, and internal magnetometer), the ambient humidity sensor, the ambient temperature and pressure sensor, the lightning sensor, and the GPS. Both the lightning sensor and the GPS have RF components, which I don’t have a lot of experience with, so it was very comforting to see the GPS acquire a lock and display position information accurate to within a few meters. Interested readers may also notice the footprint for the open mini spectrometer on the left side of the board. The bottom side of the board, not populated or shown here, contains the spectrograph for the open mini spectrometer, the camera, the distance sensor, a low-resolution thermal camera, a colour sensor, and a prototype for a 3D-printable linear polarimeter much like the one on the Mark 1. The Mark 5 board itself includes footprints for both a radiation sensor and a gas sensor that didn’t fit on this breakout board.

[Image: rsz_dsc_0374]

Assembly Pictures
I thought I’d include a few assembly pictures. Here’s one of the solder paste stencils, for the bottom of the board. Among pictures that I’ve taken recently, it’s also one of my favorites.

[Image: rsz_dsc_0460]

Here, after the solder paste was applied and the parts placed, the bottom components are being soldered in a makeshift reflow oven.

[Image: rsz_dsc_0470]

Fresh from the oven and after cleaning a few solder bridges, the first prototype Science Tricorder Mark 5 board is ready to begin the development process.

thanks for reading!