Source Release — Open Source CT Alpha

openct-source

I’m very happy to announce the first release of the Open Source Computed Tomography (CT) scanner project. This is an early alpha release, and contains all of the source files at the project’s current stage, including the laser cutter design files for the machine structure, the EAGLE source files, and the sample Arduino sketch.

The source is available for download here [zip], and is also available on GitHub. For potential contributors, the TODO file also includes near-term project goals at a variety of skill levels, from adding end-stops and designing the official Arduino shield, to designing parallel detectors that decrease scan time, and developing a new source/detector pair for different wavelengths of interest.

I’m excited to see what folks do with this project, both now and as it matures. If you build one, want to contribute to the project, or encounter any issues, please send me a note.

In other news, the Bay Area Maker Faire is coming up in a short two months. With a good amount of progress made on the Open Source CT Scanner, I’m going to switch gears back to the Mark 5 Arducorder for a while — I’d love to have the firmware and basic functionality working and demonstrable by then.

Open Source CT in MAKE Magazine

makemagazine_ctscanner400

Very exciting news — the Open Source Desktop CT Scanner is featured in this month’s MAKE Magazine Homebrew Section. I’ve been a great fan of MAKE for years and presented the Science Tricorders at their first Hardware Innovation Workshop, and so it’s very exciting to see the project in this issue.

Source Files: There’s been a lot of interest in having the source files for the alpha version of the scanner, and so I’ll endeavor to have these up within a week or so. I’m in the process of collecting and packaging the source, as well as moving everything to GitHub (including TODO lists) so that it’s much easier for folks to contribute.

I think that the best thing for an open source project is to bootstrap an initial community of users that can grow into a community of contributors, and so I’d like to cut out a few sets of the laser cut parts to send to one or two folks who are interested in building (and ideally contributing to) the project. If you’re interested, please send along a note with your background and how you’d like to contribute, to peter at tricorderproject dot org.

thanks!

Dr. Jansen, or: How I Stopped Worrying and Learned to Love the Barium

pepper_overlay1

After a marathon build session, the first images from the open source CT scanner are here! The story…

The Detector
dsc_0425

Recall that in the last update, the stock Radiation Watch Type 5 silicon photodiode high energy particle detector was found to be calibrated for Cesium, with a detection threshold likely somewhere near 80keV. This was too high to detect the ~22keV emissions of the Cadmium-109 source, and so I put together an external comparator that could adjust the threshold down toward the noise floor. After testing the circuit on a protoboard, I designed a tiny board that sits on the back of the Type 5 and, through a 10-turn potentiometer, lets you recalibrate the threshold down to just above the noise floor.

dsc_0417

I designed some mounting plates that could mount to the linear carriages for the source and detector.

dsc_0436

Here, the detector is mounted onto an offset mounting plate, which in turn connects to the detector carriage. The wiring harness breaking out all the detector pins feeds through the center of the carriage to a fixed mount point on the bore that acts as a strain relief. Looks great!

The Source

Even with the upgraded extra-sensitive detector, I was still seeing many fewer detections than I was expecting — albeit about an order of magnitude more than without the enhancement. A kind fellow on the Radiation Watch Facebook group made a SPICE simulation model based on the helpful schematics that the folks at Radiation Watch make available, and his simulations suggested that the noise floor for this circuit is around 30keV. This means that with Cadmium-109, whose primary emissions are around 22keV, I was likely still missing the majority of the emissions, and getting many fewer counts than I was expecting.

Enter the Barium-133. There are a number of radioisotope check sources that are commonly available, but many of them have very high energy emissions in the many hundreds (or thousands) of keV — likely far too high in energy to be usefully absorbed by everyday objects. The emission spectra I’ve seen for the tubes in commercial CT scanners tend to have broad-spectrum emissions centered around 60-70keV, and the datasheet for the silicon photodiode suggests it’s most sensitive from 10keV to 30keV, with the sensitivity dropping off above that. A higher detection efficiency means that we can get by with a less intense source, and with check sources that are barely detectable over background a foot away, it’s a battle for signal, and every photon counts.

Barium-133 has primary emissions around the 33keV range, and seems to be one of the few commonly available radioisotopes (aside from Cadmium-109) with such low emissions. To give the system the best possible chance of working, I ordered a 10uCi Ba133 source (up from the 1uCi Cd109 source I was using previously). With the source 10cm from the detector and a background rate of 20 counts per minute, the Cd109 source reads about 70 cpm (a delta of 50), while the Ba133 source reads around 1500 cpm (!). So we’re definitely detecting many more of the lower energy emissions, which should give a much better signal-to-noise ratio and decrease the acquisition time required for collecting good data.

dsc_0465

The Ba133 source also comes as a sealed 25mm disc. I designed a sandwich mount for these source discs that contains 3-6mm of lead shielding at a variety of angles, and a very rough approximation of a lead collimator with a 3mm hole drilled in the front to give some directionality to the source. Testing at a few angles, the shielding appears to have brought the reading down to about 60cpm at 15cm away, except directly ahead, where the intensity is about 550cpm at 15cm. Sounds great!

dsc_0477

Putting it all together

dsc_0482

I have to confess, I’m a bit of a late sleeper (and a night owl), but I was so excited about finally putting everything together and collecting the first data that I woke up early Saturday. After a marathon 13-hour build session, I finished designing and fabricating the source and detector mounts, and put the bore back together.

dsc_0482

With one of the bore covers removed, these pictures make it a little easier to see the complete linear axis mechanisms that are contained within the bore. You can thank my dad for discouraging my rampant hot glue use at a young age, and encouraging me to design things that were easily serviceable. I’ve given a few students the same talk when I see them wielding a hot glue gun for one of their projects… ;)

dsc_0488

openct2

Putting it all back together — looks beautiful!

And now, the data!

openct1

After the marathon build session, I took the very first data from the instrument — a quick absorption image straight up the center of this apple. The data was low resolution and noisy, but fantastic for the very first measurement from the instrument.

open_ct_xerocraft2

A very tired, but very pleased person after collecting the first data off the scanner around 1am.

rsz_avocado_picture

I had some time Monday evening to write some basic firmware for collecting images, storing them to an SD card, specifying the size and resolution parameters, the integration time for the detector, and so forth. In probably one of the strangest things I’ve ever done, and feeling very much like Doc Brown, I went to the grocery store and found a few vegetables that have internal structure and might be interesting to scan. I decided to start with the avocado…
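
To give a feel for what that firmware does, here is a stripped-down sketch of the scan loop. This is not the actual firmware: the pin assignments, step counts, and file name are placeholders, and it only shows a single linear axis. At each pixel it counts comparator pulses from the detector for the integration window, writes the count to the SD card, and steps to the next position.

```cpp
#include <SPI.h>
#include <SD.h>

// --- Placeholder pin assignments and scan parameters (not the real ones) ---
const int DETECTOR_PIN = 2;          // pulse output from the detector/comparator board
const int STEP_PIN     = 3;          // step/dir stepper driver for the source/detector axis
const int DIR_PIN      = 4;
const int SD_CS_PIN    = 10;

const int PIXELS_X = 20;             // 10cm at 5mm resolution
const int PIXELS_Y = 24;             // 12cm at 5mm resolution
const unsigned long INTEGRATION_MS = 10000;   // 10 seconds per pixel
const int STEPS_PER_PIXEL = 100;     // depends on the pulley/belt geometry

volatile unsigned long pulseCount = 0;
void countPulse() { pulseCount++; }  // one count per detected photon

void stepAxis(long steps) {          // very simple step/dir pulse train
  digitalWrite(DIR_PIN, steps >= 0 ? HIGH : LOW);
  long n = steps >= 0 ? steps : -steps;
  for (long i = 0; i < n; i++) {
    digitalWrite(STEP_PIN, HIGH); delayMicroseconds(500);
    digitalWrite(STEP_PIN, LOW);  delayMicroseconds(500);
  }
}

void setup() {
  pinMode(DETECTOR_PIN, INPUT);
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  SD.begin(SD_CS_PIN);
  attachInterrupt(digitalPinToInterrupt(DETECTOR_PIN), countPulse, RISING);

  File img = SD.open("scan.csv", FILE_WRITE);
  for (int y = 0; y < PIXELS_Y; y++) {
    for (int x = 0; x < PIXELS_X; x++) {
      noInterrupts(); pulseCount = 0; interrupts();
      delay(INTEGRATION_MS);                       // integrate for one pixel
      noInterrupts(); unsigned long counts = pulseCount; interrupts();
      img.print(counts);
      img.print(x < PIXELS_X - 1 ? "," : "\n");
      stepAxis(STEPS_PER_PIXEL);                   // move to the next column
    }
    stepAxis(-(long)PIXELS_X * STEPS_PER_PIXEL);   // retrace the row
    // (the table or second axis would advance one row here)
  }
  img.close();
}

void loop() {}
```

At 10 seconds per pixel over a 20 x 24 grid, this works out to the roughly 80 minute acquisition described below.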

I’d previously determined empirically that the optimal integration time for this setup is about 90 seconds per pixel — that tends to give a stable count of around 550cpm +/- 4 cpm. Lower integration times will give proportionately more noise, but be much quicker to scan.

The avocado is about 10cm by 12cm, and so to capture a first test image I set it to a 5mm resolution with a relatively fast 10 second integration time per point (bringing the total acquisition time to 20 x 24 x 10 seconds, or about 80 minutes).

avacado1b_log10

And it worked! The image is certainly a bit noisy (as expected), but it looks great. The table and the avocado are clearly visible, and the seed might also be in there, but we’ll need a bit higher integration time to see if that’s real structure, or just noise.

avocado_overlay1

Overlaying the scan atop the picture, the scan is a perfect fit!

avocado2_60sec

The integration time for the first image was only 10 seconds per pixel, and so I set up a longer scan with an integration time of 60 seconds per pixel. Beautiful! This still isn’t quite at the empirically determined sweet spot of 90 seconds, but it really cleaned up the noise in the first image.

avocado2_log10_60sec

The same data, with log scaling rather than linear scaling. I’m not entirely certain whether avocado pits are more or less absorptive to 33keV photons than the surrounding avocado, so it’s not clear whether we’re seeing lots of absorption at the center because of the seed, or because there’s 10cm of fruit between the source and detector…

rsz_dsc_0606

But I’d love to see some internal structure. So tonight I put the bell pepper on, which is about the same size as the avocado, and set it to an integration time of 20 seconds per pixel.

pepper1_20sec

And the result! It definitely looks like a bell pepper, and you can clearly see the seed bundle inside. Incredibly cool!

pepper1_log10_20sec

The same image, log scaled instead of linear scaled.

pepper_overlay1

And the overlay. Looks beautiful!

What a fantastic few days for the open source CT scanner, and the initial data looks great. There’s still plenty to do — now that the source and detector are working, I can finish designing the Arduino shield with four stepper controllers (two for the linear axes, one for the table, and one for the rotary axis). The source is also currently collimated in only the most liberal of senses, and in practice the detection volume for a given pixel is likely a pyramid that starts from the ~3mm source aperture and meets the ~1cm square detector — so the images should sharpen up a good deal by better controlling the beam shape. Once all of that is working, and I add an accelerometer sensor to the rotational axis to sense its angle, I should be able to scan from 180 degrees around the sample, and test the ability of the instrument in computed tomography mode, backing out the internal structure of a given slice from a bunch of 1D images. Very exciting!

Thanks for reading!

The Shape of Things to Come: the Mark 5 Arducorder

rsz_dsc_0358
I thought I’d take a few moments to introduce the next prototype open source science tricorder that I’ve been working on, the Arduino-compatible Mark 5 Arducorder.

I wasn’t having a great deal of luck with the earlier Mark 5 design, and so I decided to start from scratch and create something completely different under the hood. The new design has a larger sensor suite (a few sensors have been updated, and a multi-gas sensor has been added), but it should also be easier to program, easier for folks to tinker with, more modular, and less expensive to produce. It’s also Arduino Due compatible, so the hundred thousand folks out there who love Arduino programming and building simple circuits should feel right at home tinkering.

dsc_0284

Like the Mark 1 and 2, the Mark 5 Arducorder has a separate motherboard and sensor board. I think community building is a huge part of a successful open source design, and in this spirit I’d like it to be as easy as possible for folks to build new sensor boards for their Arducorders, or add expansions that I can’t anticipate. The Arduino folks have been very good with designing their boards to be expandable using “shields” that have standard, easy-to-prototype, 0.1″ headers. Similar to the idea of a shield, I’ve designed the Arducorder to have a 34-pin header for the sensor boards that expose a variety of pins for the I2C, UART, SPI, Analog, PWM, and Digital I/O peripherals, so that there are plenty of pins for expansion and interfacing to most kinds of sensors.

These boards were an interesting challenge to design. Conceptually the Arduino motherboards are fairly simple, but in order to maintain perfect compatibility with the Arduino Due board the routing was a little complex in areas. The whole Mark 5 Arducorder system is small — really small — and having the large easy-to-use sensor connector consumes a lot of real-estate, totaling nearly one quarter of the board area. Because of this I had to move to a 4-layer design, which takes a little longer to get fabbed. Still, the entire Arducorder including the 2.8″ LCD, WiFi module, and sensor board fits in about the same footprint as the original Arduino Due (or, about the size of my Blackberry), so it’s all quite compact and I’m very happy with the footprint.

dsc_0296

In terms of hardware, the prototype Arducorder motherboard currently has the following specifications:

Motherboard

  • Arduino Due compatible, using the Atmel SAM3X8E ARM Cortex-M3 CPU
  • 84MHz CPU Clock, 512KBytes of flash, 96KBytes SRAM
  • External 128KByte SPI SRAM
  • microSD card socket for data, graphics, and so forth
  • 2.8″ TFT LCD display w/touch panel
  • FT800 Graphics and Audio Controller to offload graphical rendering. Supports JPEG decompression.
  • CC3000 802.11b/g WiFi module
  • Two user-selectable input buttons, one on either side
  • microUSB for programming and charging (Due “native port”)
  • Exposes the second programming port through a header, as well as the two erase/reset buttons on the side, to maintain Arduino Due compatibility

Having some mechanism to render quality graphics has been a requirement since the Mark 1, with its external SED1375 graphics controller, and was certainly the case with the Mark 2’s beautiful dual organic LED displays. But graphics of any resolution have always been difficult for microcontroller-powered systems, like the PIC family (used in the Mark 1) or the Atmel microcontrollers used with the Arduino family of boards. Even with a microcontroller fast enough to perform graphics rendering, most microcontrollers don’t have nearly enough memory to support even a single framebuffer for a 320x240x16bpp screen (about 150KB), so any graphics they do render tend to look choppy.

Enter the FT800 Graphics controller, a new product from FTDI, the same folks who make the popular FT232R USB-to-serial converter. The FT800 looks a lot like a modern version of the 2D tile-based graphics controllers found in handheld gaming systems a few years ago, while also incorporating audio and touch-screen peripherals. The Gameduino 2 is a recent Arduino-powered project that makes use of the FT800, and it shows Game Boy Advance-era graphics on an Arduino Uno — so I’m confident that an attractive and elegant interface can be crafted on the Arducorder. While it has less graphical capability than the original Mark 5 design, it should be much easier for folks to modify — and I’m excited to see the first user interface themes folks come up with.

dsc_0328

In terms of sensing capabilities, the current sensor board has footprints for the following sensors:

Sensor Board
Atmospheric

  • Ambient Temperature and Humidity: Measurement Specialties HTU21D
  • Ambient Pressure: Bosch Sensortec BMP180
  • Multi-gas sensor: SGX-Sensortech MICS-6814

Electromagnetic

  • 3-Axis Magnetometer: Honeywell HMC5883L
  • Lightning sensor: AMS AS3935
  • X-ray and Gamma Ray Detector: Radiation Watch Type 5
  • Low-resolution thermal camera: Melexis MLX90620 16×4
  • Home-built linear polarimeter: 2x TAOS TSL2561
  • Colorimeter: TAOS TCS3472
  • Open Mini Visible Spectrometer v1 using TAOS TSL1401CL 128-pixel detector, with NeoPixel light source

Spatial

  • GPS: Skytraq Venus 638
  • Distance: MaxBotix Ultrasonic Distance Sensor
  • Inertial Measurement Unit: Invensense MPU-9150 9-axis (3-axis accelerometer, gyro, and magnetometer)

Other

  • Microphone: Analog Devices ADMP401

Many of these are new or updated offerings that either offer new sensing modalities that weren’t previously available (like the lightning sensor from AMS), or that improve upon resolution, size, or cost over previous versions. Gas sensing has been on the wishlist for a long while, but many contemporary sensors are large and use power-hungry heating elements — so I’m particularly excited about trying out the new line of micro gas sensors from SGX.

One sensor has been temporarily removed — the camera. There’s currently no easy way that I’m aware of to hook up a camera (which is a high bandwidth device) to an Arduino Due, which has limited memory. I’ve replaced the camera with a small board-to-board connector, in the hopes that someone in the open source community will develop a small SPI JPEG camera board with an onboard framebuffer shortly. If not, I’ll have to have a go at it once the rest of the device is functional.

dsc_0342

While the top of the sensor board contains the thin, omnidirectional sensors, the bottom contains many of the larger, directional sensors including the open mini spectrometer and the ultrasonic distance sensor. I’d love to find a shorter alternative to these sensors at some point, as right now they are the determining factor in the Mark 5’s thickness — about an inch.

dsc_0345

Currently I’m testing out the sensor board — some of the sensors are a bit expensive, so I’m iteratively populating the boards, testing, and so forth. I also have to 3D print more open mini spectrometers — my cats absolutely love to play with them, so the bunch I’ve made have vanished into the aether under the couch, never to be seen again.

dsc_0361

My current TODO list is to verify the basic functionality of the hardware (currently the FT800 as well as a few of the sensors have been tested), write low-level drivers for all the sensors and peripherals, then move on to creating the larger graphical user interface. At first the graphical environment will likely be somewhat modest, but I’d love to recruit the help of some skilled folks from the open source community who would enjoy working on the interface side of things once the prototype hardware is stable. Please feel free to contact me if you’re interested.

Thanks for reading!

Additional Pictures:
dsc_0363

dsc_0371

dsc_0373

dsc_0379

dsc_0383

dsc_0392

dsc_0405

Update on CT scanner: calibrating the radiation detector

rsz_20140118_164737

I thought I’d post a quick update on the open source CT scanner, especially given that there’s been a lot of interest lately. Life has been a little busy over the past few months — in the lab with paper writing, visiting home for the holidays, as well as my mom very recently passing away after her 18 year battle with cancer — so I haven’t been as good at writing updates as I’d planned.

Before getting to the CT scanner, I’d like to put together a proper update on the Mark 5 Science Tricorder shortly. A quick preview — in November I redesigned the Mark 5 from scratch to be less expensive, more modular, easier to modify, and quicker to program — and I call the result the Arducorder. The Mark 5 is now Arduino Due compatible, so the hundred thousand folks out there who can program an Arduino should be able to comfortably pick up and modify the code quickly, while others looking to start writing code for the Arduino could do so over a weekend using the large existing library of books and resources available. The sensor board now attaches using a standard 0.1 inch header, so one should be able to prototype new sensor boards quickly. The system includes a separate graphics processor (the FT800), so the graphics capabilities should be similar to something like a Game Boy Advance, and the motherboard also includes a WiFi module for wireless connectivity (the CC3000). The boards arrived just as I flew back in from the holidays, and I’ve been building them and writing the drivers and firmware over the past few weekends.

A source and detector for the CT scanner
As I mentioned in my first post, for safety I’ve designed the CT scanner to use a radioisotope x-ray source that’s barely above background levels — the tradeoff for this safety being acquisition time. There are a number of low-intensity radioisotope check sources commercially available, each with different emission spectra. I ended up deciding to use a Cadmium-109 source, which has the lowest energy photons I could find — its primary emissions are 22keV x-rays, with a small secondary 88keV emission. I’ve read that 22keV photons are about 50% absorbed by 2cm of tissue, so this seemed like a usefully contrastive figure — too much or too little absorption and you’ll need many more samples to make a useful image, which increases acquisition time. Higher energy emissions from other radioisotopes (in the many hundreds of keV) might be useful for imaging metals, but generally have much less absorption for non-metallic materials that I’d like to image first (like vegetables).
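
That 50%-per-2cm figure gives a feel for why the photon energy matters so much. As a rough illustration, assuming simple exponential (Beer-Lambert) attenuation and taking that figure at face value, the transmitted fraction falls off quickly with object thickness:

```cpp
#include <cmath>
#include <cstdio>

// Beer-Lambert attenuation: if 2cm of tissue absorbs ~50% of 22keV photons,
// the implied attenuation coefficient is mu = -ln(0.5)/2cm, and transmission
// through a thickness x is exp(-mu * x).
int main() {
  const double mu = -std::log(0.5) / 2.0;         // ~0.35 per cm
  for (double x = 1.0; x <= 10.0; x += 1.0)       // object thicknesses in cm
    printf("%4.1f cm of tissue: %5.1f%% transmitted\n", x, 100.0 * std::exp(-mu * x));
  return 0;
}
```

By 10cm of tissue only a few percent of the 22keV photons make it through, which is part of why very high or very low absorption both end up requiring longer acquisition times to build usable contrast.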

The detector data sheet from the folks who make the Radiation Watch Type 5 also suggests that lower energy photons are much easier to detect, with a detection efficiency of around 30% for 22keV photons, dropping to about 1% for 88keV photons. (Note: further discussions suggest that they might be using a slightly different photodiode than the FSX100-7 2.0 in the Type 5, so the efficiency curve in the datasheet is likely somewhat different.) Functionally, this is like having a source that’s 30 times more intense — when every photon counts, if you can count them more efficiently, you can get by with much less exposure, and decrease the acquisition time.

The actual measured count from the Cd-109 source ended up being much less than I was expecting, and after some investigation it looked as though the detector was only measuring the 88keV emissions, which both account for only 4% of the total emissions, and are also detected much less efficiently — reducing the number of counts by a factor of about 2000. After chatting with the Radiation Watch folks, it looks as though the Type 5 is calibrated for Cesium-134 and 137, which emit higher energy photons, and would need to be recalibrated to sense the lower energy emissions of Cd-109.

rsz_20140118_164818

Measuring high energy photons is a constant battle between signal and noise, in part due to the extreme amplification required to sense a single subatomic particle and convert this into a voltage signal with an energy many orders of magnitude larger that can be detected by a microcontroller. To keep the signal-to-noise ratio favorable, the Type 5 has a detection threshold somewhere above the noise floor of the amplifier circuit and below the energy produced when a high-energy Cesium photon hits the detector. To recalibrate the detector for the 22keV Cadmium-109 emissions, this threshold has to be lowered — but it can only be lowered so far, and if the peaks produced by the 22keV photons aren’t larger than the noise, then we’re out of luck and either have to choose a more sensitive detector or a source with higher energy emissions. Thankfully the folks who designed the Type 5 detector were good enough to include a test point after the amplifier and before the threshold comparator that can be used to see the raw signal, and recalibrate if required.

20140118_165304

Above on the oscilloscope, the raw signal is pictured in blue (50mV/div), and the detector output (raw signal after the comparator) that’s fed into a microcontroller is pictured in yellow (2V/div). Here, an 88keV photon from the Cd-109 source has hit the detector, and we’re seeing the yellow line trigger. We can also see that the raw signal for this photon is about an order of magnitude above the noise floor — looking good.

20140118_165130

Here we see a lower energy emission from the Cd-109 source, likely in the 22keV range. These lower energy emissions are just barely above the +/-80mV noise floor, and setting the oscilloscope to trigger at 90mV has the scope triggering frequently when the Cd-109 source is near the detector, and very little when the source is taken away. So we’re just barely squeaking in, and appear to be able to measure the ~22keV photons just above the noise floor.

To make a quick ball-park measurement of the signal-to-noise ratio, I measured the number of detections (by hand) above 90mV with the Cd-109 source 10cm away from the detector, as well as the background rate (Cd-109 source in a shielded container), and with a few nearby materials between the source and detector.

  • The background rate is up from about 5 counts per minute (cpm) to around 15. These extra detections are likely a combination of actual lower-energy photons that the detector threshold was missing, as well as electrical noise.
  • The count with the source 10cm from the detector is 80 cpm. This is still less than I was expecting, but it’s very workable as a starting point.
  • With a 1/4 inch acrylic sheet between the source and detector, the detector measures 52 cpm — so it’s absorbing about 40% of the 22keV x-rays.
  • With a 1/8 inch MDF sheet between the source and detector (the same material that the prototype CT scanner is constructed out of), the detector measures 68 cpm — or about 20% absorption.
  • A bottle of water between the source and detector measures about 18 cpm, so nearly 95% absorption.

So, we’re looking very good, and these numbers should give useful contrast and interesting images for many small non-metallic objects.
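
For reference, the absorption figures above are just background-subtracted ratios of the count rates. A minimal sketch of that calculation, using the measurements listed above:

```cpp
#include <cstdio>

// Estimate the fraction of source photons absorbed by a material,
// using background-subtracted count rates (all in counts per minute).
double absorbedFraction(double openBeamCpm, double materialCpm, double backgroundCpm) {
  double signalOpen     = openBeamCpm - backgroundCpm;   // source only, no material
  double signalMaterial = materialCpm - backgroundCpm;   // source through the material
  return 1.0 - (signalMaterial / signalOpen);
}

int main() {
  const double background = 15.0, openBeam = 80.0;
  printf("acrylic (1/4\"): %.0f%% absorbed\n", 100.0 * absorbedFraction(openBeam, 52.0, background)); // ~43%
  printf("MDF (1/8\"):     %.0f%% absorbed\n", 100.0 * absorbedFraction(openBeam, 68.0, background)); // ~18%
  printf("water bottle:   %.0f%% absorbed\n", 100.0 * absorbedFraction(openBeam, 18.0, background)); // ~95%
  return 0;
}
```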

Next Steps
Rather than modify the comparator on the detector circuit itself, I’ll put together a small board that connects to the raw test point, and has a comparator with a precision potentiometer for calibration. This will both make it a lot easier for folks to replicate, and reduce the chances of introducing noise into the detector by avoiding having to remove and replace the electrical shielding around it.

Because I’m seeing fewer detections than I’d expected, I’ll also make a small mechanical modification and move the source and detector from the outside of the gantry to the inside — reducing their distance from 30cm to somewhere in the 10-15cm range.

Once that’s complete, I should be able to acquire the first images of a few test targets!

Towards an inexpensive open-source desktop CT scanner

rsz_dsc_0803

A bit of a story, and then a lot of pictures — by far the most interesting class I’ve ever taken was Advanced Brain Imaging in grad school. As a hands-on lab class, each week we’d have a bit of a lecture on a new imaging technique, and then head off to the imaging lab where one of the grad students would often end up in the Magnetic Resonance Imager (MRI) and we’d see the technique we’d just learned about demonstrated. Before the class I was only aware of the structural images that most folks think of when they think of an MRI, as well as the functional MRI (or fMRI) scans that measure blood oxygenation levels correlated with brain activity and are often used in cognitive neuroscience experiments. But after learning about Diffusion Tensor Imaging, spin-labeling, and half a dozen other techniques, I decided that the MRI is probably one of the most amazing machines that humans have ever built. And I really wanted to build one.

MRI is a spatial extension to nuclear magnetic resonance spectroscopy (NMR), and requires an extremely homogeneous high-intensity magnetic field to function — far more uniform than you can achieve with permanent magnets or electromagnets. For MRI, this uniformity is often accomplished using a superconducting magnet that’s cooled to near absolute zero using liquid helium. This, of course, makes it extremely technically difficult to make your own system. While folks have been able to use large electromagnets for NMR (they average out the magnetic field intensity over the sample by spinning the sample very rapidly while it’s inside the magnet), I haven’t seen anyone demonstrate building an imaging system using an electromagnet. There are some experimental systems that try to use the Earth’s magnetic field, but the few systems I’m aware of are very low resolution, and very slow.

Volumetric biological imaging has two commonly used tools — MRI and Computed Tomography (or CT), sometimes also called Computed Axial Tomography (or “CAT”) scanning — although ultrasound, EEG, and a bunch of other techniques are also available. Fast forward about two years from my brain imaging class (to about three years ago), I had started my first postdoc and happened to be sitting in on a computational sensing / compressed sensing course.

rsz_1dsc_0692

About the same time I happened to be a little under the weather, and stopped into a clinic. I thought I’d torn a muscle rock climbing, but after examining me the doctor at the clinic thought that I might have a serious stomach issue, and urged me to visit an emergency room right away. As a Canadian living abroad, this was my first real contact with the US health care system, and as exciting as getting a CT was (from the perspective of being a scientist interested in medical imaging), from a social perspective it was a very uncomfortable experience. Without really going into details or belaboring the point, universal health care is very important to me, and (what many consider) a basic human right that most of the folks in the developed world have access to. My mom was diagnosed with cancer when I was young, and we spent an awful lot of time in hospitals. She and my dad still do, after 15 years and more surgeries than anyone can count. It’s frightening to think of where we’d all be if her medical care wasn’t free. And so when a bill showed up a month or so after my emergency room visit for nearly $5,000 (most of which was covered by a health insurance company), I nearly needed a second trip to the emergency room, and I thought a lot about the many folks I knew, including my girlfriend at the time, who didn’t have any form of health insurance and basically couldn’t go to the doctor when they were ill for fear of massive financial damage.

With all of this in mind, knowing the basics of medical imaging, and having just discussed computed tomography and the Radon transform in the class I was sitting in on, I decided that I wanted to try and build an open source CT scanner, and to do it for a lot less than the cost of me getting scanned, by using rapid prototyping methods like laser cutting and 3D printing.

It’s been a few years since I’ve had access to a laser cutter, and they’re one of my favorite and most productive rapid prototyping tools. In the spirit of efforts like the Reprap project, I enjoy exploring non-traditional approaches to design, and designing machines that can be almost entirely 3D printed or laser cut. Fast-forward almost two and a half years to last month, and the local hackerspace happened to have a beautiful laser cutter generously donated. This is the first cutter I’ve had real access to since grad school, and with the CT scanner project waiting for a laser cutter and a rainy day for nearly two years, I immediately knew what I wanted to have a go at designing. On to the details.

rsz_dsc_0732

From a high-level technical standpoint, a computed tomography or CT scanner takes a bunch of absorption images of an object (for example, x-ray images) from a variety of different angles, and then backs out 3D volumetric data from this collection of 2D images. In practice, this is usually done one 2D “slice” at a time, first by rotating an x-ray scanner around an object, taking a bunch of 1D images at tens or hundreds of angles, and then inverting the Radon transform to compute a given 2D slice from this collection of 1D images. One can then inspect the 2D slices directly to see what’s inside something, or stack the slices to view the object in 3D.

rsz_dsc_0795

Mechanically, this prototype scanner is very similar to the first generation of CT scanners. An object is placed on a moving table that goes through the center of a rotating ring (or “gantry”). Inside the ring there’s an x-ray source, and on the other side a detector, both mounted on linear stages that can move up and down in unison. To scan an object, the table moves the object to the slice of interest, the gantry rotates to a given angle, then scans the source and detector across the object to produce a 1D x-ray image. The gantry then rotates to another angle, and the process repeats, generating another 1D image from a slightly different angle. After generating tens or hundreds of these 1D slices from different angles, one backs out the 2D image of that slice by inverting the Radon transform. The table then moves the object slightly, and the process is repeated for the next slice, and the hundreds of other slices that are often taken in a medical scan. Modern scanners parallelize this task by using a fan-shaped beam of x-rays and hundreds of simultaneous detectors to scan someone in about a minute, but the first generation of scanners could take several minutes per slice, meaning a scan with even tens of slices could take an hour or more.
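
To make the reconstruction step a little more concrete, here is a minimal, unoptimized sketch of plain (unfiltered) back-projection: each 1D absorption profile is smeared back across the image plane along the angle it was taken at, and the overlapping smears build up an approximation of the 2D slice. A real reconstruction would filter the projections first (filtered back-projection) to remove the characteristic blur, but the idea is the same.

```cpp
#include <cmath>
#include <vector>

// Reconstruct an N x N slice from a set of 1D projections ("sinogram") by
// unfiltered back-projection. projections[a][d] is the absorption measured
// at angle angles[a] (radians) and detector position d (0..N-1 across the bore).
std::vector<std::vector<double>> backProject(
    const std::vector<std::vector<double>>& projections,
    const std::vector<double>& angles, int N) {
  std::vector<std::vector<double>> image(N, std::vector<double>(N, 0.0));
  double center = (N - 1) / 2.0;

  for (int a = 0; a < (int)angles.size(); a++) {
    double c = std::cos(angles[a]), s = std::sin(angles[a]);
    for (int y = 0; y < N; y++) {
      for (int x = 0; x < N; x++) {
        // Project pixel (x, y) onto the detector axis for this angle.
        double t = (x - center) * c + (y - center) * s + center;
        int d = (int)std::round(t);
        if (d >= 0 && d < N)
          image[y][x] += projections[a][d];   // smear the 1D profile back
      }
    }
  }
  return image;   // un-normalized; divide by angles.size() for an average
}
```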

rsz_dsc_0703

Designing an almost entirely laser-cuttable CT scanner with four axes of motion, one being a large rotary gantry, was a lot of fun and an interesting design challenge. I decided that a good way to rotate the gantry would be to design it as a giant cog that sat atop a system of drive and idler cogs, that could slowly index it to any angle.

rsz_dsc_0762

One of the issues with laser cutting a giant cog is finding something to mate with it that can transfer motion. I’ve press-fit laser cut gears onto motor shafts before (like with the laser cut linear CNC axis), but in my experience they can slip or wear rather quickly, and I like being able to disassemble and reassemble things with ease. I decided to try something new, and designed a laser-cuttable 2.5D timing pulley that mates with the main rotary cog, and securely mounts on a rotary shaft using a captive nut and set screw. On either side of the shaft there’s space for a bushing that connects to the base, and inside the base there’s a NEMA17 stepper from Adafruit that transfers motion to the drive shaft using a belt and timing pulleys.

rsz_dsc_0723

A small lip on the base acts as the other edge of the timing pulley, and helps keep the main rotary axis aligned.

rsz_dsc_0747

Inside the rotary gantry are two linear axes 180 degrees apart — one for the source and the other for the detector. The gantry is about 32cm in diameter, with the bore about 15cm, and the gantry itself is about 8cm thick to contain the linear axes.

rsz_dsc_0751

Each linear axis has a small carriage that contains mounts for either the source or detector, some snap bushings for two aluminum rails, and a compression mount for the timing belt. Each axis also has an inexpensive NEMA14 stepper and an idler pulley. Here, I’m using a very small solid state high-energy particle detector called the Type-5 from Radiation Watch, which can be easily connected to an external microcontroller. This is really very easy to work with, and saves me from having to use a photomultiplier tube and scintillation crystal that I found on eBay from an old decommissioned PET/CT scanner.

rsz_dsc_0752

I’m certain if the symmetry were any more perfect, it would move one to tears. The rotary gantry has to be symmetric to ensure proper balance and smooth rotation. After rotating the gantry 180 degrees, here you can see the other linear axis intended for the source. It currently just contains a mount pattern with 4 bolts that a source will eventually mount to.

Safety is very important to me. In medical diagnostic imaging it’s often important to have an image as soon as possible, but that’s not the case for scanning non-living objects purely for scientific or educational interest. This chart from XKCD shows the radiation that folks typically absorb from everyday adventures like banana-eating and sleeping beside someone, to hopping on planes or having a diagnostic x-ray. I’ve designed this scanner to operate on levels slightly above the natural background level, well into the blue (least intense) section of the xkcd graph, and make use of a “check source”, which is an extremely low intensity source used to verify the functionality of a high-energy particle detector. The trade-off for this safety is acquisition time, and it will likely take a day or more to acquire data for even a small object. This aspect of the design is scalable, such that if the scanner were to be used in a research environment in a shielded room, folks braver than I should be able to acquire an image a good deal faster.

rsz_dsc_0758

The sandwich of four plates on either end of the linear axes contain precision mounts for the aluminum shafts, as well as a setscrew with captive nut to hold the shafts in place.

rsz_dsc_0719

The table itself is about 40cm long, and offers nearly 30cm of travel. It uses a light-weight nylon lead screw to index the table, with a NEMA14 drive motor located in the base.

rsz_dsc_0779

To test out the motion and detector, I put together an Arduino shield with a few Pololu stepper controllers and a connector for the detector. The Seeed Studio prototype board I had on hand only had space for three stepper controllers, but it was more than enough to test the motion. Each axis runs beautifully — I was sure the rotational axis was going to have trouble moving smoothly given that most of the moving parts were laser cut, but it worked wonderfully on the first try, and moves so fast I had to turn down the speed lest the neighbours fear that I was building a miniature Stargate…

When I solidify all the bits that have to be in the controller, I’ll endeavor to lay out a proper board much like this prototype, but with four stepper controllers, and an SD card slot to store the image data for long scans.

rsz_dsc_0772

For size, here you can see the Arduino and shield together on the scanning table. I’m hoping to start by scanning a carrot, move up to a bell pepper (which has more non-symmetric structure), and work up to an apple. Since time on commercial machines is very expensive, I think one of the niche applications for a tiny desktop CT scanner might be in time-lapse scans of slowly moving systems. If the resolution and scan speed end up being up to the task, I think it’d be beautiful to plant a fast-sprouting seed in a tiny pot and continually scan it over a week or two to build a 3D volumetric movie of the plant growing, from watching the roots in the pot grow, to the stalk shooting up and unfurling its first leaves. I’m sure the cost of generating that kind of data on a medical system would be astronomical, whereas the material cost of this prototype is in the ballpark of about $200, although I’m expecting that a source will add about $100 to that figure.

1264585_4716534411740_673623925_o

And finally, here’s a quarter-size acrylic prototype that I designed and cut in an afternoon a few weekends ago, that started the build and brainstorm process. My recently adopted rescue cat ironically loves to hang around the “cat” scanner, and has claimed nearly all of the open mini spectrometers I’ve built as toys to bat around…

Laser cutters are really amazing machines, and it’s really incredible to be able to dream up a machine one morning, spend an afternoon designing it, and have a moving functional prototype cut out and assembled later that evening that you can rapidly iterate from. Since laser cutters are still very expensive, this work wouldn’t have been possible without kind folks making very generous donations to my local hackerspace, and I’m extremely thankful for their community-minded spirit of giving.

thanks for reading!

Sneak Peek: Science Tricorder Mark 5 development pictures

I thought I’d take a moment to snap and share some pictures of the Science Tricorder Mark 5 prototype in its mid-development state. I’ve recently hit a snag with the WiFi, and have a little downtime while I’m waiting for a reply to a support e-mail.

rsz_1dsc_0647

The form factor of the Mark 5 looks much like a smart phone. In fact, it happens to be about the same size as my blackberry, though ultimately it’ll be a little thicker to accommodate the size of some of the larger sensors, like the distance sensor, open mini spectrometer, and a few others. Overall I think this form factor adds a lot in terms of usability over the folded design — with the Mark 1 and 2 you’d often have to hold the device at an odd angle, with the angle for trying to scan something usually being much different than the angle to see the screen. Here, I’ve moved many of the omnidirectional sensors (that happen to be thin) to the top of the device, and placed the directional sensors (which also tend to be much larger) on the bottom — the idea being that you could make use of the omnidirectional sensors in any position, and use the directional sensors much like you’d take a picture with your smart phone. This also effectively doubles the amount of exterior-facing sensor space, which is fantastic.

rsz_1dsc_0648

Keeping things tractable is one of my central design philosophies, otherwise most of this wouldn’t be possible. This was a lesson that I learned very well with the Mark 2 — designing your own ARM-based motherboard is a lot of fun and you learn a great deal, but it’s also time consuming (even with reference designs), and as a one-person project you have to pick your battles. So in this respect, choosing the computational bits of the Mark 5 was one of the most challenging choices in that it has to balance capability, ease of modification, and implementation time. In terms of capability, it’s important that the Mark 5 have advanced visualization capabilities like the Mark 2, and WiFi capability both to move data out of the device, as well as (eventually) upload the data to a website that would allow folks to share their sensing data. In terms of ease of modification, I’d like folks to be able to modify and reprogram the device as easily as possible, and use it as a vehicle to explore electronics, science, and math as much as to visualize the world. In addition to all this, there are a bunch of pragmatic concerns — power consumption, development tools, product end-of-life, and so forth.

This was a very difficult choice to make, and given that there’s no perfect option, I bounced back and forth quite a bit. On one hand I thought about moving to something Arduino or Chipkit compatible, that would be very easy to program, and fast to develop, but which would sacrifice computational capability. On the other hand, ARM-based system-on-a-chip designs would have the computational capability, and could run a piece of middleware that would make it easy for folks new to programming to modify, but the development time and development cost would be very high. The Mark 5 would likely have to move to a 4-layer or 6-layer design, which would add a barrier to folks in the open hardware community who might want to contribute, or make derivatives.

In the end, I went back to an idea that I’d considered for the Mark 2, which is to use a small system-on-a-module that contains the time consuming bits — processor, memory, WiFi, etc. — and be able to focus my attention on the project-specific bits like the sensors. There are currently not a lot of options for an extremely small system-on-a-module that includes WiFi. For the Mark 2 I had considered using a Verdex by Gumstix, and for the Mark 5 I settled on trying their Overo FireSTORM modules, which include a TI OMAP3730 processor running at 800MHz, 512MB of RAM, 512MB of flash, and onboard WiFi and Bluetooth. The modules are also very, very small, and run Linux.

After weeks of tinkering I’m still having issues connecting to the WiFi (it’s been very spotty for me, only worked a few times, and most of the time doesn’t detect the WiFi hardware), and while it’s not clear whether it’s a hardware or software issue, from the Overo mailing list it appears as though this is an issue a bunch of other folks have run into. I sent off an e-mail to the Gumstix folks early last week, and hopefully I’ll hear back from them soon with some help. Once that’s sorted out I can work on the display driver, and start populating the sensors.

rsz_dsc_0683

In addition to the touch display and a bunch of level translators, the top of the board contains an ultra low power PIC microcontroller to act as an interface between the sensors and the Gumstix, much as in the Mark 2.

rsz_dsc_0635

Because I’m still tinkering with the Gumstix, I haven’t yet populated many of the sensors on the Mark 5 so that I can better diagnose any issues that come up. To help prototype the Mark 5’s sensor suite, and also for when I was considering making an Arduino-powered Mark 5, I designed a breakout board that’s essentially just the upper sensor section of the Mark 5. Here only the top sensors are populated, including the magnetometer, inertial measurement unit (consisting of a 3-axis gyro, accelerometer, and internal magnetometer), ambient humidity sensor, ambient temperature and pressure sensor, lightning sensor, and the GPS. Both the lightning sensor and GPS have RF components, which I don’t have a lot of experience with, so it was very comforting to see the GPS acquire a lock and display position information accurate to within a few meters. Interested readers may also notice the footprint for the open mini spectrometer on the left side of the board. The bottom side of the board, not populated or shown here, contains the spectrograph for the open mini spectrometer, camera, distance sensor, low resolution thermal camera, colour sensor, as well as a prototype for a 3D printable linear polarimeter much like the one on the Mark 1. The Mark 5 board itself includes footprints for both a radiation sensor and a gas sensor that didn’t fit on this breakout board.

rsz_dsc_0374

Assembly Pictures
I thought I’d include a few assembly pictures. Here’s one of the solder paste stencils, for the bottom of the board. Among pictures that I’ve taken recently, it’s also one of my favorites.

rsz_dsc_0460

Here, after the solder paste was applied and parts placed, the bottom components are being soldered in a make-shift reflow oven.

rsz_dsc_0470

Fresh from the oven and after cleaning a few solder bridges, the first prototype Science Tricorder Mark 5 board is ready to begin the development process.

thanks for reading!

Sneak peek: 3D-printable mini spectrometer

I thought I’d take a moment to show a sneak peek of something I’ve been working on for the Mark 5, an inexpensive 3D printable mini spectrometer. (The Mark 5 is going well, by the way — the first prototype is half-built, and I successfully communicated with its Linux console over USB this weekend!)

DSC_0638-720

This is a prototype Open Mini Spectrometer, a very small, inexpensive, and partially 3D-printable mini visible light spectrometer for embedded systems. Technically it has two components: the detector electronics and the spectrograph.

DSC_0658-720

Detector:

The detector board contains:

  • a TSL1401CL linear CMOS detector w/128 channels
  • an AD7940 external analog to digital converter (14-bit @ 100kSPS)
  • a small power filter
  • a standard 0.1″ header to easily breadboard the spectrometer or connect it to a microcontroller (including an Arduino)
  • a 4 x 2mm-hole mounting pattern to attach the spectrograph
  • for the stand-alone pcb, 2 x 3mm mounting holes (one on either end)

DSC_0654-720

Spectrograph:

The prototype spectrograph is an experiment in low-cost design, and is almost entirely 3D printed using ABS plastic on an inexpensive desktop 3D printer (such as a Makerbot, though I used an ORD Bot Hadron). I have much more experience designing electronics than I do designing optical systems, and so the spectrograph is designed to be swappable/upgradable as newer designs come to pass (and I expect it to go through a few iterations). This first spectrograph design has a 3D printed slit, and uses an inexpensive 1000-line/mm diffraction grating of the kind you can find on diffraction grating slides for classroom experiments. I read a paper a while ago on using deconvolution to post-process the data from slit spectrometers and basically sharpen the point-spread function (or PSF) to effectively increase the resolution of the instrument. Inspired by this, I decided to leave out the relay optics between the slit and grating and between the grating and detector, to see if I could use post-processing to effectively sharpen up the overly broad PSF and have an even simpler and less expensive instrument.
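
For anyone curious about the deconvolution idea (it also appears on the contributor TODO list below), Richardson-Lucy deconvolution is a standard iterative starting point. Here is a minimal 1D sketch, assuming the PSF has been estimated separately, is roughly symmetric (e.g. Gaussian-like), and is normalized to sum to 1:

```cpp
#include <vector>

// One-dimensional Richardson-Lucy deconvolution: iteratively sharpens a
// measured spectrum given an estimate of the instrument point-spread
// function (PSF). The PSF is assumed symmetric and normalized to sum to 1.
static std::vector<double> convolve(const std::vector<double>& x,
                                    const std::vector<double>& psf) {
  int n = (int)x.size(), m = (int)psf.size(), half = m / 2;
  std::vector<double> out(n, 0.0);
  for (int i = 0; i < n; i++)
    for (int j = 0; j < m; j++) {
      int k = i + j - half;                      // zero-padded at the edges
      if (k >= 0 && k < n) out[i] += x[k] * psf[j];
    }
  return out;
}

std::vector<double> richardsonLucy(const std::vector<double>& observed,
                                   const std::vector<double>& psf,
                                   int iterations) {
  int n = (int)observed.size();
  std::vector<double> estimate(n, 1.0);          // flat, positive initial guess
  for (int it = 0; it < iterations; it++) {
    std::vector<double> blurred = convolve(estimate, psf);
    std::vector<double> ratio(n);
    for (int i = 0; i < n; i++)
      ratio[i] = observed[i] / (blurred[i] + 1e-9);       // guard divide-by-zero
    std::vector<double> correction = convolve(ratio, psf); // symmetric PSF: no flip needed
    for (int i = 0; i < n; i++)
      estimate[i] *= correction[i];
  }
  return estimate;
}
```

Note that on noisy data the iterations eventually start amplifying noise, so in practice you stop early or add regularization.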

The spectrograph design:

  • contains a ~0.2mm printed slit
  • 400-700nm (approx) spectral range
  • Variable spectral resolution (~3.3nm @400nm, ~1.8nm @ 700nm), not accounting for the PSF
  • 1000 line-per-mm diffraction grating (cut into a 4mm wide strip, and inserted into the spectrograph flush with the slit aperture)
  • 3D printable on an inexpensive printer
  • Very small size — about 1cm wide x 2cm long x 3cm tall.

With a spectrometer you’re often battling for SNR, and have to worry about stray light. Although these pictures don’t show it, the spectrograph has to be spray painted with a flat matte black paint to get any kind of performance.

Example Data:

I connected the open mini spectrometer to an Arduino Uno, and wrote a quick sketch to acquire spectral data and send it serially to a Processing sketch. Let’s have a look at some data collected from the instrument from two light sources — the first a white LED, and the second a red laser diode. The following images include four subplots: (1) the raw detector data from the light source, (2) a baseline measurement to determine the ambient light, (3) the difference between (1) and (2), to arrive at just the light from the light source, and (4) the spectrum re-sampled from variable (1.8-3.3nm) to evenly spaced spectral bins:

screenshot-0

White LED

screenshot-13
Red Laser Diode (~650nm)

Currently it has all the performance you’d expect from a $20 spectrometer with no relay optics — the PSF is quite broad (the FWHM on the laser diode is about 20nm), and although the printed slit is fairly deep there’s still a fair bit of translation on the detector depending on the spatial location of the source. I haven’t had much luck using deconvolution to sharpen the spectra, but I don’t have a great deal of experience with deconvolution on noisy data.
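
For reference, the processing behind the four subplots above is just a baseline subtraction followed by resampling onto even wavelength bins. A rough sketch of those two steps (the channel-to-wavelength calibration below is a linear placeholder rather than the instrument’s real dispersion):

```cpp
#include <vector>

// Rough sketch of the processing described above: subtract an ambient-light
// baseline frame from the raw 128-channel readout, then resample onto evenly
// spaced wavelength bins by linear interpolation. The channel-to-wavelength
// mapping below is a linear placeholder; the real dispersion is nonlinear
// (~3.3nm per channel at 400nm down to ~1.8nm at 700nm).
static double channelToWavelength(int i) {
  return 400.0 + (700.0 - 400.0) * i / 127.0;
}

std::vector<double> processSpectrum(const std::vector<double>& raw,
                                    const std::vector<double>& baseline,
                                    int numBins) {
  int n = (int)raw.size();
  std::vector<double> signal(n);
  for (int i = 0; i < n; i++)                      // subplot (3): source-only signal
    signal[i] = raw[i] - baseline[i];

  std::vector<double> resampled(numBins, 0.0);     // subplot (4): even wavelength bins
  for (int b = 0; b < numBins; b++) {
    double lambda = 400.0 + (700.0 - 400.0) * b / (numBins - 1);
    for (int i = 0; i + 1 < n; i++) {              // find the straddling channel pair
      double w0 = channelToWavelength(i), w1 = channelToWavelength(i + 1);
      if (lambda >= w0 && lambda <= w1) {
        double t = (lambda - w0) / (w1 - w0);
        resampled[b] = (1.0 - t) * signal[i] + t * signal[i + 1];
        break;
      }
    }
  }
  return resampled;
}
```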

All of that being said, it’s a great first prototype in a functioning state, and with plenty of potential for improvement!

In terms of cost, in small quantity the detector boards have about $20 of parts. If you’d prefer to use your own external ADC (for example, the one built into your microcontroller), I’ve included a solder jumper to output the raw analog voltage on the CS pin, and leaving the onboard ADC unpopulated also reduces the cost in small quantities by about $10. The spectrograph can be made for the cost of printing, painting, plus the cost of the diffraction grating. I think the materials cost for me was probably less than $1.

Source Files:
The source files, including the Eagle files for the PCB, the Google Sketchup and STL files for the current spectrograph, as well as sample Arduino and Processing sketches (and example data, in CSV format), are available on Thingiverse. The code portions are released under GPL V3, and everything else under Creative Commons Attribution-ShareAlike 3.0 Unported, both freely available without a warranty of any kind. If you’d like to order prototype PCBs from the same place I ordered them, the project is shared on OSHPark.com, with each set of 3 bare boards available for about $5. These revision 1 boards change very little compared to the boards pictured above — a few vias have been moved to help make the design more light tight, and I’ve added in a solder jumper for those who would like to use their own external ADC.

Contributor TODO List:
The open mini spectrometer is an open-source hardware project. Want to contribute? Here are a list of near-term todo items:

  • Find a source of tiny inexpensive relay optics (~4mm dia, short focal length) that are repeatedly and consistently available in both small and large quantities
  • Design a better way of inserting and securely mounting the diffraction grating
  • Modify the spectrograph to include relay optics
  • Try printing the spectrograph using different materials. How does the slit hold up? Are there materials or methods where the slit is printed better (e.g. SLS? Inkjet?). Are there matte black build materials that do not require the spectrograph to be painted prior to use?
  • Use the mini spectrometer to measure a variety of known spectral sources, and post the data
  • Modify the pi filter for better noise rejection. A good deal of noise still appears to come through the USB port/Arduino and into the spectrometer, requiring a greater number of averages for a clean signal. Battery power should also help with this.
  • Handy at signal processing? (and, specifically, deconvolution?) Feel free to grab some sample spectral data from Thingiverse and see if you can improve the PSF and effective resolution with some postprocessing.

The Science Tricorder Mark 5 contains the same footprint for the spectrograph, so for compatibility I’d greatly prefer to keep the physical dimensions and the mating portion of the spectrograph the same (unless there’s a compelling reason for change, of course).

Thanks for reading!

Prototype Mark 5 Science Tricorder Boards Arrive

The new prototype Science Tricorder Mark 5 boards came today! I’m very excited! :) With nearly 150 parts, 1000 pins, and 600 traces all in something about the size of a blackberry, these are some of the highest density boards I’ve ever designed. To keep the design inexpensive to make, and easy for the community to modify and remix using Eagle CAD Light, I’ve kept the boards to a 2-layer design, with fingers crossed that there won’t be many significant noise issues when the design gets assembled and tested.

dsc_0326

The prototype Mark 5 is designed as a sort of updated squish of the first three iterations of the Open Source Science Tricorders, with updated versions of all of the sensors on the previous models, plus some prototypes of sensing modalities that have been on the wishlist since designing the Mark 1. Radiation sensing, low resolution thermal imaging, gas sensing, a prototype 3D printable visible spectrometer, and a freaking lightning sensor are all on the list of additions. Hopefully a good number of these experiments will work out on this prototype, and make it into the final Mark 5 release.

dsc_0289

I’ll be working to assemble the boards over the next few weeks, and starting to write some basic firmware to test their functionality. Stay tuned!