Some friends at the local hackerspace, Xerocraft, have organized what looks to be a great mini-maker faire style event here in Tucson this weekend called Maketopolis. I’ll be there with the open source CT scanner for any folks who want to drop by and see it in person. Hope to see you there!
After a marathon build session, the first images from the open source CT scanner are here! The story…
Recall that in the last update, the stock Radiation Watch Type 5 silicon photodiode high energy particle detector was found to be calibrated for Cesium, with a detection threshold likely somewhere near 80keV. This was too high to detect the ~22keV emissions of the Cadmium-109 source, and so I put together an external comparator that could adjust the threshold down to the noise floor. After testing the circuit on a protoboard, I designed a tiny board that sits on the back of the Type 5, and through the use of a 10-turn potentiometer allows you to recalibrate the threshold down to the noise floor.
I designed some mounting plates that could mount to the linear carriages for the source and detector.
Here, the detector is mounted onto an offset mounting plate, which in turn connects to the detector carriage. The wiring harness breaking out all the detector pins feeds through the center of the carriage to a fixed mount point on the bore that acts as a strain relief. Looks great!
Even with the upgraded extra-sensitive detector, I was still seeing many fewer detections than I was expecting — albeit about an order of magnitude more than without the enhancement. A kind fellow on the Radiation Watch Facebook group made a SPICE simulation model based on the helpful schematics that the folks at Radiation Watch make available, and his simulations suggested that the noise floor for this circuit is around 30keV. This means that with Cadmium-109, whose primary emissions are around 22keV, I was likely still missing the majority of the emissions — which would explain the low count rates.
Enter the Barium-133. There are a number of radioisotope check sources that are commonly available, but many of them have very high energy emissions in the many hundreds (or thousands) of keV — likely far too energetic to be usefully absorbed by everyday objects. The emission spectra I’ve seen for the tubes in commercial CT scanners tend to be broad spectrum, centered around 60-70keV, and the datasheet for the silicon photodiode suggests it’s most sensitive from 10keV to 30keV, with sensitivity dropping off above that. A higher detection efficiency means that we can get by with a less intense source — and with check sources that are barely detectable over background a foot away, it’s a battle for signal, and every photon counts.
Barium-133 has primary emissions around the 33keV range, and seems to be one of the few commonly available radioisotopes (aside from Cadmium-109) with such low-energy emissions. To give the system the best possible chance of working, I ordered a 10uCi Ba133 source (up from the 1uCi Cd109 source I was using previously). With the source 10cm away from the detector and the background rate at 20 counts per minute, the Cd109 source reads about 70cpm (a delta of 50), while the Ba133 source reads around 1500cpm (!). We’re definitely detecting many more of the lower-energy emissions, which should give a much better signal-to-noise ratio and decrease the acquisition time required to collect good data.
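Since radioactive decay counting is Poisson-distributed, the improvement from swapping sources can be put in rough numbers. A small sketch using the count rates quoted above (a one-minute count; the helper names here are just for illustration):

```python
import math

background_cpm = 20.0   # measured background rate
cd109_cpm = 70.0        # gross rate with the 1uCi Cd-109 source at 10cm
ba133_cpm = 1500.0      # gross rate with the 10uCi Ba-133 source at 10cm

def net_rate(gross, background=background_cpm):
    """Counts per minute attributable to the source itself."""
    return gross - background

def snr_one_minute(gross, background=background_cpm):
    """Poisson signal-to-noise for a single one-minute count:
    net counts divided by the standard deviation of the gross count."""
    return (gross - background) / math.sqrt(gross)

print(net_rate(cd109_cpm), snr_one_minute(cd109_cpm))  # ~50 cpm, SNR ~6
print(net_rate(ba133_cpm), snr_one_minute(ba133_cpm))  # ~1480 cpm, SNR ~38
```

Roughly a 6x improvement in signal-to-noise per unit time, on top of the 30x improvement in net count rate.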
The Ba133 source also comes as a sealed 25mm disc. I designed a sandwich mount for these source discs that provides 3-6mm of lead shielding (depending on the angle), plus a very rough approximation of a lead collimator — a 3mm hole drilled in the front — to give some directionality to the source. Testing at a few angles, this appears to have brought the reading down to about 60cpm at 15cm away, except directly ahead, where the intensity is about 550cpm at 15cm. Sounds great!
Putting it all together
I have to confess, I’m a bit of a late sleeper (and a night owl), but I was so excited about finally putting everything together and collecting the first data, that I woke up early Saturday. After a marathon 13-hour build session, I finished designing and fabricating the source and detector mounts, and putting the bore back together.
With one of the bore covers removed, these pictures make it a little easier to see the complete linear axis mechanisms that are contained within the bore. You can thank my dad for discouraging my rampant hot glue use at a young age, and encouraging me to design things that were easily serviceable. I’ve given a few students the same talk when I see them wielding a hot glue gun for one of their projects… 😉
Putting it all back together — looks beautiful!
And now, the data!
After the marathon build session, I took the very first data from the instrument — a quick absorption image straight up the center of this apple. Data was low resolution and noisy, but fantastic for the very first data from the instrument.
A very tired, but very pleased person after collecting the first data off the scanner around 1am.
I had some time Monday evening to write some basic firmware for collecting images, storing them to an SD card, specifying the size and resolution parameters, the integration time for the detector, and so forth. In probably one of the strangest things I’ve ever done, and feeling very much like Doc Brown, I went to the grocery store and found a few vegetables that have internal structure and might be interesting to scan. I decided to start with the avocado…
I’d previously determined empirically that the optimal integration time for this setup is about 90 seconds per pixel — that tends to give a stable count of around 550cpm +/- 4 cpm. Lower integration times will give proportionately more noise, but be much quicker to scan.
The avocado is about 10cm by 12cm, and so to capture a first test image I set it to a 5mm resolution with a relatively fast 10 second integration time per point (bringing the total acquisition time to 20 x 24 x 10 seconds, or just over an hour).
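The scan geometry fixes the acquisition time: field of view divided by resolution gives the pixel grid, times the dwell time per pixel. A quick sanity check of the numbers above, plus the Poisson counting noise you'd expect at each dwell time (using the ~550cpm open-beam rate):

```python
import math

width_mm, height_mm = 100, 120   # avocado footprint, roughly 10cm x 12cm
pixel_mm = 5                     # chosen scan resolution
dwell_s = 10                     # integration time per pixel

cols = width_mm // pixel_mm      # 20 columns
rows = height_mm // pixel_mm     # 24 rows
total_s = cols * rows * dwell_s  # 4800 s

print(cols, rows, total_s / 3600)  # 20 x 24 pixels, just over an hour

# Poisson counting noise: with ~550cpm through open air, the relative
# uncertainty on each pixel scales as 1/sqrt(counts)
for dwell in (10, 60, 90):
    counts = 550 / 60 * dwell
    print(dwell, round(100 / math.sqrt(counts), 1))  # % noise per pixel
```

At 10 seconds per pixel the expected pixel noise is around 10%; at 90 seconds it drops to about 3.5% — which is the resolution-versus-time tradeoff in the scans below.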
And it worked! The image is certainly a bit noisy (as expected), but it looks great. The table and the avocado are clearly visible, and the seed might also be in there, but we’ll need a bit higher integration time to see if that’s real structure, or just noise.
Overlaying the scan atop the picture, the scan is a perfect fit!
The integration time for the first image was only 10 seconds per pixel, so I set up a longer scan with an integration time of 60 seconds per pixel. Beautiful! This still isn’t quite at the empirically determined sweet spot of 90 seconds, but it really cleaned up the noise from the first image.
The same data, with log scaling rather than linear scaling. I’m not entirely certain whether avocado pits are more or less absorptive to 33keV photons than the surrounding avocado, so it’s not clear whether we’re seeing lots of absorption at the center because of the seed, or because there’s 10cm of fruit between the source and detector…
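The ambiguity comes from Beer-Lambert attenuation: each pixel measures the beam after exponential absorption along the whole path, so a thick slab of moderately absorbing flesh and a thinner but denser seed can produce similar counts. Taking the negative log of the normalized counts recovers the integrated attenuation, which is why log scaling is the natural view. A small sketch — the attenuation coefficients here are made-up illustration numbers, not measured ones:

```python
import math

I0 = 550.0  # open-beam rate, cpm

def transmitted(mu_x_pairs, incident=I0):
    """Beer-Lambert: I = I0 * exp(-sum(mu_i * x_i)) along the beam path.
    mu_x_pairs is a list of (attenuation coefficient 1/cm, thickness cm)."""
    return incident * math.exp(-sum(mu * x for mu, x in mu_x_pairs))

def line_integral(counts, incident=I0):
    """Invert: -ln(I/I0) gives the integrated attenuation along the path."""
    return -math.log(counts / incident)

# hypothetical coefficients: 10cm of flesh vs 8cm of flesh + 2cm of seed
flesh_only = transmitted([(0.25, 10.0)])
with_seed = transmitted([(0.25, 8.0), (0.35, 2.0)])
print(line_integral(flesh_only), line_integral(with_seed))  # 2.5 vs 2.7
```

With made-up coefficients that close together, the two cases are nearly indistinguishable in a single projection — which is exactly why a tomographic scan from many angles is the next step.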
But I’d love to see some internal structure. So tonight I put the bell pepper on, which is about the same size as the avocado, and set it to an integration time of 20 seconds.
And the result! It definitely looks like a bell pepper, and you can clearly see the seed bundle inside. Incredibly cool!
The same image, log scaled instead of linear scaled.
And the overlay. Looks beautiful!
What a fantastic few days for the open source CT scanner, and the initial data looks great. There’s still plenty to do — now that the source and detector are working, I can finish designing the Arduino shield with four stepper controllers (two for the linear axes, one for the table, and one for the rotary axis). The source is also currently collimated in only the most liberal of senses — in practice the detection volume for a given pixel is likely a pyramid extending from the ~3mm source aperture to the ~1cm square detector — so the images should sharpen up a good deal with better control of the beam shape. Once all of that is working, and I add an accelerometer to the rotational axis to sense its angle, I should be able to scan from 180 degrees around the sample, and test the instrument in computed tomography mode: backing out the internal structure of a given slice from a bunch of 1D images. Very exciting!
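That reconstruction step is classically done with filtered backprojection. Just to show the idea, here is a minimal unfiltered backprojection sketch on a toy grid (a real reconstruction adds a ramp filter and proper interpolation, and this is not the scanner's actual firmware — purely an illustration):

```python
import math

N = 21  # toy slice: N x N grid with a dense 5x5 block in the middle
phantom = [[1.0 if 8 <= r <= 12 and 8 <= c <= 12 else 0.0
            for c in range(N)] for r in range(N)]

def project(image, theta):
    """1D absorption profile: sum the image along rays at angle theta,
    with nearest-neighbor binning onto a detector the width of the grid."""
    profile = [0.0] * N
    c0 = (N - 1) / 2
    for r in range(N):
        for c in range(N):
            t = (c - c0) * math.cos(theta) + (r - c0) * math.sin(theta)
            b = int(round(t + c0))
            if 0 <= b < N:
                profile[b] += image[r][c]
    return profile

def backproject(profiles, thetas):
    """Smear each 1D profile back across the grid and sum (unfiltered)."""
    recon = [[0.0] * N for _ in range(N)]
    c0 = (N - 1) / 2
    for profile, theta in zip(profiles, thetas):
        for r in range(N):
            for c in range(N):
                t = (c - c0) * math.cos(theta) + (r - c0) * math.sin(theta)
                b = int(round(t + c0))
                if 0 <= b < N:
                    recon[r][c] += profile[b]
    return recon

thetas = [math.radians(a) for a in range(0, 180, 10)]  # 18 views over 180 deg
recon = backproject([project(phantom, t) for t in thetas], thetas)

# the brightest region of the (blurry) reconstruction lands on the block
peak = max((recon[r][c], r, c) for r in range(N) for c in range(N))
print(peak[1], peak[2])
```

Without the ramp filter the result is blurred by the characteristic 1/r halo, but even 18 views are enough to localize the dense block — encouraging for a first tomography experiment.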
Thanks for reading!
I thought I’d take a few moments to introduce the next prototype open source science tricorder that I’ve been working on, the Arduino-compatible Mark 5 Arducorder.
I wasn’t having a great deal of luck with the earlier Mark 5 design, and so I decided to start from scratch and create something completely different under the hood. The new design has a larger sensor suite (a few sensors have been updated, and a multi-gas sensor has been added), but it should also be easier to program, easier for folks to tinker with, more modular, and less expensive to produce. It’s also Arduino Due compatible, so the hundreds of thousands of folks out there who love Arduino programming and building simple circuits should feel right at home tinkering.
Like the Mark 1 and 2, the Mark 5 Arducorder has a separate motherboard and sensor board. I think community building is a huge part of a successful open source design, and in this spirit I’d like it to be as easy as possible for folks to build new sensor boards for their Arducorders, or add expansions that I can’t anticipate. The Arduino folks have been very good with designing their boards to be expandable using “shields” that have standard, easy-to-prototype, 0.1″ headers. Similar to the idea of a shield, I’ve designed the Arducorder to have a 34-pin header for the sensor boards that expose a variety of pins for the I2C, UART, SPI, Analog, PWM, and Digital I/O peripherals, so that there are plenty of pins for expansion and interfacing to most kinds of sensors.
These boards were an interesting challenge to design. Conceptually the Arduino motherboards are fairly simple, but in order to maintain perfect compatibility with the Arduino Due board the routing was a little complex in places. The whole Mark 5 Arducorder system is small — really small — and the large easy-to-use sensor connector consumes a lot of real estate, totaling nearly one quarter of the board area. Because of this I had to move to a 4-layer design, which takes a little longer to get fabbed. Still, the entire Arducorder including the 2.8″ LCD, WiFi module, and sensor board fits in about the same footprint as the original Arduino Due (or, about the size of my Blackberry), so it’s all quite compact and I’m very happy with the footprint.
In terms of hardware, the prototype Arducorder motherboard currently has the following specifications:
- Arduino Due compatible, using the Atmel SAM3X8E ARM Cortex-M3 CPU
- 84MHz CPU Clock, 512KBytes of flash, 96KBytes SRAM
- External 128KByte SPI SRAM
- microSD card socket for data, graphics, and so forth
- 2.8″ TFT LCD display w/touch panel
- FT800 Graphics and Audio Controller to offload graphical rendering. Supports JPEG decompression.
- CC3000 802.11b/g WiFi module
- Two user-selectable input buttons, one on either side
- microUSB for programming and charging (Due “native port”)
- Exposes the second programming port through a header, as well as the two erase/reset buttons on the side, to maintain Arduino Due compatibility
Having some mechanism to render quality graphics has been a requirement since the Mark 1 using its external SED1375 graphics controller, and was certainly the case with the Mark 2’s beautiful dual organic LED displays. But graphics of any resolution have always been difficult for microcontroller-powered systems, like the PIC family (used in the Mark 1) or the Atmel microcontrollers used with the Arduino family of boards. Even with a microcontroller fast enough to perform graphics rendering, most microcontrollers don’t have nearly enough memory to support even a single framebuffer for a 320x240x16bpp screen (150KB), so any graphics they do render tend to look choppy.
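The memory math that rules out a local framebuffer on these parts is quick to check:

```python
width, height, bpp = 320, 240, 16  # QVGA panel, 16 bits per pixel

framebuffer_bytes = width * height * bpp // 8
print(framebuffer_bytes, framebuffer_bytes / 1024)  # 153600 bytes, 150.0 KiB

sam3x8e_sram = 96 * 1024  # total SRAM on the Due's SAM3X8E
print(framebuffer_bytes > sam3x8e_sram)  # True: even one frame doesn't fit
```

A single frame is larger than the SAM3X8E's entire SRAM, before leaving any room for the application — hence the appeal of offloading rendering entirely.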
Enter the FT800 graphics controller, a new product from FTDI, the same folks who make the popular FT232R USB-to-serial converter. The FT800 looks a lot like a modern version of the 2D tile-based graphics controllers found in handheld gaming systems a few years ago, while also incorporating audio and touch-screen peripherals. The Gameduino 2 is a recent Arduino-powered project that makes use of the FT800, and it shows Game Boy Advance-era graphics on an Arduino Uno — so I’m confident that an attractive and elegant interface can be crafted on the Arducorder. While it has fewer graphical capabilities than the original Mark 5 design, it should be much easier for folks to modify — and I’m excited to see the first user interface themes folks come up with.
In terms of sensing capabilities, the current sensor board has footprints for the following sensors:
- Ambient Temperature and Humidity: Measurement Specialties HTU21D
- Ambient Pressure: Bosch Sensortec BMP180
- Multi-gas sensor: SGX-Sensortech MICS-6814
- 3-Axis Magnetometer: Honeywell HMC5883L
- Lightning sensor: AMS AS3935
- X-ray and Gamma Ray Detector: Radiation Watch Type 5
- Low-resolution thermal camera: Melexis MLX90620 16×4
- Home-built linear polarimeter: 2x TAOS TSL2561
- Colorimeter: TAOS TCS3472
- Open Mini Visible Spectrometer v1 using TAOS TSL1401CL 128-pixel detector, with NeoPixel light source
- GPS: Skytraq Venus 638
- Distance: MaxBotix Ultrasonic Distance Sensor
- Inertial Measurement Unit: Invensense MPU-9150 9-axis (3-axis accelerometer, gyro, and magnetometer)
- Microphone: Analog Devices ADMP401
Many of these are new or updated offerings that either offer new sensing modalities that weren’t previously available (like the lightning sensor from AMS), or that improve upon resolution, size, or cost over previous versions. Gas sensing has been on the wishlist for a long while, but many contemporary sensors are large and use power-hungry heating elements — so I’m particularly excited about trying out the new line of micro gas sensors from SGX.
One sensor has been temporarily removed — the camera. There’s currently no easy way that I’m aware of to hook up a camera (which is a high bandwidth device) to an Arduino Due, which has limited memory. I’ve replaced the camera with a small board-to-board connector, in the hopes that someone in the open source community will develop a small SPI JPEG camera board with an onboard framebuffer shortly. If not, I’ll have to have a go at it once the rest of the device is functional.
While the top of the sensor board contains the thin, omnidirectional sensors, the bottom contains many of the larger, directional sensors including the open mini spectrometer and the ultrasonic distance sensor. I’d love to find a shorter alternative to these sensors at some point, as right now they are the determining factor in the Mark 5’s thickness — about an inch.
Currently I’m testing out the sensor board — some of the sensors are a bit expensive, so I’m iteratively populating the boards, testing, and so forth. I also have to 3D print more open mini spectrometers — my cats absolutely love to play with them, so the bunch I’ve made have vanished into the aether under the couch, never to be seen again.
My current TODO list is to verify the basic functionality of the hardware (currently the FT800 as well as a few of the sensors have been tested), write low-level drivers for all the sensors and peripherals, then move on to creating the larger graphical user interface. At first the graphical environment will likely be somewhat modest, but I’d love to recruit the help of some skilled folks from the open source community who would enjoy working on the interface side of things once the prototype hardware is stable. Please feel free to contact me if you’re interested.
Thanks for reading!