Towards an inexpensive open-source desktop CT scanner


A bit of a story, and then a lot of pictures — by far the most interesting class I’ve ever taken was Advanced Brain Imaging in grad school. As a hands-on lab class, each week we’d have a bit of a lecture on a new imaging technique, and then head off to the imaging lab where one of the grad students would often end up in the Magnetic Resonance Imager (MRI) and we’d see the technique we’d just learned about demonstrated. Before the class I was only aware of the structural images that most folks think of when they think of an MRI, as well as the functional MRI (or fMRI) scans that measure blood oxygenation levels correlated with brain activity and are often used in cognitive neuroscience experiments. But after learning about Diffusion Tensor Imaging, spin-labeling, and half a dozen other techniques, I decided that the MRI is probably one of the most amazing machines that humans have ever built. And I really wanted to build one.

MRI is a spatial extension to nuclear magnetic resonance spectroscopy (NMR), and requires an extremely homogeneous high-intensity magnetic field to function — far more uniform than you can achieve with permanent magnets or electromagnets. For MRI, this uniformity is often accomplished using a superconducting magnet that’s cooled to near absolute zero using liquid helium. This, of course, makes it extremely technically difficult to make your own system. While folks have been able to use large electromagnets for NMR (they average out the magnetic field intensity over the sample by spinning the sample very rapidly while it’s inside the magnet), I haven’t seen anyone demonstrate building an imaging system using an electromagnet. There are some experimental systems that try to use the Earth’s magnetic field, but the few systems I’m aware of are very low resolution, and very slow.

Volumetric biological imaging has two commonly used tools — MRI and Computed Tomography (or CT), sometimes also called Computed Axial Tomography (or “CAT”) scanning — although ultrasound, EEG, and a bunch of other techniques are also available. Fast-forward about two years from my brain imaging class (to about three years ago): I had started my first postdoc and happened to be sitting in on a computational sensing / compressed sensing course.


About the same time I happened to be a little under the weather, and stopped into a clinic. I thought I’d torn a muscle rock climbing, but after examining me the doctor at the clinic thought that I might have a serious stomach issue, and urged me to visit an emergency room right away. As a Canadian living abroad, this was my first real contact with the US health care system, and as exciting as getting a CT was (from the perspective of being a scientist interested in medical imaging), from a social perspective it was a very uncomfortable experience. Without really going into details or belaboring the point, universal health care is very important to me, and (what many consider) a basic human right that most of the folks in the developed world have access to. My mom was diagnosed with cancer when I was young, and we spent an awful lot of time in hospitals. She and my dad still do, after 15 years and more surgeries than anyone can count. It’s frightening to think of where we’d all be if her medical care wasn’t free. And so when a bill showed up a month or so after my emergency room visit for nearly $5,000 (most of which was covered by a health insurance company), I nearly needed a second trip to the emergency room, and I thought a lot about the many folks I knew, including my girlfriend at the time, who didn’t have any form of health insurance and basically couldn’t go to the doctor when they were ill for fear of massive financial damage.

With all of this in mind, knowing the basics of medical imaging, and having just discussed computed tomography and the Radon transform in the class I was sitting in on, I decided that I wanted to try and build an open source CT scanner, and to do it for a lot less than the cost of me getting scanned, by using rapid prototyping methods like laser cutting and 3D printing.

It’s been a few years since I’ve had access to a laser cutter, and they’re one of my favorite and most productive rapid prototyping tools. In the spirit of efforts like the Reprap project, I enjoy exploring non-traditional approaches to design, and designing machines that can be almost entirely 3D printed or laser cut. Fast-forward almost two and a half years to last month, and the local hackerspace happened to have a beautiful laser cutter generously donated. This is the first cutter I’ve had real access to since grad school, and with the CT scanner project waiting for a laser cutter and a rainy day for nearly two years, I immediately knew what I wanted to have a go at designing. On to the details.


From a high-level technical standpoint, a computed tomography or CT scanner takes a bunch of absorption images of an object (for example, x-ray images) from a variety of different angles, and then backs out 3D volumetric data from this collection of 2D images. In practice, this is usually done one 2D “slice” at a time: first by rotating an x-ray scanner around an object, taking a bunch of 1D images at tens or hundreds of angles, and then using the inverse Radon transform to compute a given 2D slice from this collection of 1D images. One can then inspect the 2D slices directly to see what’s inside something, or stack the slices to view the object in 3D.
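As a rough sketch of the reconstruction math (and only that — this isn’t code from the scanner itself), here’s a minimal filtered back projection in Python with NumPy. The function name and the simple nearest-neighbour back-projection are my own illustrative choices; real reconstruction code would use interpolation and a windowed ramp filter:

```python
import numpy as np

def backproject(sinogram, angles_deg):
    """Minimal filtered back projection (inverse Radon transform).
    Each row of `sinogram` is a 1D absorption profile taken at the
    corresponding angle in `angles_deg`."""
    n_angles, n_det = sinogram.shape
    # Ramp-filter each 1D projection in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1).real
    # Smear each filtered projection back across the image plane
    # along its acquisition angle, and sum the contributions
    c = n_det // 2
    y, x = np.mgrid[0:n_det, 0:n_det] - c
    image = np.zeros((n_det, n_det))
    for profile, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate seen by each image pixel at this angle
        t = np.round(x * np.cos(theta) + y * np.sin(theta)).astype(int) + c
        inside = (t >= 0) & (t < n_det)
        image[inside] += profile[t[inside]]
    return image * np.pi / (2 * n_angles)
```

Feeding it the analytic projections of a centered disc (which look the same from every angle) reconstructs a disc-shaped slice.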


Mechanically, this prototype scanner is very similar to the first generation of CT scanners. An object is placed on a moving table that goes through the center of a rotating ring (or “gantry”). Inside the ring there’s an x-ray source, and on the other side a detector, both mounted on linear stages that can move up and down in unison. To scan an object, the table moves the object to the slice of interest, the gantry rotates to a given angle, and then the source and detector scan across the object to produce a 1D x-ray image. The gantry then rotates to another angle, and the process repeats, generating another 1D image from a slightly different angle. After generating tens or hundreds of these 1D images from different angles, one backs out the 2D image of that slice using the inverse Radon transform. The table then moves the object slightly, and the process is repeated for the next slice, and the hundreds of other slices that are often taken in a medical scan. Modern scanners parallelize this task by using a fan-shaped beam of x-rays and hundreds of simultaneous detectors to scan someone in about a minute, but the first generation of scanners could take several minutes per slice, meaning a scan with even tens of slices could take an hour or more.
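In case the nested motion is hard to picture, the translate-rotate acquisition order can be sketched as three nested loops. This is just an illustration of the sequence described above, with made-up step counts, and not the scanner’s actual firmware:

```python
def scan_plan(n_slices, n_angles, n_steps):
    """Yield the translate-rotate acquisition sequence of a
    first-generation CT scanner, one detector reading per tuple."""
    for s in range(n_slices):        # table indexes to the next slice
        for a in range(n_angles):    # gantry rotates to the next angle
            for p in range(n_steps):  # source/detector sweep across the object
                yield (s, a, p)

# One full slice needs n_angles * n_steps detector readings
plan = list(scan_plan(n_slices=2, n_angles=90, n_steps=64))
```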


Designing an almost entirely laser-cuttable CT scanner with four axes of motion, one being a large rotary gantry, was a lot of fun and an interesting design challenge. I decided that a good way to rotate the gantry would be to design it as a giant cog that sat atop a system of drive and idler cogs, that could slowly index it to any angle.


One of the issues with laser cutting a giant cog is finding something to mate with it that can transfer motion. I’ve press-fit laser cut gears onto motor shafts before (like with the laser cut linear CNC axis), but in my experience they can slip or wear rather quickly, and I like being able to disassemble and reassemble things with ease. I decided to try something new, and designed a laser-cuttable 2.5D timing pulley that mates with the main rotary cog, and securely mounts on a rotary shaft using a captive nut and set screw. On either side of the shaft there’s space for a bushing that connects to the base, and inside the base there’s a NEMA17 stepper from Adafruit that transfers motion to the drive shaft using a belt and timing pulleys.


A small lip on the base acts as the other edge of the timing pulley, and helps keep the main rotary axis aligned.


Inside the rotary gantry are two linear axes 180 degrees apart — one for the source and the other for the detector. The gantry is about 32cm in diameter, with the bore about 15cm, and the gantry itself is about 8cm thick to contain the linear axes.


Each linear axis has a small carriage that contains mounts for either the source or detector, some snap bushings for two aluminum rails, and a compression mount for the timing belt. Each axis also has an inexpensive NEMA14 stepper and an idler pulley. Here, I’m using a very small solid state high-energy particle detector called the Type-5 from Radiation Watch, which can be easily connected to an external microcontroller. It’s very easy to work with, and saves me from having to use the photomultiplier tube and scintillation crystal that I found on eBay from an old decommissioned PET/CT scanner.


I’m certain if the symmetry were any more perfect, it would move one to tears. The rotary gantry has to be symmetric to ensure proper balance and smooth rotation. After rotating the gantry 180 degrees, here you can see the other linear axis, intended for the source. It currently just contains a mount pattern with four bolts that a source will eventually attach to.

Safety is very important to me. In medical diagnostic imaging it’s often important to have an image as soon as possible, but that’s not the case for scanning non-living objects purely for scientific or educational interest. This chart from xkcd shows the radiation that folks typically absorb from everyday activities like banana-eating and sleeping beside someone, to hopping on planes or having a diagnostic x-ray. I’ve designed this scanner to operate at levels slightly above the natural background level, well into the blue (least intense) section of the xkcd graph, and to make use of a “check source”, which is an extremely low intensity source used to verify the functionality of a high-energy particle detector. The trade-off for this safety is acquisition time, and it will likely take a day or more to acquire data for even a small object. This aspect of the design is scalable, such that if the scanner were to be used in a research environment in a shielded room, folks braver than I should be able to acquire an image a good deal faster.
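To see why a check source stretches a scan out to a day or more, a back-of-the-envelope estimate from Poisson counting statistics helps. Every number below is an assumption chosen for illustration, not a measured value from this detector:

```python
import math

# With a radioactive source, detected counts follow Poisson statistics,
# so the SNR of a sample that collects N counts is roughly sqrt(N).
count_rate = 10.0   # assumed counts/second through the object (weak check source)
target_snr = 10.0   # desired SNR per detector sample
samples = 64 * 90   # assumed detector positions x gantry angles, for one slice

counts_needed = target_snr ** 2            # need SNR^2 counts per sample
dwell = counts_needed / count_rate         # seconds spent per sample
hours_per_slice = samples * dwell / 3600   # ~16 hours for these numbers
```

A brighter (shielded-room) source raises `count_rate` and shortens the scan proportionally, which is the scalability mentioned above.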


The sandwich of four plates on either end of the linear axes contains precision mounts for the aluminum shafts, as well as a setscrew with captive nut to hold the shafts in place.


The table itself is about 40cm long, and offers nearly 30cm of travel. It uses a light-weight nylon lead screw to index the table, with a NEMA14 drive motor located in the base.


To test out the motion and detector, I put together an Arduino shield with a few Pololu stepper controllers and a connector for the detector. The Seeed Studio prototype board I had on hand only had space for three stepper controllers, but it was more than enough to test the motion. Each axis runs beautifully — I was sure the rotational axis was going to have trouble moving smoothly given that most of the moving parts were laser cut, but it worked wonderfully on the first try, and moves so fast I had to turn down the speed lest the neighbours fear that I was building a miniature Stargate…

When I solidify all the bits that have to be in the controller, I’ll endeavor to lay out a proper board much like this prototype, but with four stepper controllers, and an SD card slot to store the image data for long scans.


For size, here you can see the Arduino and shield together on the scanning table. I’m hoping to start by scanning a carrot, move up to a bell pepper (which has more non-symmetric structure), and work up to an apple. Since time on commercial machines is very expensive, I think one of the niche applications for a tiny desktop CT scanner might be in time-lapse scans of slowly moving systems. If the resolution and scan speed end up being up to the task, I think it’d be beautiful to plant a fast-sprouting seed in a tiny pot and continually scan it over a week or two to build a 3D volumetric movie of the plant growing, from watching the roots in the pot grow, to the stalk shooting up and unfurling its first leaves. I’m sure the cost of generating that kind of data on a medical system would be astronomical, whereas the material cost of this prototype is in the ballpark of about $200, although I’m expecting that a source will add about $100 to that figure.


And finally, here’s a quarter-size acrylic prototype that I designed and cut in an afternoon a few weekends ago, that started the build and brainstorm process. My recently adopted rescue cat ironically loves to hang around the “cat” scanner, and has claimed nearly all of the open mini spectrometers I’ve built as toys to bat around…

Laser cutters are really amazing machines, and it’s really incredible to be able to dream up a machine one morning, spend an afternoon designing it, and have a moving functional prototype cut out and assembled later that evening that you can rapidly iterate from. Since laser cutters are still very expensive, this work wouldn’t have been possible without kind folks making very generous donations to my local hackerspace, and I’m extremely thankful for their community-minded spirit of giving.

thanks for reading!

Sneak Peek: Science Tricorder Mark 5 development pictures

I thought I’d take a moment to snap and share some pictures of the Science Tricorder Mark 5 prototype in its mid-development state. I’ve recently hit a snag with the WiFi, and have a little downtime while I’m waiting for a reply to a support e-mail.


The form factor of the Mark 5 looks much like a smart phone. In fact, it happens to be about the same size as my blackberry, though ultimately it’ll be a little thicker to accommodate the size of some of the larger sensors, like the distance sensor, open mini spectrometer, and a few others. Ultimately I think this form factor adds a lot in terms of usability over the folded design — with the Mark 1 and 2 you’d often have to hold the device at an odd angle, with the angle for trying to scan something usually being much different than the angle to see the screen. Here, I’ve moved many of the omnidirectional sensors (that happen to be thin) to the top of the device, and placed the directional sensors (which also tend to be much larger) on the bottom — the idea being that you could make use of the omnidirectional sensors in any position, and use the directional sensors much like you’d take a picture with your smart phone. This also effectively doubles the amount of exterior-facing sensor space, which is fantastic.


Keeping things tractable is one of my central design philosophies; otherwise most of this wouldn’t be possible. This was a lesson that I learned very well with the Mark 2 — designing your own ARM-based motherboard is a lot of fun and you learn a great deal, but it’s also time consuming (even with reference designs), and as a one-person project you have to pick your battles. So in this respect, choosing the computational bits of the Mark 5 was one of the most difficult decisions, in that it has to balance capability, ease of modification, and implementation time. In terms of capability, it’s important that the Mark 5 have advanced visualization capabilities like the Mark 2, and WiFi capability both to move data out of the device, as well as (eventually) upload the data to a website that would allow folks to share their sensing data. In terms of ease of modification, I’d like folks to be able to modify and reprogram the device as easily as possible, and use it as a vehicle to explore electronics, science, and math as much as to visualize the world. In addition to all this, there are a bunch of pragmatic concerns — power consumption, development tools, product end-of-life, and so forth.

This was a very difficult choice to make, and given that there’s no perfect option, I bounced back and forth quite a bit. On one hand I thought about moving to something Arduino or Chipkit compatible, which would be very easy to program and fast to develop, but would sacrifice computational capability. On the other hand, ARM-based systems-on-a-chip would have the computational capability, and could run a piece of middleware that would make it easy for folks new to programming to modify, but the development time and development cost would be very high. The Mark 5 would likely have to move to a 4-layer or 6-layer design, which would add a barrier to folks in the open hardware community who might want to contribute, or make derivatives.

In the end, I went back to an idea that I’d considered for the Mark 2, which is to use a small system-on-a-module that contains the time-consuming bits — processor, memory, WiFi, etc. — and lets me focus my attention on the project-specific bits like the sensors. There are currently not a lot of options for an extremely small system-on-a-module that includes WiFi. For the Mark 2 I had considered using a Verdex by Gumstix, and for the Mark 5 I settled on trying their Overo FireSTORM modules, which include a TI OMAP3730 processor running at 800MHz, 512MB of RAM, 512MB of flash, and onboard WiFi and Bluetooth. The modules are also very, very small, and run Linux.

After weeks of tinkering I’m still having issues connecting to the WiFi (it’s been very spotty for me, only worked a few times, and most of the time doesn’t detect the WiFi hardware), and while it’s not clear whether it’s a hardware or software issue, from the Overo mailing list it appears as though this is an issue a bunch of other folks have run into. I sent off an e-mail to the Gumstix folks early last week, and hopefully I’ll hear back from them soon with some help. Hopefully after that’s sorted out I can work on the display driver, and start populating the sensors.


In addition to the touch display and a bunch of level translators, the top of the board contains an ultra low power PIC microcontroller to act as an interface between the sensors and the Gumstix, much as in the Mark 2.


Because I’m still tinkering with the Gumstix, I haven’t yet populated many of the sensors on the Mark 5 so that I can better diagnose any issues that come up. To help prototype the Mark 5’s sensor suite, and also for when I was considering making an Arduino-powered Mark 5, I designed a breakout board that’s essentially just the upper sensor section of the Mark 5. Here only the top sensors are populated, including the magnetometer, inertial measurement unit (consisting of a 3-axis gyro, accelerometer, and internal magnetometer), ambient humidity sensor, ambient temperature and pressure sensor, lightning sensor, and the GPS. Both the lightning sensor and GPS have RF components, which I don’t have a lot of experience with, so it was very comforting to see the GPS acquire a lock and display position information accurate to within a few meters. Interested readers may also notice the footprint for the open mini spectrometer on the left side of the board. The bottom side of the board, not populated or shown here, contains the spectrograph for the open mini spectrometer, camera, distance sensor, low resolution thermal camera, colour sensor, as well as a prototype for a 3D printable linear polarimeter much like the one on the Mark 1. The Mark 5 board itself includes footprints for both a radiation sensor and a gas sensor that didn’t fit on this breakout board.


Assembly Pictures
I thought I’d include a few assembly pictures. Here’s one of the solder paste stencils, for the bottom of the board. Among pictures that I’ve taken recently, it’s also one of my favorites.


Here, after the solder paste was applied and parts placed, the bottom components are being soldered in a makeshift reflow oven.


Fresh from the oven and after cleaning a few solder bridges, the first prototype Science Tricorder Mark 5 board is ready to begin the development process.

thanks for reading!

Sneak peek: 3D-printable mini spectrometer

I thought I’d take a moment to show a sneak peek of something I’ve been working on for the Mark 5, an inexpensive 3D printable mini spectrometer. (The Mark 5 is going well, by the way — the first prototype is half-built, and I successfully communicated with its Linux console over USB this weekend!)


This is a prototype Open Mini Spectrometer, a very small, inexpensive, and partially 3D-printable visible light spectrometer for embedded systems. Technically it has two components: the detector electronics, and the spectrograph.



The detector board contains:

  • a TSL1401CL linear CMOS detector w/128 channels
  • an AD7940 external analog to digital converter (14-bit @ 100kSPS)
  • a small power filter
  • a standard 0.1″ header to easily breadboard the spectrometer or connect it to a microcontroller (including an Arduino)
  • a 4 x 2mm-hole mounting pattern to attach the spectrograph
  • for the stand-alone pcb, 2 x 3mm mounting holes (one on either end)



The prototype spectrograph is an experiment in low-cost design, and is almost entirely 3D printed using ABS plastic on an inexpensive desktop 3D printer (such as a Makerbot, though I used an ORD Bot Hadron). I have much more experience designing electronics than I do designing optical systems, and so the spectrograph is designed to be swappable/upgradable as newer designs come along (and I expect it to go through a few iterations). This first spectrograph design has a 3D printed slit, and uses an inexpensive 1000-line/mm diffraction grating of the kind you can find on diffraction grating slides for classroom experiments. I read a paper a while ago on using deconvolution to post-process the data from slit spectrometers and basically sharpen the point-spread function (or PSF) to effectively increase the resolution of the instrument. Inspired by this, I decided to leave out the relay optics between the slit and grating and between the grating and detector, to see if I could use post-processing to effectively sharpen up the overly broad PSF and have an even simpler and less expensive instrument.

The spectrograph design:

  • a ~0.2mm printed slit
  • ~400-700nm spectral range
  • variable spectral resolution (~3.3nm @ 400nm, ~1.8nm @ 700nm), not accounting for the PSF
  • a 1000 line-per-mm diffraction grating (cut into a 4mm-wide strip, and inserted into the spectrograph flush with the slit aperture)
  • 3D printable on an inexpensive printer
  • very small size — about 1cm wide x 2cm long x 3cm tall.
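The variable resolution quoted above falls out of the grating equation: for a first-order spectrum at normal incidence, wavelength = d·sin(θ), so the dispersion d·cos(θ) shrinks toward longer wavelengths and each pixel covers fewer nanometres at the red end. Here’s a sketch assuming the 128 detector pixels are spaced evenly in diffraction angle; the exact bin widths depend on the real spectrograph geometry, so only the trend should be trusted:

```python
import numpy as np

d = 1e6 / 1000  # groove spacing in nm for a 1000 line/mm grating
# Assume the 128 detector pixels evenly span the angles that
# diffract 400nm through 700nm (an idealized geometry)
theta = np.linspace(np.arcsin(400 / d), np.arcsin(700 / d), 128)
wavelength = d * np.sin(theta)   # nm seen by each pixel (first order)
bin_width = np.diff(wavelength)  # nm covered per pixel: widest at 400nm
```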

With a spectrometer you’re often battling for SNR, and have to worry about stray light. Although these pictures don’t show it, the spectrograph has to be spray painted with a flat matte black paint to get any kind of performance.

Example Data:

I connected the open mini spectrometer to an Arduino Uno, and wrote a quick sketch to acquire spectral data and send it serially to a Processing sketch. Let’s have a look at some data collected from the instrument from two light sources — the first a white LED, and the second a red laser diode. The following images include four subplots: (1) the raw detector data from the light source, (2) a baseline measurement of the ambient light, (3) the difference of (1) and (2), leaving just the light from the light source, and (4) the spectrum re-sampled from variable-width (1.8-3.3nm) to evenly spaced spectral bins:


White LED

Red Laser Diode (~650nm)
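The baseline-subtraction and resampling steps, (3) and (4) above, are simple in NumPy. The detector frames and the uneven wavelength calibration below are synthetic stand-ins, not real data from the instrument:

```python
import numpy as np

# Made-up calibration: 128 pixels with unevenly spaced wavelength bins
wavelengths = 400 + 300 * np.linspace(0, 1, 128) ** 1.3
baseline = np.full(128, 0.05)                              # ambient-light frame
raw = baseline + np.exp(-((wavelengths - 650) / 20) ** 2)  # source + ambient

signal = raw - baseline            # step (3): remove the ambient light
even = np.linspace(400, 700, 128)  # step (4): evenly spaced spectral bins
resampled = np.interp(even, wavelengths, signal)
```

Note that `np.interp` requires the calibration wavelengths to be monotonically increasing, which a grating spectrograph naturally provides.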

Currently it has all the performance you’d expect from a $20 spectrometer with no relay optics — the PSF is quite broad (the FWHM on the laser diode is about 20nm), and although the printed slit is fairly deep there’s still a fair bit of translation on the detector depending on the spatial location of the source. I haven’t had much luck using deconvolution to sharpen the spectra, but I don’t have a great deal of experience with deconvolution on noisy data.
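For the curious, one standard approach here is Richardson-Lucy deconvolution, which iteratively sharpens a spectrum blurred by a known PSF. The sketch below works nicely on clean synthetic data; as noted above, noisy real data is much less forgiving, so treat this as a starting point rather than a solution:

```python
import numpy as np

def richardson_lucy(measured, psf, iterations=50):
    """1D Richardson-Lucy deconvolution: iteratively estimates the
    sharp spectrum that, blurred by `psf`, best explains `measured`."""
    psf = psf / psf.sum()
    flipped = psf[::-1]
    estimate = np.full_like(measured, measured.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, flipped, mode="same")
    return estimate

# Toy example: a single narrow line broadened by a Gaussian PSF
psf = np.exp(-(np.arange(-24, 25) / 8.0) ** 2)
line = np.zeros(128)
line[64] = 1.0
measured = np.convolve(line, psf / psf.sum(), mode="same")
sharpened = richardson_lucy(measured, psf)  # taller, narrower peak at 64
```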

All of that being said, it’s a great first prototype in a functioning state, and with plenty of potential for improvement!

In terms of cost, in small quantity the detector boards have about $20 of parts. If you’d like to use an external ADC, I’ve included a solder jumper to output the raw analog voltage on the CS pin, and this also reduces the cost in small quantities by about $10. The spectrograph can be made for the cost of printing, painting, plus the cost of the diffraction grating. I think the materials cost for me was probably less than $1.

Source Files:
The source files, including the Eagle files for the PCB, the Google Sketchup and STL files for the current spectrograph, as well as sample Arduino and Processing sketches (and example data, in CSV format), are available on Thingiverse. The code portions are released under GPL V3, and everything else under Creative Commons Attribution Share-a-like 3.0 Unported, both freely available without a warranty of any kind. If you’d like to order prototype PCBs from the same place I ordered them, the project is shared on, with each set of 3 bare boards available for about $5. These revision 1 boards change very little compared to the boards pictured above — a few vias have been moved to help make the design more light tight, and I’ve added in a solder jumper for those who would like to use an external ADC.

Contributor TODO List:
The open mini spectrometer is an open-source hardware project. Want to contribute? Here’s a list of near-term todo items:

  • Find a source of tiny inexpensive relay optics (~4mm dia, short focal length) that are repeatedly and consistently available in both small and large quantities
  • Design a better way of inserting and securely mounting the diffraction grating
  • Modify the spectrograph to include relay optics
  • Try printing the spectrograph using different materials. How does the slit hold up? Are there materials or methods where the slit is printed better (e.g. SLS? Inkjet?). Are there matte black build materials that do not require the spectrograph to be painted prior to use?
  • Use the mini spectrometer to measure a variety of known spectral sources, and post the data
  • Modify the pi filter for better noise rejection. A good deal of noise still appears to come through the USB port/Arduino and into the spectrometer, requiring a greater number of averages for a clean signal. Battery power should also help with this.
  • Handy at signal processing? (and, specifically, deconvolution?) Feel free to grab some sample spectral data from Thingiverse and see if you can improve the PSF and effective resolution with some postprocessing.

The Science Tricorder Mark 5 contains the same footprint for the spectrograph, so for compatibility I’d greatly prefer to keep the physical dimensions and the mating portion of the spectrograph the same (unless there’s a compelling reason for change, of course).

Thanks for reading!

Prototype Mark 5 Science Tricorder Boards Arrive

The new prototype Science Tricorder Mark 5 boards came today! I’m very excited! 🙂 With nearly 150 parts, 1000 pins, and 600 traces all in something about the size of a blackberry, these are some of the highest density boards I’ve ever designed. To keep it easy to modify and remix by the community using Eagle CAD Light, as well as inexpensive to make, I’ve kept the boards a 2-layer design with fingers crossed that there won’t be many significant noise issues when the design gets assembled and tested.


The prototype Mark 5 is designed as a sort of updated squish between the first three iterations of the Open Source Science Tricorders, with updated versions of all of the sensors on the previous models, plus some prototypes of sensing modalities that have been on the wishlist since designing the Mark 1. Radiation sensing, low resolution thermal imaging, gas sensing, a prototype 3D printable visible spectrometer, and a freaking lightning sensor are all on the list of additions. Hopefully a good number of these experiments will work out on this prototype, and make it into the final Mark 5 release.


I’ll be working to assemble the boards over the next few weeks, and starting to write some basic firmware to test their functionality. Stay tuned!

Open Source Hardware “Better World” Charitable License for Science Education and Literacy

In preparation for releasing the prototype Open Source Science Tricorder Mark 5, as well as a spin-off project that works to build an inexpensive 3D-printable mini embedded spectrometer, I’ve been working on drafting something very important to me — an open source hardware license that helps benefit science education and literacy charities when open hardware is sold for a profit.

Having previously worked with literacy charities, I know how poorly funded they can be, and how substantial a benefit even modest amounts of donations can make. For example, while in grad school I volunteered to live in our university library for a week (in a tent) to help raise funds for Live in for Literacy, a charity that helps build libraries in third world countries. A few thousand dollars is a significant fraction of the cost of building a school library in poor countries like Nepal, which helps provide substantially positive social-economic-status outcomes for hundreds of students per library, per year. LIFL notes that raising $170,000 since 2006 has allowed them to build 15 libraries, greatly support education for girls and women (which is often severely impeded in countries where they are not recognized as equal), and support the preservation of local languages through literacy projects. They estimate that they substantially benefit the education of over 5,000 children per year. This is only a single example of a very small charity.

The spirit of this license is the belief that a large portion of the social and economic issues that are present in portions of both the developed and undeveloped world are a direct response to poor literacy and poor education, and that helping to educate folks will enable them to make informed choices that effect positive social change.


Open hardware is still a new thing, and folks including the Open Source Hardware Association (OSHWA) have been working to establish a wishlist of best practices for sharing, and help figure out the licensing and other issues involved in mixing copyright, patent, and other intellectual property law in open licenses. Of the three main copyleft licenses used by the community, the Creative Commons Attribution-ShareAlike makes use of copyright law to allow folks to share, remix, and make commercial use of a work, as long as they provide attribution and share any modifications they make with the community. Two other licenses, the CERN Open Hardware License and the TAPR Open Hardware License, are specifically designed for open hardware and work to add intellectual property and patent sharing provisions and language into the mix to ensure that a work can be shared, built, and effectively maintained by the community. While there’s a great selection of licenses to choose from, the general feeling in the community is that because open hardware is new most of these licenses haven’t yet had the opportunity to be tested in court on hardware specific cases. Add that to the recent push to overhaul the patent system, and the international collaborations that are common with open projects, and the feeling is that we’ll have to wait for a few years to see how well the provisions work for each of the different licenses.

The Open Hardware Better World License

Technically, the charitable license I’d like to introduce is heavily derived from the Creative Commons Attribution-ShareAlike 3.0 Unported license, which is copyright based, and very common in the community. Its central provisions are that you’re free to share and remix a work, under the conditions that you:

  • Provide Attribution to the original work
  • ShareAlike by releasing any modifications, additions, or improvements you make under the same license.

The draft of the Open Hardware Better World license adds upon this with hardware-specific provisions similar to the CERN and TAPR Open Hardware Licenses, including:

  • Patent sharing, and immunity from suit for patents that the authors may hold that pertain to a work
  • Language that specifically mentions works of science and engineering, such as schematics, circuit layouts, etc.
  • Language that specifically mentions manufacturing physical devices, in addition to the already present copyright language for sharing the source files

In addition, I have added the charitable requirement that when open hardware projects are used in a for-profit setting:

  • 10% of the profit must be donated to a science education or literacy charity of the manufacturer/seller’s choice.
  • The charity that you choose must have a charitable commitment (the percentage of donations it uses to further its charitable mission, and not operating costs) of at least 75%, and must not discriminate based on race, gender, ethnicity, sexual orientation, faith, or religious affiliation (or lack thereof).
  • Donations may also be split between science education or literacy charities, and the Electronic Frontier Foundation (EFF) / Open Source Hardware Association (OSHWA), with the majority of the donation going to science education or literacy charities.
  • Because I can imagine few things better than waking up one morning and learning that your project has helped people, but also pragmatically for verification purposes, folks who manufacture a project are required to let the author(s) of an open source hardware project know which charity/charities they’ve chosen and the respective donation amount(s) quarterly. This puts a minor burden on the manufacturer, but shouldn’t take more than 15 minutes of their time per quarter.

In terms of a concrete example, this means that if a device costs $20 to assemble and manufacture:

  • For-profit: If the device is sold for (say) $50, then the difference between the sale price and the manufacturing cost is $30. 10% of this, or $3 per device, must be donated to a science education or literacy charity. The other 90%, or $27, goes to pay the help, run the business, and make a profit.
  • Non-profit/at cost: If the device is sold for $20, then no charitable contribution is required. Thank you for helping bring more open projects into the world!
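That split can be expressed as a tiny calculation. This is a sketch of my own — the function name and behaviour at or below cost aren’t part of the license text:

```python
def charitable_donation(sale_price, manufacturing_cost, rate=0.10):
    """Donation owed under the draft license: 10% of the profit,
    where profit = sale price - manufacturing/assembly cost.
    Selling at or below cost means no donation is owed."""
    profit = max(sale_price - manufacturing_cost, 0)
    return rate * profit

# For-profit example from the text: $50 sale on a $20 device
donation = charitable_donation(50, 20)   # $3 to charity, $27 stays with the business
# Sold at cost: nothing owed
at_cost = charitable_donation(20, 20)
```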

License Draft and Comments

The draft of the license is available here (pdf). Additions to CC-BY-SA 3.0 have been highlighted in green, and some portions largely related to the performance arts provisions of CC-BY-SA have been omitted. Because I’m an academic and a content author, but not a lawyer or a business owner, I’d love to hear your comments — both general, and specific to:

  • Adoptability: It’s my hope that the license could serve as a drop-in replacement for open-hardware projects that aim to use a CC-BY-SA 3.0 license, but that would like to include charitable donations for for-profit use — and it’s also the license I’d like to use for the next iteration of the Open Source Science Tricorder project, the open mini embedded spectrometer project, and other open hardware projects. As a content author, would you also be likely to use this license? Why, or why not?
  • Contribution Percentage: Ultimately, the license’s benefit to science education and literacy is only as large as its adoption rate. The donation proportion is set at 10% of the profit (defined as the difference between the manufacturing cost and the sale price). Intuitively this seems like a fair figure to me, but I’m not running a business and only have a very general idea of what margins are like. Do you think this is a good number? Would you go higher? Would donating 10% of the profit run you into the ground?

Please feel free to leave your comments below, or send them to peter at tricorderproject dot org.


A new research fellowship, and an Interview with

Before getting to the interview, a quick bit of news. About six weeks ago I hung up my Medical Tricorder hat, and returned to the University of Arizona to begin a new Postdoctoral Research Fellowship in their new Natural Language Processing group. I’m absolutely loving the new group, the wonderful research, and that my cute little home in Tucson has plenty of geckos living in the garden!

Adam Savage and Jamie Hyneman’s site recently posted a very in-depth, fun, and entertaining interview with me, conducted by Norman Chan, where we had the opportunity to chat about the Tricorder project, open source design, and my academic research in artificial intelligence. From the interview (speaking about my visualization experiments with the Science Tricorders):

…This collection of sensors is often called an inertial measurement unit. By coupling that collection of sensors with other sensors, say a non-contact infrared sensor, then you’re theoretically able to pair the Science Tricorder’s orientation in space with the temperature of what it’s pointing at, and (after waving it back and forth for a few seconds) construct something like a very low resolution thermal image for very low cost.

While that’s very exciting, the technique is fairly general, and so you could conceivably fuse the readings from additional sensors to, for example, make a volumetric image of the magnetic field intensity and direction in a given space, which is something that to my knowledge isn’t done with off-the-shelf instruments today.
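The orientation-plus-temperature fusion described above can be sketched in a few lines of Python. This is a toy version under my own assumptions — a 16×16 grid, ±30° sweep ranges, and pre-logged yaw/pitch/temperature triples from the IMU and the non-contact infrared sensor:

```python
import numpy as np

def thermal_image(yaw_deg, pitch_deg, temps_c, shape=(16, 16),
                  yaw_range=(-30.0, 30.0), pitch_range=(-30.0, 30.0)):
    """Bin (orientation, temperature) samples into a low-resolution
    thermal image: each reading falls into the grid cell its orientation
    points at, and each cell averages the readings it collects."""
    rows, cols = shape
    total = np.zeros(shape)
    count = np.zeros(shape)
    for yaw, pitch, temp in zip(yaw_deg, pitch_deg, temps_c):
        c = int((yaw - yaw_range[0]) / (yaw_range[1] - yaw_range[0]) * cols)
        r = int((pitch - pitch_range[0]) / (pitch_range[1] - pitch_range[0]) * rows)
        if 0 <= r < rows and 0 <= c < cols:
            total[r, c] += temp
            count[r, c] += 1
    with np.errstate(invalid="ignore"):
        return total / count   # NaN marks cells the sweep never covered
```

Waving the device back and forth simply fills in more cells; a real implementation would also need to time-align the IMU and thermometer sample streams.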

You can read the full interview here.

Rent a Tricorder on a Satellite!

I had the opportunity to grab dinner and swap stories with the ArduSat folks a few weeks ago, and I’m happy to see that they’re already most of the way to their Kickstarter goal, only a few days into the campaign!

The ArduSat is a nanosatellite loaded with sensors connected to Arduino microcontrollers. Students, schools, or regular old nerds can use the satellite to take pictures or upload their own experiments, all for only a few hundred bucks. The thing that gets me most excited about the ArduSat is that it brings space science within the reach of everyday folks for the first time. I’m itching to see what comes next.

Kind of a nerdy thing to admit, but in middle school my buddies and I used to spend hours drawing and designing our own middle-school-grade space probes for interstellar exploration. I’m pretty sure there were serious arguments about whether cast iron was an acceptable satellite construction material. My job was designing the solar sails — if the ArduSat folks ever want to cruise their next satellite towards Mars, I’d be happy to design the warp core!

Check out their video, and help support (and participate in!) some of the first open space science!

Toward a design for the next Open Source Science Tricorder

It’s been a full, unbelievable two months. The Tricorder project has been everywhere from Slashdot to MSNBC to Forbes. I’ve given talks, met wonderfully amazing open source hardware folks, been humbled by an inbox full of eager citizen scientists hoping to learn more about their worlds, and picked up my life (for the second time — they tell you in grad school you’ll likely have to do this at least a few times) and moved to the San Francisco Bay area to design Medical Tricorders, win a race for an X-PRIZE, and help transition health care into an information science. I’d say that’s a lot for two months! Now that the press has calmed down a bit, and my inbox has slowed to an almost manageable trickle, I’ve been able to play and tinker a bit with the next Open Source Science Tricorder design.

Folks have asked a bunch about the next design — what sensors will it have, what will it look like, and so forth. I’ve been sketching out some designs in hardware and in code for a while now, playing with creative elements to see what works well, what needs work, and how the design can be kept tractable, exciting, inexpensive, and manufacturable, so that folks can actually have Open Source Science Tricorders in their hands. Seeing the challenge that folks are having with building and modifying the earlier Science Tricorders (and rightfully so — they’re very complicated to assemble), it’s clear that I can do a lot better on my accessibility design goal. In light of this, I’d like the next design to be extremely easy for regular folks to modify over a weekend, both in hardware and software, maybe by adding a new analog sensor or changing one of the visualizations, and in this way serve as a vehicle for beginning to learn programming and electronics. The Science Tricorders are all about helping people explore their worlds and learn about science, and the idea of making them as easy as possible to modify meshes very well with that, I think. I confess that I have partially selfish motivations here — I feel like it would be extremely rewarding to read about all the interesting ways people have modified their Science Tricorders, and all the wonderful things they’re learning about the world with them, while I eat my fruitloops in the morning and begin the day.

I’ve tinkered with two experimental designs for the next Science Tricorder. One (pictured above), which I’ve been working with over the last half year, is an experiment in keeping things inexpensive and pervasive by placing sensors on something as small as a keyfob that everyone could keep with them. The device would be paired with a smartphone for visualization, which saves the size and expense of adding a display to the design. It’s a pretty design (especially with the RGB LEDs), and it’s fun to play with something so tiny. The two hitches are that it’s difficult to include a diverse array of sensors in something so small, and that by requiring a smartphone you’re excluding kids from carrying one around unless they also borrow their parents’ expensive phones, which is clearly a non-ideal use case.

The second and earlier design (pictured above) came after the Mark 2. The Mark 2 is absolutely beautiful, but it was a huge undertaking for a single science nerd to develop, no matter how nerdy. And it was expensive. I tried experimenting with low-cost design: the processor was changed from the ARM9 core running Linux to one of the most powerful microcontrollers at the time (a PIC32MX), the dual OLED displays were swapped for a single TFT, and — critically — most of the sensors were taken out, replaced with headers that sensors could be plugged into. This design experiment had good elements — it was definitely moving towards making things less expensive, and added connectivity options (like Bluetooth) that I liked — but it strayed a bit too far off the path. I feel like Science Tricorders are about having lots of sensors integrated in a small package, and readily available. I don’t want folks to have to carry around a bag full of sensors with them, or have to swap sensors in and out in the field, with only some subset of them available at any given time.

So, two good experiments, and lots of lessons learned. With these “concept sketches” of the next Science Tricorder — sketches in hardware and in code — I’ve been designing something different from both and closer (at least conceptually) to the Mark 1 and Mark 2 designs. I think what comes out of this process will likely be fairly close to the final design for the next Open Source Science Tricorder.

The clamshell design of the Mark 1 and 2 is great, but it’s mechanically difficult and requires an extra screen for the bottom half, which requires extra electronics — it really contributes to making things expensive. The keyfob is on the other end of the spectrum — tiny, inexpensive, and no screen at all. The PDA-style design was somewhere in the middle, but I’m not a fan — it’s very plain, and the screen is out in the open where it could easily be damaged. It’s also very flat, making it challenging to find space for everything (sensors, electronics, etc.) to fit. With all this in mind, I’ve been trying to figure out a good mechanical design that I like.

I mocked this one up in Google SketchUp. It’s kind of inspired by the Tricorders in the Enterprise series, in that the screen sits in a protective pouch and can be slid out for use. I flared out the sensor section (top), and added a bit of an angle to it. Often I’ve been mounting the sensors at a 90° angle from the screen, which means that to sense something in front of you, you have to hold the unit completely horizontal and look down at the screen. By mounting the sensors at about a 30-60° angle instead, which seems to be where you naturally tend to hold the Tricorders when you’re walking around with them, you’ll be sensing what’s in front of you while also holding the screen at an easily viewable angle — both to yourself, and to the folks around you. In that way I hope to make it more usable, and help bring the folks around a Tricorder user into the experience.

This is just a rough mock-up, and I’m sure it’ll change a bunch. I was going to print one out to play with, before remembering that the parts for my 3D printers are hidden away in a U-Haul storage container until I find somewhere to live in the bay area!

To get a rough idea of whether things would fit, I sketched up a drawing in Adobe Illustrator with some of the big (potential) parts, and started shifting them around to get a sense of the size. There are plenty of other things to fit in, but many of them are pretty tiny, and the details of their placement tend to be sorted out when designing the circuit board.

My working list of sensors and “big” parts is currently:

  • Ambient temperature, pressure, and humidity: likely Sensirion’s new SHT20 sensor (which replaces the SHT1x series), as well as a BMP085 pressure sensor, which looks /way/ easier to work with than the SCP1000 in the previous models.
  • Non-contact Temperature: likely a Melexis MLX90614 series sensor. These are great, and Melexis has been very generous with samples in the past.
  • 3-axis Magnetometer: still not sure on this one, but likely an HMC5883L or similar. These have very similar specs to the MicroMag3 in the first two models, but the sensor itself is far smaller. I’d also like to experiment with having two magnetometers mounted some distance apart, to give not just 3-axis direction and field strength, but ideally also the distance to the field source.
  • Light level and colourimetry: I’m thinking some of TAOS’s (now AMS) line of sensitive light and colour sensors. Some of these have very high dynamic ranges, so they’ll be useful across a broad range of light levels. The Avago colour sensor I used in the Mark 2 had a small internal white LED to illuminate the sample, so I might have to include something similar beside the colour sensor.
  • Distance sensor: likely a MaxSonar by MaxBotix. They have some newer models that are supposed to have better resolution, range, and noise rejection. I’d really love to use a laser-based range finder, but I don’t think there are any tiny (or inexpensive) options yet.
  • Inertial Measurement Unit: everyone’s very excited about the InvenSense MPU6050 integrated 3-axis accelerometer and gyro, and they have a version coming out that also includes a 3-axis magnetometer — which could act as the second magnetometer mentioned above.
  • Sound: this was a big request. There were always headers for microphones on the first two models, but this model will include an internal microphone for doing FFTs/frequency analysis and such.
  • Gamma ray detector: another big request. I’ve been experimenting with this since the Mark 1, but noise has always been an issue. Geiger tubes are way too large, and have hefty voltage requirements. PIN photodiodes still seem like the way to go, and folks have posted schematics online that seem to do okay — so this is definitely something to investigate again. I’ve also thought of coupling a scintillation crystal with a photodiode, but I’m sure cost, noise, and finding enough tiny scintillation crystals to make more than a few would be trouble — so directly using the photodiode as a detector (without a scintillator) may be a good compromise.
  • Gas sensor: another request, and one that still needs a bunch of research before it’s officially on the list. I’d love to include a gas sensor that could detect something like the greenhouse gases emitted by vehicles or industry. Most of the sensors I’ve found are large, and use heaters that both make the whole unit very hot and require a lot of power. I’d love to hear about a small, low-power sensor.
  • Display: If Alibaba is correct, there’s a chance that the 2.8″ OLED touch displays used in the Mark 2 are back on the market, so I’m looking to see if there’s a good supply of them for the foreseeable future. If not, I’ll have to find an inexpensive TFT that’s bright and has good viewing angles — I’d love to hear about one, if you have a model and a supplier in mind.
  • Connectivity: definitely at least one of Bluetooth or WiFi. Bluetooth would be nice for pairing with a nearby device (like a laptop or smartphone) if you’re out in a field somewhere, while WiFi would be nice if you’re at home or school — but WiFi modules are traditionally a bit more expensive, too. I’m still not sure which one to go with.
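As a taste of the frequency analysis mentioned for the microphone: once a buffer of samples is digitized, an FFT gets you a spectrum in a couple of lines. Here’s a minimal NumPy sketch, where the sample rate, buffer length, and 440 Hz test tone are arbitrary choices of mine standing in for real microphone data:

```python
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Return the strongest non-DC frequency component (in Hz)
    of a mono sample buffer, via a real-input FFT."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak = np.argmax(spectrum[1:]) + 1   # skip bin 0 (the DC offset)
    return freqs[peak]

# A synthetic 440 Hz tone stands in for microphone samples
rate = 8000                      # Hz
t = np.arange(2048) / rate       # ~0.25 s buffer
tone = np.sin(2 * np.pi * 440.0 * t)
```

With a 2048-sample buffer at 8 kHz the frequency resolution is about 3.9 Hz per bin, so the recovered peak lands within a few hertz of 440.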

I’m still deciding on the processor — there are one or two more options that I’d like to check out before making a decision, and hopefully their evaluation boards should arrive in the next few weeks.

I’m eager to hear comments on this high-level design before plunging into the details. If you have thoughtful comments, suggestions, or sensor/part recommendations, I’d love to hear them — either here, or sent to peter at tricorderproject dot org.

Stay tuned!

Adam Savage’s Maker Faire 2012 Talk: Why We Make

This morning over breakfast I had the chance to watch Adam Savage’s great talk from Maker Faire, about why we make:

A lot of what he says really resonates with me, and I’m sure with a lot of other makers. I remember late nights at the hackerspace in grad school, slaving over a hot laser cutter to fabricate some fantastic machine that I’d dreamed up that morning and designed in the afternoon, and that really needed to get out of my brain and into the real world so I could see it real, see how it worked — a sort of sketching in plastic and in gears and in code. The story is very much the same with the Science Tricorders, though the process is a lot longer because of their complexity. I’ve been developing them for years, but more than that, I /can’t help/ but develop them. It’s almost like we’re compelled, as makers, and scientists, and Tricorder builders — there’s so much we have to see, so much we have to know.

At about 17:30 in the video, Adam speaks about project-based education — something I feel very strongly about — and gives a shout out to building your own fully functional Tricorders from Star Trek!

Forbes Article: Social Medicine is the Next Big Thing After Social Media

There’s a great article over in Forbes this morning by Mark P. Mills about Medical Tricorders, Scanadu, and the Tricorder project entitled Tricorder Update — Social Medicine is the Next Big Thing After Social Media. From the article:

You want a vision for the future of health care? Don’t look to policymakers and regulators. Look to innovators and innovations. Look to San Diego’s wireless mesas and San Francisco’s silicon valleys. Look at Scanadu’s protean medical Tricorder. They get it, and it’s awesome. Watch their one-and-a-half-minute video before reading on.

Scanadu’s vision embraces patient-centric healthcare as a personal information service, in your control – in your hands – amplified by the Cloud. It is the key to unleashing the power of social medicine. Welcome to the future of healthcare.

Of course we can already use social media whether at Facebook [NASDAQ:FB] or its health-centric imitators like PatientsLikeMe to “friend” within a subject domain (symptoms, questions). But what we hunger for is hard facts about our personal medical problem that we can share with the best medical expertise. Enter Scanadu and the Tricorder.

Scanadu is competing for the Qualcomm [NASDAQ:QCOM] Tricorder X-Prize I wrote about earlier this year. (See New Era of Metadata Medicine) The underlying DNA of Scanadu is illuminated by the newest member of their impressive team, Canadian Peter Jansen, a polymath with a background in astro and optical physics, cognitive artificial intelligence, and medical imaging. Jansen’s talents speak volumes about the kind of imaging and information processing that will change the face of medicine. Jansen says “medicine must become an information science.”

You can read the full article here. Thanks Mark!