In the past few years, the Penn Museum has been experimenting with different kinds of scanning to create 3D models of objects in our collection. Today we are announcing that many of these models are now viewable on the website Sketchfab. The Sketchfab community includes many other museums, most notably the British Museum, that are sharing their collections online and allowing people to see them in three dimensions for the first time. As technology shifts from 2D to 3D displays, communities working in virtual spaces will want content and assets to be accessible. Sketchfab currently hosts 30 objects from our collection, which are also embedded in our collections website. Some of these models were created in-house, while others were made by Penn students learning to use photogrammetry techniques and incorporate them into their studies. Head over to the models page and check them out:
Here is a brief history of the journey toward creating 3D models and making them available to the public:
Beginning in the Digital Cave
Starting in 2004, the Penn Museum partnered with the University of Chicago on the Xiangtangshan Caves Project, in conjunction with the Smart Museum and the Smithsonian. The overarching goal of the project was to scan a set of caves in China, along with a number of statues that had left the country in the early 20th century, and to see whether the statues could be virtually placed back in their original context. The scans were highly detailed, done by a professional crew using high-end equipment that combined laser technology with digital photography. It was an amazing testing ground for using the power of 3D scanning to help cultural institutions learn more about their collections and share them around the world.
One of the challenges, however, was that the models were only viewable in proprietary software that required a powerful computer to run. While the pieces looked great on the project’s website, the only place one could view the textured models was on a touch screen in a traveling exhibit. We had a really neat data set that we wanted to show people, but no way to share it.
I wondered how long it would be before this type of project could be standard practice: scanning museum objects, placing them virtually in digital environments, and 3D printing them for display. There was a fairly large hurdle to overcome: the cost of creating high-quality 3D models and the difficulty of sharing them with anyone. The pipeline of capturing the data, making the model, exporting the model, sharing the model, and 3D printing the model was a tall order.
Capture Data > Create Model > Export Model > Share Model > Print Model
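As a rough sketch of what flows through that pipeline (the stage descriptions and file formats here are illustrative assumptions, not specifics from any one project; PLY, OBJ, and STL are common interchange formats, but individual tools may use others):

```python
# Illustrative sketch of the scan-to-print pipeline as data transformations.
# Stage names mirror the steps above; artifacts and formats are typical
# examples, not requirements of any particular scanner or printer.

PIPELINE = [
    ("Capture Data", "raw scans or photos", ".jpg images / .ply point cloud"),
    ("Create Model", "stitched, textured mesh", ".obj mesh + texture maps"),
    ("Export Model", "standard interchange copy", ".obj / .stl"),
    ("Share Model", "web-friendly, lower-poly copy", "hosted online viewer"),
    ("Print Model", "watertight (hole-free) geometry", ".stl sent to printer"),
]

def describe_pipeline(pipeline):
    """Return a human-readable summary line for each pipeline stage."""
    return [f"{stage}: {artifact} ({fmt})" for stage, artifact, fmt in pipeline]
```

Each arrow in the diagram above is a handoff between tools, and in those early days, each handoff was a place where the process could stall.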
Surprisingly, all of that has changed in just the last three years. A combination of web standards, 3D model standards, cheap software, and the promise of consumer 3D printing has brought 3D modeling to the masses. But the road to easily creating 3D models that can be shared online had a few stops along the way.
Jay Leno and the NextEngine Scanner
A number of years ago the Museum purchased a NextEngine scanner. Curious, I went to the company’s website and was greeted by a video starring Jay Leno, who told me that this was basically “The Jetsons.” Intrigued (though preferring a Star Trek replicator as my reference of choice), I decided to do some testing using casts of objects in the collection. My dream was to someday be able to scan an object, create a 3D model, and then 3D print that model at full scale and in color. This would allow us to create reproductions of our objects without having to make a mold from the original.
My experiments with the NextEngine were fairly successful. I was able to choose objects that could stand on their own, set them on a platform, and walk away while the scanner did its thing. An hour later I had a 3D model that had been stitched together automatically by the software. The models could then be exported in a number of 3D formats. However, only someone with special software could actually see what I had made.
Adobe PDFs to the Rescue
With models in hand, the next piece of the puzzle was figuring out how to share my creations with the world. One solution was to embed them in an Adobe PDF. This was a baby step toward being able to email someone a model that they could open on their own computer and actually manipulate themselves. The Minnesota Historical Society used this technology and made a few pieces that you could download to your computer.
You can still download the PDF and view some of the objects today: http://collections.mnhs.org/cms/web5/media.php?pdf=1&irn=10219263.
Photogrammetry: A Small Revolution
The NextEngine scanner, while it created good, detailed 3D models, had a fairly high price tag and a steep learning curve. What really started to bring down the cost of creating 3D models was the process of photogrammetry. Essentially, you take a large number of photographs of an object from many angles, feed them into software, and out pops a 3D model with good-looking texture. Improvements in digital cameras, and particularly in smartphone cameras, also paved the way for this innovation. Suddenly you could create 3D models of objects for a few hundred dollars with a camera you already had in your pocket.
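To give a sense of what “a large number of photographs” means in practice, a common rule of thumb (an assumption on my part here, not a requirement of any particular software) is to shoot overlapping photos every 10–15 degrees around the object, repeated at two or three camera heights. The arithmetic is simple:

```python
import math

def photos_needed(degrees_per_shot=10, rings=3):
    """Estimate the photo count for a photogrammetry capture:
    one full 360-degree orbit per ring (camera height), with a shot
    every `degrees_per_shot` degrees. The defaults reflect a common
    rule of thumb, not a requirement of any specific tool."""
    shots_per_ring = math.ceil(360 / degrees_per_shot)
    return shots_per_ring * rings

# With the defaults: 36 shots per orbit * 3 heights = 108 photos.
```

A hundred or so smartphone photos, shot in a few minutes, is a very different proposition from an hour on a dedicated scanner.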
Sharing Is Caring: Sketchfab Builds a Community
Sharing the objects via PDF was a step in the right direction, but it limited how you could host and embed the models. The next hurdle was finding a way to share them online at high quality. Various companies threw their hats into the ring and have succeeded by targeting different audiences, including the gaming community and the 3D graphics community. One that started to stand out for cultural institutions was Sketchfab. For one, they had the British Museum on their site and pushed for cultural institutions to become part of a larger 3D-object-oriented community. They also had a reasonable pricing structure, with unlimited uploads and the ability to embed the models in your own website.
3D Printing that Goes to 11
When it comes to visualizing objects on the web, photogrammetry is an incredibly effective method given how cheap the process is and the level of detail in the texture. However, if you actually want to 3D print a model, the underlying geometry can sometimes leave something to be desired. The Fine Arts Library at Penn has been experimenting with newer scanning technology, and the Museum was able to borrow their ARTEC scanner for a couple of weeks to see what the latest tech could deliver. I was impressed. I was able to get a good, high-quality model within 20 minutes and export it to a number of standard formats. Thanks to the FabLab on Penn’s campus, I was also able to get a professional-quality 3D print of an object I scanned. When everything lines up, you could potentially scan and print an exact duplicate of an object in less than a day. The pipeline is now complete.

As costs come down and the technology improves, I expect a lot of institutions and consumers will get into the 3D scanning and printing game. Penn has created an initiative on campus called PennImmersive that is bringing interested parties together to see what the future of this type of technology could look like in a university setting. I can’t wait to see what comes next.