Equipment choices for Visible Induced Infrared Luminescence

by Tessa de Alarcon

Because we often post about visible induced infrared luminescence (VIL) and the equipment we use at the Penn Museum, we occasionally get emails from other conservators and museum professionals asking what equipment to buy and what this photographic technique costs. The technique is most often used for imaging Egyptian blue, but it can also be used for Han blue and Han purple. Much of the same equipment is also used for infrared reflectance.

E1827A-E multimodal data set including a visible image (left) and infrared reflectance image (center) and a visible induced infrared luminescence image (right).

Making specific equipment recommendations, though, is tough because there are a lot of options and a lot depends on your budget. Basically, what you need are the right lights, a camera (and lens), and a long pass filter for the camera so it can capture in the infrared. I thought I’d do some testing to show how some of these different elements affect the results, in the hopes that it might help others figure out budgets, and to show that it’s possible to build this equipment up piecemeal, starting with equipment you might already have or can get at very low cost. Some elements, though, are pricey, and things can add up.

All of the VIL images in this blog post also include a Spectralon standard (a 99% infrared reflectance standard). This is not low cost and not required for the technique. It is often required for publication, however, and is useful for troubleshooting or developing new techniques. I mention it because it is useful for evaluating the data presented here, but it isn’t something that is strictly required: we did not have one for a long time and were still able to do this type of imaging. We waited because it was a big investment for us (approx. $500). These standards range in price depending on size and calibration.

An infrared (IR) filter is a requirement for this technique. I only tested one IR filter (though we have two). They range in cost depending on where you get them, the quality of the filter, and the size. Ours is a B+W 830nm IR long pass filter, 62mm in diameter ($130). There are cheaper options available as well as more expensive ones. We got ours to fit our macro lens, and we have adaptor rings (all generic, low-cost rings, each under $10) to fit it onto our other lenses.

The right lights are a critical factor for this technique, but not necessarily one that has to be high cost. You need a bright light that emits no infrared radiation; red lights are commonly used, and LED bulbs are preferred as they produce no infrared radiation. I tested three different lights for visible induced infrared luminescence. At the high end of the budget is a Mega 64 Profile Plus RGB + UV par light at maximum intensity with only the red LEDs on; this is roughly a $200 light. The other lights I tested are both low-cost options. One was a FEIT Electric red LED bulb that I put in one of our spare reflectors on a photo light stand; I bought this bulb for $6 at my local hardware store. The other option was a pair of daylight LED bulbs (UL certified) that I got from our facilities department to replace burnt-out bulbs on our copy stand; I used these in the regular copy stand setup. I don’t know what the bulbs from facilities cost, but there are no brand names stamped on them, so I’m guessing they weren’t expensive; the UL logo just means the bulb is UL certified.

The Mega 64 profile plus (left), a daylight LED bulb (center), and a red LED bulb (right)

I tried both our full spectrum modified camera and our regular (unmodified) digital SLR. Neither is an inexpensive camera, but together they should show the general difference between using whatever digital camera you already have and getting a similar-cost modified camera. Neither is new; we use our cameras until they can’t be repaired. The unmodified camera is a Nikon D5100 DSLR and the modified camera is a Nikon D5200 DSLR. We bought our modified camera new and sent it to Life Pixel to have the internal filter removed, but they now sell used cameras that they will modify for you (there are options). Ours was modified to be a full spectrum camera. At the time of writing this post, the most economical option I saw on Life Pixel for a used camera with this modification package cost a total of $449, so even used it is not cheap. But let’s get to the testing and start looking at results.

Let’s start with the results from the modified camera. I have a visible reference image in the set, plus the VIL images taken with each of the different lights. All the lights worked, but the brighter MEGA par 64 gave the best results, especially at exciting traces of Egyptian blue, though the no-name daylight LED bulbs were not bad. These images span a range of camera settings, and the benefit of the modified camera is that I could see the results in live view, focus the image, and adjust the settings with minimal bracketing. The Spectralon should not be visible; if it is, that usually means the image capture settings are not quite right.

Modified Camera results: visible reference image (top left), VIL image with the MEGA 64 LED lights (top right), VIL image with the no-name daylight LED bulbs (bottom left), and VIL image with the RED LED bulb (bottom right).

Next up, the unmodified camera. The images are arranged by light source in the same order as for the modified camera for easy comparison. I did get results with all of the lights. The downside is that live view shows nothing, so the focus can’t be corrected. These images are all slightly out of focus because the point of focus in IR is different from the point of focus in the visible range, and I focused the image before putting on the IR filter. All of them had to be taken at the longest possible exposure of 30 seconds, which made data collection easy since I had no choice in settings. With the low-cost bulbs, the images looked completely black, with no data, until I opened them in Adobe Camera Raw and converted them to grey scale by adjusting the saturation to -100. Then I could see something, though I also adjusted the exposure for the images you see here. You can see the Spectralon and the background in all of the images, so interpretation may be harder than with the images taken with the modified camera. I think this setup could be used for detecting Egyptian blue, but it’s important to note that the unmodified camera only picked up the thick areas of Egyptian blue and didn’t have the sensitivity to detect the traces visible in the images taken with the modified camera.

Unmodified Camera results: visible reference image (top left), VIL image with the MEGA 64 LED lights (top right), VIL image with the no-name daylight LED bulbs (bottom left), and VIL image with the RED LED bulb (bottom right).

For fun, I also tried putting the IR filter over my cell phone camera and taking a photo using the MEGA par lights. This is just to show that even a small sensor like the one in a cell phone camera can work. The image is out of focus, though, and as with the unmodified Nikon DSLR, you aren’t getting traces of Egyptian blue. But it did show something, and I could see the results in live view. This is also an avenue that I know others are working on: producing low-cost modified cell phone cameras with built-in filter wheels. Sean Billups has presented at AIC on this topic.

Cellphone VIL image: unprocessed and shown as shot, with an 830nm IR long pass filter over an unmodified cellphone camera

To wrap things up, I think it is possible to build this equipment up over time. You can start with the camera you already have for documentation, then get better lights and a camera as you can afford them. The IR filter can also be used for IR reflectance, which is possible with any digital camera using a long exposure and any light that produces infrared radiation. There is much less difference in data quality between the modified full spectrum camera and the unmodified camera for this method, though again there is no live view, and sharp focus is hard to achieve with an unmodified camera. We use incandescent photo floods (really bright and toasty), but any light that gets warm probably produces infrared radiation and could be used. Daylight, for example, works really well too (and is free).

Infrared reflectance images (left) and infrared reflectance false color images (right). The modified full spectrum camera was used for the images on the top and the unmodified camera was used for the images on the bottom.

Ancient Glow-in-the-Dark Artifacts

By Sean Billups

“Technical analysis” is a term frequently used in the conservation field to describe the use of specialized techniques to examine objects. Those techniques can include scientific instruments, special cameras, and lots of other equipment. Maybe the term sounds boring, but technical analysis can tell you a lot about an object, and sometimes it turns up fun surprises as well.

This was certainly the case when I began some preliminary analysis on an alabaster bowl for the reinstallation of the Egyptian and Nubian galleries. The bowl dates from the Egyptian Early Dynastic period, from roughly 3000-2800 BCE.

Alabaster bowl, E14243

Before starting what was seemingly a simple treatment of retouching/repainting some old fills, I assessed it under ultraviolet (UV) light in a dark room. My goal was to take a look at the fills and adhesives, hoping that UV would give me some information about the materials used. When I turned off the UV light, I noticed a faint greenish glow coming from the bowl. It looked like a glow-in-the-dark sticker.

Phosphorescence from alabaster.

Trying this a few more times verified that I wasn’t imagining things, and the light was coming from the alabaster bowl itself. For a second or two after I turned off the UV light, the alabaster would glow. A bit of quick research taught me that this phenomenon is called phosphorescence, and does, indeed, occur in alabaster. 

Phosphorescence is a type of photoluminescence; the higher energy UV light is absorbed by the material and emitted at a lower energy (in the visible range). Unlike fluorescence, which occurs only while the light is applied, phosphorescence continues for a longer period of time, from a few microseconds to even hours. 
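As a rough sketch of why the timescales differ (a common first-order approximation, not a measured property of this bowl), the fading emission after the UV source is switched off can be modeled as an exponential decay:

```latex
I(t) = I_0 \, e^{-t/\tau}
```

Here \(I_0\) is the emission intensity at the moment the lamp goes dark and \(\tau\) is the characteristic lifetime: on the order of nanoseconds for fluorescence, but anywhere from microseconds to hours for phosphorescence, which is why the glow from the alabaster remained visible for a second or two.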

This glow-in-the-dark quality is usually not noticeable because even if there is enough UV to cause phosphorescence, the result is so dim that it gets easily overpowered in the presence of visible light. 

What does this finding have to do with planning my simple inpainting treatment? Absolutely nothing, but those are often the most interesting finds. While the inpainting treatment did not specifically benefit from this discovery, the knowledge can be applied in other ways, like identifying alabaster before turning to more intensive methods. Discovering phosphorescence in alabaster artifacts is a reminder of the many surprises that can be uncovered through technical analysis, and that technical analysis, which might sound dry, is often how we find the most interesting things.

What’s all that 3D data for?

By Tessa de Alarcon

We’ve had a few posts (this one by Chelsea Kim and this one by Christy Ching) on creating 3D models using photogrammetry, and I thought I’d give some examples of what we are doing with that data once it’s collected. For some objects we are creating ortho-mosaics, and these 2D images go into reports as after treatment images as well as into the catalogue model as record photography that also appears in the online collection database. This wooden coffin, 2017-20-1.3, is an example of this type of imaging.

2017-20-1.3 after treatment photos created using ortho mosaics generated from a 3D model created using photogrammetry.

For other objects we are also producing ortho-mosaics, but as before treatment images. One example is E641, a wall painting that was previously on display.

E641 when it was on display

The wall painting is currently in two sections and each one has been imaged separately. These before treatment images have been used to create condition maps.

Before treatment ortho mosaics of E641 created with photogrammetry

The maps go into our reports and help provide visual documentation to support our written descriptions. For large objects, these kinds of condition maps are often easier to understand than written descriptions and can provide more precise information on the location of specific condition issues. Here you can see the condition map for E641. The map is not yet complete; I am still working on documenting one of the sections, but I have combined the two maps into one image so you can see what that process looks like.

E641 condition map. The map for the section on the left is complete while the mapping on the section on the right is still in progress

The models can also be used to show surface distortion. In this screenshot of the 3D model of E641, you can see planar distortions in the wall painting where the fragments are not aligned. There may be a variety of causes for this distortion, including poor alignment during the previous reconstruction, or the fragments may be lifting/separating from their current modern backing.

Detail of E641. On the left is the 3D mesh without color added, and on the right is the same area with the color and surface texture added to the model. In the image on the left you can easily see the fragments and how they are misaligned in some areas.

I am currently working on learning how to create a 2D false color image where the colors reflect depth, so that we can have these planar distortions documented in 2D as well as being able to see them in the model.

So, all together, this data is being used to document the condition of objects both before and after treatment. The models are also useful tools for assessing complex condition issues and evaluating next steps. For example, our current plan is to remove the wall painting from its current modern backing and put it on a new one. Our hope is to correct some of these planar distortions as part of that process, and this model, along with one we will make after treatment, will be useful for evaluating the efficacy of the treatment and will provide a baseline for assessing its condition in the future.

Eyes are the Window to the Soul, Or So They Say

By Tessa de Alarcon

Typically, at the Penn Museum when we are working on objects, even for display, we prioritize stability over aesthetics. This means that we often do less cosmetic work than an art museum would when it comes to putting in fills and toning out areas of loss. However, I recently undertook a project where I went further than I usually do to recreate lost material. This blog post walks through why that decision was made in this case, as well as some of the mysteries I found along the way.

E1019 Before treatment. At this point the object was being tracked as E17632

The object in this case is an Egyptian cartonnage mask, E1019. When it entered the lab it had a lot of condition issues: the top of the head was partially crushed, it had been heavily treated before, and it was missing the inlays for its eyes and eyebrows. The missing eye inlays had been giving many visitors to the lab the creeps, as the mask appeared to have dark, empty eye sockets. Because of this, from the start I had been polling my colleagues about what level of repair I should do to reduce the distraction of the missing inlays. At that point I was not considering replacing them, but was instead thinking about toning out some of the other losses on the cheek to draw less attention to the eyes.

E1019 before treatment, a detail of the face and eyes.

When it first entered the lab the mask was being tracked as E17632, but over the course of the treatment I found a different accession number, E1019, on the interior. With the help of our curators, we were able to piece together that E1019 was the original accession number and E17632 had been assigned to it later. When I looked up the record for E1019 in the museum collection database, I found that the record included two eye inlays! I was hopeful that this meant I could reintegrate two inlays, one into each eye. However, when I reached out to the curators for more information, I found out that they are two parts of the same eye: the white of the eye and a pupil/iris.

Eye inlays E1019.1, and E1019.2 before treatment

Well, this left a new set of problems, especially since, as you can see here, the white part of the eye was not very white anymore: it was covered with a dark brown substance. I was left with a lot of options: leave the eye inlays out, reintegrate them as they were, or clean them and reintegrate them. And if I reintegrated them, should I then also create a replica set for the other eye?

Before making any decisions, I checked whether the inlays fit the eye sockets in the mask, which they did; the inlays turned out to be for the mask’s right eye. After that, I spent some time characterizing the dark coating on the white part of the eye inlay. This included UV examination and comparing how the coating fluoresced with the brown modern materials I found on the interior of the mask from previous treatments. The results were not as clear cut as I was hoping: based on the UV examination, there seems to be more than one brown substance on the inlay. With this data in hand, I went back to the curators with the options of leaving the eyes out, reintegrating them as is, or cleaning and reintegrating them. The curators indicated that they wanted the inlay reintegrated, and that they would also like a replica for the missing inlay so that the face would look even, as one eye seemed worse than no eyes. Together we decided to clean the eye inlay, but to keep samples of the substances on it for future analysis.

E1019.1, the white part of the eye inlay, in visible light (top) and under 368nm UV radiation (bottom). The rectangular material is a piece of acidic board with brown residues on it that had been used on the interior of the mask as part of a modern restoration. Under UV, the fluorescence on the front of the eye inlay is similar to, though not as bright as, that of the modern brown residues, while the brown residues on the back of the eye do not fluoresce.

Once it was clean, I set about making a copy for the mask’s left eye that would be a close but not identical match. Based on previous experience, I decided to make the new inlay set out of Wood Epox, a two-part lightweight epoxy, as it is easy to shape and can be sanded and carved. To start, I made a paper template of the shape of each inlay. I made sure to mark what I wanted to be the front of each so that the shape would be a mirror image of the original inlay. The white inlay is slightly curved, so I also created a foam form with the same curvature.

The inlay, E1019.1 after cleaning (left), the paper template of the inlays (center) and the foam support mimicking the curvature of the inlay with the inlay in place during a test fit (right).

Next, I rolled out some sheets of Wood Epox and, using the paper templates, trimmed out the shapes I needed for both parts of the eye. I let the pupil/iris part set flat, while I let the white of the eye set in the form I had made so that it would have the same curvature as the original. Once they cured, I sanded them to finish, with the final stages being wet sanding so that the replica inlays would also have a natural gloss.

The inlays replicas curing with the white part in the curved support (left) and the original inlays (E1019.1, and E1019.2) laid out above the shaped and sanded replicas (right)

The final step before assembly and placement in the mask was to paint them to resemble, but not exactly match, the originals. I used gloss medium for the pupil/iris, as this inlay was especially glossy and I could not reach that level of gloss with polishing and painting alone.

The original inlays (E1019.1 and E1019.2) laid out above the replicas, after the replicas were toned to be similar though not identical to the originals

Finally, here you can see the end results after treatment. You will notice, though, that I have not attempted to recreate the inlays for the eyebrows. Because we had the one set of eye inlays, I had something to reference when making the replica set; however, there are still pieces missing for which I had no frame of reference. There were also likely inlays that went around the outside of each eye. These and the brows might have been made of a variety of materials, and without the originals for reference there is no way to be certain what their color and appearance would have been.

E1019 after treatment. The original inlays are in the mask’s right eye and the replicas are in the mask’s left eye.

2D to 3D

By Chelsea Kim

As an intern working with the conservation department, I have had the opportunity to work on many projects and experience things I never thought I would. Recently I have been working with a piece of software called Reality Capture to do photogrammetry. Photogrammetry is a process that stitches together an abundance of overlapping photographs to create a distortion-free 3D model and a detailed, geometrically corrected image called an orthomosaic. The process is usually used on larger objects, which are too big to capture in a single frame without distortion and loss of quality; Christy Ching explains this in more depth in her previous blog post.

I want to show how to create a digital three-dimensional model using the software, Reality Capture, and I’ll demonstrate with an example of the after-treatment photos of an Egyptian coffin.

To start off, having pictures of the object is a must. For this example, they had already been taken and edited in Photoshop, with the white balance adjusted ahead of time using Adobe Bridge. I begin by opening the software, and then, under Workflow at the top left corner, I select “Inputs.”

Screenshot of the software highlighting where to click “Inputs” which is above “1. Add Imagery”

Then I select all the images, making sure they are .jpeg files, and click on “Align Images” as highlighted above. After the images are aligned, a transparent box appears surrounding the coffin. I adjust the box by dragging the control points to make it as small as possible without cutting off any part of the coffin. As you can see in the image below, using E883C, the box is close to the coffin but does not intersect the coffin itself.

Screenshot of the Egyptian coffin E883C after the images were aligned, with the transparent box adjusted tightly around it.

Now for the fun part, seeing the coffin take shape: I click next to “Calculate Model” and select “Preview Quality,” as highlighted below. Then I go to the tools bar and use the lasso option to select all the unnecessary space around the coffin so it can be erased. Once I am satisfied with the selected area, I click on “Filter Selection,” which turns the selected areas from orange to dark blue, showing that it worked.

Screenshot of the coffin after selecting “Preview Quality.”

Finally, I go back to the Workflow bar and select “Texture,” which is highlighted below; the result shows all the details of the 3D model in high detail and quality, without any distortion.

Screenshot of the 3D model after being textured.

Egyptian Sarcophagus or Museum Time Capsule?

by Tessa de Alarcon with images by Alexis North, Molly Gleeson, and Christy Ching

We recently de-installed two Egyptian stone sarcophagi, E15415 and E16133, from the Upper Egypt gallery at the museum. These pieces are slated for reinstallation in the new Egyptian and Nubian galleries and will likely need extensive treatment before they go back on display. This is why they have come off display: so that we can assess their condition and evaluate what needs to be done for the new gallery. For both pieces, we need to check the stability of the previous treatments. Both have old joins and fills that were done before the formation of the conservation department, which means we have no records of when these treatments were done or what materials were used to reconstruct the stone and fill the losses.

E15415 at the top and E16133 on the bottom

In the case of E15415, this meant we needed to see the underside. We brought in Harry Gordon, a sculptor and professional rigger, to build a wooden cradle or cribbing and then lift the piece and flip it so we could see the joins and fills from the other side.

E15415 as it was on display in its vitrine (top) and images from the process of cribbing and flipping the object so we could examine the bottom (bottom left and bottom right).

When we flipped the object over and took off the plinth it had been sitting on, we found an additional puzzle. The piece has had a plexiglass vitrine over it for many years to protect it while on display. However, that has not always been the case; it used to be uncovered in the gallery. It seems that before the vitrine was placed, some visitors took advantage of the small gap between the stone and the wooden plinth below it to slide things under the object.

E15415 after we flipped it over and lifted off the plinth that had been underneath it (left), and a detail of the interior as we sorted through the items found underneath the sarcophagus

In a way, this has made the object a sort of time capsule. We found a number of things hidden under the sarcophagus, including a coupon for Secret deodorant (worth 5 cents), a program for the graduation exercises of the University of Pennsylvania Oral Hygiene Class of 1967, a museum map from when the museum was called The University Museum, a votive candle donation envelope for the church of St. John the Evangelist Sacred Heart Shrine, a scrap of paper with dishes on it, and two black and white photographs. While some of these things are easily identifiable, like the program and the coupon, others are more of a mystery.

The paper items that we found underneath E15415

I personally find the photos the most interesting. They look like shots from a photo booth, based on their size and format and the fact that each has a torn edge (one at the top and the other at the bottom), suggesting they may have been part of a longer strip of images. Who is the subject of each image? Were the photos discarded because the owner or owners didn’t like them? Were they taken at an event or party at the museum? Were they captured at the same event or in the same photo booth (if they are indeed from a photo booth)? They are similar in size and format, but that doesn’t mean they relate to one another. Were they taken somewhere else and discarded during a visit to the museum? I have only questions and no answers, but my hope is that by sharing these images, someone reading this will recognize them and be willing to tell us more.

Photographs found underneath E15415. The image on the left is torn at the bottom while the one on the right has a torn edge at the top.

Party Time or New Photo Light?

By Tessa de Alarcon

The conservation department recently acquired new lights for multi-modal imaging – ADJ MEGA PAR Profile Plus units (one for use at the conservation lab annex and one for the museum main lab). The MEGA PAR is a tunable LED light source with 64 different color channels. While not designed for analytical imaging, it provides a bright, large spot that we can use for visible induced infrared luminescence (VIL) imaging of Egyptian blue. It will also be something we can use to test out other imaging methods in the future. Taking VIL images is not new to the lab, but the light source we had been using stopped working and needed to be replaced. We are grateful to Bryan Harris for making the purchase of the new equipment possible.

The spectralon and the new MEGA PAR Profile Plus light (right) and the new equipment in use (left)

Along with the new light, we also acquired a new reference standard, a 99% reflectance spectralon. This standard is critical for developing methods and standard procedures for imaging in the lab. In this post I am going to show an example of how this standard can be used and how I developed a protocol for VIL imaging with the MEGA PAR light.

Set up for round one testing: Egyptian green (left pigment sample), Egyptian blue (right pigment sample), and a V4 QP grey scale card.

Since the MEGA PAR light is new, one of the first things I did when it arrived (after unpacking it and reading the instructions, of course) was run a variety of tests on known reference materials to see what settings might work for creating visible induced infrared luminescence images of Egyptian blue. As part of that process, I set up a grey scale card (QP card V4) and two reference pigment samples, Egyptian blue and Egyptian green (both from Kremer Pigments). I chose these so I would have a known pigment that should luminesce (the Egyptian blue) and one that should not (the Egyptian green). Using the department’s modified full spectrum camera, I took a visible reference image of the known pigments and the QP card, using our regular fluorescent photo lights and a visible bandpass filter over the camera lens so that I could have a normal color image.

Screen shot of thumbnail images of the round 1 testing

Then I captured a series of images using the same setup, but replacing the visible bandpass filter with an 830nm long pass infrared filter so that I could capture in the infrared, with the fluorescent lights turned off and the MEGA PAR turned on. I captured each image with the same camera settings and with the MEGA PAR light in the same position, just cycling through each of the 64 color channel options.

Screen shot of Adobe Camera RAW showing the process for evaluating the response of Egyptian blue to each setting

I converted the images to grey scale in Adobe Camera Raw by sliding the saturation from 0 to -100, so that the red, green, and blue (RGB) values would each be the same. I then used the dropper tool to take a reading over the Egyptian blue standard in each image and recorded the number. The higher the number, the brighter the luminescence.
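For anyone who prefers to script this step, the desaturate-and-sample measurement can be sketched in a few lines of Python with NumPy. This is only an illustrative sketch: the array values and patch coordinates are made up for the example, and averaging the three channels is just one simple way to desaturate (Adobe Camera Raw’s slider and dropper tool do the equivalent job interactively).

```python
import numpy as np

def desaturate(rgb):
    """Collapse an RGB image to one grey value per pixel, the scripted
    equivalent of dragging the saturation slider from 0 to -100."""
    return rgb.mean(axis=-1)

def mean_brightness(grey, y0, y1, x0, x1):
    """Average grey level over a patch, like sampling with the dropper
    tool. Higher numbers mean brighter luminescence."""
    return float(grey[y0:y1, x0:x1].mean())

# Synthetic example: a dark frame with one bright luminescing patch.
frame = np.zeros((100, 100, 3), dtype=float)
frame[40:60, 40:60] = [210.0, 200.0, 190.0]  # stand-in for Egyptian blue

grey = desaturate(frame)
print(mean_brightness(grey, 40, 60, 40, 60))  # → 200.0
```

Recording one such number per color channel setting would reproduce the comparison described above, with the brightest reading marking the most effective setting.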

Set up for round 2 testing with the Egyptian blue pigment sample (top left), the Egyptian green pigment sample (below the Egyptian blue), the 99% reflectance spectralon standard (right), and a V4 QP grey scale card (bottom).

After doing that, I had a reduced set of options that produced good luminescence in the Egyptian blue for a second round of testing. For round two I did the same thing with the more promising group, but also included the 99% reflectance Spectralon standard in my images so that I could check and verify that the light is not producing infrared radiation. If there were any infrared, the 99% reflectance standard would be visible. None of the second-round options showed any infrared. While any of them could be used for VIL, CL08 gave the strongest response.

Screen shot of round 2 testing evaluation

After developing a working setup, I did a test in the photo studio using an object that I knew had Egyptian blue, along with the standards. I captured a visible image with the modified camera, the visible bandpass filter, and the fluorescent photo lights, and a VIL image with the 830nm long pass filter and the CL08 setting on the MEGA PAR. The false color image was created by splitting the color channels of the visible image in Photoshop, discarding the blue data, and putting the VIL data in the red channel, the red visible data in the green channel, and the green visible data in the blue channel. As you can see, the Spectralon is not visible in the VIL image, meaning no IR radiation is being produced by the MEGA PAR light.

Images of E12974 with a visible image (left), a visible induced infrared luminescence image in the center showing Egyptian blue in white (center), and a false color image showing Egyptian blue in red (right).

After all this work, I had an opportunity to see how the new light would perform in less than ideal settings. I have been working on a study of one of the coffins in the collection, 2017-20-1.3, to examine the coatings and pigments. VIL is the perfect method for identifying blue areas on the coffin, but the coffin is too big to fit in the department photo studio. The set of images below were taken in the Artifact Lab (our public lab in a gallery space), where there is IR from the windows (daylight) as well as from the gallery lights. I hoped that a short exposure with the new, very bright MEGA PAR would reduce the effects of ambient IR in the image. As you can see in the photos below, the 99% reflectance Spectralon is slightly visible, but not as clearly as the Egyptian blue on the coffin. These results are much better than what we used to get in the Artifact Lab with our old light, so I am very happy with them.

Detail from the coffin 2017-20-1.3 with a visible reference image (left), a VIL image with Egyptian blue in bright white (center), and a false color image created by combining channels from the visible reference image with data from the VIL image, resulting in the Egyptian blue showing up as red (right).

Special Photography for Larger Objects: Photogrammetry

By Christy Ching

Conservation Technician Christy Ching photographing the underside of an Egyptian coffin 2017-20-1.3 for photogrammetry.

One project I have really enjoyed working on as a pre-program conservation technician is documenting larger objects for a process called photogrammetry. Photogrammetry is a technology that gathers spatial and color information about an object from multiple photographs to form a geometrically corrected, highly detailed, stitched image called an orthomosaic. Essentially, photogrammetry creates a distortion-free, three-dimensional model of an object from two-dimensional photographs of every surface, taken in sections.

Left: Four photographs of an ancient Egyptian coffin lid L-55-16B at various angles, which were used to create a 3-D model. Right: 3-D model draft of L-55-16B.

*L-55-16B (21-46-9) is a loan object from the Philadelphia Museum of Art (PMA)

This can be done for objects of any size. However, we mostly reserve this technique for larger objects, specifically larger textiles and Egyptian coffins. This is because photographing the coffins and textiles normally, in a single shot, requires a greater distance between the object and the camera in order to fit the entire object into the frame, and doing so reduces image quality. Not only that, but the lens distortion inherent in all photographs becomes more obvious. The resulting image is not an accurate representation of the coffin or textile, which is not ideal for documentation purposes.

The image on the left is a single-shot photograph of L-55-16B while the image on the right depicts the same coffin lid created by photogrammetry. When comparing the two images, the camera distortion in the single-shot photograph can be seen especially in the feet and head of the coffin lid.

With photogrammetry, we can take parts of the 3-D model and use them as high resolution, distortion-free, 2-D images of the object instead.

Six views of L-55-16B depicting the top, interior, and the four sides of the coffin lid generated using photogrammetry.

So far, a little less than ten coffins, a few textiles, a pithos fragment, and a giant granite relief have been documented using photogrammetry. The models and orthomosaic images are all generated by Jason Herrmann from CAAM, and we are very grateful that he is doing this for us! To learn a little bit more about the photogrammetry process, view this Digital Daily Dig here.

This project was made possible in part by the Institute of Museum and Library Services.

The Stories We Wear

By Debra Breslin

Over the past 18 months, I completed the examination and treatment of over 200 objects for the upcoming exhibit, The Stories We Wear, which will open at the Penn Museum in September 2021.  The exhibit focuses on the idea that what is worn on the body tells a narrative about time, place, and culture. Ethnographic and archaeological material from Oceania, Asia, Africa, Europe, and the Americas will be featured.  Alongside these objects will be contemporary ensembles with local connections. 

One of the most interesting aspects of treating this group of artifacts is the extensive range of materials.  I worked with metals such as gold and silver, fabrics made of silk or wool, organic material such as hair and teeth, and different types of wood. For an objects conservator, this was an ideal project to challenge and enrich my skills.  Below are examples of the types of materials that came across my workspace in preparation for the exhibit.

SILK

Many of the objects in the exhibit that represent the various cultures of Asia are made of silk.  Since silk is a fragile and light-sensitive material, these artifacts will be taken off display after a few months and replaced with similar objects to avoid over-exposure to light in the galleries. 

Deel (garment), Mongolia, early 19th century
Silk, cotton, brass
2002-15-1

This beautiful silk garment is part of the wardrobe of a married Khalkha Mongolian woman. The silk on the padded shoulders had become worn and thin and was torn at the highest points. These areas were covered with toned Japanese tissue. I toned the tissue with acrylic paints to match the surrounding material and slipped it under the edges of the broken fabric.

SILVER

Other remarkable artifacts from Central Asia are these 19th century silver hair ornaments worn by the Daur women of Inner Mongolia to adorn their elaborate hairstyles. When these pieces came to the lab, they were dark with tarnish, and it was difficult to see their details.

Hair ornaments (20452, 20447, 20453, 20455A) in the fume hood

In a museum of archaeology and anthropology, tarnish is not often removed from objects, as it is usually considered part of the historic record of the object.  In this case, I talked with the curators of the exhibit and we felt it was appropriate to safely remove the tarnish and coat the silver objects to fully reveal their details.

Before Treatment
20448B
After Treatment

GOLD

Many cultures around the world valued gold as a symbol of high status. One of several such objects in the exhibit is this gold diadem.  The rosettes are believed to have decorated a headdress or garment of an elite Scythian woman. They were mounted on a modern rod in the 20th century.  The rosettes are made of gold foil and wire. 

Before Treatment
Diadem (Crown), Maikop, Republic of Adygea, Russia, 4th century BCE
Gold
30-33-5

One of the petals of the flower on the far right had broken off at some point and been stored with the object. The petal was reattached on the back side with Hollytex (a spunbonded polyester fabric) and B-72 (an acrylic copolymer in acetone).

Detail of repair on right side petal
After Treatment

OTHER ORGANIC MATERIALS

In addition to silk artifacts, other objects made of plant and animal materials will be on display, such as this weapon made by the I-Kiribati people of the Gilbert Islands. It is constructed of wood, coconut fiber, and shark teeth.

Weapon, Gilbert Islands, 19th century
2003-32-338

After cleaning the surface with soft brushes, I further cleaned the shark teeth with enzymes and deionized water. To stabilize loose cords and teeth, I added small pieces of cotton thread through the existing holes. The red circles indicate the areas of added thread.

Here is an example of what the shark teeth looked like before and after cleaning on a small dagger.

Before (top row) and after treatment (bottom row) P3157A

These are just a sample of the artifacts that will be on display in The Stories We Wear exhibit opening in September 2021.  I hope visitors will appreciate the history and craftsmanship of these objects as much as I do.

An Ivory Figure from Hierakonpolis

By Tessa de Alarcon

The figure you see here, E4893, is an ivory statuette from the site of Hierakonpolis that I am working on as part of an IMLS grant funded project. I have just started the treatment, but thought I would give a brief run-through of the initial examination, since this is a good example of when and why we use X-radiography in our department to evaluate the condition of objects before treatment.

Before Treatment photograph of E4893

You may have noticed that the middle of this object is a fill, and so not part of the original object. The fill has some cracks and splits that suggest it is unstable and should be removed. There is no written documentation of when this fill was done or by whom, but it is possible it was done shortly after the figure was excavated. The object was accessioned in 1898. Given that the conservation lab at the Penn Museum was not founded until 1966, that leaves a big window for when this treatment might have been done.

Annotated before treatment photograph of E4893 indicating the large fill at the waist of the figure.

Based on previous experience, I often worry with these old fills that there are unseen things, like metal pins or dowels, lurking below the surface. X-radiography is a great way to check for these types of hidden previous treatment issues. Though in this case, what I found when I X-rayed the object was not your typical pin or dowel.

Before treatment photograph of E4893 (left) and an X-ray radiograph of the object (right). The X-ray was captured at 60kV, and 6mA for 6 seconds. There are four nails visible in the fill.

Here in the X-ray you can see what I found: while the fill did not have any pins or dowels, whoever did this treatment reinforced it by putting nails (four in total) into the fill material. While this makes the figure look like he has eaten a bunch of nails, it is in some ways better news than a pin would be. Pins usually go into the original material, and if they are iron, they can rust and expand, damaging the object. Pin removal can also be risky and lead to damage, especially if the pin is deeply embedded or corroded into place. These nails, on the other hand, appear to be only in the fill and do not look like they go into the original material of the object at all. This suggests that removing the fill and the nails should be possible without damaging the object. As this treatment progresses, I will follow up with additional posts and updates.

This project was made possible in part by the Institute of Museum and Library Services