“Technical analysis” is a term frequently used in the conservation field to describe the use of specialized techniques to examine objects. Those techniques can include scientific instruments, special cameras, and plenty of other equipment. The term may sound boring, but in addition to telling you a lot about an object, technical analysis can sometimes turn up fun surprises.
This was certainly the case when I began some preliminary analysis on an alabaster bowl for the reinstallation of the Egyptian and Nubian galleries. The bowl dates from the Egyptian Early Dynastic period, from roughly 3000-2800 BCE.
Before starting what was seemingly a simple treatment of retouching/repainting some old fills, I assessed it under ultraviolet (UV) light in a dark room. My goal was to take a look at the fills and adhesives, hoping that UV would give me some information about the materials used. When I turned off the UV light, I noticed a faint greenish glow coming from the bowl. It looked like a glow-in-the-dark sticker.
Trying this a few more times verified that I wasn’t imagining things, and the light was coming from the alabaster bowl itself. For a second or two after I turned off the UV light, the alabaster would glow. A bit of quick research taught me that this phenomenon is called phosphorescence, and does, indeed, occur in alabaster.
Phosphorescence is a type of photoluminescence: the higher-energy UV light is absorbed by the material and emitted at a lower energy (in the visible range). Unlike fluorescence, which occurs only while the light is applied, phosphorescence continues after the light source is removed, for anywhere from a few microseconds to hours.
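In energy terms, the relationship is simple (this is the general physics of photoluminescence, not something specific to this bowl): photon energy is inversely proportional to wavelength, and the emitted photon carries less energy than the absorbed one, so its wavelength is longer.

```latex
E = \frac{hc}{\lambda},
\qquad
E_{\text{emitted}} < E_{\text{absorbed}}
\;\Longrightarrow\;
\lambda_{\text{emitted}} > \lambda_{\text{absorbed}}
```

So, as an illustrative example, a long-wave UV photon at 365 nm (about 3.4 eV) can re-emerge as a greenish photon near 500 nm (about 2.5 eV), with the remaining energy dissipated inside the material.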
This glow-in-the-dark quality is usually not noticeable because even when there is enough UV to cause phosphorescence, the result is so dim that it is easily overpowered by visible light.
What does this finding have to do with planning my simple inpainting treatment? Absolutely nothing, but those are often the most interesting finds. While the inpainting treatment did not specifically benefit from this discovery, the knowledge can be applied in other ways, like identifying alabaster before turning to more intensive methods. Discovering phosphorescence in alabaster artifacts is a reminder of the many surprises that can be uncovered through technical analysis, and that this process, which might sound dry, is often how we find the most interesting things.
We’ve had a few posts (this one by Chelsea Kim and this one by Christy Ching) on creating 3D models using photogrammetry, and I thought I’d give some examples of what we are doing with that data once it’s collected. For some objects we are creating orthomosaics; these 2D images go into reports as after-treatment images and serve as record photography in the catalogue model, which also shows up in the online collection database. This wooden coffin, 2017-20-1.3, is an example of this type of imaging.
For other objects, such as E641, a wall painting that was previously on display, we are also producing orthomosaics, but as before-treatment images.
The wall painting is currently in two sections and each one has been imaged separately. These before treatment images have been used to create condition maps.
The maps go into our reports and help provide visual documentation to support our written descriptions. For large objects, these kinds of condition maps are often easier to understand than text alone and can provide more precise information on the location of specific condition issues. Here you can see the condition map for E641. The map is not yet complete; I am still working on documenting one of the sections, but I have combined the two maps into one image so you can see what that process looks like.
The models can also be used to show surface distortion. In this screenshot of the 3D model of E641, you can see planar distortions in the wall painting where the fragments are not aligned. This distortion may have a variety of causes, including poor alignment during the previous reconstruction or lifting/separation of the original material from its current modern backing.
I am currently learning how to create a 2D false color image where the colors reflect depth, so that these planar distortions are documented in 2D as well as in the model.
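The idea behind a depth-based false color image is straightforward: normalize the depth values exported from the model and map them to colors. Here is a minimal Python sketch of that mapping (my own illustration, not the tool I will end up using); the blue-to-red ramp is an arbitrary choice:

```python
import numpy as np

def depth_to_false_color(depth):
    """Map a 2D array of depth values to an RGB false-color image.

    depth : 2D numpy array, e.g. a depth map exported from a 3D model.
    Low areas render blue, mid areas green, high areas red, so planar
    distortions show up as color shifts in a flat 2D image.
    """
    depth = np.asarray(depth, dtype=float)
    rng = float(depth.max() - depth.min()) or 1.0  # avoid divide-by-zero on flat surfaces
    d = (depth - depth.min()) / rng                # normalize to 0..1
    r = np.clip(2 * d - 1, 0, 1)                   # ramps up over the high half
    b = np.clip(1 - 2 * d, 0, 1)                   # ramps down over the low half
    g = 1 - r - b                                  # peaks in the middle
    rgb = np.stack([r, g, b], axis=-1)
    return (rgb * 255).astype(np.uint8)
```

In practice, photogrammetry and GIS packages offer their own colormapped depth or elevation exports; this just shows what that computation looks like.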
So altogether, this data is being used to document the condition of objects both before and after treatment. The models are also useful tools for assessing complex condition issues and are valuable for evaluating next steps. For example, our current plan is to remove the wall painting from its current modern backing and put it on a new one. Our hope is to correct some of these planar distortions as part of that process, and this model, as well as one we make after treatment, will be useful for evaluating the efficacy of the treatment and will provide a baseline for assessing its condition in the future.
Typically, when we are working on objects at the Penn Museum, even for display, we prioritize stability over aesthetics. This means that we often do less cosmetic work than would be done at an art museum when it comes to putting in fills and toning out areas of loss. However, I recently undertook a project where I went further than I usually do to recreate lost material. This blog post walks through why that decision was made in this case, as well as some of the mysteries that I found along the way.
The object in this case is an Egyptian cartonnage mask, E1019. When it entered the lab it had a lot of condition issues: the top of the head was partially crushed, it had been heavily treated before, and it was missing the inlays for its eyes and eyebrows. The missing eye inlays had been giving many visitors to the lab the creeps, as the mask appeared to have dark, empty eye sockets. Because of this, from the start I had been polling my colleagues about what level of repair I should do to reduce the distraction of the missing inlays. I was not at that point considering replacing them, but was instead thinking about toning out some of the other losses on the cheek to draw less attention to the eyes.
When it first entered the lab the mask was being tracked as E17632, but over the course of the treatment I found a different accession number, E1019, on the interior. With the help of our curators, we were able to piece together that E1019 was the original accession number and E17632 had been assigned to it later. When I looked up the record for E1019 in the museum collection database, I found that it included two eye inlays! I was hopeful that this would mean I could reintegrate two inlays, one into each eye. However, when I reached out to the curators for more information, I found out that they are two parts of the same eye: the white of the eye and a pupil/iris.
Well, this left a new set of problems, especially since, as you can see here, the white part of the eye was not very white anymore; it was covered with a dark brown substance. I was left with a lot of options: leave the eye inlays out, reintegrate them as they were, or clean them and then reintegrate them. And if I reintegrated them, should I also create a replica set for the other eye?
Before making any decisions, I checked to see if the inlays fit the eye sockets in the mask, which they did. The inlays turned out to be for the mask’s right eye. After that, I spent some time characterizing the dark coating on the white part of the eye inlay. This included UV examination and comparing how the coating fluoresced with the brown modern materials I found on the interior of the mask from previous treatments. The results were not as clear cut as I was hoping; based on the UV examination, it seems that there is more than one brown substance on the inlay. With this data in hand, I reached out again to the curators with the options of leaving the eyes out, reintegrating them as is, or cleaning and reintegrating. The curators indicated that they wanted the inlay reintegrated, and that they would like a replica for the missing inlay as well so that the mask would look even, since one eye seemed worse than no eyes. Together we decided to clean the eye inlay, but to keep samples of the substances on it for future analysis.
Once it was clean, I set about making a copy for the mask’s left eye that would be a close but not identical match. Based on previous experience, I decided to make the new inlay set out of a two-part lightweight epoxy called Wood Epox, as it is easy to shape and can be sanded and carved. To start, I made a paper template of the shape of each inlay. I made sure to mark what I wanted to be the front of each so that the shape would be a mirror image of the original inlay. The white inlay is slightly curved, so I also created a foam form with the same curvature.
Next, I rolled out some sheets of Wood Epox and, using the paper templates, trimmed out the shapes I needed for both parts of the eye. I let the pupil/iris part set flat, while the one for the white of the eye set in the form I had made so that it would have the same curvature as the original. Once they cured, I sanded them to finish, with the final stages being wet sanding so that the replica inlays would also have a natural gloss.
The final step before assembly and placement in the mask was to paint them to resemble, but not exactly match, the originals. I used gloss medium for the pupil/iris, as this inlay was especially glossy and I could not get that level of gloss with polishing and painting alone.
Finally, here you can see the end results after treatment. You will notice, though, that I have not attempted to recreate the inlays for the eyebrows. Because we had the one set of eye inlays, I had something to reference when making the replica set; however, there are still missing pieces for which I had no frame of reference. There were also likely inlays that went around the outside of the eye. These and the brows might have been made out of a variety of materials, and without the originals for reference there is no way to be certain what their color and appearance would have been.
As an intern working with the conservation department, I have had the opportunity to work on many projects and experience things I never thought I would. Recently I have been working with a photogrammetry software package called Reality Capture. Photogrammetry is a process that uses an abundance of overlapping photographs, stitched together, to create a 3D model without distortion, along with a detailed, geometrically corrected image called an orthomosaic. This process is usually reserved for larger objects, which are too big to fit in a single frame without losing quality and introducing distortion; Christy Ching explains this in more depth in her previous blog post.
I want to show how to create a digital three-dimensional model using the software, Reality Capture, and I’ll demonstrate with an example of the after-treatment photos of an Egyptian coffin.
To start off, having pictures of the object is a must. For this example, they were already taken and edited ahead of time, with the white balance adjusted in Adobe Bridge. I begin by opening the software, and then, under Workflow at the top left corner, I select “Inputs.”
Then I select all the images, making sure that they are .jpeg files, and click on “Align Images,” as highlighted above. After the images are aligned, a transparent box surrounding the coffin appears. I adjust the box by dragging the control points to make it as small as possible without cutting off any part of the coffin. As you can see in the image below, using E883C, the box is close to the coffin but does not intersect the coffin itself.
Now for the fun part: to see the coffin take shape, I click next to “Calculate Model” to select “Preview Quality,” as highlighted below. Then I go to the tools bar and use the lasso option to select all the unnecessary space around the coffin. After being satisfied with the selected area, I click on “Filter Selection,” which turns the selected areas from orange to dark blue, showing that it worked.
Finally, I go back to the Workflow bar and select “Texture,” which is highlighted below, and the software then shows all the details of the 3D model in high detail and quality, without any distortion.
by Tessa de Alarcon with images by Alexis North, Molly Gleeson, and Christy Ching
We recently de-installed two stone sarcophagi from Egypt from the upper Egypt gallery at the Museum: E15415 and E16133. These pieces are slated for reinstall in the new Egyptian and Nubia Galleries and will likely need extensive treatment before they go back on display. This is why they have come off display, so that we can assess their condition and evaluate what needs to be done for the new gallery. For both pieces, we need to check the stability of the previous treatments. Both have previous joins and fills that were done before the formation of the conservation department. This means that we have no records for when these treatments were done or the materials that were used to reconstruct the stone and fill the losses.
In the case of E15415, this meant we needed to see the underside. We brought in Harry Gordon, a sculptor and professional rigger, to build a wooden cradle or cribbing and then lift the piece and flip it so we could see the joins and fills from the other side.
When we flipped the object over and took off the plinth it had been sitting on, we found an additional puzzle. The piece has had a plexiglass vitrine over it for many years to protect it while on display. However, that was not always the case; it used to be uncovered in the gallery. It seems that prior to the placement of the vitrine, some visitors took advantage of the small gap between the stone and the wooden plinth below it to slide things under the object.
In a way, this has made the object a sort of time capsule. We found a number of things that had been hidden under the sarcophagus, including a coupon for Secret deodorant (worth 5 cents), a program for the Graduation Exercises of the University of Pennsylvania Oral Hygiene Class of 1967, a museum map from when the museum was called The University Museum, a votive candle donation envelope for the church of St. John the Evangelist Sacred Heart Shrine, a scrap of paper with dishes on it, and two black and white photographs. While some of these things are easily identifiable, like the program and the coupon, others are more of a mystery.
I personally find the photos the most interesting. They look like shots perhaps from a photo booth. This is based on their size and format and that each has a torn edge (one at the top and the other at the bottom) suggesting that they may have been part of a longer whole or strip of images. Who is the subject in each image? Were the photos discarded because the owner or owners didn’t like them? Were they taken at an event or party at the Museum? Were the photos captured at the same event or in the same photo booth (if they are indeed from a photo booth)? They are similar in size and format, but that doesn’t mean they relate to one another. Were they taken somewhere else and discarded during a visit to the Museum? I have only questions and no answers, but my hope is that by sharing these images maybe someone reading this will know or recognize them and be willing to tell us more.
In preparation for the new Ancient Egypt and Nubia galleries the conservation department began a survey of the current Upper Egypt gallery to understand the condition of the objects and anticipate treatment time. Part of this survey includes performing archival research on the excavation and exhibition history of the monumental pieces in the gallery. This research will help us better understand previous treatment and display decisions to inform future treatment decisions.
One of the monumental pieces is the “Triumphal Stela” (29-107-958) from the Bet Sh’ean expedition directed by Clarence S. Fisher, Curator of the Egyptian Section at the time. The Penn Museum Archives contain records from the expedition, including Fisher’s field diaries, handwritten in cursive. I was able to locate the diary entries from when the stela was found and transcribe them, to the best of my ability, for easier reference later. Called the “Ramses II stela” during excavation, this round-topped stone pillar was found toppled, underneath the “Seti I stela,” on May 31st, 1923.
“The discovery of a second Egyptian stele at Beisan and one with such a nicely cut and clear inscription is of immense importance and we all eagerly await the turning over of the stones” – Page 167 Fisher Diary
The excitement of the find is clear in Fisher’s journal entries. He asserts that the stelas were toppled purposefully as they once stood on stone bases next to each other. The Ramses II stela was found in two pieces. Alan Rowe completed a drawing of the bottom portion of the stela, and a close-up image was taken of the top portion.
The stela was recently de-installed from the current Upper Egypt gallery and will be on view in the Eastern Mediterranean galleries (opening at the end of this year) before being installed in our new Ancient Egypt and Nubia galleries.
The conservation department recently acquired new lights for multi-modal imaging – the ADJ MEGA PAR Profile Plus (one for use at the conservation lab annex and one for the museum main lab). The MEGA PAR is a tunable LED light source with 64 different color channels. While not designed for analytical imaging, it provides a bright, large spot that we can use for visible-induced infrared luminescence (VIL) imaging of Egyptian blue. It will also be something we can use to test out other imaging methods in the future. Taking VIL images is not new to the lab, but the light source we had been using stopped working and needed to be replaced. We are grateful to Bryan Harris for making the purchase of the new equipment possible.
Along with the new light, we also acquired a new reference standard, a 99% reflectance Spectralon target. This standard is critical for developing methods and standard procedures for imaging in the lab. In this post I am going to show an example of how this standard can be used and how I developed a protocol for VIL imaging with the MEGA PAR light.
Since the MEGA PAR light is new, one of the first things I did when it arrived (after unpacking it and reading the instructions, of course) was run a variety of tests on known reference materials to see what settings might work for creating visible-induced infrared luminescence images of Egyptian blue. As part of that process, I set up a grey scale card (QP card V4) and two reference pigment samples, Egyptian blue and Egyptian green (both from Kremer Pigments). I chose these so I would have a known pigment that should luminesce, the Egyptian blue, and one that should not, the Egyptian green. Using the department’s modified full-spectrum camera, our regular fluorescent photo lights, and a visible bandpass filter over the camera lens, I took a normal color reference image of the known pigments and the QP card.
Then I captured a series of images using the same setup, but replacing the visible bandpass filter with an 830nm longpass infrared filter so that I could capture images in the infrared, with the fluorescent lights turned off and the MEGA PAR turned on. Each image was captured with the same camera settings and with the MEGA PAR light in the same position, just cycling through each of the 64 color channel options.
I converted the images to grayscale in Adobe Camera Raw by sliding the saturation level from 0 to -100, so that the red, green, and blue (RGB) values would all be equal. I then used the dropper tool to take a reading over the Egyptian blue standard in each image and recorded the number. The higher the number, the brighter the luminescence.
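The same measurement could also be scripted. Here is a minimal Python sketch (not the Adobe Camera Raw workflow itself, just the same logic): desaturate each capture to a single luminance channel and read the mean value over the reference patch. The function names, channel labels, and patch coordinates are my own illustrative choices:

```python
from PIL import Image
import numpy as np

def patch_brightness(img, box):
    """Mean grayscale value (0-255) over a region of interest.

    img : a PIL Image (one VIL capture)
    box : (left, upper, right, lower) pixel bounds of the Egyptian blue
          reference patch in the frame
    """
    gray = img.convert("L")  # single luminance channel, like desaturating in ACR
    region = np.asarray(gray.crop(box), dtype=float)
    return region.mean()

def rank_channels(images, box):
    """Given {channel_name: PIL Image}, return channel names sorted brightest first."""
    scores = {name: patch_brightness(im, box) for name, im in images.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Ranking all 64 captures this way would reproduce the dropper-tool comparison in one pass, with the brightest channel at the top of the list.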
After doing that, I had a reduced set of options that produced good luminescence in the Egyptian blue for a second round of testing. For round two I did the same thing with the more promising group, but also included the 99% reflectance Spectralon standard in my images so that I could check and verify that the light is not producing infrared radiation. If there were any infrared, the 99% reflectance standard would be visible. None of the second-round options showed any infrared. While any of them can be used for VIL, CL08 gave the strongest response.
After developing a working setup, I did a test in the photo studio using the standards and an object that I knew had Egyptian blue. I captured a visible image with the modified camera, the visible bandpass filter, and the fluorescent photo lights, and a VIL image with the 830nm longpass filter and the CL08 setting on the MEGA PAR. The false color image was created by splitting the color channels of the visible image in Photoshop, discarding the blue data, and putting the VIL data in the red channel, the red visible data in the green channel, and the green visible data in the blue channel. As you can see, the Spectralon is not visible in the VIL image, meaning there is no IR radiation being produced by the MEGA PAR light.
After all this work, I had an opportunity to see how the new light would perform in less than ideal settings. I have been working on a study of one of the coffins in the collection, 2017-20-1.3, to examine the coatings and pigments. VIL is the perfect method for identifying blue areas on the coffin, but the coffin is too big to fit in the department photo studio. The set of images below was taken in the Artifact Lab (our public lab in a gallery space), where there is IR from the windows (daylight) as well as from the gallery lights. I hoped that a short exposure with the new, very bright MEGA PAR would reduce the effects of ambient IR in the image. As you can see in these photos, the 99% reflectance Spectralon is slightly visible, but not as clearly as the Egyptian blue on the coffin. These results are much better than what we used to get in the Artifact Lab using our old light, so I am very happy with them.
Our department has owned a Compact Phoenix Nd:YAG laser for several years now and we have successfully used it to clean objects like this trio of birds for our Middle East Galleries. While there are a lot of possible applications, we have found the laser to be especially effective for cleaning stone objects with coatings, stains, and surface grime that are not easily removed using other tried and true cleaning methods including solvents, steam, and gels.
Did somebody say “stone objects with coatings, stains, and surface grime”? Because we have tons of those (literally) in our Conservation Lab Annex (CLA) where we are working on monumental projects for the Ancient Egypt and Nubia Galleries. But the last time we held a laser training session was before we hired our CLA team. Lasers are not found in all conservation labs, so it is not unusual for experienced conservators to have little to no experience with lasers.
In order to ensure a safe set-up and to get everyone trained on the equipment, we brought in Philadelphia-based conservator Adam Jenkins to provide the team with a full day of training. Adam specializes in laser cleaning and also conducted our last training session at the Museum in 2017.
After a classroom session covering the fundamentals and science of lasers, and the necessary safety protocols and PPE, we moved to the lab to try the laser on a few objects. We had success with several, which is very promising! The team is now set up to continue laser testing and cleaning on their own. We are grateful to Adam for his expertise and support and for this professional development opportunity. We are excited to incorporate this tool into the work out at CLA!
One project I have really enjoyed working on as a pre-program conservation technician is documenting larger objects for a process called photogrammetry. Photogrammetry is a technology that gathers spatial and color information of an object from multiple photographs to form a geometrically corrected, highly detailed, stitched image called an orthomosaic. Essentially, photogrammetry creates a distortion-free, three-dimensional model of an object based on two-dimensional photos of every surface photographed in sections.
This can be done for objects of any size. However, we are mostly reserving this technique for larger objects, specifically larger textiles and Egyptian coffins. This is because photographing the coffins and textiles normally with a single shot requires a greater distance between the object and the camera in order to fit the entirety of the object into the frame, and doing so reduces the image quality. Not only that, but the camera distortion that is inherent in all photographs will become more obvious. The resulting image will not be an accurate representation of the coffin or textile, which is not ideal for documentation purposes.
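The resolution trade-off described above is easy to quantify. A rough sketch with purely illustrative numbers (not measurements from our setup): if a subject must fill the frame, the pixels available per millimeter of object fall as the subject gets wider.

```python
def pixels_per_mm(frame_width_px, subject_width_mm):
    """Spatial sampling when a subject of the given width fills the frame."""
    return frame_width_px / subject_width_mm

# Single shot: a 2 m coffin filling a 6000-pixel-wide frame
single_shot = pixels_per_mm(6000, 2000)   # 3 pixels per mm of coffin

# Photogrammetry: the same frame covers only a 0.5 m section at a time,
# so each section is sampled four times more densely
sectioned = pixels_per_mm(6000, 500)      # 12 pixels per mm of coffin
```

This is before even accounting for lens distortion, which also grows more noticeable in a single wide shot; sectional capture plus stitching addresses both problems at once.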
With photogrammetry, we can take parts of the 3-D model and use them as high resolution, distortion-free, 2-D images of the object instead.
So far, a little less than ten coffins, a few textiles, a pithos fragment, and a giant granite relief have been documented using photogrammetry. The models and orthomosaic images are all generated by Jason Herrmann from CAAM, and we are very grateful that he is doing this for us! To learn a little bit more about the photogrammetry process, view this Digital Daily Dig here.
This project was made possible in part by the Institute of Museum and Library Services.
The figure you see here E4893 is an ivory statuette from the site of Hierakonpolis. In a previous blog post I discussed the X-radiography that helped me determine that the large fill around the waist of the object could be safely removed. Based on that X-ray, I was able to mechanically remove the soft fill material and separate it from the object.
The full picture is not always clear from an X-ray, though. While I was able to remove the fill material and the nails, one thing that was not apparent on the X-ray, and only became clear during treatment, is that part of the lower half of the object was embedded in the fill. This section also keys into the upper fragment. This may seem like a minor detail, but it is very important for knowing how the pieces should go back together. The loss at the waist is large, and a fill is needed to stabilize the object structurally. One worry I had as I approached this treatment was figuring out what the fill should look like and how elongated the body should be. However, once I found that the fill contained a section of the object that keyed the bottom and top pieces together, I knew that the placement of the two fragments could be conclusively determined.
Even knowing how the pieces should go together, joining them was far from straightforward. The point of contact is too small for an adhesive join without fill material taking the weight of the fragments, or to rely on the connection to hold the pieces in alignment during loss compensation. Instead, I had to figure out how to support the fragments in the correct alignment while I created the fill. I decided that the best way forward was to create a removable fill using an epoxy putty. This is a fill that has to be adhered in place, as if it were another fragment, rather than relying on the fill material to adhere or lock the fragments together. This meant that I needed a barrier layer between the fill material and the object, and a system to hold the pieces together. The barrier layer is meant to prevent the fill material from sticking or adhering to the object; in the images below you can see cling film between the epoxy and the object, which I used as a barrier layer. The support system, however, took some trial and error before I found a method that worked.
First I tried laying the object flat in a bed of glass beads for support, but this did not work: it was too hard to see if I had everything lined up correctly, and the fragments kept shifting as I put the epoxy in place. Taking inspiration from my colleagues working on Egyptian monumental architecture at the conservation lab annex (CLA), I decided to try making a rigging system in miniature to hold the fragments in place vertically. This allowed me to see the object all the way around and check the alignment more reliably. However, my second attempt, using a vertical support system with the object upside down, still led to too much shifting when I tried to put in the fill material.
As a result, I adjusted the system from the second attempt and put the object right side up, carved a chin rest for the figure into the foam support and added a piece of foam to the back to hold the upper fragment more securely in place. The wooden skewers you can see in the images are used to hold the foam pieces together. My third attempt was very effective at holding the object in place in a rigid way with no shifting and gave me plenty of visibility to check the alignment.
After I made the fill, I sanded it smooth and checked to make sure it fit correctly. Here you can see it dry fit in place, and after everything was joined together. Should this treatment need to be redone at some point in the future, it will be much more reversible than what was done before. While the object does not look all that different from the way it did before treatment, it is much more stable now, with materials that have better aging properties and allow for easier retreatment should that be needed.
This project was made possible in part by the Institute of Museum and Library Services