What’s all that 3D data for?

By Tessa de Alarcon

We’ve had a few posts (this one by Chelsea Kim and this one by Christy Ching) on creating 3D models using photogrammetry, and I thought I’d give some examples of what we are doing with that data once it’s collected. For some objects we are creating orthomosaics: 2D images that go into reports as after treatment images and into the catalogue as record photography, which also shows up in the online collection database. This wooden coffin, 2017-20-1.3, is an example of this type of imaging.

2017-20-1.3 after treatment photos, created using orthomosaics generated from a 3D model made with photogrammetry.

For other objects we are also producing orthomosaics, but as before treatment images. One example is E641, a wall painting that was previously on display.

E641 when it was on display

The wall painting is currently in two sections and each one has been imaged separately. These before treatment images have been used to create condition maps.

Before treatment orthomosaics of E641, created with photogrammetry

The maps go into our reports and provide visual documentation to support our written descriptions. For large objects, these kinds of condition maps are often easier to understand than text alone and can record the location of specific condition issues more precisely. Here you can see the condition map for E641. The map is not yet complete: I am still working on documenting one of the sections, but I have combined the two maps into one image so you can see what that process looks like.

E641 condition map. The map for the section on the left is complete, while the mapping for the section on the right is still in progress.

The models can also be used to show surface distortion. In this screenshot of the 3D model of E641 you can see planar distortions in the wall painting where the fragments are not aligned. This distortion may have a variety of causes, including poor alignment during the previous reconstruction, or it may be the result of lifting/separation of the original material from its current modern backing.

Detail of E641. On the left is the 3D mesh without color added, and on the right is the same area with the color and surface texture added to the model. In the image on the left you can easily see the fragments and how they are misaligned in some areas.

I am currently working on learning how to create a 2D false color image where the colors reflect depth, so that these planar distortions are documented in 2D as well as being visible in the model.
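
For the technically curious, here is a minimal sketch in Python of what that false color step could look like. It assumes (hypothetically) that the mesh vertices have been exported from the photogrammetry software as a plain array of x, y, z coordinates; the file names are placeholders, and this is only one way to approximate the depth mapping, not the exact workflow I am using.

```python
# A minimal sketch, assuming the mesh vertices were exported as an (N, 3) array
# of x, y, z coordinates (file names and export step are hypothetical).
import numpy as np
import matplotlib.pyplot as plt

xyz = np.load("e641_vertices.npy")                     # hypothetical export from the 3D model
x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]

# Fit a reference plane z = a*x + b*y + c by least squares;
# the residuals measure how far each point sits off that plane.
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
deviation = z - (a * x + b * y + c)

# Render the deviations as a false color image: color now encodes depth.
fig, ax = plt.subplots(figsize=(8, 6))
sc = ax.scatter(x, y, c=deviation, cmap="viridis", s=1)
fig.colorbar(sc, ax=ax, label="deviation from best-fit plane")
ax.set_aspect("equal")
ax.set_axis_off()
fig.savefig("e641_planar_distortion.png", dpi=300)
```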

So altogether, this data is being used to document the condition of objects both before and after treatment. The models are also useful tools for assessing complex condition issues and for evaluating next steps. For example, our current plan is to remove the wall painting from its current modern backing and put it on a new one. Our hope is to correct some of these planar distortions as part of that process, and this model, along with the one we make after treatment, will be useful for evaluating the efficacy of the treatment and will provide a baseline for assessing its condition in the future.

2D to 3D

By Chelsea Kim

As an intern working with the conservation department, I have had the opportunity to work on many projects and experience things I never thought I would. Recently I have been working with software called Reality Capture to carry out photogrammetry. Photogrammetry is a process that uses an abundance of overlapping photographs to create a 3D model; the photographs are stitched together to form a detailed, geometrically corrected image called an orthomosaic, free of the distortion a single photograph would have. This process is usually used on larger objects, because they are too big to fit in frame when taking pictures, and a single wide shot has lower quality and distortion that is far from perfect. Christy Ching explains this more in depth in her previous blog post.

I want to show how to create a digital three-dimensional model using Reality Capture, and I’ll demonstrate with the after-treatment photos of an Egyptian coffin.

To start off, having pictures of the object is a must. For this example, they were already taken and edited in Photoshop, with the white balance adjusted ahead of time in Adobe Bridge. I begin by opening the software, and under Workflow at the top left corner, I select “Inputs.”
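
For anyone without Adobe software, a very rough approximation of that batch white balance step can be scripted. The sketch below uses the classic gray-world method with Pillow and NumPy; it is only an illustration, not the Adobe Bridge/Photoshop workflow described above, and the file name is hypothetical.

```python
# Gray-world white balance: scale each channel so all three share the same average.
# Illustration only, not the Adobe Bridge workflow used for these photos.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("coffin_0001.jpg")).astype(float)   # hypothetical file name
channel_means = img.reshape(-1, 3).mean(axis=0)                 # average of R, G, B
balanced = img * (channel_means.mean() / channel_means)         # equalize the channel averages
balanced = np.clip(balanced, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("coffin_0001_wb.jpg")
```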

Screenshot of the software highlighting where to click “Inputs” which is above “1. Add Imagery”

Then I select all the images, making sure they are .jpeg files, and click on “Align Images” as highlighted above. After the images are aligned, a transparent box surrounding the coffin appears. I adjust the box by dragging the control points around to make it as small as possible without cutting off any part of the coffin. As you can see in the image below, using E883C, the box is close to the coffin but does not intersect the coffin itself.
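
Since the images loaded here need to be JPEGs, a quick way to double-check a folder of photos before importing them is a couple of lines of Python (the folder name below is just a placeholder):

```python
from pathlib import Path

folder = Path("E883C_after_treatment")  # hypothetical folder of photos
not_jpeg = [p.name for p in folder.iterdir()
            if p.is_file() and p.suffix.lower() not in (".jpg", ".jpeg")]
print("Files that are not JPEGs:", not_jpeg or "none")
```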

Screenshot of the Egyptian coffin E883C after the images were aligned, with the transparent box adjusted tightly around it.

Now for the fun part, seeing the coffin take shape: I click next to “Calculate Model” to select “Preview Quality,” as highlighted below. Then I go to the tools bar and use the lasso option to select all the unnecessary space around the coffin. Once I am satisfied with the selected area, I click on “Filter Selection,” which turns the selected areas from orange to dark blue, showing that it worked.

Screenshot of the coffin after selecting “Preview Quality.”

Finally, I go back to the Workflow bar and select “Texture,” which is highlighted below. The textured model then shows all the details of the coffin in high detail and quality, without any distortion.

Screenshot of the 3D model after being textured.
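
For anyone who prefers scripting to clicking, the same align → model → texture pipeline can be automated in some photogrammetry packages. Reality Capture is driven through its interface as shown above; the sketch below uses the Python API of a different package, Agisoft Metashape, purely to illustrate the order of the steps (method names are from that API as I understand it, so check its documentation before relying on them, and the folder and file names are hypothetical).

```python
# Illustrative only: the align -> model -> texture pipeline scripted with
# Agisoft Metashape's Python API, not Reality Capture.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("E883C_after_treatment/*.jpg"))  # hypothetical folder

chunk.matchPhotos()     # find matching features across the overlapping photos
chunk.alignCameras()    # the "Align Images" step: solve camera positions
chunk.buildDepthMaps()  # per-image depth estimates
chunk.buildModel()      # the "Calculate Model" step: build the 3D mesh
chunk.buildUV()         # lay out texture coordinates
chunk.buildTexture()    # the "Texture" step: project photo color onto the mesh

doc.save("E883C_model.psx")
```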

An introduction to our child mummy Tanwa

In my recent post about the Philadelphia Science Festival, I put in a little teaser photograph of one of our child mummies currently in the lab:

Child mummy, overall view

Now, all of our mummies are special, but this child mummy has several qualities that make her particularly endearing. One of the things that we really love is that her name is written on her wrappings, near her feet.

Child mummy, detail

Her name is actually written in both Greek and Demotic. Demotic is the language/script that developed in later periods in Egypt (and is one of the scripts inscribed on the Rosetta Stone, along with Greek and hieroglyphic Egyptian). In Greek, this inscription reads: “Tanous (daughter of) Hermodorus”. In Demotic her name reads as “Tanwa”.

So, based on this inscription, we know that she dates to the Ptolemaic Period and that she is a girl. According to our Egyptologists, what is interesting about the names is that they give a good indication of the multicultural nature of this time period: not only are two languages represented, but the girl’s name incorporates the name of an Egyptian goddess, Iwnyt, while her father’s name includes the name of a Greek god, Hermes.

Tanwa has been CT-scanned, which has confirmed that she is a girl and that she was likely right around the age of 5 when she died.

Here is a still from the CT scan showing a detail of Tanwa’s skull. Based on her teeth, it has been estimated that she was about 5 years old when she died. The pin you can see near the top of her skull is modern and not actually in her skull; it was used to secure the outermost layers of linen in that area.

One of my favorite things that CT scanning has shown is that she is wearing 2 bracelets on her left wrist. We are guessing that these might be gold.

Two bangle bracelets on the left wrist show up clearly on the CT scan.

She also has a small metal ball included in her wrappings just over her right tibia. Exactly what this is and why it was placed there is a bit of a mystery.

A detail shot of the metal ball near her right tibia.

There is a lot more we can learn from these CT scans, which I will describe in a future post.

Fortunately, Tanwa is in fairly good condition; one of the main issues that we need to address here in the conservation lab is that many of the narrow linen bands wrapped around her body are fragile, torn and partially detaching. I am currently more than halfway through the conservation treatment, and I will provide a thorough report on what we are doing to stabilize her wrappings next. Stay tuned!

Reimagining an ancient Egyptian material

Have you checked out our In the News section of this blog? Periodically, we try to update this page with some interesting articles related to our Egyptian collection, stories about projects and discoveries in Egypt, and even our own lab highlighted in the press.

One of the more recent stories that we’ve posted is about a new discovery related to Egyptian blue, one of the world’s first synthetic pigments. The ancient Egyptians made it by heating together copper, silica (sand), lime (calcium oxide), and an alkali such as natron (sodium sesquicarbonate), and it is found on objects from as early as the 4th Dynasty through to the Roman Period. We see this pigment on artifacts here in the lab, including Tawahibre’s coffin (for more details, read our blog post on how we know this).

A detail of Tawahibre’s coffin. Based on analysis, this pigment has been determined to be Egyptian blue.

One thing that has been discovered about Egyptian blue is that it has luminescent properties. This luminescence cannot be seen in normal light conditions, but it can be detected and recorded using a device that is sensitive to infrared light; the phenomenon is called visible-induced infrared luminescence. Using a regular (visible) light source and a modified digital camera, it is possible not only to positively identify Egyptian blue with a completely non-invasive technique, but also to discover very small traces of Egyptian blue pigment on the surfaces of objects. It is our hope that we might be able to try this technique to examine some of the artifacts in our collection.
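
As a rough illustration of what the documentation step could look like once such an image is captured: given a registered pair of photographs (one under visible light, one recording the infrared luminescence with the modified camera), a few lines of Python can flag the luminescent areas. The file names, the brightness threshold, and the assumption that the two images are the same size and already aligned are all hypothetical.

```python
# Hypothetical sketch: mark bright (luminescent) areas from an IR luminescence photo
# on top of the corresponding visible-light photo of the same object.
from PIL import Image
import numpy as np

ir = np.asarray(Image.open("uraeus_ir_luminescence.jpg").convert("L"))   # grayscale IR capture
visible = np.asarray(Image.open("uraeus_visible.jpg").convert("RGB")).copy()

assert ir.shape == visible.shape[:2], "the two images must be registered and the same size"

mask = ir > 200                      # threshold chosen by eye; luminescent pixels are bright
visible[mask] = [255, 0, 0]          # paint candidate Egyptian blue areas red
Image.fromarray(visible).save("uraeus_egyptian_blue_map.jpg")
```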

A painted wood uraeus on display in our Upper Egypt gallery. The paint has not been analyzed, but based on appearance the blue is most likely Egyptian blue. Examination with an IR light source could confirm this.

Furthermore, it is now understood that this luminescence is produced by the nanostructure of the pigment: scientists have discovered that the calcium copper silicate in Egyptian blue can be broken into nanosheets, which emit infrared radiation similar to the beams used by TV remote controls and car door locks. It is now being envisioned that these nanosheets could be used for future near-infrared-based medical imaging techniques and security ink formulations! Talk about a new life for such an ancient material.

You can read and hear more about this by following this link to our In the News section of this blog. Have you read or heard about something recently that you think we should share on our blog? Leave a comment here and we’ll try to incorporate these suggestions whenever possible.