Understanding Animal Faces

I found my project on Zooniverse, which is not a crowdsourcing project itself but rather a platform that currently lists 51 projects from across several disciplines in need of volunteers. The project I contributed to was Understanding Animal Faces, the goal of which is to get computers to recognize the faces of different animals. I believe this type of project was mentioned in class, so I was excited to see how researchers are actually going about this task. The first phase of the project involved volunteers locating the face of an animal in a photo of the entire animal. The next phase, and the one I contributed to, involved annotating nine “landmarks” on the animal’s face: left eye (left corner), left eye (right corner), right eye (left corner), right eye (right corner), nose tip, lip left (corner), lip right (corner), upper lip, and lower lip.

The process of annotating involved clicking the name of one of these landmarks in a list, which generated a colored circle that I could drag and place on the photo of the animal’s head. Before each annotation, however, I was asked: “Are there any animal faces in this image and is the image not too blurry?” If I answered no, I was taken straight to the next picture; if I answered yes, I went on to place the landmarks. Volunteers could annotate as many pictures as they saw fit, and I continued contributing until I had annotated fifty images.
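To give a concrete sense of the data these clicks produce, here is a rough sketch of what a single annotated image might look like once it is recorded. The field names, coordinates, and structure are my own guesses for illustration, not the project's actual export format.

```python
# Hypothetical example of one volunteer's annotation of a single image.
# All field names and values are invented for illustration; the project's
# real data export may be organized quite differently.
annotation = {
    "image_id": "zebra_0042.jpg",
    "face_present": True,  # answer to "Are there any animal faces in this image...?"
    "landmarks": {         # (x, y) pixel coordinates of the nine facial landmarks
        "left_eye_left_corner":   (312, 198),
        "left_eye_right_corner":  (355, 201),
        "right_eye_left_corner":  (430, 203),
        "right_eye_right_corner": (471, 199),
        "nose_tip":               (396, 310),
        "lip_left_corner":        (352, 368),
        "lip_right_corner":       (441, 366),
        "upper_lip":              (397, 355),
        "lower_lip":              (398, 381),
    },
}
```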

The information page of the project specifies that after the annotation portion of the project is complete, the researchers will “train Deep Neural Networks with the aim of training a computer to distinguish one species from the other based on facial information solely.” My contribution helps bring the researchers closer to this next stage. However, annotating images quickly started to feel like a monotonous, routine task, which helped me understand why some people worry that crowdsourcing is a form of cheap labor. Still, the project page justifies the need for these contributions, explaining that each image takes only fifteen to thirty seconds to annotate but that thousands of images of hundreds of different species need to be annotated.
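Since the project page describes this next stage in only a sentence, it may help to sketch, very loosely, what "distinguishing one species from another based on facial information" could look like in code. Everything below, the library, the fake data, the tiny network, is my own illustrative guess and not the researchers' actual pipeline, which will presumably use much larger deep networks trained on the images themselves.

```python
# Toy illustration only: a small neural network that maps the nine (x, y)
# landmark coordinates (18 numbers per face) to a made-up species label.
# The data here is random noise, so the "model" learns nothing meaningful;
# the point is just to make the training step concrete.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 18))          # 200 fake faces, 9 landmarks x 2 coordinates each
y = rng.integers(0, 3, size=200)   # 3 invented species labels

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X, y)
print(model.predict(X[:5]))        # predicted species for the first five fake faces
```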

At first my work did not feel like activism because of the simplicity of the task, but after considering the impact of the project I came to see it as a form of activism. The team emphasized that most of the effort to link perception and artificial intelligence has concentrated on getting computers to recognize human faces. Additionally, the project page notes that this work “will impact on a number of fields including animal health and welfare, species identification and animal-robot interaction.” I think it would motivate volunteers more if the researchers went beyond this and gave a more detailed explanation of the impact their work could have in each of these fields.

One response to “Understanding Animal Faces”

  1. Crowdsourcing for Understanding Animal Faces involves getting volunteers to annotate nine facial landmarks on an animal’s face. The purpose of this crowdsourcing is to train a computer to be able to distinguish between species based on just facial information. In other words, crowdsourcing here means having volunteers contribute to the formation of relevant research tools.
    1] What is the relationship between crowdsourcing efforts on animal facial recognition and human facial recognition?
    2] How good of a job does this project do at locating and motivating volunteers? What works well and what does not?