
Cool things from the MIT Media Lab

23 August 2018 - by Silvana di Gregorio

When I am looking for inspiration, I have a look at the MIT Media Lab’s website. The Media Lab was founded in 1985 and is not organised around academic disciplines; instead, it brings together researchers from technology, media, science, art, and design.

Today, I went on the site and found a really cool project that I wanted to share. The DeepMoji project is using machine learning to understand the emotional content of sentences. Sentiment analysis matters in the academic, government and commercial sectors alike. For example, academic analysis of Twitter streams around controversial events and the analysis of customer satisfaction data both require an understanding of the positive and negative aspects of responses. However, the way people use language can be subtle and nuanced. In particular, current natural language processing struggles to identify sarcasm and irony, which can fly over the heads of humans too.

The DeepMoji project is developing artificial intelligence to identify emotion by training it to predict emojis from 1.2 billion tweets. The rationale is that if the system can correctly predict the emoji that was used with a tweet, it understands the emotional content of the tweet. You can help train DeepMoji by spending five minutes rating how you felt when you wrote your last three tweets. Here is the link that explains more.
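To make the idea concrete, here is a minimal sketch of the distant-supervision trick DeepMoji relies on: treat the emoji attached to a tweet as a noisy emotion label and train a text classifier to predict it. This is not the DeepMoji model itself, which is a deep neural network trained on 1.2 billion tweets; the tiny dataset and the simple scikit-learn pipeline below are invented purely for illustration.

```python
# Toy illustration of DeepMoji's distant-supervision idea: the emoji a
# person attached to a tweet acts as a free (if noisy) emotion label,
# and a classifier is trained to predict it from the text alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: tweet text paired with the emoji it contained.
tweets = [
    "best day ever, so happy",
    "this is hilarious",
    "stuck in traffic again",
    "I can't believe they cancelled it",
    "missing you so much",
    "great job team, we did it",
]
emojis = ["😊", "😂", "😠", "😠", "😢", "😊"]

# Predicting the withheld emoji forces the model to pick up the emotional
# cues in the wording itself.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(tweets, emojis)

# The predicted emoji then serves as a proxy for the sentence's emotion.
print(model.predict(["best day ever, feeling happy"]))  # expected: 😊
print(model.predict(["stuck waiting again"]))           # expected: 😠
```

A model that reliably guesses the withheld emoji has, in effect, learned to read emotional cues in the wording, which is exactly the capability DeepMoji scales up.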

The DeepMoji project is one of many projects in the Scalable Cooperation group of the Media Lab. The group explores how social media can connect people and build virtual institutions, combining artificial intelligence, machine learning, and optimisation to solve global problems.

The Turing Box is another cool project from this group. As I have discussed in my white paper, Transparency in an age of mass digitization and algorithmic analysis, social scientists need to be involved in the development of algorithms to ensure they are built ethically and responsibly.

The Turing Box enables non-computer scientists to study the behaviour of artificial intelligence. For example, if you want to study an algorithm that determines whether a suspect should get bail, you can feed the Turing Box a dataset of user profiles with characteristics such as age, gender, and ethnicity. You can then trace any systematic biases in the algorithm: are people who share a certain set of characteristics more likely to get bail than others? You will soon be able to test this out yourself at this link (which is not functional at the time of writing but will be soon). In the meantime, the sketch below shows the kind of audit the Turing Box is meant to automate.
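Since the Turing Box itself is not yet available, here is a back-of-the-envelope version of such an audit: run a black-box decision algorithm over synthetic profiles and compare outcome rates across groups. Everything below, the profiles, the deliberately biased decision rule and the group labels, is made up for illustration; it is not the Turing Box's actual interface.

```python
# A hand-rolled version of the audit the Turing Box would automate: feed a
# black-box decision algorithm a dataset of profiles and compare outcome
# rates across groups. The decision rule is deliberately biased so the
# audit has something to find.
import random

random.seed(42)

# Hypothetical profiles with the kinds of characteristics mentioned above.
profiles = [
    {"age": random.randint(18, 70),
     "gender": random.choice(["female", "male"]),
     "ethnicity": random.choice(["A", "B"])}
    for _ in range(10_000)
]

def black_box_bail_decision(profile):
    """Stand-in for the algorithm under study. It quietly penalises
    group B, which the audit below should surface."""
    score = 0.5
    if profile["age"] < 25:
        score -= 0.1
    if profile["ethnicity"] == "B":
        score -= 0.2  # the hidden bias
    return random.random() < score  # True = bail granted

# Audit: bail-grant rate per ethnicity group, no source code needed.
for group in ("A", "B"):
    members = [p for p in profiles if p["ethnicity"] == group]
    rate = sum(black_box_bail_decision(p) for p in members) / len(members)
    print(f"ethnicity {group}: bail granted {rate:.1%}")
# Group B is granted bail markedly less often, exposing the bias.
```

The point is that the auditor never needs to read the algorithm's source: comparing outcome rates across groups is enough to surface systematic bias, which is precisely what the Turing Box aims to make routine for social scientists.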

But you can get a sense of how difficult it is to program ethically sound decisions by going through the scenarios of another project from the Scalable Cooperation group, the Moral Machine. It asks you to make the decisions a self-driving car would face if its brakes suddenly failed, so you can reflect on your own moral reasoning. You can then compare your decisions with those of others who have gone through the scenarios on the webpage.

What I love about these projects is that they make the issues around artificial intelligence accessible to all of us. We need to get involved, because artificial intelligence is developing by leaps and bounds. Social scientists must work together with computer scientists to ensure we’re part of the process and well informed.