Saving lives with social media

In the emergency domain, the information shared on social media platforms can become a powerful resource for both rescuers and the affected population.

by Francesco Tarasconi

Environment 26 February 2018

Social media platforms are popular tools that are used to share information on anything going on in the world. In the emergency domain, such information can become a powerful resource for assessing the development of hazards and their impact, and how the affected population perceives them. Natural Language Processing and automatic event detection are therefore crucial in developing an effective disaster management system.

During our research for the I-REACT project, we focused on developing an Artificial Intelligence system that could continuously monitor different types of hazards on social media and autonomously extract high-quality, organized information from posts.

One of the goals of our platform is to keep first responders constantly up to date with what is happening, while letting them narrow down their information feed to specific disaster areas (the geographical location of a landslide, for example) and to specific details (such as reports on damage and casualties).

Because our monitoring system is always active, a disaster does not need to be actually occurring for information to be collected. Instead, precious gems of knowledge taken from social media can have an impact on any of the three main emergency phases: preparedness (when citizens should become aware of risks), response (to quickly identify key factors and affected areas) and post-disaster (to assess further damage and consequences).

Twitter, which is widely used in the study of natural disasters, is our primary source. This is because the basic form of communication on Twitter (the limited-length tweet) is essentially a broadcast, making the platform especially suitable for succinct, emergency-focused announcements.

The monitoring module was implemented within Spark Streaming, an open-source framework designed for real-time analysis and transformation of data streams. The Twitter Streaming APIs directly provide the data for the system, which tracks several keywords and hashtags connected to hazardous events across several languages. For example, to monitor the flood hazard in English tweets, we tracked the keyword “flood” and also included key hashtags such as #floodsafety and #floodaware.
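
As a rough illustration of this keyword tracking (the article does not say which client library was used), here is a minimal Python sketch using the tweepy library with its pre-4.0 streaming interface; the credentials and handler names are placeholders.

```python
import tweepy

# Placeholder credentials; real values come from a Twitter developer account
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")


class HazardListener(tweepy.StreamListener):
    def on_status(self, status):
        # In the real system each tweet would be forwarded to the processing
        # pipeline; here we simply print the raw text.
        print(status.text)

    def on_error(self, status_code):
        # Returning False disconnects the stream, e.g. on rate limiting (HTTP 420)
        return False


# Track the flood-related terms cited above, restricted to English tweets
stream = tweepy.Stream(auth=auth, listener=HazardListener())
stream.filter(track=["flood", "#floodsafety", "#floodaware"], languages=["en"])
```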

Every day we hear about environmental disasters wreaking havoc all over the world – we therefore have a huge responsibility to work more globally. Currently, our platform fully integrates only English, Italian and Spanish, but we are working on branching out into new geographical areas and languages (the next being Finnish) in order to make our world safer for all. Across different languages, we collect hundreds of thousands of tweets every day, with peaks during specific crises – often culminating in millions of tweets within just a few hours. These large, fast-flowing volumes of documents can be managed efficiently within our scalable big-data architecture.

After data collection, our Natural Language Processing (NLP) pipeline kicks in, ingesting all unstructured text and analyzing it through a combination of linguistic rules and machine learning algorithms. Broadly speaking, we employ linguistic analysis to capture the specific semantics of emergency-related language.

Each collected tweet is tagged according to the kind of information it contains: imagine an emergency-savvy, context-aware Artificial Intelligence placing labels on stored documents, based both on what we explicitly programmed it to do and on what it autonomously learned from data.

The data is first filtered to identify posts that are actually emergency-related. Keywords such as flooding, storm and drought can appear in contexts that have little to do with emergencies… or even the weather! Think of expressions such as “flooding of news” or “love drought”, to name but a few.
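
A toy version of this relevance filter might combine a hazard-keyword check with a handful of linguistic rules that discard figurative uses; the patterns below are invented for illustration and are far cruder than the production rules.

```python
import re

# Hazard keywords being tracked (see the flood example above)
HAZARD_TERMS = {"flood", "flooding", "storm", "drought"}

# Toy patterns for common figurative uses that have nothing to do with emergencies
FIGURATIVE_PATTERNS = [
    re.compile(r"flood(ing)?\s+of\s+(news|emails?|messages|memes)", re.I),
    re.compile(r"love\s+drought", re.I),
    re.compile(r"storm\s+of\s+(criticism|protest|tweets)", re.I),
]


def looks_emergency_related(text: str) -> bool:
    """Keep a tweet only if it mentions a hazard term and no figurative pattern fires."""
    lowered = text.lower()
    if not any(term in lowered for term in HAZARD_TERMS):
        return False
    return not any(p.search(text) for p in FIGURATIVE_PATTERNS)


print(looks_emergency_related("Severe flooding reported near the river tonight"))  # True
print(looks_emergency_related("There is a flooding of news about the election"))   # False
```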

After filtering out unrelated content, as well as potential spam or trolling, the type(s) of information each tweet contains can be identified. For example, the text may contain a reference to an impacted location (e.g. “there is a storm approaching Manchester”), to affected individuals and infrastructures (including missing people and blocked roads), or to warnings and recommendations. At a more basic level, understanding that something new and unexpected is happening can be vital (e.g. “OMG I just saw some cars flying in a tornado”), and it is one of the first steps towards emergency event detection, which will eventually require validation from first responders.

An “Informative” flag is eventually placed on tweets that contain high-quality information, potentially very helpful in preparing for or responding to a crisis. Notwithstanding their limited length, tweets can receive any number of tags: we call this an enrichment process through a multi-class model. In fact, a single document may relate to different hazards (such as signalling a fire caused by a lightning storm) and provide information from different angles (knowing there are cars blocked on a highway is relevant both for assessing damage to infrastructure and the risk to civilians).
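
The article does not disclose the actual model, but since a tweet can carry several tags at once, the enrichment step can be sketched as a multi-label, one-vs-rest classifier over TF-IDF features; the labels, example tweets and settings below are invented for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Tiny invented training set: each tweet may receive several tags
texts = [
    "Road to the bridge blocked by flood water, cars stuck",
    "Stay away from the river banks tonight, flood warning issued",
    "Lightning started a fire near the camp site",
    "Charity concert for the earthquake victims next week",
]
tags = [
    {"informative", "infrastructure", "flood"},
    {"informative", "warning", "flood"},
    {"informative", "fire"},
    set(),  # related to a crisis but not directly useful to first responders
]

# Turn tag sets into a binary indicator matrix for multi-label learning
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(tags)

# One binary classifier per tag, trained on TF-IDF features of the text
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)

pred = model.predict(["Cars blocked on the highway by rising flood water"])
print(mlb.inverse_transform(pred))
```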

Tweets may also discuss specific aspects of a crisis and be strongly related to it, without being informative or immediately useful to first responders (charity requests, for example). Language is inevitably difficult to analyse, and almost limitlessly varied. A combination of linguistic awareness, acquired through years of experience in the NLP field, and the employment of professional native speakers is crucial; together with innovative machine learning techniques, it allows us to leverage large amounts of data.

At the end of our social media pipeline, first responders can access the information through an app or dashboard developed within the I-REACT project. All NLP tags are available through buttons and filters that can be used to select, navigate and explore the enriched data easily.

However, none of this work can play an important role in a European research project without a crucial phase of domain study and validation. To employ NLP and machine learning techniques, a certain amount of ground truth is required: data that humans, ideally proficient domain experts, have manually labelled. For this reason, we conducted a large annotation campaign covering more than 10,000 unique tweets, eventually producing a multi-language, multi-hazard corpus (collection). This corpus had to be balanced between different languages and hazards in order to provide enough variety to the Artificial Intelligence component… and to the developers who worked on it. Italian tweets mostly related to domestic crises, such as the earthquakes in Central Italy, the flooding in Piedmont (end of 2016), and the extremely hot temperatures and droughts of summer 2017. English tweets were instead about crises from all over the world: landslides in China, Turkish and Greek earthquakes, Storm Cindy and so on.
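
Checking and enforcing that balance can be done with a short script; the sketch below uses pandas with placeholder language and hazard values, not the real corpus statistics.

```python
import pandas as pd

# Hypothetical annotated corpus: one row per tweet with its language and hazard type
corpus = pd.DataFrame({
    "lang":   ["it", "it", "it", "en", "en", "es", "en", "en"],
    "hazard": ["earthquake", "earthquake", "flood", "flood", "flood", "drought", "storm", "flood"],
})

# Cross-tabulate languages against hazards to spot under-represented combinations
print(pd.crosstab(corpus["lang"], corpus["hazard"]))

# Down-sample each (language, hazard) group to the size of the smallest one
n_min = corpus.groupby(["lang", "hazard"]).size().min()
balanced = (
    corpus.groupby(["lang", "hazard"], group_keys=False)
          .apply(lambda g: g.sample(n=n_min, random_state=0))
)
print(len(balanced))
```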

Twelve annotators from different countries were employed to classify all of the collected content, making sure that each tweet was annotated by three professional native speakers. As is common in the crowd-sourcing world, work completed manually, and not just by machines, must be cross-validated.
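
One simple way to cross-validate the annotators is a per-label majority vote across the three judgments collected for each tweet; the labels in this sketch are invented for illustration.

```python
from collections import Counter

# Three hypothetical annotators labelling the same tweet (one judgment set each)
judgments = [
    {"informative", "flood"},
    {"informative", "flood", "warning"},
    {"informative"},
]


def majority_vote(judgments, threshold=2):
    """Keep a label only if at least `threshold` annotators assigned it."""
    counts = Counter(label for labels in judgments for label in labels)
    return {label for label, n in counts.items() if n >= threshold}


print(majority_vote(judgments))  # keeps "informative" and "flood", drops "warning"
```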

Because we want to test and tune our system on new events, in order to better face unexpected future situations, we have recently conducted new annotations and analyses of Hurricanes Ophelia and Harvey, and of the Piedmont wildfires on the outskirts of Turin.

In conclusion, the approach we proposed looks viable for monitoring generic, emergency-related data streams from Twitter (and potentially other social media) and continuously extracting relevant information from them.