
Saving lives with social media

Social media platforms are popular tools that are used to share information on anything going on in the world. In the emergency domain, such information can become a powerful resource for assessing the development of hazards and their impact, and how the affected population perceives them. Natural Language Processing and automatic event detection are therefore crucial in developing an effective disaster management system.

During the course of our research for the “I-REACT” project, we focused on developing an Artificial Intelligence system that could continuously monitor different types of hazards on social media and autonomously extract high-quality, organized information from posts.

One of the goals of our platform is to keep first responders constantly up to date with what is happening, leaving them the option to narrow down their information feed to focus on specific disaster areas (the geographical location of a landslide, for example) as well as on details (such as reports of damages and casualties).

Because our monitoring system is always active, there is no need for a disaster to be actually occurring for the information to be collected. Instead, precious gems of knowledge taken from social media can have an impact on any of the three main emergency phases: preparedness (when citizens should become aware of risks), response (to quickly identify key factors and afflicted areas) and post-disaster (to assess further damages and consequences).

Twitter, which is widely used in the study of natural disasters, is our life-source. This is because the basic form of communication on Twitter (the limited-length tweet) is essentially a broadcast, meaning that the platform is especially suitable for succinct, emergency-focussed announcements.

The monitoring module was implemented on Apache Spark Streaming, an open-source framework for real-time analysis and transformation of data streams. The Twitter Streaming APIs feed the system directly, which tracks several keywords and hashtags connected to hazardous events across several languages. An example: to monitor the flood hazard in English tweets, we tracked “flood” as a keyword, but also included key hashtags such as #floodsafety and #floodaware.
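
To make the setup concrete, here is a minimal sketch of that keyword-tracking step, assuming tweets arrive as JSON lines on a local socket (for instance, forwarded from the Twitter Streaming APIs by a separate collector process). The port, keyword list and batch interval are illustrative, not our actual configuration.

```python
# A minimal sketch of keyword-based tweet filtering with Spark Streaming.
# Assumes tweets arrive as one JSON document per line on a local socket;
# keywords, port and batch interval are illustrative, not the real config.
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

TRACKED_TERMS = ["flood", "#floodsafety", "#floodaware"]  # per-hazard terms

sc = SparkContext(appName="HazardMonitor")
ssc = StreamingContext(sc, batchDuration=10)  # 10-second micro-batches

lines = ssc.socketTextStream("localhost", 9009)

def is_hazard_related(raw):
    """Keep only tweets whose text mentions a tracked keyword or hashtag."""
    try:
        text = json.loads(raw).get("text", "").lower()
    except ValueError:
        return False
    return any(term in text for term in TRACKED_TERMS)

hazard_tweets = lines.filter(is_hazard_related)
hazard_tweets.foreachRDD(lambda rdd: print(rdd.take(5)))

ssc.start()
ssc.awaitTermination()
```

In the real system the filtered stream is handed on to the NLP pipeline described below, rather than printed.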

Every day we hear about environmental disasters wreaking havoc all over the world – we therefore have a huge responsibility to work more globally. Currently, our platform only fully integrates English, Italian and Spanish, but we are branching out into new geographical areas and languages (the next being Finnish) in order to make our world safer for all. Across these languages, we collect hundreds of thousands of tweets every day, with peaks during specific crises that can reach millions of tweets within just a few hours. These large, fast-flowing volumes of documents are managed efficiently within our scalable big-data architecture.

After data collection, our Natural Language Processing (NLP) pipeline kicks in, ingesting all unstructured text and analyzing it through a combination of linguistic rules and machine learning algorithms. Broadly speaking, we employ linguistic analysis to express and capture specific semantics.

Each collected tweet is tagged according to the kind of information it contains: imagine an emergency-savvy, context-aware Artificial Intelligence placing labels on stored documents, based both on the way we explicitly programmed it and on what it autonomously learned from data.

Data is first filtered to identify posts that are actually emergency-related. Keywords such as flooding, storm and drought can appear in contexts that have little to do with emergencies… or even the weather! Think of expressions such as “flooding of news” or “love drought”, to name but a few.
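
To illustrate the idea, here is a minimal sketch of such a relevance filter using scikit-learn. The tweets and labels below are invented, and the production system combines hand-written linguistic rules with models trained on far more data.

```python
# A minimal sketch of the relevance-filtering step, assuming a small
# hand-labelled sample of tweets. Examples and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Severe flooding reported on the ring road, avoid the area",
    "The storm knocked out power across the valley",
    "My inbox is a flooding of news this morning",
    "This love drought is killing me",
]
labels = [1, 1, 0, 0]  # 1 = emergency-related, 0 = unrelated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)

# Classify a new, unseen tweet as relevant or not.
print(clf.predict(["Drought warning issued for the region"]))
```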

After filtering out unrelated content, as well as potential spam or trolling, the type(s) of information each tweet contains can be identified. For example, the text may contain a reference to an impacted location (e.g. “there is a storm approaching Manchester”), to affected individuals and infrastructure (including missing people and blocked roads), or to warnings and recommendations. From a more basic perspective, understanding that something new and unexpected is happening can be vital (e.g. “OMG I just saw some cars flying in a tornado”), and it is one of the first steps towards emergency event detection, which will eventually require validation from first responders.
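
As a purely illustrative example of the location step, an off-the-shelf named-entity recognizer such as spaCy's can pull place names out of a tweet. This stands in for, and is not necessarily, the method used in the actual I-REACT pipeline.

```python
# Illustrative location extraction with spaCy's pre-trained English model
# (install it first with: python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("There is a storm approaching Manchester")

# GPE = geopolitical entity, LOC = other location.
locations = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
print(locations)  # likely ['Manchester']
```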

An “informative” flag is eventually placed on tweets that contain high-quality information, which is potentially very helpful in preparing for or responding to a crisis. Despite their limited length, tweets can receive any number of tags: we call this an enrichment process through a multi-class model. In fact, a single document may relate to different hazards (such as signalling a fire caused by a lightning storm) and provide information from different angles (knowing there are cars blocked on a highway is relevant both for assessing damage to infrastructure and the risk to civilians).
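
A multi-label setup of this kind, where one tweet can carry several tags at once, can be sketched as follows. The tags and training examples are invented, and scikit-learn's one-vs-rest wrapper merely stands in for the actual enrichment model.

```python
# A hedged sketch of the multi-tag enrichment step: one tweet, several tags.
# Tags and training data are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

tweets = [
    "Lightning sparked a fire near the campsite",
    "Cars stuck on the flooded highway, people trapped inside",
    "Stay indoors and follow official flood warnings",
]
tags = [
    ("fire", "storm"),
    ("flood", "infrastructure", "civilians"),
    ("flood", "warning"),
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)  # tweets x tags binary indicator matrix

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(tweets, Y)

# Predict tags for a new tweet and map them back to names.
pred = clf.predict(["Flood warning issued for the valley"])
print(mlb.inverse_transform(pred))
```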

Tweets may also discuss specific aspects of a crisis and be strongly related to it without being informative or immediately useful to first responders (charity requests, for example). Language is inevitably difficult to analyse, and almost limitlessly varied. Linguistic awareness, acquired through years of experience in the NLP field and through the employment of professional native speakers, is therefore crucial; combined with innovative machine learning techniques, it lets us leverage large amounts of data.

At the end of our Social Media system, first responders can access the information through an app or dashboard developed within the I-REACT Project. All NLP tags are available through buttons and filters that can be used to select, navigate and explore the enriched data easily.

However, none of this work plays an important role in a European research project without a crucial phase of domain study and validation. Employing NLP and machine learning techniques requires a certain amount of ground truth: data that humans, ideally proficient domain experts, have manually labelled. For this reason, we conducted a large annotation campaign of more than 10,000 unique tweets, eventually producing a multi-language, multi-hazard corpus (collection). The corpus had to be balanced across languages and hazards in order to provide enough variety to the Artificial Intelligence component… and to the developers who worked on it. Italian tweets mostly related to domestic crises, such as the earthquakes in Central Italy, the flooding in Piedmont (end of 2016), and the extremely hot temperatures and droughts of Summer 2017. English tweets instead covered crises from all over the world: landslides in China, Turkish and Greek earthquakes, Storm Cindy, etc.

Twelve annotators from different countries were employed to classify all of the collected content, making sure that three native-speaker professionals annotated each tweet. In the crowd-sourcing world, it is common that work completed manually, and not just by machines, must be cross-validated.
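
For a flavour of what that cross-validation looks like in practice, here is a small sketch that keeps the majority label among three annotators and measures their agreement with Fleiss' kappa. The annotations shown are invented.

```python
# A hedged sketch of cross-validating crowd-sourced labels: each tweet gets
# three annotations; we keep the majority label and measure inter-annotator
# agreement with Fleiss' kappa. Data and label names are illustrative.
from collections import Counter
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = tweets, columns = the three annotators' labels
annotations = [
    ["informative", "informative", "not_informative"],
    ["informative", "informative", "informative"],
    ["not_informative", "not_informative", "informative"],
]

majority = [Counter(row).most_common(1)[0][0] for row in annotations]

table, _ = aggregate_raters(annotations)  # tweets x categories count table
print("majority labels:", majority)
print("Fleiss' kappa:", fleiss_kappa(table))
```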

Because we want to test and tune our system on new events, in order to better face future unexpected situations, we have recently conducted new annotations and analyses of Hurricanes Ophelia and Harvey, and of the Piedmont wildfires on the edge of Turin.

In conclusion, the approach we proposed looks viable for monitoring generic, emergency-related data streams from Twitter (and potentially other social media) and for continuously extracting relevant information from them.

What is Machine Learning?

Machine Learning is a state-of-the-art subset of AI and is driving the rapid change and development we are seeing in the field. It is fundamental to the development of Generalised AI: systems and devices which are less common than their Applied AI cousins, and more exciting in the sense that they can, in theory, handle any task.

Although a buzzword today, ML has, like many technologies we are witnessing mature, a longer history than you might expect. Coined by the American computer gaming and AI pioneer Arthur Lee Samuel in 1959, ML came about with a realisation: why do we have to teach computers everything when it might be possible for them to teach themselves?

But it wasn’t until the dawn of the internet – with the vast quantities of data it generated – that engineers were able to act on Samuel’s idea. The result? More efficient machines: machines with code that allowed them to think more like human beings, which we could plug into the internet to give them access to all of the information in the world.

Code that allows machines to think more like us is called a Neural Network. Neural Networks are sets of algorithms, loosely modelled on the human brain, that are designed to recognise patterns. A Neural Network can then classify information in the same way that we do – labelling the elements contained in an image, for example.

To do this, it uses the data it already has to make an educated guess about the nature of the information presented to it. A positive or negative feedback signal then follows, depending on whether the categorisation was correct or incorrect, resulting in “learning”.
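
That feedback loop can be caricatured in a few lines of code. The toy below is a single artificial neuron (a perceptron), not a full neural network, trained on invented data: its weights are nudged up or down whenever a guess is wrong.

```python
# A toy illustration of the feedback loop described above: a single neuron
# adjusts its weights only when its guess was incorrect. Data is invented.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # 100 points, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # label: which side of a line

w = np.zeros(2)
b = 0.0
lr = 0.1  # learning rate

for epoch in range(10):
    for xi, target in zip(X, y):
        guess = int(w @ xi + b > 0)
        error = target - guess       # 0 if correct, +/-1 if wrong
        w += lr * error * xi         # negative/positive feedback
        b += lr * error

accuracy = np.mean([int(w @ xi + b > 0) == t for xi, t in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```

Real neural networks stack many such units and use subtler update rules, but the correct/incorrect feedback principle is the same.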

We discover more applications of ML every day. ML can diagnose anomalies in health results better than we can and tell us more about our personal risk of developing diseases in the future. It can better detect fraud, the tone of voice of texts, and even what mood we are in when we are browsing Netflix for the umpteenth time. The implications of ML are only growing in number: from self-driving cars to customer-serving bots, it will be ML that powers the technologies of the future.

So what is ML? It is the next wave of AI capabilities, the brains behind the mechanical brawn, and quite possibly the biggest development in technology this century.

Technological waves

Charles A. Beard warned almost one hundred years ago: “Technology marches in seven-league boots from one ruthless, revolutionary conquest to another, tearing down old factories and industries, flinging up new processes with terrifying rapidity.” This image of technology is bleak at best, but there is a lot of truth tucked away inside this Technological Determinist statement.

Throughout the history of mankind, significant changes in how society, business and culture function have followed key developments in our technology. Across thousands of years, humanity has ridden five distinct technological waves – waves that have each uniquely impacted how we experience the world, and that have each eventually taken a back seat and given way to the next.

The five waves that have crashed into our society so far are as follows: “Early Mechanisation” (1770s to the 1830s), “Steam Power and Railways” (1830s to 1880s), “Electrical and Heavy Engineering” (1880s to 1930s), “Fordist Mass Production” (1930s to 1970s) and “Information and Communication” (1970s to 2010s). We are now beginning to see the surge of mankind’s sixth wave: “Miniaturization”.

How do we know? Because all prior waves have been driven by the same kind of technology: General Purpose Technologies (GPTs), to be exact. GPTs are paramount in connecting the past with the present. In the most reductionist sense, a GPT is a single generic technology that develops over time yet can still be described as the same technology throughout its lifetime – the printing press, the wheel and the computer are three quintessential examples. Initially, these technologies have immense room for growth; eventually they are widely adopted, permeating different sectors and industries over time.

In order for a new technological wave to grow, there must be at least one, and often multiple, GPTs driving it. These technologies must themselves go through a three-step process to reach widespread adoption: experimentation, expansion and transformation.

Taking cars as an example: at first, no one really understood the applications of the new technology; it wasn’t viewed as a great method of transportation, just a novelty way of getting from A to B – this was its experimental phase. Then came the expansionist phase, in which we began to see interesting, innovative use cases as businesses incorporated the technology into their scopes, and a democratization of the technology due to increased affordability and reliability. Finally came the transformational phase, characterized here by the emergence of suburbs, drive-in cinemas and shopping malls. Essentially, the technology began to transform how society worked; how society was experienced was now determined by this new technology.

Another interesting aspect of the technological wave phenomenon is that when one wave is maturing (as Information and Communication is today) at the same time as the next wave’s enabling technologies are gaining momentum, some significant societal changes occur.

One of these is the rise of multidisciplinary experts: individuals who are extremely good at many different things – researchers who are well versed in genetics and machine learning, say, and able to combine the two with remarkable results. This leads to another aspect: cross-pollination between disciplines, with people combining different fields in new and interesting ways – a major driver of innovation. You can see this, for example, in how we are now using AI to understand the human brain and vice versa. The most pervasive change of all, however, and one to which we can all attest, is how these periods in a wave’s duration upend our understanding of time. Today our attention spans are shrinking, and we as consumers demand that everything be delivered in real time. The rapid pace of change that characterizes our era is a direct result of where we sit on the current technological wave.

Which brings me back to what comes next. To understand what that may be, it is essential to identify the next GPT, which is without a doubt already in our midst. Inevitably it will still be in its infancy, but it holds the potential to interact across disciplines and technologies and, once it has matured, will have multiple use cases. One GPT very likely to do this is nanoscience – the miniaturization of technologies and our ability to understand and interact on molecular and atomic scales. In short, nanoscience is a technology that will shape the future.

Nanoscience is part of a group of emergent technologies that will define the next wave. These technologies, described as BANG technologies (Bits, Atoms, Neurons and Genes), are what will connect us to the future. We have already seen the impact that bits, the life-force of information technology, have had on our society. Atoms, as in nanotechnology, have had less of an impact so far but will revolutionise our relationship with materials. Neurons – our understanding of neuroscience – are already influencing much of our understanding of the human body and of how AI (another high-probability GPT) works. Finally, genes and genetics: a field in which our understanding is growing rapidly, with potentially life-bending effects on our bodies and our planet.

As a futurist, it may seem odd for me to talk about the history of technology. It is an often overlooked discipline, an oxymoronic one in many ways. Yet by looking to the past, we can learn a lot about our current context and, in many ways, predict the future – preparing ourselves and societies for the coming Tech Storm.


Trust is your biggest currency and you should work on it now

2017 was the year of the trust crisis. According to Edelman’s Trust Barometer, there has been a steep decline in people’s trust across all key institutions: business, government, media and NGOs. This scenario has led to the erosion of social values and to an increase in fear, which has set ablaze the embers of populism across some of the world’s most powerful countries. Our trust has, in fact, been betrayed in many ways: think of fake news, and of why we end up reading and believing it in the first place. With our attention spans becoming shorter and shorter, we have started taking our news sources on social networks and elsewhere for granted. Left with scarce time for digging deeper, we merely rely on our own intuition – which, unfortunately, cannot always be right.

Trust is a double-edged sword: it can simplify our lives, yet it can damage them. Think of Amazon’s Alexa, Google Maps or Facebook’s Messenger: they are tools that help us immensely with our daily tasks, providing us with updated news, shopping lists, effective means of communication, driving directions, weather information and so on. In order to use such services, we give up an extremely valuable currency: not money, but data. Google, in particular, attracts a lot of our trust with a simple rewarding mechanism: we ask it anything and we get answers. The trust is so strong that we ask Google questions we would never ask anyone else. As a result, Google crowdsources even our most evil thoughts. As human beings, we lie about our jobs, our relationships, our resumes – the only machine we don’t lie to is Google. The human willingness to tell it our ‘secrets’, to trust it so blindly, has turned it into one of the most powerful predictive machines in the world.

Still, trust is quickly becoming the most valuable currency we have. It is already the basis of today’s internet-powered economy, a blend of the digital and the physical world. The game-changer was eBay, which back in 1995 introduced a trust currency within its system to prove (to itself, and to other users) that you are a trustworthy and credible seller or buyer. Evolved versions of such systems are implemented today by all of the main players of the digital economy – Uber, Lyft, BlaBlaCar, Kickstarter and TaskRabbit, to name a few. These companies are built on a trust score for each user: they use different kinds of trust ‘protocols’ through which our value and reliability are established and validated in front of the community’s eyes. A good example is Airbnb, where many people base their decision to book a place on what other guests have said about the property and the host. The same system works the other way around, with me as the ‘reviewed’ traveler. Airbnb built a “currency” that helps people connect with like-minded people and evaluate them through their profiles.

Trust is therefore becoming a digital currency that we can ‘spend’ on our online (and offline) experiences. Now, a company named Deemly has launched a reputation and social-verification tool for P2P marketplaces and sharing-economy businesses. The tool collects its users’ information from the APIs of different services – especially the marketplaces of the ‘sharing economy’ – and creates an overall score for each person. At the same time, China is creating the Sesame Credit system, which ranks citizens based on a number of factors, aggregating data from Alibaba’s services. These variables include how loyal a given user is to Chinese brands, or whether they support the government in their interactions on social media – those with higher ratings are more likely to get a job, access a loan, or face a shorter waiting time in public offices.
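
A toy version of such score aggregation might look like the following; the service names, scales and weights are entirely invented, and real systems like Deemly's are certainly more sophisticated.

```python
# A toy illustration of aggregating per-service reputation into one overall
# trust score. Service names, scales and weights are invented for demo only.
def overall_trust_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-service scores, each normalised to 0..1."""
    total_weight = sum(weights[s] for s in scores)
    return sum(scores[s] * weights[s] for s in scores) / total_weight

# Each service reports on its own scale, so we normalise before combining.
scores = {"airbnb_host": 4.8 / 5, "ebay_seller": 98 / 100, "blablacar": 4.5 / 5}
weights = {"airbnb_host": 2.0, "ebay_seller": 1.5, "blablacar": 1.0}

print(f"overall trust: {overall_trust_score(scores, weights):.2f}")  # 0..1
```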

The Edelman report states that “to rebuild trust and restore faith in the system, institutions must step outside of their traditional roles and work toward a new, more integrated operating model that puts people at the center of everything they do.” To achieve this goal, we will also have to step up our transparency standards and procedures. In fact, transparency will be one of the biggest costs of the future, an area in which we really need to push things forward so that no governments or industries take advantage of the trust systems to come. We first trusted a calculator to compute a formula; then we trusted a computer; then we trusted a system. As machines become smarter and their predictive powers increase, we will have to come together as human beings and build protocols that play in our favor and protect us from the services we need.

Nothing is certain except taxes and death – but what taxes, and whose death?

Mobility will fracture the very laws and infrastructures that our cities stand upon. How will we pay for the loss of revenue usually sourced through tickets, parking and license tax? How will our laws adapt to the disorder that changes in mobility will bring? How do we make the future of mobility not only sustainable, but also affordable whilst at the same time financing the infrastructure investments needed to avoid gridlock in our cities?

If we are successful, parking space for unused cars will be repurposed into walkways and green spaces. Our cities will become more walkable, and we will embrace a multi-modal mobility world that includes innovative PEVs (Personal Electric Vehicles), which have a smaller footprint and are eco-friendlier. Fleets of robotic taxis will float around our urban areas, governed by laws and ethics that we are still trying to work out. But how will we pay for the loss of revenue from tickets, parking and license taxes? And how will our laws adapt to the disruptive changes the future of mobility will bring?

The fuel tax that finances most countries’ road infrastructure will fade as electric mobility takes off. In its place, road-usage charging models will appear, in which people – and robotic taxis – pay taxes on miles travelled instead of fuel used. As infrastructure has a fixed capacity, and urbanisation continues to grow, the pressure to optimize utilization grows too. This could take the form of dynamic pricing, incentivising people to use the road infrastructure at different times of the day. Instead of going to work at 9:00 am, the system would suggest that you work from home or commute off-peak – helping optimise traffic flow. Tokens issued to citizens could serve as the related incentive mechanism.
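
As a back-of-the-envelope illustration of such a scheme, consider a per-mile charge with time-of-day multipliers. All rates and bands here are invented for the sake of the example, not a real pricing model.

```python
# An illustrative sketch of per-mile road-usage charging with time-of-day
# pricing. Rates, time bands and multipliers are invented for demonstration.
def road_usage_charge(miles: float, hour: int) -> float:
    BASE_RATE = 0.05  # currency units per mile (hypothetical)
    if 7 <= hour < 10 or 16 <= hour < 19:
        multiplier = 2.0   # peak: discourage rush-hour trips
    elif 22 <= hour or hour < 5:
        multiplier = 0.5   # night: reward off-peak use
    else:
        multiplier = 1.0
    return miles * BASE_RATE * multiplier

# Commuting 12 miles at 8 am vs. 11 am:
print(road_usage_charge(12, 8))   # 1.20 at peak
print(road_usage_charge(12, 11))  # 0.60 off-peak
```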

Other methods could be introduced to encourage mobility users to choose sustainable modes of transportation. Where users insist on using their own car, the system should encourage them to share rides to increase occupancy rates: it would be possible to offer them access to priority lanes, or to lower their road-usage tax, if they share their ride with others. Without a doubt, the future of mobility is multi-modal. It is best to think of all of these solutions in a multimodal sense – intelligently connecting different modes of transportation to ensure a seamless transition from one to another.

On top of this, methods for making this change of systems affordable could include the introduction of an unconditional basic income for mobility, wherein each and every citizen would have a minimum budget for access to mobility and transportation. Everyone would be allocated tokens that can be spent on any mobility service, or even traded amongst one another for goods outside of mobility. Tokens could help finance fleets of Personal Electric Vehicles shared amongst community members. Such new ways of financing could also help modernize infrastructure and nurture an understanding of it as something owned and used by the local community. Because of urbanisation, these major investments will need to be made anyway; if we don’t have to build new parking lots and bigger roads for new arrivals, we will already be saving a lot of money that would otherwise be wasted.

Whenever a new technology becomes the norm, it takes a while for laws and regulations to adapt. With mobility, we are looking at uprooting and changing some very basic laws that have been in place for a very long time.

But we will also be wading into completely new territory, with new ethical and legal questions that must be answered. Questions remain as to what level of Artificial Intelligence (AI) system performance will become the industry norm, and as to how we should think about liability, given that the very nature of AI is that of a black box which continuously learns and adapts. When self-driving cars occupy our roads and something inevitably happens, we are still unsure who will ultimately be accountable. The trolley problem – should your AI-enabled self-driving car save its own passenger over the life of a pedestrian (or vice versa)? – is riddled with ethical potholes, even disrupting our idea of the first law of robotics (a robot may not injure a human being or, through inaction, allow a human being to come to harm). MIT now offers an online test (the Moral Machine) that lets users engage in this conversation, showing participants where they stand in relation to the consensus. But before we can get even remotely close to solving these issues, we need to agree on the ethics of it all. Currently, most countries make these decisions by themselves, with little bilateral discussion or consensus.

In any case, the march towards a shared mobility economy will continue, and both urban and legal infrastructures will have to adapt. In the USA today, 90% of all accidents are caused by human error – sadly, we are the bug in the mobility system. Luckily, advancements in machine learning and autonomous vehicles mean that when we surrender our ownership of mobility to a properly thought-out, safer, shared system, we will succeed in making mobility more affordable, efficient and safe, for everyone and for the cities we live in.

What is Direct-to-Consumer?

Supply chains are a seemingly never-ending conveyor belt of transactions. Factories, pallets, warehouses and shop floors form a labyrinth of labor (and that’s not even counting the fleets of vehicles that transport goods from A to B). As they have grown and evolved, supply chains have become encumbered by processes full of middlemen.

Direct-to-Consumer (DTC) can then be viewed as just another symptom plaguing the ailing middleman – a phenomenon we are witnessing across all industries. Today, over-populated and complex supply chains are eating into manufacturers’ revenues, and with revenue growth forever a challenge, more and more manufacturers are experimenting with DTC channels.

What else has caused this shift? As with most change today, the digital transformation of markets has played a huge role, as channels and communication have become ever more streamlined. This, in conjunction with the boom in e-commerce, means that brands can now set up their own online stores more easily, or even work with bespoke delivery partners if they do not have their own fleet to hand.

Leading brands are now increasingly embracing DTC sales; this not only provides full control over their own supply chain, but in turn gives them full control of the brand experience in its entirety. DTC is not unique to B2C, however, and B2B businesses are also capitalising on the trend.

And the trend is set to continue, as over time more and more shareholders become comfortable with the model – customers too, as they begin to see that DTC often delivers the best brand experience. In 2015, one third of customers bought direct.

Last summer, Nike, the world’s leading sports brand, launched its Consumer Direct Offense, a self-described “faster pipeline to serve consumers personally, at scale”. This effort, centred on 12 key cities across 10 key countries, is expected to represent over 80% of Nike’s growth through 2020. In fact, the sports giant announced that its DTC revenue amounted to approximately 9.08 billion U.S. dollars last year.

Similar to DTC is Direct-to-Store Delivery (DSD). DSD, whilst not exactly DTC, is also on the rise; it consists of manufacturer warehouses shipping directly to retail stores, resulting in a closed-loop, multi-stop supply network. This enables retailers to be more responsive to customer needs and store stock levels, leading to a more intuitive customer experience, better cost margins and ownership of a simpler supply network.

Ultimately, DTC and DSD let manufacturers establish a better dialogue with their customers, bringing them closer to their brands and bringing more transparency and control to the relationship. This is possibly the most important aspect of the trend, as closer channels of communication give manufacturers access to customer data that was previously lost in a maze of bureaucracy.

So what is Direct-to-Consumer? It is another nail in the coffin for middleman businesses, a new level of transparency between manufacturers and customers, and another avenue for brands to harvest the new oil – data.

Tips from the e-commerce giant Zalando

We sit down and speak to Zalando’s Managing Director of Lisbon, Marc Lamik, to hear some advice on the state of e-commerce today.

What is going on in e-commerce globally today?

The biggest trend in e-commerce is increased volume, both in terms of customers and of emerging businesses. This is happening all over the world – if you look at Europe, the US and Asia, the numbers are growing steadily, and new technical solutions are changing the way we shop.

One of the most interesting areas to look at right now is customer-centricity, and how consumer demands are in many ways steering the development of online companies. This has completely upgraded the relationship between retail and the consumer, and nowadays the consumer is used to a much higher level of service.

What are the biggest challenges facing Europe in terms of e-commerce?

Generally, there are still a lot of growth opportunities in Europe. There are countries – specifically the Nordics but also Germany or the UK – that have a very high online penetration and thus have heavily adopted e-commerce from an early stage.

Convenience will become more and more important to European consumers, which will in turn challenge the logistics infrastructure. Another area that we will keep paying attention to this year is the connection between online and offline shopping. Connectivity, whether between digital players and their consumers or between suppliers and retailers, is becoming increasingly important in the fashion industry.

Are these trends applicable to industries outside of fashion?

The fashion industry was one of the earliest adopters of e-commerce, but other areas, such as home services and food delivery, are rapidly becoming digitalized. A common trend across all of the above, including Zalando, is the ever greater sophistication and personalization of the customer experience.

Amazon is now offering one-hour delivery. How can smaller companies look to offer the same experience and compete?

If there is one thing that we have learned over the past years it is that one company cannot do everything by themselves. In order to become truly agile one must partner up with innovative companies who are leaders in their field of expertise. For example, there are several logistics companies offering various logistical services that startups can profit from in order to offer the most convenient customer journey possible.

Do you have any examples of last mile delivery startups in Europe that you think are worthy of a mention?

We cooperate with many different delivery startups across Europe. In Belgium, we work with Parcify to offer geolocation delivery; in the Netherlands, with Trunkrs to offer same-day delivery in Amsterdam. Another example of a successful collaboration is the on-demand returns service that we offer together with our French partner Stuart.

Which technologies should consumers be excited about?

It depends on the timeframe. If we’re talking about the next couple of years, then I think we will see a big move into the 3D space, as presaged by Apple’s cameras in the iPhone X. Once this becomes more than a niche feature and is common in the majority of consumer devices, Augmented Reality and 3D will have a huge impact and transform how e-commerce works. But at the present moment it needs to be refined before it can really be a viable technology for consumers to use.

Zalando grew very quickly. How was the company’s culture managed during this acceleration?

There are several things to say about company culture. One of the biggest is to embrace change and, in particular, to understand how the company culture changes as the company grows. A startup will inevitably have a different culture from a company of more than 14,000 employees, but it is important to try to keep the culture as close to that of a startup as possible.

Furthermore, culture needs to be driven largely from the top down. The reason Zalando’s culture has remained so close to how it began is that the founders are still ever present in the company – they are always there as a reference point for what the company was and is about.

On the other hand, culture inevitably relies on the people in the company. You should hire those who fit your intended culture, while also training your leaders in the proper ways of conducting the business. This prevents your culture from splintering in directions that run counter to your initial vision.

Explain Zalando’s concept of Radical Agility.

It is about installing an agile framework – less about day-to-day involvement and more about the overall concept of working together across different teams. It is ultimately a software development methodology that allows engineers to get work done whilst management gets out of the way. Based on three pillars – Autonomy, Mastery and Purpose – Radical Agility is bound together through organizational trust.

Autonomy refers to letting teams decide the best way to achieve their goals, whilst Mastery is focused on making them experts in their craft. Finally, Purpose is about having clearly defined goals. This not only inspires engineers to understand why they are doing what they are doing, but also allows them to measure their success whenever they complete a process.

What is Zalando Research?

Zalando Research is our research lab that’s focussed on what we think will be the most important technologies over the next five or more years. Technologies such as machine learning and artificial intelligence to name a few.

How do you manage the innovation process?

On the one hand, it is part of the culture. On the other, we have changed how the company is organized many times over recent years. One thing we have really embraced is giving employees ownership of what they are working on. We believe this fosters innovation, because if you are responsible for, and really believe in, what you are doing, you can bring in new ideas and solutions – working towards objectives and not just ticking the boxes of a project plan.

Another thing we do at Zalando is our innovation hub, where each employee can pitch their ideas. If an idea is accepted, its owner can receive funding to test it out, and if it proves successful it can become a solidified part of the platform.

Part of your innovation process involves startups outside of Zalando. How do you manage these investments?

We have a very diverse approach. With some, we act as the classic investor, but in other cases, such as with the new ZalandoBuild platform, we integrate startups who offer solutions and functionalities that personalize the customer’s shopping experience and boost inspiration. In this sense, we act more as a partner that helps the startups scale their products by offering them access to our 22 million customers.

Also, in many areas – and this comes back to the Autonomy principle mentioned before – every business unit owner can drive cooperation with startups as well. So we have multiple approaches to innovation.

The algorithmic self

Today, technology has permeated every section of our society. Our daily life is flooded with new applications, and as a species we are rapidly becoming one with our tools. Most of these developments are visible; we see them around us all of the time. But what about the technologies that are not overtly present in our lives?

The ones that affect us far more than anything else? These unseen forces are now an intrinsic part of how nations and businesses function – and to remain in the dark about them is to stay blind to how many predictive algorithms control your future and the potential paths your life can take.

All over the world, predictive algorithms are shaping societies in profound ways. In China, we are witnessing one of the biggest social experiments in history, as 400 million people relocate from villages to cities. In the largest mobilization project ever, China now has to come up with a way to integrate these people into urban society quickly. One way that has been devised is to give these rural citizens a credit score with which they can demonstrate their potential to new employers and schools. Many of these citizens are completely off the record when it comes to banking, education and other indicators of an individual’s merits, so the authorities have no real means of assessing their capabilities.

What this situation has resulted in is a complete rethink of credit scoring: China has opted for a social scoring system instead. The leading project in this area is Sesame Credit, a social credit scoring system being developed by Ant Financial Services Group, an affiliate of the Chinese Alibaba Group and associate of the Chinese government.

So how does it work? Initially, the platform establishes who your friends and family are in order to set a baseline for your score (between 300 and 900) and map your social graph. A person’s behavior then impacts their score: by utilizing citizens’ phones and other platforms, the government can ascertain what an individual is up to. Does the individual buy baby food and provide for his or her family? Their score goes up. Do they spend ten hours a day playing video games? Their score goes down. Did they criticize the government online? Then their score takes a nosedive.
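
The mechanics can be caricatured in a few lines of code; the events and point values below are invented, since the actual formula is not public.

```python
# A deliberately crude caricature of behaviour-based score updates as
# described above. Event names and point values are invented; the real
# Sesame Credit formula is not disclosed.
SCORE_MIN, SCORE_MAX = 300, 900

EVENT_POINTS = {
    "buys_baby_food": +10,
    "plays_games_10h": -15,
    "criticizes_government": -100,
}

def update_score(score: int, event: str) -> int:
    score += EVENT_POINTS.get(event, 0)
    return max(SCORE_MIN, min(SCORE_MAX, score))  # clamp to the 300-900 band

score = 600
for event in ["buys_baby_food", "plays_games_10h", "criticizes_government"]:
    score = update_score(score, event)
    print(event, "->", score)
```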

One of the most interesting aspects of this system is that although it began as a simple social scoring experiment, once users realized they had scores, they began to share them. The population has completely bought into the idea of social scoring. This new system, not yet installed nationwide, has four different tiers that will define a substantial amount of an individual’s social mobility. If you find yourself in the top tier, you can apply for governmental positions; if you are in the fourth, you cannot even obtain a passport to travel out of your region.

Businesses, too, have embraced the idea: one dating website in China has offered free membership to users with a score of 750 or above, while an employment website has stated that if you have a score of less than 600, it will not consider you suitable for the jobs it has on offer. Others have used it in promotional campaigns, with train companies stating that you can only apply for first-class tickets with a score of 700 or more, whilst a hotel chain has offered deposit-free bookings for anyone with a score of 700 or higher.

Right now, China has only implemented the idea in one region, but a government paper states that it aims to make it a nationwide phenomenon by 2020. Initially the system will be transparent, but eventually citizens will not be able to see their scores or learn what is affecting them and how. The system echoes the caste and class systems seen across the world, and could end up being just as difficult for individuals to climb out of.

This all seems very suspect, and we might be thankful that our governments and businesses aren’t experimenting with our prospects in the same way. What many don’t know, however, is that they are: we already have very similar algorithms in place across Western society. But unlike the Chinese, we know next to nothing about what these forces are for and who is designing them.

This raises a multitude of issues. The first: what information is used, and who gave permission to take and use it? The second comes with what I call the distorted algorithm: who is actually building these algorithms, and do they have an agenda? Can they, for example, be racist without even knowing it? And a third issue: what if someone is scored incorrectly? Is there anyone who can be held accountable? Can the score be rectified? Are there any systems in place at all?

There is the counter-argument that this is no different from the standardised rankings that already permeate our society. Our education, online presence and businesses are all scored through different means, so what should we fear from a score that is the culmination of these?

As much as that may be the case, we as a society should still have the right to regulate how these background algorithms affect our lives. Currently we do not, and it is impossible to know whether we have missed out on jobs, health insurance or even relationships thanks to these unseen forces. This is a discussion that must be had, as more and more, our algorithmic selves decide our lives for us.