
Exploring the learnscape

Innovation in education has become a far bigger talking point than it once was, featuring in daily and political discourse more than ever before. People in business, education, and society at large are now asking what we want from education, whether a more balanced and rounded sort or something different entirely. Yet, as with most things, turning vision or talk into action is difficult – and when it comes to innovation in education, the story is no different.

Nowadays everyone, everywhere, agrees that innovation is important. But what is innovation in education? What feeds it? What shapes it? What prevents it? Education is one of the sectors with the highest levels of public investment in OECD countries, where an average of 6% of GDP is spent on education. One would expect such a sector to invest a lot in innovation and continuous improvement, yet there is little conscious investment in this agenda. Is funding, then, a prerequisite for innovation? Many would argue yes, but few would say that innovation springs entirely from investment.

In fact, one of the reasons why higher expenditure fails to correlate with increased levels of innovation is that in education it is mainly devoted to staff costs, leaving little room for discretionary or unusual expenses. A lot of innovation simply comes from policy reform. New ministers arrive at a relatively fast pace with new, often worthwhile, purposes and mandate changes which result in innovation – paradoxically leading to innovation fatigue among the teachers on the ground who, at the end of the day, are tasked with implementing them. There is simply not enough time for teachers to get to grips with one set of reforms before another wave hits. This leaves little space for the more usual innovation policy, which consists of empowering actors in a system so that they generate positive change bottom-up.

Today you would be hard-pressed to point to one country and say that it is getting it really right. But there are countries that are getting things right in many ways. In some places, the idea of innovation is well articulated – in France, for example – even though they often lack the resources to achieve their better-defined goals. In the US, there is a strong innovation drive which is more focused on technology and an entrepreneurial mindset – yet it remains to be seen whether this is leading to any actual change. Unfortunately, we do not have much domestic data on what is changing or not, which makes it rather difficult for countries to know whether they are on the track they want, not to mention whether they are on a good track.

One of the difficulties in measuring innovation at the most fundamental level is that innovation can happen anywhere – while we cannot measure everything. We thus have to be clear about the area of innovation we are looking at. At the OECD, what we measured as innovation in education is the magnitude of change in over 150 educational practices over a decade, mainly teaching and learning practices, teacher professional development practices, and relationships with stakeholders. (Other areas of innovation could have been new product adoption, changes in administrative procedures, or new forms of work organization, for example, but we did not have enough international information about those.) We also grouped the practices into broader categories, which include the likes of active learning in science, learning through memorization, students learning without teacher input, and homework or assessment practices. A few others are more teacher-focused, such as peer-to-peer learning and teacher training.

By analyzing the state of educational practice in 2016 compared to 2006 through international surveys, we were able to document a moderate change in the overall educational practices students were exposed to – a noticeable change that is not negligible, but that is also not large or spectacular on average. For example, between 2006 and 2016, the share of secondary students who frequently practiced their math skills on computers went up from 8% to 31% in the average OECD country – this, we would argue, is an innovation in their educational experience, as what was a marginal practice has significantly scaled up. The same holds when there is a contraction: for example, when the share of students having access to a tablet during their reading lessons drops from 83% to 51%, as was the case on average. Systemic innovation is the emergence of something new or significantly different from before.
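As a purely illustrative sketch (the figures below, apart from the computer-practice example quoted above, are invented, and this is not the OECD's actual methodology or code), the magnitude of change can be thought of as the average absolute shift in the share of students exposed to each practice between two survey waves:

```python
# Hypothetical sketch: quantify "innovation" as the average absolute change
# (in percentage points) in the share of students exposed to each practice
# between two survey waves. Only the computer-practice figures (8% -> 31%)
# come from the text; the rest are made up for illustration.

practices_2006 = {"math_on_computers": 8, "memorization": 40, "peer_learning": 25}
practices_2016 = {"math_on_computers": 31, "memorization": 35, "peer_learning": 44}

def innovation_index(before, after):
    """Mean absolute change in practice prevalence, in percentage points."""
    changes = [abs(after[p] - before[p]) for p in before]
    return sum(changes) / len(changes)

print(innovation_index(practices_2006, practices_2016))  # ~15.7 points here
```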

So where have we seen the most innovation? The role of technology in the classroom is perhaps the most obvious area – due to the more frequent use of IT in class (in spite of a small decrease in access to IT equipment). But perhaps more surprising is the increase in peer-to-peer learning between teachers everywhere. Teachers have become more collaborative of late, and we are witnessing teachers actually talking to each other and comparing ideas and teaching methods. Innovation is often driven by collaboration and the strength of learning processes within a sector, just as much as by money and governmental oversight.

This is because issues are solved faster through such collaboration; two brains are better than one, after all. Classic teacher training is often not as effective or useful in the classroom because of the time disconnect between formal training and practice. Teachers are therefore more likely to get practical, hands-on learning about how to teach in the moment, or through discussions with peers who are in a similar position. We think this change correlates with the adoption of digital tools, which allow for faster exchanges of information between individuals. There has been a strong push in educational policy of late toward growing professional learning communities and encouraging this collaboration.

One of the difficulties is the lack of investment in knowledge dissemination, in evaluation and in research. Practices tend to get adopted not always because their success is likely, but because they are the ones teachers know how to implement (or the ones some politicians or parents think were helpful in their own education). The big challenge is for teachers to expand their teaching portfolio, and it is no small task – mastering new teaching techniques sometimes means being less controlling, taking more risks, and getting out of one's comfort zone.

Personally, I do not believe that what is taught in schools is as important as it is made out to be. What is important is how things are taught. I would be content with an educational system that rebalanced the transmission of knowledge content (technical skills across different domains) with higher-order skills and social and emotional skills. One of the problems of schools today is that they, perhaps ironically, are teaching too much subject-wise. The schools I would want are more reflective and more creative, and as this takes time, they should probably drop some of the content being taught and teach more deeply. However, people have been saying this for decades, and getting anything removed from a curriculum is very difficult, as teachers can argue convincingly that everything in their discipline is important.

This is not me advocating for less knowledge; on the contrary, I am advocating deeper knowledge. Some people say we no longer need to know anything because we can google it: I find this outright bizarre – after all, how do we know what questions to ask if we know nothing? Learning something gives us confidence in ourselves and in our ability to learn other things and grow, mental milestones of our experience on earth. While acknowledging the importance of personalized, tailored education, we need to keep in mind that schools are social institutions, and a curriculum provides students with a common basis for being part of a common society. This common understanding is the foundation of mutual understanding and collaboration, and a cornerstone of the functioning of society.

Nowadays, if you do not say the words artificial intelligence (AI) in your sentence, you risk losing people's attention. This is another topic we are working on. In higher education, one of the biggest innovations we are anticipating is the implementation of early warning systems, which monitor students' progression and alert teachers to the possibility that, to name a couple of examples, they are not engaging with their studies or are at risk of dropping out – reducing the disconnect between student and teacher. An AI agent could then help faculty interpret the data.
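As a purely illustrative sketch (the thresholds, fields and rules below are invented, not drawn from any real early warning system), such a system might boil down to a set of simple checks over student activity data, with the alerts then handed to teachers or an AI assistant for interpretation:

```python
# Hypothetical early-warning sketch: flag students who may be disengaging
# or at risk of dropping out. All fields and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class StudentRecord:
    name: str
    logins_last_30_days: int   # activity on the learning platform
    assignments_missed: int
    average_grade: float       # 0-100 scale

def warning_flags(s: StudentRecord) -> list:
    flags = []
    if s.logins_last_30_days < 4:
        flags.append("low engagement with online materials")
    if s.assignments_missed >= 2:
        flags.append("missed assignments")
    if s.average_grade < 55:
        flags.append("grades suggest risk of dropping out")
    return flags

student = StudentRecord("A. Rossi", logins_last_30_days=2,
                        assignments_missed=3, average_grade=48)
for flag in warning_flags(student):
    print(f"Alert for {student.name}: {flag}")
```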

This is a very good thing, because data is everything today and more of it is exactly what innovation in education needs. Currently, when it comes to good, actionable data, education is woefully lacking – in spite of being a sector that produces a lot of data. Without better capture and analysis of data, we are less capable of reflecting on what does and does not work in education – handicapped in our ability to make informed changes to practice, or to target funding accordingly.

Innovation in education is a multifaceted affair. With better data, including on innovation itself, we can finally put to rest the question of what is needed, adapt our curricula accordingly, prove our theories wrong or right, and make sure education continues to improve so that students are best equipped to deal with an uncertain future. What we must remember throughout is that education must also remain meaningful and enjoyable, and that students should learn more than just facts and techniques. They also have to play with ideas, understand good intentions and each other.

New ways of thinking, learning, and driving change

A robust corporate learning strategy focuses on deep expertise development, collaboration, knowledge sharing, and the continuous reinforcement of expertise as central to success.

When there is no formal training at all, managers and staff tend to coach each other to try to do their jobs more effectively. This form of organizational learning can be effective, but it doesn't scale well and depends on the skills of an organization's senior people. When managers either tire of training people themselves or find they can no longer develop people well enough, professional trainers are needed. In more advanced cases, a company builds a “corporate university” (and these come in many shapes and sizes) and professional training begins.

What do companies such as Nokia, which lost a fifth of its market share in one year, and the search engines that lost the battle for the search market to Google, have in common? These companies didn't just fail to innovate. They failed to learn. Innovation comes second to learning, and companies that fail to learn and fail to innovate can often look to their organizational culture and find the culprit – such cultures did not tolerate mistakes.

People who invent and innovate must not only be very capable technically; they must have the freedom to learn and share what they’ve learned in an open environment. “Capability building” includes creating a management culture which is open to mistakes, building trust, giving people time to reflect, and creating a value system around learning. Companies that adopt these leading practices in learning culture significantly outperform their peers in innovation, customer service, and profitability.

Even though US businesses spend more than $60 billion a year on employee development, some executives question the return on that investment. This may be because many are spending the money on the wrong type of training, when it is digital skills they should be concentrating their efforts on.

Digital skills are now the fourth key competency, alongside reading, writing, and arithmetic. These skills are instilled through digital learning: any type of learning that is facilitated by technology, or by instructional practice that makes the most effective use of technology.

So what are the secrets behind digital learning? Personalization is undoubtedly one of them. Individuals can customize their learning, from where they study to what device they use to which content they access. Custom e-learning technologies bring a modern education to people who cannot learn in traditional educational institutions due to scheduling constraints.

Web-based software is another part of this learning method, enabling the use of educational and organizational learning services without leaving home. Chat-based collaboration platforms improve interaction between students. Competency-based education allows educational programs to be built around the individual abilities of a particular student. These tools are powered by technologies such as artificial intelligence (AI), and their potential gives grounds for optimism about even greater prospects.

More than 41.7 percent of global companies already use some form of technology to train their employees. This number is expected to grow exponentially in the future, as the e-learning industry has seen 900 percent growth since the year 2000 and 42 percent of companies have witnessed an increase in revenue through e-learning.

E-learning solutions are also more of a visual affair, and surveys have shown that 40 percent of learners respond better to visual information than to text alone. This is most likely for the same reasons we do not dream in text or words: words are relatively new to the human race, and we comprehend and memorize information better in visual form. So when learning is augmented with gamification, scenario-based video training, webinars and simulations, we are better at recollecting what we have learnt. It should therefore come as no surprise that the industry's growth is not expected to stop: the market is estimated to grow by five percent between 2016 and 2023, exceeding $240 billion in total.

Another bonus worth mentioning is that learning digitally comes naturally to many of its students, as it often builds on what they already know – in most cases, they have been using these skills their entire lives. For example, in the marketing sphere, social media is becoming a more prominent advertising tool. Approximately 2.5 billion people use some type of social media in their everyday lives, and so already have an idea, through personal experience, of which posts gain better traction.

We should not, however, be looking only toward new ways of learning, but also toward new ways of thinking. This is a lesson that companies must learn if they are to thrive in the next period of innovation. The human species is, after all, a smorgasbord of variety, and our mental capacities and intricacies are a testament to this. Mental diversity, also known as neurodiversity, is an important reflection of this reality and can be a positive catalyst for any business. People who process information differently and are wired differently are simply better prepared to bring new ideas to the table.

This is even more paramount today as the workplace becomes increasingly competitive. In such an environment, approaching problems and work tasks from a different angle can prove to be a huge advantage: neurodiversity, therefore, could hold the keys to the kingdom in that it may support the skills business leaders need for the workplace of tomorrow. Lateral thinking is responsible for creating some of the world’s most renowned inventions, brands and art.

Take Sir Richard Branson, one of the world's most renowned businessmen. He recently divulged his thoughts on his own neurodiversity, and on a 'condition' I too share: “I think dyslexia helped me on the creative side with simplifying things,” he states. “When Virgin advertises, for example, we would not use jargon or things that people didn't understand. I think it has influenced the way I talk to people and communicate in articles. I keep things simple. That helped build the Virgin brand and people identified with it much more as a result.”

Branson goes on to mention how even the British intelligence and security organization GCHQ knows this and actively seeks out people with dyslexia because of their unique outlook and their natural ability to connect the dots like few others can: “We need to change how the world perceives dyslexia so that it really understands it properly and dyslexics are nurtured and encouraged to focus on their strengths.”

Individuals with dyslexia are by their very nature 'hard-wired' to fill a gap in organizations for creative, different thinkers who can make sense of the rapid change and disruption we are experiencing. I am not the first to notice this. The American journalist Harvey Blume wrote in The Atlantic in 1998 that “Neurodiversity may be every bit as crucial for the human race as biodiversity is for life in general. Who can say what form of wiring will prove best at any given moment?” I would argue that no-one can.

Before the Industrial Revolution, most of the world worked in agriculture. With the invention of mechanized tools, the world suddenly shifted from farm to factory. Today, technology and AI are heralding that same level of change and disruption, and job types and skills are shifting rapidly. Digitalization does not have to mean fewer jobs, simply different jobs. In this sense, companies must adopt new practices, such as e-learning, which as we have seen are already proving themselves capable of providing the skills needed by the workforce of tomorrow.

However, at the same time, corporations need to remain open to all possibilities when it comes to educating themselves and their talent, as sometimes it takes an outside-the-box thinker to light up the way. Dyslexic and neurologically diverse minds are just that, and are without a shadow of a doubt capable of doing so. It has, after all, always been the combination of the technological with the human that heralds the most success, and this new age of innovation will be no different.

What is Boston Dynamics?

There are very few topics that tickle the human imagination as much as robots do, and they have been doing it for quite a while now: the idea of mechanical beings dates back to Greek times, with the myth of Pygmalion, a sculptor who falls in love with a statue that eventually comes to life and becomes his lover. The idea of an 'artificial human', as paradoxical and fascinating as it is, had long been a purely philosophical matter. That is, until 1955, the year that marks the very first AI programs. After the 'AI winter', a long halt between the '70s and the late '90s, advances in this technology suddenly boomed, and so did the presence of robots in series, books, video games and YouTube videos. However, when it comes to envisioning a world where humans and robots live together, there has never been room for bright futures: this has not changed (and probably will not). What has changed since the first experiments is that the robots we imagined for so long have now jumped out of the book pages and cinema screens and are right in front of our eyes – and yes, they are as we expected them to be, if not more so.

One of the most advanced robotics companies in the world, for example, went viral for videos showing the performances of the robots it manufactures: they can open doors, perform warehouse workers' tasks, dance to a Bruno Mars song, do parkour. And they look so eerie that they have been labeled “creepy”, “nightmare inducing”, “nimble”, “terrifying” – the list goes on. What is creepier than the robots themselves is the history behind the American company that produces them, Boston Dynamics: a story where the U.S. Army and the most powerful tech companies merge, all mixed together with a sprinkle of secrecy and internet conspiracy. So, what is Boston Dynamics? To answer this question, let's go back to the very beginning: in 1992, the company spun off from a Massachusetts Institute of Technology lab. The founder, Marc Raibert, had joined MIT in 1986 as an Electrical Engineering and Computer Science professor, after a career that included NASA's Jet Propulsion Laboratory and Carnegie Mellon University's Department of Computer Science and Robotics. The idea of founding Boston Dynamics came precisely from his experience at MIT's 'Leg Laboratory', a department specifically devoted, in its own words, to the 'exploration of active balance and dynamics in legged systems' and aimed at building cutting-edge legged robots. It is a very layered branch of robotics, combining several complex factors: perception software, electronics, robotic intelligence and balance, among others.

Since its early days, the company has operated in the military sector, along with the industrial and rescue ones. One of Boston Dynamics' first and best-known projects, in fact, was to create interactive 3D computer simulations to replace naval training videos for aircraft launch operations. The final client was, through a Naval Air Warfare Center Training Systems Division (NAWCTSD) contract, the American Systems Corporation. The collaboration with an American government organization would be the first in a long series. In fact, after its first two advanced machines – RHex, a six-legged robot that can 'traverse different scenarios such as rock fields, mud, vegetation, railroad tracks, and stairways', and SandFlea, a four-wheeled robot able to leap 10 meters into the air to clear obstacles – the company released BigDog, the infamous robot which inspired the “Metalhead” Black Mirror episode and was the first of a series of projects to be financed by the Defense Advanced Research Projects Agency (DARPA). The aim of the project was to give the US Army a robotic pack mule to 'accompany soldiers in terrain too rough for conventional vehicles'. Even though the BigDog project was dismissed in 2015 and never reached deployment for being, alas, too noisy for the battlefield, the news of the US Army–Boston Dynamics collaboration had already spread, and many were worried by the idea of one of the world's most powerful armies investing so heavily in advanced robotics. Things got worse in 2013 when, between the start of the BigDog project and its final dismissal, Boston Dynamics was bought by Google X, now simply X, a secretive company owned by the Silicon Valley giant. This, more than the videos of the company's robots' creepy performances, is what truly went viral and made people worry.

Why would Google, a corporation already powerful enough, buy one of the most advanced robotics companies in the world – one which, incidentally, was investing most of its energies in the military sector? Where did all of Google's interest in robots come from? With Boston Dynamics being, back then, the company's eighth acquisition in robotics, it seemed clear that Google had a serious interest in the sector. As the ongoing contracts with military organizations were seen as highly worrying, Google made a promise: to honour the contracts already in place, but to dismiss any military project once the preexisting ones were over, and to focus on the industrial sector only from there on out. The company's goal, in fact, was not to enter the slippery and risky sector of war, but to make robots a marketable, everyday product instead.

In the year of Google's acquisition, Boston Dynamics had quite a varied portfolio of products. Cheetah, a robot released one year before and developed for DARPA, is one of the fastest robots on Earth, with a running speed of 28.3 mph (its newest version, Mini Cheetah, which made its debut in early 2019, is the first four-legged robot ever to do a backflip); the Legged Squad Support System (LS3), a better, militarized version of BigDog, once again developed for DARPA; Wildcat, a free-running version of Cheetah; and Atlas, unveiled in 2013, a humanoid robot that can run, avoid obstacles and handle objects. Speculation about what Google really wanted to do with Boston Dynamics will remain mere speculation: after just four years, the company would be back on the market.

Here again, the reason is unclear, but Bloomberg reported tensions between the two companies and blamed them on the low profits Boston Dynamics could generate. On top of this, some argue that Google lacked vision in its robotics investments after Andy Rubin, creator of Android and head of the robotics division, left the company amid allegations of sexual misconduct.

What matters is that, although Google is still investing in robotics, it pulled a U-turn and sold all of its robotics companies, including Boston Dynamics, over the same period. In 2017, Boston Dynamics was sold to the Japanese tech giant SoftBank, and in the same year as the new acquisition it unveiled a new version of its quadrupedal robot, called SpotMini: a very compact, four-legged and headless robot capable of smooth and complex movements. It can open doors – and you had better not try to stop it if it has decided to – and pull trucks, too. Twenty-seven years into the business, 2019 is the year Boston Dynamics finally seems ready to go out into the real world. SpotMini will be the first robot the company has ever commercialized: the aim is to have it deployed not only in factory plants and offices, but in homes, too.

SpotMini will be the quietest robot Boston Dynamics has ever produced, but to actually enter people's lives its products have to do much more than open doors. That is why, this year, the company bought Kinema Systems, a US-based startup specialized in 3D vision solutions for industrial robots, acquiring substantial in-house expertise in robot manipulation. At the end of March 2019, the collaboration showed its first results with the unveiling of Handle, a prototype robot meant to be used in warehouses and factories.

With the military sector seemingly set aside for the moment, the company appears ready to have its automated machines enter the normal world. What's creepier than having a Google-owned advanced robotics company work with the US Army? Having the distant relatives of these things inside our homes. Opening our doors to the uncanny valley.

Not everyone is creative (and that’s fine)

Creativity is like magic – it is hocus-pocus: creating something out of nothing. One moment the world is without; then there is poetry, music, and art within. As a composer of classical music, I have often wondered, 'Where do these things come from?', and have decided that it is something you are pre-programmed to be able to do. To a creative person, the act of creativity is not creative, but the norm. If the entire world were creative, as seems to be the current mission of many today, then this too would be the norm: an oxymoronic existence that cannot be described as creative.

Today, many envision and hope for a world that, following automation, will be characterised by everyone playing on the beach and writing poetry all day. This is a utopia that borders on dystopia for creativity. Why they view those who lack creativity with such disdain is unclear – the artist has no spite for his or her audience, so why is the audience today showing such hatred for itself? The fact of the matter is that not everyone can be creative, as that would signal conformity – the antithesis of creativity. It would be as if everyone were good at sports: competition would lose all meaning.

We are all born with certain creative talents. But even if we don't necessarily have the talent for creativity per se, we can still manage to be creative. The how is rather simple: just don't be predictable. Changing your routine and ensuring that you are doing something different from what has come before is all that is needed. “It's how it has always been done” must be the most asinine and anti-creative statement imaginable; don't ever use it.

In the same vein, being creative is not only related to the arts: an accountant can be creative. An individual can be creative in their day-to-day life. It is possible to be creative anywhere. It simply depends on individual choices and the way a person thinks – their mindset. If an individual tries to organise their thoughts and ideas in a different way, even if they are not particularly artistic (the way they organise their office, how they work with others, and so on), that is creative in its own way.

Fortunately for those who are not creative, intelligence and creativity have little to do with one another, as they come from different sides of the brain. I know some great artists who really are not the smartest people, and some very smart people who lack even a sliver of artistic talent. It's the balance of life, and it is something to respect and enjoy, not fight.

Unfortunately, however, it is something we do fight. This is because the media pushes people to think that if they are not creative, they are worthless. To them, it's great to be creative. It's colourful to be creative. Because no-one wants to live a grey life, doing the same things over and over again until they die. Our consumer culture today emphasises that we need to make our lives more interesting – it is the aim of the game today – and if you don't, you're grey. I once saw the word creative used to advertise a burger, and more recently I heard the word emotion in an advertisement for toilet paper. Both terms are being used so flippantly that they have been watered down to irrelevancy. Creativity for all equates to creativity for no-one. In this superficial environment, it becomes impossible to discern when someone is being genuinely creative, or whether they are simply trying to sell you plastic things.

But we do not need things to be colourful. We can be 'grey' and live rich lives raising families or tending to a garden. In fact, being creative often comes with a lot of colourful baggage, such as mental illness or addiction, which any sane person would gladly trade for the grey life. Sadly, this life is a reality that is not glamorised much in the media, as it lacks the 'X-Factor'. The result is that it is no longer enough to be part of a society with audiences and artists. Instead, at best nobody wants to be excluded from the consumer tribe; at worst, they want to lead it.

This is because the world is getting faster and faster, and people have to keep up else they get left behind. When some inevitably do, resentment grows. Social media is a clear reflection of this and is a cesspool of resentment, hatred, and spite – being used more as a means for people to inject their venom into wider society rather than connect through happy dialogue.

The cognitive scientist Steven Pinker wrote that the human brain is only meant to absorb so much information over a certain period of time. Important technological advances in the past arrived at different periods throughout history, at a pace that made them more digestible for our mindsets, societies and behaviours. Take a man from the 1920s and compare him to one from the 1950s: in most ways, they were one and the same, using much the same technologies and expecting much the same from life. Compare a man from the 1950s to one of today, however, and you are describing an entirely different creature – it would be impossible for the man from the past to catch up with all the nuances that technology has bestowed on us since, and vice versa.

This newfound speed is why people are more into John Grisham than James Joyce. The former is bite-sized; the latter requires repeated reading in order to gain a better understanding. Mendelssohn took 13 years to write his first symphony; today he would be laughed at for suggesting such a thing. That is true creativity. Instead, what we have today is a classic case of quantity over quality: people prefer the prepackaged product, and creativity suffers as a result.

So no, not everyone can be creative – and that's fine. Just as everyone cannot live forever, else there would be no joy in living. Being an audience rather than an actor in relation to creativity is nothing to be ashamed of – one cannot exist without the other. The world would do well to remember that.

Which direction will AI take?

2018 witnessed AI being deployed at an astonishing speed and scale in many use cases, from facial recognition to language-based human/computer interfaces. In 2018 we also saw the first large-scale failures, from the Facebook–Cambridge Analytica scandal to the controversy around tech giants selling image technology for enhancing drone strikes and facial recognition technology for surveillance, despite studies showing high error rates in these systems for dark-skinned minorities. In 2018 we also saw a Tesla crash on autopilot, killing the driver, and a self-driving Uber crash, killing a pedestrian.

These high-profile failures have helped remind us that if AI is shoddily built and wielded in haste, the consequences will affect many human lives. They were the first wake-up call for technologists, policymakers, and the public to take active responsibility for creating, applying, and regulating artificial intelligence ethically.

The first positive consequence has been that AI bias, once a little-known concept, is now a well-recognized term and top-of-mind for the research community, which has begun developing new algorithms for detecting and mitigating it. This is not an easy feat, but it is a problem that must be addressed if we want to trust autonomous systems with life-and-death decisions in fields like radiology, credit scoring and crime prevention.
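As one small, hedged example of what detecting bias can mean in practice (a generic demographic-parity check with invented data, not any specific algorithm from that research), one can compare a model's rate of positive decisions across groups:

```python
# Minimal sketch of a demographic-parity check: compare the rate of positive
# decisions (e.g. loans approved) across two groups. The data is invented.

def positive_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # 1 = approved, 0 = rejected
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Approval rate gap between groups: {gap:.0%}")
if abs(gap) > 0.10:   # illustrative threshold, not a legal or scientific standard
    print("Potential disparate impact: investigate features and training data.")
```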

The second positive consequence has been a deeper engagement of social activists, lawyers, and academics with AI. AI is a very technical topic, but technologists cannot be left alone to plot its future. We need a collective, multi-faceted effort to educate the public and policymakers and to elevate the quality of the regulatory debate.

This attention to AI has led to the third positive consequence: companies have begun hiring ethical AI officers and establishing codes and processes for evaluating AI projects, and countries like Canada and France, as well as supranational bodies like the European Commission, have defined agendas for the global AI ethics discussion.

More attention to AI ethics and to the consequences of deploying autonomous systems is very good news both for the field of AI and for society as a whole, because the sooner we realize we are facing tough problems, the sooner we can start solving them. And we are not thinking of the existential threat that, according to some very important people like Elon Musk and Bill Gates, a general artificial intelligence (or super-intelligence) may pose in the future, but of the very concrete and potentially life-threatening risks that current narrow AI systems can already pose today.

Where art thou, my brand?

I got into communication at an early age and was lucky enough to enter this world through one of its key elements: brand identity. As a young and hungry designer, I got to experience very exciting and innovative moments in the industry, in which brands such as Benetton and Aprilia went global thanks to entrepreneurial and communication intuitions as simple as being able to access information in a completely new way. In the previous decade, it had been all about making the message as brief as one could: back then, everything depended on the 30-second, or few-centimeter, limit of an ad. In the '90s, instead, it was about conceiving and designing signs and systems meant to last for decades, if not for a company's whole lifecycle.

The number one lesson you learn from the multifaceted branding process is how crucial time is: the time needed to define the message that is to be conveyed, the time that is necessary to shape that message in a way that is both tangible and digestible, but more than anything, the time and durability of the brand. Designing and following the evolution of a corporate identity implies being knowledgeable, having patience and following set rules – all elements that seem to be put at risk by the fast pace of today’s media industry. Just like in any revolution, change can disorient, reshuffle hierarchies, introduce new leaders and players – but there’s one thing we can be sure of: its rhythm is set only by those able to decipher its founding elements.

Twenty years later, after the new economy, social media and the various innovation trends that followed disrupted our lives, what role do contemporary brands play? What was unaffected by their growth process and who is most capable of reading this new scenario? In an attempt to understand this evolution we must consider two main facets: on one side, what changed, and, on the other, what is the meaning of ‘brand’, today.

From Logo to Asset

The digital revolution has given the concept of 'brand' a crucial position in the current business market. Today, virtual and immaterial aspects are more important than ever, while the competitive field has become highly fragmented and catching the consumer's attention is increasingly a matter of recognition. That is why a brand today is the main asset of a business: in all its different facets, it is the most tangible presence of a company in a consumer's life. It is not just a distinctive mark, nor just the quality guarantee of a product or service: it is an actual, perceived entry point, a determining factor for a company to be credible in the eyes of the client and to guide choices which are made in an increasingly short time-span and are often driven not by objective assessments but rather by emotions and feelings.

From Mission to Promise

Given that brands have become industrial assets, the revolution we’re witnessing offers a broader opportunity for companies to hold greater importance in people’s lives. Smart services, Internet of Things and the data-driven market give brands the unique chance to take on the role of enablers, problem solvers and life partners in the daily quest of the consumer for a satisfactory personal dimension. That’s why it would be an understatement to talk about a “mission”: in this scenario, a brand becomes a declaration of intent, because it affects our lives as never before, both as individuals and as communities. Contemporary branding has become, in this way, more than simply visual identity, it’s about the complex designing of a role that is, indeed, made up of signs, but also of styles, content and interaction, all enclosed within a promise: that of going somewhere together.

From Product to Playground

The typical boundaries that used to define the moments of interaction between a brand and its clients are now giving way to ongoing experiences, where what matters is how strong the connection between the brand and its followers is. That is why experiences, which make the brand-consumer connection deeper and more memorable while also helping to forge a community, are so crucial. It is an approach where both physical and virtual spaces host moments of entertainment and of utility concurrently, where products and services are access keys to exclusive memberships, and where brand loyalty implies recognition and gratification.

That said, developing a relevant identity in today's market is a highly complex process that requires a multidisciplinary approach, but it is essentially ascribable to two macro-elements: brand as a platform, and brand as a screenplay. The concept of brand as a platform is linked to a brand's own anatomy and to the necessary harmonization of the different touchpoints that make up the modern, complex ecosystem of relations between a company and its consumers, but also to the very notion of corporate identity. Today, the platform is the paradigm to follow. Tech-wise, this implies building an infrastructure capable of distributing content, activating interaction points and managing the relationship with the audience, from the totality down to the most niche, representative clusters, and even to one-to-one relations. Today, brand communication can be conveyed anywhere, from traditional mainstream channels to new global platforms such as Netflix or Amazon, from pop testimonials to micro-influencers, from esports sponsorships to new crypto. It is essential that such dynamics keep the same efficiency regardless of place, time and device: the more effective they are, the more the challenge of fragmentation will be overcome and the so-called “one audience” won back.

Strategically speaking, the idea of brands as platforms means being in control of all the wires that connect different players and touchpoints, content and timings of consumption. This makes it possible to have a deep understanding and real-time monitoring of commercial mechanisms. But, most of all, of how the needs and interests of those who are not just consumers anymore, but members of a community, are evolving.

Brand as a Screenplay

A second, huge revolution in the contemporary conception of brands is that of providing this ecosystem with consistent, relevant and in-depth storytelling. It is true that the communication and advertising business has always had this approach, and big companies have indelibly entered our lives thanks to effective operations made of metaphors, heroes, and seriality – but the digital transformation has, over the last 20 years, deeply transformed the rules of the game and, along with them, the strategies to put in place to stay on top of that very game.

Today's communication process is much more similar to that of the television and film industry, where subject and script are the backbone of any storytelling you build on top, no matter what special effects it will be enriched with. From a practical point of view, what does this mean? If until the '90s corporate identity was formally synthesized (shapes, colors, application rules and all) into a “Brand Manual”, today its shaping is regulated by a multilayered approach. Visual and verbal communication are now deeply affected by the technicalities of the different channels they are transmitted over, integrating editorial guidelines and software parameters into the definition of a recognizable aesthetic. But the real narrative twist that transforms a brand and its platform into a powerful ecosystem is the presence of a script (both vertical and horizontal) that reflects the story of an organization and evolves with it, able to represent the industrial goal in the market on one side and the virtuous impact on its reference community on the other.

This process leads traditional advertisements to turn into branded content, which is now actually more content-driven than brand-driven. Companies build a very personal relationship with their clients, and they do so through strategic planning and synergistic actions, using human intelligence and creativity on one side to stir emotions, and technology on the other to create a functional proximity that, as said, turns a contemporary brand into a life partner. For those familiar with the concept of the “writers' room”, the space where authors and screenwriters shape the most complex global entertainment sagas, it is like opening that space up to collaboration, making room for psychologists, data scientists, interaction designers, software engineers, startuppers, and consumers, who become fully involved in the process.

We have walked through a story made of complexities, which have become increasingly challenging to tackle as technology impacts our daily lives more and more. In a nutshell, we could say that the contemporary branding process is basically the story of a reversal: if, until 20 years ago, identity was a matter of making a mark people would recognize, now it is about leaving a mark, in the many different ways now possible, in people's lives. Contemporary branding today, in essence, is designing that mark.

Black metal AI

As part of our collaboration with Die Graphische, student Valentin Haring writes about how music and technology have interacted over the past decades – interviewing DADABOTS, who use machine learning to make music, on what this means for the art today.

Computers have only existed in our world for a few decades, but the impact they have had on culture, art and life in general is undeniable. With computers profoundly influencing every aspect of our lives, music will not, of course, be exempt from this change. Thanks to programs such as Ableton Live, FL Studio, and Reason, making music is more approachable and easier than ever. Basically, anyone with a computer could theoretically produce the next chart-topping hit on it, provided they put the work into learning the software and manage to hit the zeitgeist.

With all these possibilities in front of us, let’s look into how we got here and what has happened in computer music over the last 70 or so years.

January 1st, 1951 – the beginning 

The first documented instance of a computer playing music came with the Ferranti Mark 1, a massive machine that could play only three songs. While that does not seem too impressive now, back then it showed what was possible and laid the foundations of what was to come.

August 1982 – MIDI 

MIDI, short for Musical Instrument Digital Interface, was introduced into the lives of digital musicians. The technology allowed different musical instruments to communicate with each other and with your computer. Now almost 40 years old, MIDI is still the industry standard today – but there will be more on MIDI and what you can do with it later in the article.

April 1989 – Cubase 

The music company Steinberg released its music making software Cubase, which quickly revolutionized the way all digital audio workstations (DAWs) functioned.

The early ‘90s – Audio recording 

Until the '90s, computers were used to sequence external hardware instruments via MIDI. After the release of Steinberg's Cubase Audio, this rapidly changed, and computers became able to record sounds. With that advancement, the basics were laid down, and since then the hardware and software used to create music have continued to improve steadily.

Randomly generated melodies 

That brings us to the main topic of this article. We are in the 2010s now, and recent years have seen a big rise in the use of artificial intelligence across all fields. It is basically the beginning of computers creating things on their own. But before I talk about computers making music completely on their own, let me touch on the topic of randomly generated melodies using the MIDI technology I mentioned above.

MIDI, at a basic level, communicates which note is played. You can then sequence these MIDI notes in a digital audio workstation to write melodies and songs. Most newer DAWs have some way to randomize MIDI input. The feature can be used to let the computer come up with melodies for you, but you still have to supply some rules: without any kind of direction it would sound terrible, so you can tell it to stick to certain scales, rhythms and so on. With a bit of work, you can make these melodies sound quite pretty. But that is still not quite what we are searching for – it is still not exclusively produced by the computer.
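As a rough, hypothetical illustration of "randomize, but within rules" (the scale, note lengths and the use of the mido library here are my own choices, not a feature of any particular DAW), a constrained random melody can be generated like this:

```python
# Hypothetical sketch: generate a random melody constrained to the C major
# scale and write it to a MIDI file using the mido library. All musical
# choices (scale, note lengths, note count) are arbitrary.

import random
from mido import Message, MidiFile, MidiTrack

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI note numbers, C4 to C5
TICKS_PER_BEAT = 480

mid = MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = MidiTrack()
mid.tracks.append(track)

for _ in range(16):                            # 16 random notes
    note = random.choice(C_MAJOR)              # rule: stay in the scale
    length = random.choice([TICKS_PER_BEAT // 2, TICKS_PER_BEAT])  # eighth or quarter note
    track.append(Message('note_on', note=note, velocity=64, time=0))
    track.append(Message('note_off', note=note, velocity=64, time=length))

mid.save('random_melody.mid')
```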

What music is made exclusively by a computer, then? Music made by artificial intelligence. Over recent years, there has been increasing media attention on this technology; something about it fascinates everyone who hears about it: computers capable of learning by themselves. With computers becoming more powerful each and every year, the act of training them is becoming democratized, and an increasing number of independent programmers and artists are using so-called neural network technology to experiment with a new form of creating art. And what is a neural network? Simply put, it is a framework that uses machine learning algorithms to learn from the data you feed into it.

But what has all of this to do with the topic of computer-generated music? Well, you have probably guessed it by now: some people have trained neural networks to make music. How does it work? Basically, you feed the neural network enough material to listen to, then you wait and let the computer learn its characteristics and how to recreate them. Two people who do just this are CJ Carr and Zack Zukowski. Together, their musical persona is called DADABOTS. They train neural networks to make black metal and math rock, and they have even trained a network on the music of the Beatles.

But does it really sound like the music it is based on? Well, yes and no. We humans are pretty bound to specific rhythms and melody types that get lost when AI creates music, but it does create soundscapes similar to the music it is based on, and that makes them an extremely interesting listen. But don't take my word for it: I interviewed DADABOTS themselves to find out more about how it's done and what is possible, and they provided some extremely interesting details about both the technology itself and their DADABOTS project.

What is DADABOTS? 

Not sure what DADABOTS is. We’re a cross between a band, a hackathon team, and an ephemeral research lab. We’re musicians seduced by math. We do science; we engineer the software, we make the music. All in one project. We don’t need anybody else. Except we do, because we’re standing on the shoulders of giants, and because the whole point is to collaborate with more artists.

And in the future, if musicians lose their jobs, we will be the scapegoat. We jest: please don’t burn us to death. We’ll fight on the right side of musical history, we swear.

How did you get started working on DADABOTS? 

We started at Music Hack Day at MIT in 2012. We were intrigued by the pointlessness of machines generating crappy art. We announced that we had set out to “destroy SoundCloud” by creating an army of remix bots, spidering SoundCloud for music to remix, posting hundreds of songs an hour. They kept banning us. We kept working around it. That was fun.

How does creating your music with neural networks work? 

We started with the original SampleRNN research code, written using Theano. It’s a hierarchical Long Short-Term Memory network. LSTMs can be trained to generate sequences. Sequences of whatever: it could be text, it could be the weather. We trained it on the raw acoustic waveforms of metal albums. As it listened, it tried to guess the next fraction of a millisecond. It played this game millions of times over a few days. After training, we asked it to come up with its own music, similar to how a weather forecast machine can be asked to invent centuries of seemingly plausible weather patterns.

It hallucinated 10 hours of music this way. That was way too much. So we built another tool to explore and curate it. We found the bits we liked and arranged them in an album for human consumption.

It's a challenge to train nets. There are all these hyperparameters to try. How big is it? What's the learning rate? How many tiers of the hierarchy? Which gradient descent optimizer? How does it sample from the distribution? If you get it wrong, it sounds like white noise, silence, or barely anything. It's like brewing beer. How much yeast? How much sugar? You set the parameters early on, and you don't know if it's going to taste good until way later. We trained hundreds of nets until we found good hyperparameters and then published it for the world to use.
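To give a rough sense of the "guess the next sample" game and a couple of these hyperparameters, here is a simplified, hypothetical PyTorch sketch – not DADABOTS' actual Theano-based SampleRNN code – in which an LSTM is trained to predict the next quantized audio sample:

```python
# Simplified sketch of sample-level prediction: an LSTM reads a window of
# 8-bit quantized audio samples and is trained to guess the next one.
# This illustrates the general idea, not the SampleRNN architecture itself.

import torch
import torch.nn as nn

QUANT_LEVELS = 256        # 8-bit audio: each sample is a class from 0 to 255

class NextSampleLSTM(nn.Module):
    def __init__(self, hidden=512):        # hidden size: one hyperparameter to tune
        super().__init__()
        self.embed = nn.Embedding(QUANT_LEVELS, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.out = nn.Linear(hidden, QUANT_LEVELS)

    def forward(self, x):                  # x: (batch, time) integer samples
        h, _ = self.lstm(self.embed(x))    # (batch, time, hidden)
        return self.out(h)                 # logits for the next sample at each step

model = NextSampleLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate: another hyperparameter
loss_fn = nn.CrossEntropyLoss()

# Fake batch of quantized waveforms for illustration: (batch=4, time=1025)
waveform = torch.randint(0, QUANT_LEVELS, (4, 1025))
inputs, targets = waveform[:, :-1], waveform[:, 1:]

logits = model(inputs)
loss = loss_fn(logits.reshape(-1, QUANT_LEVELS), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

Generation would then work by repeatedly sampling a value from the predicted distribution and feeding it back in, which is where the "how does it sample from the distribution" question comes in.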

What’s the difference in your approach to generating music and other methods to make music, such as randomizing MIDI inputs? 

We trained it completely unsupervised. There’s no knowledge of music theory. There’s no MIDI. There’s nothing. It’s just raw audio. It’s surprising that it works in the first place. What I love about unsupervised learning is that it gives hints into how brains self-organize raw data from the senses.

Do you think music generated by neural networks will have the potential to reach mainstream success? Is there any specific reason why you are focusing on math rock and black metal to generate, rather than other, more mainstream genres?

For some reason, other AI music people are trying to do mainstream. Mainstream music is dead. Solid. Not alive. Rigor Mortis. Any new music idea it gets has been harvested from the underground. The underground has always been home for the real explorers, cartographers, and scientists of music. The mainstream finds these ideas and beats them like a dead horse until they’re distasteful. Why should a musician set out to do mainstream music? Because they want to be famous while they’re alive?

Becoming mainstream has been important for subcultures that were underrepresented and needed a voice. Teenagers. African-Americans. etc. Whereas tech culture already dominates the world. It’s swallowing the music industry whole. What does it have to gain by making mainstream music?

Math Rock and Black Metal are the music we love. It has a special place with us. Whereas many new black metal bands sound like an imitation of the early ‘90s black metal, albums like Krallice’s Ygg Hurr push it to new places I’ve never felt before. The research is fresh. Rehashing old sounds is like publishing scientific papers on the same old experiments. That’s no way to keep music alive.

Listening to your music, the voices created by the neural network sometimes sound eerily similar to real ones. Do you think there will be a point where artificial intelligence can incorporate real words and coherent sentences into the generated songs?

As of 2016, this was possible. Did anyone try it? Realistic end-to-end text-to-speech is achievable with Tacotron 2, as well as others. Applying the same idea to singing is possible. Aligned lyrics-music datasets exist. Has anyone tried to train this net? It's expensive to do. You need hundreds of thousands of dollars' worth of GPU hours. Give us the resources, and we'll do it.

How do you think artificial intelligence will influence music in the years to come? 

Think cartography – mapping the deep space between all the songs, all the artists, all the genres. Think super-expressive instruments – think beatboxers, creating full symphonies with their voice. Think autistic children, etc., in the context of music therapy, making expressive music, gaining a cultural voice.

Life inside the gilded cage

In Jaume Balagueró's 2002 horror movie Darkness, the last scene sees a girl and her little brother getting into a car and fleeing a house of satanic creatures after much blood has been shed. It seems that a soothing happy ending is coming to relieve us after a couple of hours of chills, but the very last frames show the car entering a tunnel. The viewer knows what will happen when the darkness embraces them. When speaking of technology, we have cause to think that maybe we share something with the two unlucky characters, as we blindly and confidently drive towards our doom, unaware of the dangers that lie in wait.

We were told that technology would set us free. Technology-charged geopolitical events such as the 2009 Iranian Green Wave or, two years later, the Arab Spring made us believe that this was certainly the case. We failed to pay attention to the increasingly powerful satellites multiplying in the sky, to the cameras and sensors appearing around every corner and gathering our data, or even to our local stores, which began to spy on our facial expressions. Instead, we willingly offered more by transforming our smartphones into the control room of our very existence.

Paradoxically, we live in a time where data is the new oil, and we are net producers, generating wells' worth as we walk and breathe: yet we are not getting richer. So the question is: who is doing what with this data, and why?

Precogs are Coming 

The answer is a no-brainer even to those who are unfamiliar with George Orwell's 1984: surveillance and repression by police and law enforcement agencies. A combination of eye-watering amounts of data, AI and facial recognition software is leading us into a world sinisterly close to the one depicted in the prescient sci-fi movie Minority Report. Even more disturbingly, all of this is happening without us being informed.

To make us more aware, several NGOs have started campaigning. The British charity Privacy International (PI) is one of them and has been monitoring the use of several technologies and practices by police forces. In particular, the charity has examined the growing use of body cams, facial recognition, hacking, IMSI catchers (IMSI stands for International Mobile Subscriber Identity), mobile phone extraction, predictive policing and social media intelligence.

PI’s Policy Officer, Antonella Napolitano, spoke to us just as her group was set to launch its Neighbourhood Watched campaign. “We believe these technologies should not be bought or used without proper public consultation and the approval of locally elected representatives. But quite often, we learn about them the very moment we start seeing the consequences of their use”, Napolitano stated.

Among the consequences she refers to is not a drop in crime rates but a rise in social disparities, as she explains: “The increased use of surveillance technology by local police results in an inclination to take more law enforcement action against communities of color or targeted groups such as low-income wage earners. This has created an environment in which the members of these communities are treated like prospective criminals. Just take predictive policing, for instance: these programs are used to estimate when and where crimes will be committed, but the algorithms are fed with historical policing data, which happen to be incomplete and biased; risk assessment on the probability of committing a crime is also based on what you earn or where you live. This leads to a ‘feedback loop’ whose consequence is that minority communities are constantly patrolled and over-represented in crime statistics”.
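To make that feedback loop concrete, here is a deliberately simplified toy model in Python; the numbers and the mechanism are illustrative assumptions, not taken from any real predictive-policing system. Patrols are allocated from historically recorded incidents, and what gets recorded depends on where officers are sent, so an initial recording bias reproduces itself even though the two districts have identical underlying crime rates.

import numpy as np

true_crime = np.array([100.0, 100.0])   # two districts with identical underlying offence rates
recorded = np.array([60.0, 40.0])       # district 0 starts out over-recorded (historical bias)

for year in range(10):
    patrol_share = recorded / recorded.sum()   # "predictive" allocation based on past records
    recorded = true_crime * patrol_share       # records reflect patrol presence, not true crime

print(recorded / recorded.sum())               # -> [0.6, 0.4]: the initial bias never washes out

Real systems are noisier, but the lock-in dynamic – statistics mirroring where police already look rather than where crime actually happens – is exactly the loop Napolitano describes.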

Surprisingly enough, these experiments are not confined to the more technologically advanced regions of the world. “It’s interesting to note that, to test their products, tech companies use real-life environments in countries like India and Pakistan, where they know they won’t be bothered about the standards to be respected or the guarantees to be given to protect people’s rights”, Napolitano says.

What makes these practices so appealing is that they require less time and fewer resources than a proper investigation, projecting an image of efficiency from which agencies benefit. The problem is that, unlike old investigative tools such as wiretapping, which only violates privacy, the new ones allow for data manipulation, which is an entirely different thing.

“Where evidence is obtained this way, it may interfere with your right to a fair trial. The Italian authorities have been employing hacking powers for years, without explicit statutory authorization or clearly defined safeguards against abuse. Only recently has hacking been regulated so that it can be used only for a very limited set of serious crimes”, Napolitano states.

Unfortunately, these new technologies tend to outpace the law, so their use in investigations falls into a grey area: neither clearly illegal nor clearly legal. And so it goes on, until a judge comes along and says the party’s over.

Know Yourself (It’s what they want) 

Police and law enforcement agencies are not the only actors who know how to use our data: companies crave it even more. In fact, data collection has become the dominant business model of our time. The most obvious reason is that data is needed to develop tailor-made advertisements and services, but this is not its only purpose, as Marek Tuszynski of Tactical Tech, a company which helps people and organizations use technology in the safest possible way, explained.

“This is just the first way to bring in money, but there is a whole lot more, because data is persistent and can be used for other purposes. It has become a commodity, and can be bought by different actors and then used for different purposes”. Last year, Tactical Tech’s group of tech experts conducted a study on dating apps and sites: a major means of gathering data. “We discovered that with just $150 we could buy a very detailed data set on a million individuals, including photographs, information about gender preferences and so forth. More shockingly, we found that many of these companies were selling their data on the market”.

The data gathered this way is not just what users share to find a soul mate, but also other data contained within their devices. Analyzed and worked through by AI algorithms, it becomes a treasure trove of information: the key to deciphering a person’s personality and decoding how they think. Once this process is understood, it can be used the other way round, to induce feelings, shape perceptions and alter needs. Here data turns from an object of manipulation into a means of manipulation.

This hidden manipulative power has generated a very insidious market around elections. In fact, the Cambridge Analytica scandal could be just the tip of the iceberg. “Actors taking part in electoral campaigns can influence voters either to vote for something, to vote against something or not to vote at all. Suppressing voters is as important as convincing them to act, and is far easier. Different private companies and consultants have mushroomed, promoting themselves as experts in using personal data and in profiling voters, and as being able to influence them. They can do this by placing information of varying quality, misinformation or fake information through advertising. We’ve analyzed data-driven social campaigns during elections held in around 15 countries, and we’ve seen about 350 different private entities collecting or selling personal data that would have helped parties involved in democratic processes to influence voters”, Tuszynski went on.

Some figures do support this view. According to The Electoral Commission, the UK’s independent body overseeing elections and regulating election finance, parties’ spending on digital advertising rose from 0.3% of total expenditure in 2011 to a whopping 42.8% in 2017. This has enormous implications: should these techniques be honed to the point of becoming a surgically precise means of miscommunication, they would deprive the very idea of democracy of any significance by making the word ‘accountability’ meaningless.

The Age of Surveillance 

This is not quite the Orwellian world envisioned, however, because of one missing detail: the process is being led not by the public sector but by the private one, meaning that it will not, as in 1984, be public institutions that profit from it the most. This alarm is already being raised by Amy Webb, futurist, NYU professor and author of The Big Nine: How the Tech Titans & Their Thinking Machines Could Warp Humanity. In an interview recently given to the MIT Technology Review, she expressed her concern: “Rather than create a grand strategy for AI or our long-term futures, the federal government has stripped funding from science and tech research. So the money must come from the private sector. But investors also expect some kind of return. That’s a problem… Instead, we now have countless examples of bad decisions that somebody in the G-MAFIA (Google, Amazon, IBM, Facebook, Apple and Microsoft) made, probably because they were working fast. We’re starting to see the negative effects of the tension between doing research that’s in the best interest of humanity and making investors happy”.

The split between private and public interest is nothing new, but in the tech world it became particularly meaningful after the dot-com bubble burst in 2000. According to Harvard Business School professor Shoshana Zuboff, it all began with Google: “It was a fledgling firm, and its investors were threatening to bail—in spite of its superior search product. That’s when Google turned to previously discarded and ignored data logs and repurposed them as a ‘behavioral surplus.’ Instead of being used for product improvement, this behavioral data was directed toward an entirely new goal: predicting user behavior”.

Zuboff, who authored the seminal 1988 book In the Age of the Smart Machine: The Future of Work and Power, has recently published the 700-page The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, in which she describes the coming of a new age of capitalism “invented in the context of targeted advertising”. If the powers behind this “surveillance economy” are left unchecked and unrestrained, they could lead to a world of citizens turned into consumers, trapped in a vicious cycle of induced needs feeding induced, never-ending consumption. According to Jonathan Crary, author of 24/7: Late Capitalism and the Ends of Sleep, in which he describes how we are becoming 24/7 consumers, this is already happening.

For a long time, we thought that the only pitfalls of these exponential technologies centred on privacy and employment. It seems there is much more to it than that: the future we are sailing toward resembles Aldous Huxley’s Brave New World dystopia mixed with the Orwellian all-knowing state. It would seem that technology is building us a gilded cage – yet a cage it still remains.