
Why technology and literature should go hand-in-hand

In Western societies today, the dichotomy between the humanities and STEM subjects is often treated as natural: in school, we study them as though they reside in separate realms, and when the time comes to choose a career, we are offered either path A or path B.

This traditional distinction between STEM and the humanistic disciplines seems like common sense to many, but it is much more recent than we think. The Greeks, the Romans, and even nineteenth-century reformers like Wilhelm von Humboldt knew that a good education comprises both left-brained and right-brained activities and knowledge; they did not make the same distinction we do today. Now that the digital transformation is reaching its peak, disrupting the labor market in ways we can barely forecast, it is time to get rid of this artificial separation, both at school and in the workplace, and consciously include the lessons the humanities teach us in today’s digital world.

The compartmentalized separation I’ve described affects both our educational system and the workplace. There seems to be little or no space for cross-fertilization between the two: scientists will be scientists, locked in their labs, and humanists will be humanists, hunched over in their dusty libraries. Yet it was only fairly recently, in 1959, that the British scientist and novelist C. P. Snow observed that the intellectual culture of our society had split into “two cultures.”

Snow’s famous statement diagnosed the state of society at the time, and it unfortunately often holds true even today: many consider the humanities a path with very little connection to the real world, one that usually leads to a not-so-brilliant career – if to any career at all. The sciences hold much more promise and prestige, especially when it comes to the labor market. Whatever the relationship between the humanities and STEM has been in the past, however, the digital turmoil we are experiencing forces us to completely rethink these stereotypes, as well as the dichotomy itself. In a world where technology is evolving so fast that schools are educating kids for jobs that do not even exist yet, the humanities gain crucial importance as an excellent school for critical, democratic and ethical thinking. This is because the humanities deal with the core questions that will shape what is probably the central challenge of our future: the relationship between humans and machines, with all the ethical and identity issues that this tension propagates, and the nature of human experience in a digital era dominated by machine learning.

What the humanities teach might not feel as immediately tangible and practically useful as the knowledge conveyed by STEM subjects: the so-called “soft skills”, even though they are anything but soft. These are skills a humanities education builds over time: critical thinking (the ability to think for yourself, evaluate the information you are exposed to, and identify what you need to learn next, along with the research skills to do so), empathy (the ability to put yourself in another person’s shoes and see things through their eyes), agility of thought, creativity and imagination.

All of these skills are of great importance in instilling in students a sense of responsible and ethical citizenship, which we need in the digital era perhaps more than ever. The way we behave online still differs from the way we behave offline: people frequently do not follow the social contract as ethically online as they do when dealing with others face to face (think of anonymous trolling and bullying, doxxing, illicit data collection, etc.). The critical tools and essential democratic skills that the humanities provide can help students become more conscious, educated, responsible digital citizens, and can help them develop the digital literacy they need to tell the difference between, say, fake and real news, or ideal and not-so-ideal modes of behavior on social media. A great humanities education that addresses the problems of our digital world can also help us tackle the challenge society already faces in the complicated ethical questions raised by our rapidly evolving relationship with (and dependency on) machines. Digital citizenship questions on social media are only one drop in a vast ocean; while many social rules remain relevant, philosophy and ethics will have to be, if not rewritten, then widely reimagined for the new machine age we now live in.

But the humanities can’t do this job alone and are in danger of becoming more isolated if we keep on teaching them only in traditional ways: in a face-to-face, seminar-style classroom setting. In academia, we have the field of “Digital Humanities” (DH), which is over 30 years old at this point, and refers to digital ways of analyzing and interpreting texts (such as the quantitative analysis of novels through text mining and data mining approaches, counting and comparing word frequencies or groups of concepts and drawing conclusions from them; or mapping geographical or interpersonal relationships in historical documents, for instance). The Digital Humanities are useful and can yield exciting results, but they are mostly confined to academic researchers, and don’t touch the general ways in which books and history are being taught in most school and university classrooms today. For that reason, I am personally more interested in Digital Pedagogy, the purposeful integration of digital media in the traditional humanities classroom (beyond simply lecturing with a projector and a PowerPoint). Teaching students traditional literature or history lessons with the help of social media, apps, VR devices, podcast projects or other digital technologies might sound counterintuitive, but it does provide great benefits. Just to give you a couple of examples: over the last few years, along with teaching the books in my literature seminars at Stanford, I engaged my students and the general public in live role-plays based on traditional novels such as Oscar Wilde’s The Picture of Dorian Gray and Mary Shelley’s Frankenstein. By adding gamification methods, widely available tools and a format that students regularly use out of the classroom, we were able to mash up the students’ learning in ways that deepened and added to their engagement with these ‘old’ texts, and it connected our class to hundreds of other readers from many different countries who played with us on Twitter – illustrating for students the power of a worldwide community of readers of traditional literature.

Digital Pedagogy is only one step in a series of steps that take us closer to bridging the gap between the humanities and technology and having students connect with real-world scenarios early on. Eventually, this will entail a bigger rethinking of the structure of our educational system and its effects beyond school walls – all the way down to a reconsideration of the physical classroom itself. This could be something as simple as getting rid of fixed rows of desks and turning classrooms into open spaces with movable furniture, which already makes a big difference in how students learn, for instance by enhancing collaboration. It also means reconsidering the very concept of teaching different subjects in separate lessons rather than in complex project-based scenarios in which both soft and hard skills are needed and developed by students working together on a task over time. In this scenario, students learn by doing and become both more independent and more collaborative by design, working on projects through which they pick up skills from multiple disciplines at once, naturally, as the project demands – rather than studying each subject in an artificially separated timetable (Math from 9-10, English from 10-11, Art from 11-12, etc., as our schools currently still do).

Such a reorientation of the way students learn would, in turn, provide them with much more holistic, real-world knowledge. This is something they could greatly benefit from in the labor markets of the future – the complexity of which will require all of us to be more flexible than ever. Having, say, engineers with soft skills, and philosophers with a knowledge of coding, will give us the most well-rounded people and citizens, who are jointly able to face the existential challenges that increasingly complex technology raises both today and tomorrow.

In the end, both our hardware devices and our soft skills are simply tools that help us master the world. If you ask me, I think that the more tools we have, the better the outcomes will be: because, at the end of the day, there is no way we can teach humanities to an algorithm. That’s always going to be on us.

From Satoshi to nakamo.to

Michael Geike is the CEO of Advanced Blockchain AG, a group that focuses on the design, development, and deployment of DLT software for businesses and their operations and services. The company is collaborating with the startup nakamo.to on the creation of peaq, one of the only projects working with a DAG at its base layer instead of a blockchain.

nakamo.to: how did it all begin?

nakamo.to’s inception dates back to 2012. At that time, I was working as a Team Manager for the Payment Analytics Team within the Data Intelligence unit of Zalando, and I received a call from Robert Küfner, later to become the founder of nakamo.to, who enthusiastically introduced me to the world of Bitcoin and the technology that lies behind it. I had never heard of Bitcoin before, but the moment I started digging, I realized that it combined all of the fields I had worked in: there’s mathematics (I’m a mathematician), algorithms and data science (which is what I was focusing on at Zalando), and finance (I worked as a trader at JP Morgan for six years). The more I delved into this technology, the more I understood how revolutionary it would be. We immediately started a mining operation, which proved to be rather successful. Two years later, Robert and I set up Smart Equity AG, the first publicly listed company in Europe created as a professional Bitcoin mining firm. Shortly afterwards, however, the bitcoin exchange Mt. Gox went bankrupt, and we had to leave this endeavor. We stayed invested in and connected to the crypto scene, and in 2016, when the markets began picking up again and more people started talking about the technology, nakamo.to came to life.

nakamo.to has a very specific vision behind it. Could you tell us more?

With nakamo.to, we decided to create our own DLT. Back then, we had the vision that the whole world would be tokenized. Tokenization means that anything in the world, from tangible goods, like houses or clothes, to intangible ones, such as services, will at some point in the future have a digital unit that exists solely to represent that asset in the digital world. This makes it much easier to trade, transfer ownership, prove, stake, lend, verify and validate a certain condition of an asset, opening up a whole world of new possibilities. And we considered the DAG (Directed Acyclic Graph) to be the most suitable concept on which to build the technology that would make this tokenization happen.

What’s the difference between DAG and Blockchain?

A DAG (Directed Acyclic Graph) is a mathematical concept that can be applied to build a new type of distributed ledger, if you will. Blockchains are composed of blocks in a chain; DAGs have more of a tree structure, with different nodes at different levels, where each new node can link to several earlier ones. This peculiarity gives them an efficiency advantage over traditional blockchains.
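To make that structural difference concrete, here is a minimal, purely illustrative Python sketch (not peaq’s or nakamo.to’s actual code; all names are invented): in a chain every block references exactly one parent, while in a DAG each new transaction can reference, and thereby confirm, several earlier ones.

```python
# Illustrative sketch only: a linear chain, where each block references exactly
# one parent, versus a DAG ledger, where each new transaction can approve
# several earlier ones.
import hashlib
from dataclasses import dataclass, field
from typing import List


def _digest(*parts: str) -> str:
    return hashlib.sha256("|".join(parts).encode()).hexdigest()


@dataclass
class ChainBlock:
    data: str
    parent: str  # exactly one predecessor -> strictly linear history

    @property
    def id(self) -> str:
        return _digest(self.data, self.parent)


@dataclass
class DagTransaction:
    data: str
    parents: List[str] = field(default_factory=list)  # several predecessors allowed

    @property
    def id(self) -> str:
        return _digest(self.data, *sorted(self.parents))


# A blockchain grows one block at a time...
genesis = ChainBlock("genesis", parent="")
block_1 = ChainBlock("tx batch 1", parent=genesis.id)

# ...while a DAG lets independent transactions attach in parallel,
# each confirming multiple earlier tips of the graph.
a = DagTransaction("tx A")
b = DagTransaction("tx B", parents=[a.id])
c = DagTransaction("tx C", parents=[a.id])
d = DagTransaction("tx D", parents=[b.id, c.id])  # merges two branches
print(d.id[:16])
```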

Why is that?

Well, there are four main advantages of our DAG technology when compared to traditional blockchains. First, it makes mining a thing of the past: no blocks to mine means no need for miners, mining equipment or wasted energy. Second, the non-linear tree structure makes it highly scalable. Third, transaction fees are lower, as there is no miners’ cut, which makes even zero-value transactions (just sending information, for example) possible. Finally, it can tackle complex applications and use cases that traditional blockchains can’t take on yet.

So how can DLT be applied in, for example, the fashion industry?

When it comes to the fashion industry, DLT can help tackle the business of fakes. For a client of ours, for example, we’re developing a system to insert near-field communication (NFC) chips into their handbags. Data can be retrieved via this chip; that data is stored in a decentralized database and verified through the distributed ledger. This makes the data immutable: it cannot be hacked or changed, giving the buyer 100% certainty that the handbag is authentic. DLT also aids supply chain management: the fashion industry’s supply chain is a complex one, with various suppliers at different levels. A DLT-based fashion supply chain would allow a diffused and transparent distribution of verified data among the different parties.
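As a hedged illustration of the idea (a simplified, hypothetical sketch, not the client system described above), the ledger only needs to store an immutable fingerprint of each chip’s data; a buyer’s app then recomputes the fingerprint from the chip and compares it with the on-ledger record.

```python
# Simplified, hypothetical sketch of the anti-counterfeiting idea: the ledger
# stores an immutable fingerprint of each handbag's chip data, and a buyer's
# app recomputes the fingerprint and compares it with the on-ledger record.
import hashlib
from typing import Dict

ledger: Dict[str, str] = {}  # stand-in for a distributed, append-only ledger


def register_item(chip_id: str, chip_data: bytes) -> None:
    """Manufacturer writes the item's fingerprint to the ledger once."""
    if chip_id in ledger:
        raise ValueError("chip already registered")
    ledger[chip_id] = hashlib.sha256(chip_data).hexdigest()


def verify_item(chip_id: str, chip_data: bytes) -> bool:
    """Buyer's app reads the chip over NFC and checks it against the ledger."""
    expected = ledger.get(chip_id)
    return expected is not None and expected == hashlib.sha256(chip_data).hexdigest()


register_item("BAG-0001", b"brand=X;model=Y;serial=0001")
print(verify_item("BAG-0001", b"brand=X;model=Y;serial=0001"))  # True -> authentic
print(verify_item("BAG-0001", b"counterfeit payload"))          # False -> rejected
```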

And in automotive?

The same principle can be applied to the automotive industry, where authenticity is a crucial matter and the supply chain is composed of many stakeholders, just as in fashion. Cars are made of many components, all of which can be counterfeited. One of the most common manipulations is that of the mileage: with a DLT-based architecture, the buyer could verify whether the mileage displayed in the car is the actual one by checking the corresponding data in the ledger. Furthermore, now that electric cars are becoming more common, we will, for example, need an infrastructure through which people can charge their cars anywhere conveniently. This means being able to source energy from anyone, anywhere, and in the best-case scenario that requires a decentralized platform where consumers and businesses can exchange electricity for money peer to peer, where users don’t have to worry about trust, and where payments happen automatically and securely. These kinds of platforms are exactly what DLT is being developed for.

DLT has been around for ten years now. When do you expect it to become normalized and part of our lives?

I think we’ll have to wait 5 to 10 more years to see it widely used. This technology is now at the same stage the Internet was in the ‘90s: most people have heard about it, but not many have used it, and there’s still a lot of skepticism around it. Now, the industry is past the hype peak, and it’s becoming a more grounded technology.

So you think that in 5 to 10 years we’ll manage to have a clear regulation as well?

We will. I think we’ve learned the lesson. The era of ICOs (Initial Coin Offerings), the method of raising funds in an unregulated environment that disrupted the finance industry, is being replaced by a new wave, that of STOs (Security Token Offerings), which are basically the sale of actual securities in a regulated way. Jurisdictions and countries around the world are working to find good regulations that allow STOs to happen in a more protected manner. My wish is that we accelerate this process in Europe, as it is a great opportunity to attract talent and young businesses.

What about cryptos? Do you think that the adoption by central banks and governments will mark the end of digital currencies as they were first conceived?

In the future, cryptocurrencies will remain – and will actually get stronger; it’s only a matter of time. Their volatility is simply due to the fact that they are not widely used in everyday business yet, and speculators cannot agree on whether they ever will be. Fiat currencies have a much bigger problem than volatility, especially in the long run. When you begin printing them, you get stuck in a loop that you can’t get out of: you need to print more and more to keep them stable, all the while knowing they will eventually collapse. Traditional currencies are doomed to fade away: the future will belong to cryptos, which are simply more efficient, transparent and programmable. These currencies won’t be tied to local boundaries such as countries, but to functions: we’ll buy cars with one crypto, book flights with another, and so on. Highly centralized cryptocurrencies might be a short-term improvement over fiat money, but they are doomed to fail and won’t be part of this scenario in the long run.

Renaissance of humanity

AI, together with machine learning and robotics, has become omnipresent in daily life and across headlines. While innovators are raving that it can put an end to nearly all of humanity’s problems – with data and machines making all our decisions in the future – the public debate about the impact of these technologies is often fundamentally pessimistic, painting a picture of superintelligent machines taking over the world to dominate and eventually eradicate the human race.

The response to this view is often defensive, trying to fend off disruptive forces, preserve the status quo and even idealize a world without technology. So we are basically presented with a choice between a world of technology and a world of humanity.

Fundamentally, this framing is neither helpful nor constructive because it relies on creating an irreconcilable polarization. Instead, the following ten reflections are an invitation to envision a scenario somewhere between a technological desert and a “Heidiland” without tech. They provide some food for thought on how we can overcome this dichotomy and create a future where it is not only possible to reconcile a love for technology with a love for humanity, but where technological advancement leads to a meaningful rediscovery and deepening of “humanness” and humanity.

1) Time to delegate. “Machines are taking over human work” is something we often hear from people concerned with the automation of certain tasks. This is an incorrect way to view the issue, however. Instead, we can consider the possibility that humans have been doing the work of machines for way too long. So, actually, it’s about time technology relieves us from the dull, dangerous and dirty jobs and frees us for jobs involving more creativity and problem-solving. There is no reason for us to risk our lives for work if a machine can do the job. Why should we not enhance the diagnostic abilities of doctors with machine learning algorithms to detect health anomalies more effectively? Technology can significantly enhance human capabilities – let’s unleash that potential.

2) Jobs for everyone. The destruction of jobs and resulting mass unemployment is viewed left, right, and center as one of the worst horror scenarios for AI’s impact on society and the economy. However, we need to move away from black-and-white framings and have a more nuanced discussion. While it is true that, according to a McKinsey study from 2017, on average 45% of all work activities can already be automated with current technology, that is not synonymous with saying 45% of all jobs are obsolete. Rather, it creates space in already packed work schedules. In a wide variety of jobs, people complain that they are so busy with day-to-day tasks that they cannot plan for the future or deal with new, vital issues. So let’s see this as an opportunity to focus on human ingenuity and to make time for the topics that really matter.

3) Make technology the solution, not the problem. There is a widespread fear that the mass adoption of technologies will deplete the planet’s resources, alienate people from one another, take away jobs and even fuel weaponized AI warfare. And yes, as with most innovations, there will also be harmful effects. However, machine learning is a powerful tool for creating solutions to the many, increasingly complex issues we are confronted with in geopolitics, climate, demographics, economics, etc. It is a means of allocating scarce resources more efficiently. Think of the time caregivers or doctors spend on paperwork versus the time they can dedicate to their patients. Algorithms can perform many of those tedious tasks and free them up for what patients need: empathy, comprehensive information, consultation, and care. As a result, patients get better care, and those professionals can focus on what often made them choose their profession in the first place, in this case, helping people. The same goes for optimizing the use of resources and energy. AI can create functionally equivalent designs using far fewer materials and optimize power generation cycles and grid usage, saving on cost and scarce resources.

4) The problems are human. Many people worry that we are optimizing for machines. Let’s not forget that algorithms are designed around humans – to personalize experiences and make them more interesting or relevant, to automate processes and facilitate certain tasks for people. Algorithms don’t have needs and desires; in the end, a human has to be convinced by an argument, product or service. So, in a way, it’s a very human-centric technology catering to the needs and desires of humans. Secondly, this also means that to create useful algorithms and machines, we need to understand humans and their behavior better. So, a rise in AI will also fuel the quest to understand people better and increase the demand for skills focused on this: anthropologists, sociologists, etc.

5) Reconnect. Technology is often associated with a lack of meaning. And it is true that technology has no purpose in itself, but it gives us the opportunity to look at humans for their humanness. For too long we have viewed humans solely as means of production, optimizing the education system and the workplace much as we designed factory floors. This is a very mechanistic view of the human. But as machines become able to carry out more and more “human(e)” tasks, we will face questions about what defines us as human beings. Alan Perlis, the mathematician and first recipient of the Turing Award, once said: “a year spent in artificial intelligence is enough to make one believe in God.” This is a genuine opportunity for a renaissance of purpose, in which people can rediscover the essence of being human: connection to ourselves, to other people and to the nature around us.

6) And Disconnect. As more repetitive tasks are taken over by algorithms or machines, humans will be left with higher-value but also more demanding assignments, such as complex problem-solving, creativity and creation. Numbered are the days of switching off a little while performing routine tasks during a busy work schedule. There is ample evidence to suggest that, among other things, overall happiness, healthy sleep, and mindfulness positively impact performance in the workplace, especially when it comes to more complex responsibilities. Despite an OECD study revealing that productivity is highest when people spend fewer hours working, corporate culture still celebrates those who put in the longest hours. While this may not be a new point, the capacity to be more balanced, more connected and more in tune with oneself becomes more pressing as technology advances. The Latin proverb mens sana in corpore sano – a healthy mind lives in a healthy body – is still and will remain pertinent. We will have to seriously up our game at getting better at leisure.

7) The machines are slaves. Skeptics believe that AI will enslave the human race and eventually use us as a means of production. But many of the business models powered by AI today have led to global platform companies that have disrupted encrusted markets and created higher customer satisfaction (Uber, for example). At the same time, they can pose a genuine threat to working conditions in certain lines of work and to social cohesion overall. As a society, we need to ensure that we do not create a class of workers without social and economic buy-in. Just as when the loom was invented and there were widespread uprisings against bad working conditions, we need to make sure we find the right conditions for people to work and interact with technology. To be fair, this is becoming more and more difficult in an increasingly interconnected, globalized world, but we cannot shy away from this duty. On the contrary, we have the responsibility, but also the opportunity, to rethink our models of work.

8) Let’s build a better polis. Many think that AI-powered chatbots and social media create echo chambers and are a source of public manipulation, to the detriment of our democracies and institutions. Adopting AI will entail a whole suite of ethical decisions about the boundaries of technology that society has to address as a whole. Most of these problems are inherently complex and not black-or-white, right-or-wrong. Already today, we see different societies striking a different balance between protection of the individual (e.g. data protection) and unconditional technological adoption (e.g. social monitoring). Ultimately, it is a question of finding the right benchmarks. In some cases, they are absolute and universal, for example in respecting human rights. In other cases, they will be relative, and the human will be the measure of all things. For example, when we decide under which conditions to authorize self-driving cars, we could insist they be 100% safe for humans before we authorize them, but algorithms can never be 100% correct. So the accident statistics produced by human drivers could be a good benchmark, and the discussion could then focus on whether AI can produce better results and how much better they have to be before being released to the market. These societal and democratic discussions will gain in importance as technology touches more and more aspects of our lives.

9) Embrace uncertainty and imperfection. In today’s fast-paced world, the disruptive power of new technologies continually surrounds us. This heightens the perceived instability and uncertainty people have to cope with. A common reaction is to try to cement the status quo – be this at the micro level of a person’s job or at the macro level of society as a whole. However, this will not prevent technological progress in the long term. Instead, we need to build up a higher tolerance for uncertainty and an ability to adapt to new information and demands. This also means accepting that our responses will not always be adequate from the get-go. Instead, we need a true spirit of experimentation and need to A/B-test our way to successful solutions. Embracing these inherent imperfections is the ultimate truth of humanity. After all, nobody’s perfect, right?!

10) It’s all about Education. Many people worry about the future of their children and wonder how we can prepare them to live fulfilled lives, professionally and personally. With changing demands, it is evident that our current education system is not designed to equip people for the future. It was built to train large numbers of people, moving away from a highly individual private-tutor model for the privileged to a standardized education system for the masses. Obviously, democratizing knowledge and education was and still is the right goal. Yet, in its current format, the system is geared more towards feeding people standardized knowledge than towards sparking their creativity. Skills like independent thinking, creative problem solving, interaction and collaboration, resourcefulness, leadership, resilience, and empathy will be crucial in performing more demanding tasks. At the same time, we have to include in education effective methods for emotional and mental resilience, e.g. through mindfulness exercises, sleeping habits and relaxation methods. And AI can also be part of the solution here: allocating the limited resources of teachers to those areas in which each individual student has the most potential to thrive.

No technology determines how humans work; rather, humans determine how technology works. We can use AI in the service or disservice of mankind. It carries great potential for a positive impact on all of our lives and can give us a real opportunity to connect to ourselves, build better relationships and address societal issues. We have the opportunity to shape the future with AI: not only to optimize tasks and build cool applications, but to make it a real driver of re-humanizing all of our lives.

The education chasm

The past few years have witnessed a renewed focus on education as a whole, with continuous attention both on adults and how their learning pathways should be designed, and on children and the adequacy of school systems. Along the way, however, many have begun to notice that the skills most sought-after at work today are skills we humans naturally possess in childhood. So what could be the cause of this disconnect?

Upskilling, reskilling, mindset change, leadership, learning programs, education journeys, talent development programs: these are all variations of what we refer to as individual or lifelong learning – or, basically, education. Such growth is very closely tied to organizational growth, as organizations are only as good as the people who make them up, and learning is the way we grow, develop and evolve.

When we talk about learning, we are not only referring to the development of hard skills (specific, technical, vertical abilities). More often than not, the real demand is for soft skills, and today there is a growing understanding that soft skills and human interpersonal abilities are actually what make the difference, taking on a leading role in the face of an automated age.

Now, I’m not a big fan of the term soft skills (how can a skill be soft, anyway?), but it seems to have taken on a common meaning and shared understanding when referring to people and their personal and interpersonal abilities or character. So let’s use this term knowing that we are really referring to those very specific human and social abilities which support the creation of relationships and culture within an organization, creating an interconnected environment from which companies can build their competitive advantage.

As business leaders come to understand its importance, we are witnessing an overall increase in investment in learning and personal growth. According to Udemy, 53% of surveyed companies reported that their L&D budgets increased between 2017 and 2018. The same survey also showed that highly engaged companies spent $2,000–$2,500 per employee annually on learning, and that 59% of high-growth companies spend above average per employee on L&D.

One interesting development is that the top three skills that executives and employees alike are investing in and showing interest in are, in fact, soft skills: namely leadership, communication, and collaboration. A LinkedIn report published last year showed that over 60% of employees listed these skills as a top priority. The other interesting finding in this report is that the top challenge for both executives and recruiters is how to develop and grow these soft skills.

Through their talent acquisition plans, open innovation strategies, or even simply by observing their own children and younger generations, many of these leaders realize that the skills required to be a successful entrepreneur – or one of the ‘new citizens and employees of tomorrow’ – are intrinsically tied to the love of and desire for learning and growing: namely curiosity and courage. More than that, the ability to learn is crucial for success today, a realization of the age-old Aristotelian maxim that ‘the more you know, the more you know you don’t know’. Both are strong indicators of the type of person any company wants, or better, needs: humble on top of being curious and courageous.

Every so often, then, as part of corporate education journeys, companies design programs that also focus on what the World Economic Forum has defined as “skills of the future”. These ‘soft’ skills are now widely accepted as being creativity, entrepreneurship, emotional intelligence, cultures of failure and experimentation, critical thinking and problem solving – which, coincidentally, are abilities typical of young children.

The newfound importance of these skills of the future has led companies to invest in reinstalling the very qualities that both education and life have seemingly managed to hammer out of us. Today more than ever we are looking to re-teach childlike traits to employees, as it is children who embody these skills in the most natural and intuitive way. Children are entrepreneurial by nature: risk takers, resilient, experimental, sociable, wise in their creative and critical thinking, sensitive, with high emotional intelligence, and ultimately they never take no for an answer, instead dreaming big and believing that anything is possible.

So if kids by nature already possess these skills, the question arises: what happened along the way? If organizations are now spending good money on teaching their employees what kids already know, should we not be making sure that our own school systems do all they can to maintain these skills and develop these talents, which somehow seem to disappear once we reach adulthood? Why and how are these skills for the future eroded over time into skills of our past? The paradigm shift in corporations now seeking out these skills raises serious questions we as a society must address when evaluating our educational systems.

Sir Ken Robinson, in one of the most watched TED talks of all time, “Do schools kill creativity?”, says that “We don’t grow into creativity, we grow out of it. Or rather, we get educated out of it.” In a different interview, he says that we humans live in two worlds: the external world and the inner one. Through the external world we learn how things physically work, and through the inner world we explore our emotional life and individuality. Schools, it seems, do not focus on the second world, leaving adults to search for it themselves and possibly rediscover it at later stages in life.

As the information era continues to flood us with never-ending streams and sources of knowledge, the argument for schools as the only place for learning historical dates, complex mathematical equations, and dense passages of prose loses a certain amount of relevance. Instead, being able to sift through masses of information and rapidly understand its relevance becomes more important. More than this, we should be going to school to learn about others’ experiences, unique as they are and impossible to find elsewhere. School is the environment in which we should be able to cultivate and enhance our talents and soft skills, spend time building relationships and expand our cultural horizons – a far more valuable endeavor.

In favor of schools, many will argue that school is still social. However, I would ask how many hours (or perhaps minutes) of the school day children actually spend interacting with one another. With lessons packed in and breaks getting ever shorter, the truth is that kids have little time to socialize, let alone play. Recent surveys in the UK show that more than 74% of children spend less than 60 minutes a day playing outside. The reality is that, with ample opportunity outside these educational systems in the form of extracurricular activities, kids get much more social interaction outside than they do inside their own school, where they actually spend most of their day sitting in a classroom.

This is why education seems to be on everyone’s agenda and why bigger companies are starting to invest in education and learning (or perhaps re-learning). Alternative schools are being founded on the idea that it takes soft skills, together with hard ones, to create open-minded individuals who are capable of forming their own informed opinions and using their talents. It is truly interesting to see how, once again, our working world is looking to reshape our education world, just as it did during the last industrial revolution.

As we enter the digital age, an evolving human consciousness continues to change our beliefs. It is soft skills – our essence, the understanding of what is us, of what is human – that are more important than ever. Somewhere deep inside we all already know this, and the sooner we get back to basics and build our learning practices with humans in mind, the less we will have to worry about and the more we will have to look forward to in the years ahead.

Findings

Acts of sabotage have surprising longevity. Etymologically, the word derives from ‘sabots’, the heavy wooden shoes that French textile workers used to wear in the early nineteenth century; ‘saboter’, in this sense, means ‘to walk noisily’. The first material acts of sabotage date back to the Industrial Revolution, when the English Luddites, exasperated by their harsh economic conditions and falling wages, destroyed sixty textile machines at a factory in Nottingham. It was 11 March 1811, and shortly thereafter England was rocked by a wave of violence.

During World War II, sabotage became a subversive strategy for slowing down production in factories, offices, and logistics centers in enemy-held territory. In 1944, the Office of Strategic Services (OSS), the CIA’s wartime predecessor, secretly circulated a short book titled Simple Sabotage Field Manual, an enlightening read for anyone who wants to understand how the modern concept of leadership developed. Some of its instructions are pretty old-fashioned and boring, but a chapter titled Organizations and Conferences outlines the profile of the ‘worst possible leader’, who is busy throwing sand in the gears rather than increasing his company’s revenues.

 In brief, here’s what the “worst possible leader” should do:

  • Never permit short-cuts to be taken in order to expedite decisions.
  • Make “speeches.” Talk as frequently as possible and at great length. Illustrate your “points” by long anecdotes and accounts of personal experiences.
  • When possible, refer all matters to committees, for “further study and consideration”.
  • Bring up irrelevant issues as frequently as possible.
  • Refer back to matters decided upon at the last meeting and attempt to re-open the question of the advisability of that decision.
  • Advocate “caution.” Be “reasonable” and urge your fellow-conferees to be “reasonable”.

What is most surprising here is the match between the recommendations in the sabotage manual – declassified by the CIA in 2008 – and the conservative approach of many companies when faced with the numerous challenges posed by innovation (not just technological innovation). These are challenges that, in fact, require a perpetual adaptation of one’s boundaries in order to support change. Narcissism, organizational logorrhoea, bureaucracy, over-structured processes, inertia, lack of courage, poor empathy and inclusivity are diseases as widespread as the common cold, behaviors that threaten the evolution of work structures. One wonders what would happen if the CIA manual were proposed to a global consultancy firm under the title “How to become a great leader”. How many decision-makers would find in it a confirmation of their approach to organizational management? Self-sabotage as principled leadership.

The third shift: What comes after digital?

In the mid-1800s the development of analogue wave technology transformed the world. A series of discoveries set the stage for important advances in communication based on how we transmit energy, allowing engineers to develop modulating electrical signals that could be converted into information. While we can argue that the introduction of electricity made a huge difference in all aspects of life, it was this modulating analogue wave that gave rise to recorded sound and changed the world completely. The analogue age then ushered in radio and television, which changed the fabric and replicability of communication, giving rise to untold productivity gains, invention and general societal advancement. This development has been almost as profound as the Gutenberg press in terms of its impact on shared human knowledge.

By the 1940s analogue technologies had built the base platform for the digital world which later emerged, moving from a proportional wave to a discrete on/off signal, 1/0, which could be more quickly and reliably interpreted. In the words of Wikipedia, “these two types of signals are like different electronic languages; some electronics components are bi-lingual, others can only understand and speak one of the two.” From this simple on/off switching came the transistor, and the chip, and digital life – embodied today by the Internet and its roving enabler, mobile technology. To say that our modern world is built on this digital signal would not be an overstatement.

At this moment in history it is almost impossible to think beyond digital, to imagine a world where ‘digital’ is the foundation for something new, as analogue was the foundation for digital. Can digital be disrupted?

There are emerging signs that yes, digital can and will be disrupted; the question is just a matter of when. From a technical evolution perspective, it is possible that a pervasive new technology could arise built on digital, or via some insight it provides us, similar to the way digital itself arose from analogue. In this view we can identify the generational shift between two very particular mathematical constructs – the modulating wave for analogue, and the alternating pulse of 1/0 for digital.

It is too early to say where the dividing line to the next generation of pervasive technology will emerge, but the impact of such a shift is bound to be immense – rewriting everything. Just as record albums today exist on the fringe, and analogue has its place in history, so too will digital someday be obsolete – which means the Internet in its current form, blockchain, chips, IoT and hardware in general will also become outdated.

From today’s vantage point, two areas provide a glimmer of potential, but we do not yet know if they are the answer. The first, most linear solution, is quantum. The second, more radical, is consciousness itself.

On Quantum

Quantum offers good potential for the evolution beyond digital because it is not digital as we know it, yet it can be seen as a continuation of digital’s development. In the quantum world, we move from status to state. Our status in digital is known and present, alternating between the 1/0 pulses that make up our digital world. In quantum, the state becomes a potential manifested reality, defined by the probability of existence relative to the observed status of existence itself. In quantum states things both do and do not exist, and it is observation that tips the probabilities toward or away from manifestation. Instead of 1/0 we suddenly exist in a range between 1 and 0 (a little like analogue, but expressed differently), in which anything in between could exist depending on the state we observe. This has profound implications for technology and how we construct, because if one can learn to construct quantum states (which we nearly know how to do already), one can potentially form situations where things both do and do not exist depending on circumstance. This shift away from yes/no, 1/0 toward maybe/any is set to change what is actually created and the states by which we create. State is bigger than status, by at least as large a factor as digital has proven to be bigger than analogue.
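A toy simulation can make the status-versus-state distinction more tangible. The sketch below is illustrative only (a real quantum system is far richer than this): the “qubit” is just a pair of amplitudes, its value before measurement is only a probability, and the act of observing it collapses it to a definite 0 or 1.

```python
# Toy illustration (not a real quantum computer) of the shift from a fixed 1/0
# status to a probabilistic state: only measurement resolves the value.
import random


class ToyQubit:
    def __init__(self, amp0: float, amp1: float):
        norm = (amp0 ** 2 + amp1 ** 2) ** 0.5  # keep probabilities summing to 1
        self.amp0, self.amp1 = amp0 / norm, amp1 / norm

    def measure(self) -> int:
        """Observation collapses the state: afterwards it is definitely 0 or 1."""
        outcome = 0 if random.random() < self.amp0 ** 2 else 1
        self.amp0, self.amp1 = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
        return outcome


# An even superposition: before measurement the value is neither 0 nor 1.
q = ToyQubit(1.0, 1.0)
print(q.measure())  # 0 or 1, each with probability 0.5
print(q.measure())  # repeats the first outcome: the state has collapsed
```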

Beyond Quantum to Consciousness

From an even wilder perspective, managed consciousness itself could be the next big thing after digital. Consciousness is not widely understood beyond our own species, and even in humans it is understood poorly. We do not know what we do not know, but we do know some things. We know consciousness is connected to energy in the form of electricity, particularly at the atomic level, where it appears that consciousness is somehow linked to the density of electrical energy passing between atoms, cells, and structures. The more complicated the connections between these (biological) structures, the more likely some form of consciousness exists. This has implications for a kind of distributed consciousness that extends beyond humans to animals, to all biological life (at rapidly simplifying rates), and even to all matter. Hello, dumb rock.

In this ‘all matter consciousness’ scenario, all things would have some level of consciousness, though almost all of it would not be readily accessible to be understandable or even relatable at the human level. In such a world, connecting by electrical transference of atoms between systems would dictate the level of such consciousness. At some level, this could imply your chair can feel you, but not in the same way you can feel your chair. We are talking very abstract base consciousness here, far from our experience of consciousness in the human form.

But this implies that if we humans can begin to impact our consciousness by learning about how it arises and how it is formed, and then learn to adapt, strengthen, manage or otherwise shape our consciousness, we may eventually have a new method for conveyance beyond digital, and one that is potentially influenced by the quantum spectrum itself.

In the near term, the growing convergence between neuroscience and technology, biology and atomic energy transfer is like the late days of analogue, when we couldn’t quite see or imagine digital technology – let alone what it could unveil – but could sense something there. The wellness and monitored-self movements are poised to unlock more data about how we operate and who we are. Electrically monitored meditation demonstrates the ability of mental exercise to control the rates of measurable electricity moving in our bodies. Especially where these ebbs and flows can be measured to manipulate conscious states, we can now see the faintest outlines of what may be to come.

Life immediately after digital looks set to be ruled by quantum. If quantum is ruled by probability states and observation, then consciousness – and our ability to shape and form it – could be our next destination. The interim linkages of brain-neural interfaces and consciousness techniques applied to digital could be the bridge between the two. On the other side, the ability to shape and adapt consciousness to inform the quantum presents untold possibilities.

In the end, all progress is built on the thinking before it. Vladimir Vernadsky (1863-1945) and other scientific thinkers described ideas of biosphere, noosphere and zeitgeist as living holistic awareness, observing that ideas emerge because they can, because it is their time in history. Such emerging ideas seem to be gathering pace, slowly unveiling scope for radical new human possibilities which we never dreamed were possible only a short time ago.

Getting on the same wavelength

The German philosopher Immanuel Kant believed that education differs from training in that the former involves thinking whilst the latter, in his view, does not. A staunch advocate of public education, was Kant correct to draw this distinction? Are education and training not two sides of the same learning coin? Further still, how does one learn?

How the Brain Learns

The knowledge we now have about learning is mostly based on research on animals whose brains we can investigate directly to observe the traces of learning. A tiny worm, C. elegans, has only a few hundred neurons. Still, it has taught us a lot about learning. Because of its simplicity, we can track its entire learning process, and from this humble creature we have learned many complex things about how learning works.

What we have been able to observe is that every time the worm makes a choice, a feedback loop begins. If the action is perceived as positive (rewarding, pleasant, pain-reducing, etc.), then the pathway and connections that determined the action are strengthened. Otherwise, they are weakened.

A simple example of this would be placing your hand in a fire. Here, a negative result (pain) leads you to refrain from repeating that behavior in the future. This is something we learn quickly, yet there are other things, such as deciding between two meals we like almost equally at a restaurant, which we learn much more slowly. The connections are still reinforced, but it takes more time because both options have positive outcomes.

Overall, this form of learning is called reinforcement learning, and it is how much of the learning in the brain occurs. Why is it relevant to technology? Because in today’s world, technology tries to mimic the brain in order to achieve our ability to learn. The most well-known and common technology attempting to do so is machine learning (ML). Machine learning is effectively an implementation, in machines, of the same basic rule of learning. No hand-written instructions and rules. No step-by-step supervision. Instead of me teaching it how to identify cat videos, I show it millions of examples of videos with and without cats and hope that it will derive rules that enable it to classify any future video correctly, based on its similarity to the previous sets. In a world where data exists in abundance, we can do this and let computers do the heavy lifting.
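For readers who want to see this feedback loop in miniature, here is a small reinforcement-learning sketch (illustrative only; the two-option “world” and its reward probabilities are invented): actions that lead to reward are strengthened, actions that do not are weakened, and a preference emerges without any explicit rule being written down.

```python
# Minimal reinforcement-learning sketch of the feedback loop described above.
import random

values = {"left": 0.0, "right": 0.0}        # learned preference for each action
reward_prob = {"left": 0.2, "right": 0.8}   # hidden from the learner
learning_rate = 0.1

for _ in range(500):
    # Explore a little, otherwise pick the currently preferred action.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)

    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    # Strengthen or weaken the chosen pathway in proportion to the surprise.
    values[action] += learning_rate * (reward - values[action])

print(values)  # "right" ends up strongly preferred, without any explicit rule
```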

However, what is less known is that this technology is actually also teaching us a lot about how our own brains work. And the best way to explain this involves learning about chickens and sex.

Sex and Learning

When a chick is born, we have to wait for it to grow into an adult chicken before we can eat it. This costs time and money. If a farmer spends money raising a rooster rather than a hen, it is both money and time wasted, since hens, unlike roosters, can also lay eggs: you get more bang for your cluck, so to speak. Because of this dilemma, there is a process called sexing (note: not sexting) which is done just after hatching, in which workers determine the sex of each chick to avoid the above problem.

To find out if a baby chick is male or female, workers use their fast hands to check. However, there is an alternative to simply checking the chick’s genitals. Hold tight. As it turns out, if you hold a baby chick tight – as in squeeze the chick a little – it makes a little squeaking sound. This sound tells you whether it is male or female and is the fastest way to distinguish the two. If you take a bag of little chicks and show them to an expert sexer, they can do this sorting with no problem: they will be very fast and accurate.

Now, if we took two experts, we would expect them both to be able to tell us which chick is male and which is female. But if you asked them to explain the logic they used to determine the sex, you would more likely than not be surprised. While they agree on the outcome (nearly 99% agreement between two experts classifying chicks), they are likely not to agree on the rules they used. One would say it is mostly the length of the sound, or the pitch. The other may argue it was the duration and the vibrations in the sound. Many options. Very little agreement. Yet near-perfect synchronicity in successful classification.

Why? When training new workers on their first day on the job to acquire this poultry talent, you get them to squeeze the chick and guess whether it is male or female. If the worker makes a mistake, an expert taps them (for example) on the shoulder twice, without telling the worker why they were incorrect. The rules of the game are not explained at any point, and every time the new worker makes a mistake they are tapped this way. This may seem strange, but by the end of the day the person who began with no knowledge that same morning has become an expert at sexing chicks. The worker somehow learns to become an expert, but never understands how. Tomorrow they will already be 99% in agreement with their trainer. They will also have their own set of rules for how they do it. Those rules may well be very different from their trainer’s. But they work. This is basically how machine learning works: you do not explain the rules. Instead, you provide many examples and let the computer work out its own rules in order to obtain the correct answer.
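As a rough analogue of the tap on the shoulder (a hedged sketch; the “sound features” and the hidden rule below are invented for illustration), the tiny perceptron here is told only when it is wrong and nudges its internal weights accordingly. It ends up classifying accurately without ever being given the rules, and the rules it builds for itself need not match anyone else’s.

```python
# Learning from "taps on the shoulder": a perceptron that only hears when it is
# wrong, and never sees the underlying rule written down.
import random

random.seed(0)
weights = [0.0, 0.0]   # internal "rules" the learner builds for itself
bias = 0.0


def true_label(pitch: float, duration: float) -> int:
    # The hidden rule the trainer knows implicitly (purely illustrative).
    return 1 if pitch + 0.5 * duration > 1.0 else 0


def predict(pitch: float, duration: float) -> int:
    return 1 if weights[0] * pitch + weights[1] * duration + bias > 0 else 0


for _ in range(2000):
    pitch, duration = random.random() * 2, random.random() * 2
    guess, answer = predict(pitch, duration), true_label(pitch, duration)
    if guess != answer:                      # the "tap on the shoulder"
        direction = 1 if answer == 1 else -1
        weights[0] += direction * pitch      # nudge the internal rules...
        weights[1] += direction * duration
        bias += direction                    # ...without ever stating them

correct = sum(
    predict(p, d) == true_label(p, d)
    for p, d in [(random.random() * 2, random.random() * 2) for _ in range(1000)]
)
print(f"accuracy after training: {correct / 1000:.0%}")
```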

The incredible surprise is that we have recently learned to reverse engineer this process and bring the lessons back to our biological brains. In a field I would call “sensory addition”, we show that we can train humans to learn using only positive/negative feedback, without even trying to come up with rules. This eliminates the common need humans have to build a narrative, a story, around their choices: we solve complex problems using the power of the brain – specifically our senses – to find patterns and signals in complex data, without having a rationale for what we actually learned.

Learning With Your Feelings

To explain how this is done we will use another very intriguing example: an experiment where participants were asked to wear a vest that was fitted with motors that created pressure on the participants’ upper body. The vest delivered a specific pressure pattern in every trial. At the end of the brief tactile experience, the participant was asked to choose a pattern on a tablet placed in front of them: left or right. Participants had no idea why they were being asked this question, nor which direction would be correct. But they tried anyway because that was the experiment.

The participants picked a direction and were told whether it was correct – but never why. If they were right, they won a dollar. If they were wrong, they lost one. This went on for a while and, over time, participants got better and gave more correct answers. They started to find order and meaning in the patterns – an order that predicted the correct choice. Just as with the sexers, our participants came up with their own rules that worked. At the beginning of the experiment there were growing pains but, over time, the body became attuned to what was and was not correct, even while the conscious mind struggled to explain why.

So what is the catch here? What determined whether a participant was correct or not? The truth is that the participants were not just getting random patterns through their vest. They were actually connected to live S&P 500 market data, translated into a feeling on their body. They were, in fact, sensing the market, and their choices were actually buying and selling stocks. Within a short period, the participants were able to get to grips with the stock market – so much so that they performed much better with the vest than when they were given the data in the standard form: a screen with loads of stock tickers running quickly. In fact, some of them performed better than savvy brokers who sat in front of a Bloomberg screen and tried to analyze the same data in the standard format.

What this shows is that the sheer power of the brain to quickly identify patterns and make sense of them is at the core of learning. Instead of thinking about the market, our brains can learn to tap into varying forms of learning. Some are buried deep under the hood and are not fully accessible to us. Instead of “thinking about the market” we can instead begin “feeling the market”.

Once we realize this power of the brain – to find meaning in complex data through nuanced signals buried deep within – we can begin to do lots of fun things that benefit from this remarkable tool: tell whether a film is going to be good, for example (and outperform Hollywood executives in gambling on film successes based on common attributes), navigate the complex cockpit of a plane, or feel your car so you know whether it is running properly (something we are now implementing together with big car companies as a way to help race car drivers gain an advantage by ‘feeling’ the track, the car, the competition, the weather, etc.). Essentially, if you can feel it, you can tell immediately if something is not right.

Practically, the sky’s the limit when it comes to using sensory learning to solve analytical problems. We can use our senses for far more complex tasks than we used to think, sometimes more effectively than our deliberate cognitive abilities. This is good for both workers and businesses. Why? Because businesses increasingly use data analytics to solve complex problems, and that is why they so frequently turn to machines. By transforming data analytics into a sensory affair, we bring those capabilities back into human hands.

Aligning Teachers and Students to Transform Education

But our renewed understanding of how learning happens, and the variety of tools that can aid it, are not limited to analytics. The novel tools neuroscience now gives us can benefit learning and ‘meaning-creation’ across many other fields, and they could completely transform education. In the past, teaching was frequently done in a ‘one-to-many broadcast’ format: one person dictating information to many other people in a classroom. New teaching methods have not only failed to change this – they have amplified it. E-learning, podcasts, and other forms of online content delivery simply increase the number of listeners; they change nothing substantial in the way content is delivered or accessed.

This model is littered with problems. If, for example, I were to teach a class of 200 students, I would probably speak too fast for many, too slow for a few, and just right for some. Information would get lost within these disconnects or become harder for some people to grasp at different points in the lesson. The problem is that I can’t see where these gaps occur; there is currently no real-time feedback model available for teachers in a classroom. The current model relies heavily on teachers reading their audience, but with neuroscience they could do so far more accurately.

One way this can be done is by analyzing the professor’s brain activity and figuring out whether the students’ brains are aligned with it. Comparing the professor’s brain with those of the students can help us understand whether they match. By matching I mean whether or not they speak and understand the same language, idioms, metaphors and all the other nuances of communication. Do they communicate at the same bandwidth and speed, for instance? If we match students to professors through brain alignment, we can then tailor classes to enhance learning. These bespoke lessons would be based not on competency or age, but on different brain profiles, pairing classes with professors whose minds and ways of understanding align with one another. If we understand how learning takes place in the brain, we can start aligning the communication paths and optimize learning: not only making sure the brain finds meaning in patterns on its own – as we did with the vest – but actually making sure that the signal being sent (the content) is optimal for the receiving brain.

And, of course, once we have brain data from teachers and students, we can do more. We can also solve the delayed-feedback problem students face by decoding their neural signals instantly and feeding them back to the professor. A teacher would know immediately whether or not the lessons and ideas actually went into the students’ brains. If everyone’s memory is underperforming when a message is communicated, the teacher will be aware of it immediately and can try a different way of explaining the idea, perhaps using different language or a different example. If an idea has already landed in everyone’s brains, the teacher can move on. This allows teachers to spend exactly as much time on an idea as it needs, instead of dwelling on topics the students have already easily digested: making learning more efficient and allowing those in charge of education to identify the real problems within the system.

In our lab, we have also developed tools that show how engaging a topic is, not only whether it was understood. Using these tools, the professor can see which way of teaching is most engaging and adapt their behavior accordingly.

Ultimately, these are all variations of the same idea: learning is effectively transferring content to the brain in a way that allows the recipient to find meaning in it, i.e., planting it in ways that match their way of thinking. What we need to do is harness the brain’s mighty powers of encoding and channel content optimally, so that the brain can quickly find order in what is delivered.

What these technologies amount to is a measure we call cross-brain correlation: it allows us to see, in the classroom, what is interesting and understandable to students by checking whether their brains are responding in the same way or, more colloquially, ‘getting on the same wavelength’. To extend this to the realm of corporate learning, we can now look at different team members’ brains and match them accordingly, making sure that teams are aligned and think the same way – or, of course, the opposite, if that is what is desired. Outside of learning, this can be and is being used for other things, such as advertising, movies, and even the assessment of politicians’ speeches: whatever correlates across the most brains is what gets used. Thanks to neuroscience, we can find the best way to deliver ideas, match professors with students, and gain real-time feedback, leading to better value in the content delivered. In short, as we begin to understand better how the brain learns, we can make learning more efficient, which in turn leads to even more understanding.
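As a rough illustration of what a cross-brain correlation measure could look like in code, here is a minimal sketch that correlates a teacher’s neural time series with each student’s. The signal shapes and the use of a plain Pearson correlation are assumptions made for the example, not the lab’s actual pipeline.

```python
# Illustrative cross-brain correlation sketch: one Pearson r per student,
# measuring how closely each student's signal tracks the teacher's.
import numpy as np

def cross_brain_correlation(teacher: np.ndarray, students: np.ndarray) -> np.ndarray:
    """teacher: shape (T,); students: shape (N, T). Returns N correlations."""
    t = (teacher - teacher.mean()) / teacher.std()
    s = (students - students.mean(axis=1, keepdims=True)) / students.std(axis=1, keepdims=True)
    return (s @ t) / len(t)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = rng.standard_normal(1000)
    # two students loosely tracking the teacher, one not at all
    students = np.vstack([
        0.7 * teacher + 0.3 * rng.standard_normal(1000),
        0.4 * teacher + 0.6 * rng.standard_normal(1000),
        rng.standard_normal(1000),
    ])
    print(cross_brain_correlation(teacher, students).round(2))
```

In a classroom setting, a persistently low correlation for part of the audience would be the signal that the message is not landing for them.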

The Capacity to Learn

There is currently no evidence of a limit to the brain’s capacity for learning, so in theory there is no reason why a person should not be able to speak many more languages or remember many more ideas. The only limits we have found are a time limit (how many hours are you willing to give to learning new ideas versus using those you already have), an effort limit (how much energy do you have to give to learning), and… a boredom limit (ultimately, sitting in a room with a vest that delivers pressure and making choices is quite boring…). But right now, capacity-wise, we are underusing our resources.

This means that neuroscientists should perhaps start looking at ways to amplify learning during windows when time and energy are in abundance and boredom is not an issue. One of the windows we are exploring now is… sleep. What can we learn while we are sleeping?

As it turns out, when you go to sleep your brain is, well, awake. It actually works hard. It does a lot of things in preparation for the coming days: removing unnecessary leftovers from previous days, shuffling ideas, rethinking them. A lot. One of the things the brain does at certain moments in the night is strengthen the connections that were already made before you went to sleep.

However, even when we are sleeping, there are certain time frames when information can flow in: windows where external input can alter thinking, make the brain strengthen one memory at the expense of another, select topics to rehash or reverse. Recent studies on learning during sleep show that while we can hardly teach you new content while you are asleep, we can certainly make your brain get better at retaining things you learned during the day, when you were awake. The idea is to harness all of these ‘down’ moments to improve your knowledge. You read about the French Revolution while you are awake and take it in, and we will make sure that your brain does the heavy lifting of solidifying the connection that will make you know it tomorrow – while you sleep. Early studies in our lab, and others in progress, have shown remarkable results. Maybe soon enough you will be able to go to sleep and wake up knowing Kung-Fu!

In short, we are now learning more about learning than ever before. From the tiny C. elegans, which learns to find a drop of sugar in an environment full of distractions, to our complex brains learning complex ideas through the senses, engaging content, sophisticated teachers, or repeated effort – what is clear is that technology and neuroscience can help learners tap into their seemingly limitless capacity, help businesses improve their performance and analytics, and help all of us maximize our brains’ potential for finding meaning in information.

Tooso: Making e-commerce search great again

A conversation with Ciro Greco and Mattia Pavoni, founders of the startup Tooso, on why, when it comes to online business, you should really, really care about semantics.

Why is it that if I search for “sleeveless t-shirts” in a search bar, all I get is… t-shirts with sleeves?

To answer this question, we have to go back to the notion of meaning. We ourselves are not really sure what ‘meaning’ is, or what mental process our brain goes through to make sense of a sentence. It goes without saying that not having a clear idea of how meaning “happens” makes it quite hard to have machines understand us as naturally as we understand each other. That said, there are two main perspectives on the notion of meaning that govern how search engines work. The first can be labeled ‘full-text search’: it is about counting strings of characters in documents and records and ranking the fetched items in some order of presumed relevance. The second can be labeled ‘semantic search’: meaning, here, is treated as something that emerges from putting together different pieces of language and making sense of them as a whole.

Now, traditional search engines are mostly based on the full-text perspective of meaning. In order to make sense of our inputs, machines translate them into mathematical models that are not as sophisticated as natural languages. In simple terms, if you search for “t-shirt without sleeves” on a traditional search engine, it will take every single word individually and look for it in the corresponding e-commerce catalog index. Incidentally, the most important word of your search (“without”) is a so-called functional word and will be left out: since its meaning is built only in relation to the sentence it appears in, it doesn’t fit the statistical model of the engine and is therefore discarded. Finally, among the items tagged with “t-shirt” and “sleeves”, the engine will show you those with the most matches. The final outcome is exactly the opposite of what you were looking for. Old-fashioned engines are based on this logic. In the best-case scenario, they may have some AI components, possibly based on neural networks, to learn how to optimize certain patterns. The problem is that most of these approaches cannot really reason symbolically, so you lose the ability to handle certain facts about natural language in a principled way.
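A toy example makes this failure mode easy to see. The sketch below is not any particular engine, and the two-item catalog is invented: it drops “without” as a stopword and ranks items purely by how many remaining query terms they match, so the sleeved t-shirt wins.

```python
# Toy bag-of-words retrieval: why "t-shirt without sleeves" returns sleeved shirts.
STOPWORDS = {"without", "with", "a", "the", "for"}

CATALOG = [
    {"name": "Classic tee", "tags": {"t-shirt", "sleeves", "cotton"}},
    {"name": "Tank top",    "tags": {"t-shirt", "sleeveless"}},
]

def full_text_search(query: str):
    # The functional word "without" is discarded along with other stopwords.
    terms = {w for w in query.lower().split() if w not in STOPWORDS}
    # Rank items by how many query terms they share with the item tags.
    scored = [(len(terms & item["tags"]), item["name"]) for item in CATALOG]
    return sorted(scored, reverse=True)

print(full_text_search("t-shirt without sleeves"))
# [(2, 'Classic tee'), (1, 'Tank top')] -- the sleeved tee matches best.
```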

Is semantics the reason why, say, Siri correctly processes complex requests such as “Is there an open pizzeria near my current location?”, while most e-commerce websites can’t give relevant results to the “sleeveless t-shirt” query?

Possibly. I don’t know exactly what Siri does behind the curtain. But there is also another very important factor that can act as a game changer, and that is the amount of data one can process. Big Data is powerful, no doubt about that. The world is basically split in two. On the one hand, there are companies, usually tech giants, that have enough data (and will probably always have more) to fuel Big Data AI; Deep Learning applications are a great example of this. On the other, there are those who don’t, and probably never will. If it’s backed by Big Data, we can find a way to brute-force the optimization of a traditional search engine. The problem comes when businesses don’t have or don’t generate enough data: in this case, traditional search engines are hard to optimize without an enormous amount of manual, non-scalable work. And the truth is that most companies are in this category. A quick note: there are also many businesses that do have a lot of data but whose search engines are not as good as one might expect. So it is a very widespread problem.

How can we make search engines return relevant results for businesses that don’t have enough data?

Going back to the beginning, they might want to try a different approach, based not on full-text search but on the idea that we should model meaning explicitly. In other words, since Big Data is really not an immediately viable option for most companies, we need to find a way to have search engines understand and mimic the way human beings process meaning. And this is where formal semantics, the discipline that studies the rules by which the building blocks of our language are put together, comes in handy.

Let’s go back to the “sleeveless t-shirt” example and see why and how humans make sense of it. To put it simply, our mind has some sort of “internal map of the world” from which we retrieve the meanings of “t-shirt” and “sleeves”, and a grammar that tells us that the word “without” or the suffix “-less” switches the polarity of what it attaches to: “sleeveless” means something like ‘not sleeves’. This process is impossible for a traditional engine: we can make it understand that six words are different from five, but if it does not process the sentence as we do, making sense of the combination of words rather than operating on each word individually, it will never understand us.
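Here is a hedged sketch of that polarity-switching idea: a tiny, hand-written parser that turns “sleeveless t-shirt” or “t-shirt without sleeves” into a structured constraint instead of a bag of words. The mini-grammar and attribute names are illustrative assumptions, not Tooso’s code.

```python
# Illustrative negation-aware query parsing: "-less" and "without" flip polarity.
def parse_query(query: str) -> dict:
    tokens = query.lower().replace("-", " ").split()
    constraints = {"category": None, "sleeves": None}
    negate_next = False
    for tok in tokens:
        if tok == "without":
            negate_next = True                    # flips the next attribute
        elif tok == "sleeveless":
            constraints["sleeves"] = False        # the "-less" suffix flips polarity
        elif tok == "sleeves":
            constraints["sleeves"] = not negate_next
            negate_next = False
        elif tok in {"t", "shirt", "tshirt"}:
            constraints["category"] = "t-shirt"
    return constraints

print(parse_query("sleeveless t-shirt"))       # {'category': 't-shirt', 'sleeves': False}
print(parse_query("t-shirt without sleeves"))  # {'category': 't-shirt', 'sleeves': False}
```

Both phrasings land on the same structured meaning, which is exactly what the word-counting approach cannot do.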

How can you make a search engine understand this, and other complex queries, then?

You tap into formal and computational semantics. The engine we have built at Tooso is rooted in this concept: it is essentially a model of formal semantics joined with Machine Learning. What we do is take a piece of semi-structured data, like the product catalog of a retailer or a brand, and turn it into an ontology: a representation of a set of concepts within a domain, together with all the relationships between those concepts. Then, on top of this ontology, we use a Natural Language Processing (NLP) engine, so that the end user can make a query and expect the engine to grasp at least part of its meaning. At that point we can apply more traditional Machine Learning techniques like neural networks: we use them to personalize search results and improve the customer experience, but in our case they have little to do with figuring out the meaning of words.
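To illustrate the catalog-to-ontology step in the simplest possible terms (again as a sketch over invented data, not Tooso’s actual pipeline), the snippet below represents each product as a node with explicit attributes and answers a parsed query like the one from the previous sketch by checking those attributes rather than counting keyword matches.

```python
# Illustrative "catalog as concepts" lookup: products carry explicit attributes,
# and a parsed query is answered by satisfying constraints, not matching strings.
CATALOG_ROWS = [
    {"sku": "A1", "title": "Classic crew-neck tee", "attrs": {"category": "t-shirt", "sleeves": True}},
    {"sku": "B2", "title": "Ribbed tank top",       "attrs": {"category": "t-shirt", "sleeves": False}},
    {"sku": "C3", "title": "Linen shirt",           "attrs": {"category": "shirt",   "sleeves": True}},
]

def semantic_search(constraints: dict):
    """Return SKUs whose attributes satisfy every non-null constraint."""
    hits = []
    for row in CATALOG_ROWS:
        if all(row["attrs"].get(k) == v for k, v in constraints.items() if v is not None):
            hits.append(row["sku"])
    return hits

print(semantic_search({"category": "t-shirt", "sleeves": False}))  # ['B2']
```

Personalization and ranking could then be layered on top of these structured hits, which is where the more traditional Machine Learning comes in.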

What will be the next big AI leap?

AI is amazing, but de facto, as it works today, it is limited, because most people think AI is only about modeling predictions. The reason is that the biggest commercialization of AI began with the Deep Learning revolution. Don’t get me wrong: that was phenomenal. AI became so much more efficient at so many tasks. As a matter of fact, it is still what works best at the moment, especially when it comes to prediction problems. But there is much more to AI than that: its potential for representing and modeling concepts is still largely unexplored, and actually quite neglected. That is where AI’s next big leap can come from. And I personally follow very closely what is happening around Cambridge and Boston right now.

The Harvard Business Review recently stated that the future is going to be about less data, rather than more. Do you agree with that?

When it comes to Big Data, there are two trends. To a certain extent, of course, things won’t change: the more data the better, and tech giants have an unfair advantage there. But not all data can be harvested from the open world the way it is in B2C (business to consumer) scenarios. I see great opportunities in the B2B (business to business) space. Let’s say an enterprise wants to automate some of its internal processes: for instance, we want to automate the process by which an insurance company reimburses its clients, or automate the helpdesk of a company that has thousands of employees.

The company data we have access to for solving this kind of problem might not be enough, in terms of either quality or quantity, to apply techniques like Deep Learning, and external third-party data wouldn’t help. In these cases, the added value can come from learning algorithms that make the most of a smaller pool of data points. In this sense, yes: the future will (also) be about less data. To put it another way, data can be knowledge, something we build on exponentially, or it can be rubbish that we just keep accumulating. The question is: how many, and what kind of, levels of abstraction do we need to make use of it? There are bottom-up techniques and top-down approaches. At Tooso we strongly believe that we should do our best to get the best of both worlds.
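As a small, hypothetical illustration of the “less data” point, the sketch below trains a ticket-routing model on just six labeled helpdesk tickets using TF-IDF features and logistic regression. The tickets, labels, and categories are invented for the example; it is only meant to show that simple learners can extract value from very small internal datasets.

```python
# Illustrative small-data routing: six labeled tickets, simple features, simple model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tickets = [
    "cannot log in to the intranet",        "password reset link expired",
    "expense reimbursement still pending",  "how do I claim travel expenses",
    "laptop screen is flickering",          "need a new keyboard",
]
labels = ["it-access", "it-access", "finance", "finance", "hardware", "hardware"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tickets)                 # bag-of-words features
model = LogisticRegression(max_iter=1000).fit(X, labels)

new_ticket = ["my reimbursement has not arrived"]
print(model.predict(vectorizer.transform(new_ticket)))  # likely ['finance']
```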