Mobility is not a one-way street

The need for collaboration across all industries is on the rise. One of the clearest examples of an industry with collaboration opportunities is the car industry. Today, automotive incumbents have widely accepted that collaboration with innovative companies in other business areas, such as software, is necessary for future success. Yet this process is a two-way street, and the cross-pollination of ideas between all parties is paramount to their mutual success.

It has been undoubtedly positive for the traditional car industry to have newcomers challenging its business. By shaking up the industry, they have pushed traditional car makers to welcome more radical innovation than in the past. Where we once typically saw linear improvements, today we are seeing significant, exponential innovation.

It is important for these players to take these challengers seriously. Many still do not entirely understand how disruptive this shift may be. However, the newer players are also on track to be surprised by how much work lies ahead. A worthwhile car is not that easy to make. And producing such a quality capital good in high volumes, whilst meeting all of the necessary durability and safety requirements, is extremely complex. Many of the new players will begin to see some challenges, especially in terms of quality, that they have not really factored in yet.

A car needs to operate under extreme conditions. Whether in 40-degree heat or 20 degrees below freezing, in heavy rain or in dry snow, a car needs to perform reliably and under a lot of strain. This is not at all easy, and there are still significant hardware-based elements in mobility today – the car itself, the fulfilment part of mobility, is something that requires a great deal of experience to build. After all, lives may depend on it.

So although the pendulum initially swung a long way towards data companies, we see it swinging back towards the car manufacturers – also putting them in a strong position going forward as partners in the mobility ecosystem, as new players struggle to build a sustainable business.

Collaboration is key even in the broadest sense, and a key factor in increasing adoption rates is the creation of industry-standard interfaces. Now is the moment for the mobility industry to define such standards in order to remove hurdles to innovation – we can all recall the nightmare of incompatible power adapters for mobile devices until the field had been whittled down to two: Apple’s connector and USB. In electric mobility, for instance, the Combined Charging System (CCS) is such a standard interface, facilitating more rapid adoption of and investment in infrastructure without the need to carry a car boot full of adapters.

Finally, there are partnerships. I consider them to be increasingly important, as there is a lot of innovation and expertise spread across different entities. We are bringing together data-driven companies with engineering-driven companies, and their respective skill sets are very different. Yet both are essential for new products and services – so we should see many more partnerships emerging in this ecosystem. One example of this changing face of mobility is HERE, the open location platform acquired by Audi, BMW and Daimler that also lists Intel, Bosch and Continental as its shareholders.

In short, the story of mobility today isn’t the collapse of the old and the rise of the new – it’s both. The faster both parties understand this, the better able they will be to navigate the untraveled roads ahead.

How Jack Ma is changing the Chinese retail game

Many people speculate about the future, but only a few have more than an inkling of what it is going to look like. Even fewer are those who are shaping it. Alibaba founder Jack Ma is indeed sculpting the world of tomorrow and is better equipped than most to know what the future has in store. More importantly, this is a man who knows what is going to happen in-store.

The Chinese billionaire has spent recent years working on his “New Retail” concept – which aims to revolutionise the way we buy things. The English lecturer turned internet tycoon has already proven he does not make plans haphazardly, yet when he announced that e-commerce giant Alibaba was ready to bet on the traditional brick and mortar concept, several eyebrows were raised.

To many analysts, this sounded insane. Why did the brain behind the behemoth that was devouring the retail sector want to jump right into it? The answer, as it often is, is money: the Chinese retail market is currently valued at $4.9 trillion. Online shopping is growing, but an astonishing 82% of total retail sales still come from physical stores.

However, Mr Ma isn’t only trying to lay his hands on a bigger slice of the cake – he’s going after the entire bakery. In January 2016, he opened the first Hema Xiansheng, a supermarket where the future is already happening. The store is a retail experiment which innovates the concept of a physical store in many ways, and wherein technology plays a pivotal role.

In this smartphone-powered store, customers gain access to a world of new activities once they have downloaded the Hema app. They can scan groceries, fresh fruit and foodstuffs from all over the world, revealing information on what they may go on to purchase – from the origins of products to suggestions for similar or related items. Customers can then add the products to a virtual basket and pay directly, as the app can be linked to both Alipay and Taobao accounts.
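The scan-to-basket-to-pay flow could be sketched as a simple session model. Everything below – the product data, barcodes, method names and wallet structure – is invented for illustration; it is not Alibaba's actual API:

```python
# Illustrative sketch of a scan-to-pay flow like the one the Hema app offers.
# All product data and interfaces are hypothetical.

PRODUCT_CATALOGUE = {
    "8901": {"name": "King crab", "origin": "Russia", "price": 24.90,
             "related": ["Boston lobster"]},
    "4402": {"name": "Mango", "origin": "Thailand", "price": 1.80,
             "related": ["Papaya"]},
}

class ShoppingSession:
    def __init__(self, linked_wallet):
        self.wallet = linked_wallet          # e.g. a linked Alipay account
        self.basket = []

    def scan(self, barcode):
        """Scanning a barcode reveals origin, price and related items."""
        return PRODUCT_CATALOGUE[barcode]

    def add_to_basket(self, barcode):
        self.basket.append(barcode)

    def checkout(self):
        """Pay through the linked wallet; returns the amount charged."""
        total = sum(PRODUCT_CATALOGUE[b]["price"] for b in self.basket)
        self.wallet["balance"] -= total
        return round(total, 2)

session = ShoppingSession({"balance": 100.0})
info = session.scan("8901")        # origin, price, related items
session.add_to_basket("8901")
session.add_to_basket("4402")
paid = session.checkout()          # charged to the linked wallet
```

The point of the sketch is the coupling: because scanning, the basket and the wallet all live in one app, the store learns exactly what each customer looked at, bought and paid.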

In-store, there are no heavy baskets to carry: buyers can have their food prepared specifically for pickup, opt for home delivery, or even ask for their food to be cooked in one of the in-store food booths (which pay rent plus a 20% commission to the store). Facial recognition software can also be used to analyse customer reactions to certain products.

Where ordinary supermarkets are designed to maximize the time spent by clients in-store, exposing them to as many items as possible, Hema stores turn this idea on its head. Instead, the stores aim to shorten time spent and speed up the entire process between when an item is ordered and when it is handed over to the buyer.

Recently, Hema started experimenting with a 24-hour delivery service to capitalise on the overnight online shopping phenomenon. Through its Taobao and Tmall platforms, Alibaba Group discovered that around 80 million people, mostly women over 30, habitually visit e-commerce sites between midnight and 4 a.m.

In short, Alibaba has found a way to ensure an enjoyable and personalized experience which seamlessly merges the best aspects of both online and offline shopping. People have the online advantage of immediacy, information and personalized proposals together with traditional, more tangible gains such as touching, smelling and examining the product they are going to buy. With the inclusion of food booths, Hema turned the routine chore of buying food into more of a social experience that can lead to a lunch or dinner with friends.

Every store is both its own warehouse and logistics centre. Additionally, stores are also designed to host events that facilitate interaction with customers, recruit fans, and work as spaces for dining events too – a system fuelled by a small army of item pickers, packagers and couriers.

Alibaba is continuing to invest enormous amounts of financial resources into brick and mortar retail. The giant has bought stakes in companies including Sanjiang Shopping Club Co. Ltd (one of the main supermarket operators in the Zhejiang Province) and developed strategic partnerships such as the one it has now with Shanghai Bailian Group Co. Ltd, one of the largest retailers in China.

In January 2017, it announced it was going to become Intime Retail Group’s controlling shareholder through a $2.8 billion acquisition and, later that year in November 2017, Alibaba acquired a 36.16% share in Sun Art Retail Group Ltd (a $2.9 billion deal) – the country’s largest hypermarket chain, which operates around 446 complexes in 224 cities under the Auchan and RT-Mart banners.

The Chinese e-commerce giant has paid attention to mom-and-pop stores too, which are usually family-run neighbourhood businesses. They are small but they are many, with around six million in China alone. These stores were targeted by the Ling Shou Tong (Retail Integrated) programme, which aims to reinvent cramped and outdated stores across China. Owners who decide to embrace the change are aided by a consultant who helps modernise signage and interiors. Most importantly, the programme integrates the store into the network – once a store’s inventory is digitised, the brick and mortar venue becomes a terminal of a centralised warehousing and logistics system.

The proprietors need not worry about negotiating with multiple distributors, nor keep records of which items have to be ordered. The Ling Shou Tong app provides them with real-time, detailed information about their clients’ preferences, showing trends and helping them to order the right products, at the right time, in the right quantities.
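The kind of reorder suggestion such a system might surface can be sketched in a few lines. The logic below (cover expected demand over the resupply lead time, plus a safety margin) is a standard inventory heuristic, not Ling Shou Tong's actual algorithm, and all figures are invented:

```python
# Hypothetical sketch of an automated reorder suggestion: order enough to
# cover expected demand over the lead time, plus a safety margin.

def suggest_order(daily_sales, stock_on_hand, lead_time_days=2, safety_factor=1.5):
    """Return how many units to order, based on recent daily sales."""
    avg_daily = sum(daily_sales) / len(daily_sales)
    target = avg_daily * lead_time_days * safety_factor
    return max(0, round(target - stock_on_hand))

# A fast-moving item with low stock triggers a large order...
fast_mover = suggest_order(daily_sales=[30, 34, 32], stock_on_hand=10)

# ...while a well-stocked slow mover triggers none.
slow_mover = suggest_order(daily_sales=[2, 1, 3], stock_on_hand=20)
```

The value for the shopkeeper is that decisions like these, previously made by haggling with distributors, fall out of the digitised sales data automatically.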

Ma and his team are building something which is much more than simply an omnichannel. They are creating an entire ecosystem which centers on Alibaba’s apps and platforms. The Chinese colossus has developed a strategy to not only win the battle for both online and offline, but also to fuse them together in a unique environment. Instead of chasing users once they are online, it is connecting them to the offline world.

Data makes the difference. The more customers use Alibaba’s apps or buy things inside its physical stores, the more of their data – the most valuable asset in today’s business world – is sourced. This continuous gathering of data helps it to understand the hearts and minds of its customers.

In just over two years, 46 Hema stores have opened in 13 Chinese cities. Mr Ma, however, aims to increase this number to 2,000 over the next five years. It may seem, then, that Jack Ma is on his way to consolidating his monopoly further, but there are others on his heels. His main rival, Jeff Bezos, the man behind e-commerce giant Amazon, currently commands a market cap almost twice as big as Alibaba’s. By acquiring the supermarket chain Whole Foods, a world leader in natural and organic food, Amazon has mimicked its Chinese competitor – but it still lags behind Alibaba when it comes to integrating brick and mortar venues into its own ecosystem.

Alibaba must also be wary of competition in China itself., another e-commerce company, which owns the largest drone delivery system, has started investing in brick and mortar stores – the competitor has now launched its high-tech supermarket 7Fresh and developed an array of partnerships with more traditional players.

In such a rich market, competition is always high, but Jack Ma has a notable advantage: he didn’t stumble into a new market, he created it from scratch and encapsulated it within his own philosophy – New Retail. The new retail landscape in China is dominated almost entirely by Alibaba. From logistics to warehousing, inventory and distribution – it doesn’t matter how big your brand is: if you want to succeed in retail in China, it would be wise to get along well with Mr Jack Ma.

The future of voice

Since the invention of the computer, human-machine interaction has always been conveyed through the physical: in the beginning, the message was transmitted by a piece of hardware (a mouse, keyboard or joystick). Then, when the smartphone revolution arrived circa 2007, touchscreen technology shifted interaction further towards touch. This is an unnatural form of interaction – an artificial, univocal language designed for a single purpose.

However, this physical barrier that translates human language into an input that machines understand has a sell-by date, and a sudden change is rapidly underway. Voice Assistance, a technology that makes sci-fi human-to-robot conversations real, is the interface of the future.

As we all know, Artificial Intelligence has made great leaps in recent years, opening up possibilities that were inconceivable just a decade ago. Voice assistance is among the applications that have benefited most from this exponential growth. A voice assistant is a software agent that combines machine learning with voice recognition technology to perform tasks when activated by speech. Without machine learning, the development of a speech recognition engine is almost impossible.

However, thanks to these developments, today it is possible – and it is causing a shift in the way we interact with machines. Before, the burden of communication was on us: we had to learn how to interact with machines. Now it’s the other way around, and it is machines that must learn human language.

Apple’s Siri, released in 2011, was the first voice assistant software to enter the market. Afterwards, this market grew so large that it caught the attention of all the other major tech companies, who consequently invested in building their own voice software (Alexa by Amazon, Cortana by Microsoft and Google Assistant by Google), all of which have since been implemented into smart speakers as well as smartphones.

Smart speakers, for example, are becoming very common in American and European households. Research suggests that 20% of the US population has access to one. Amazon’s Echo and Echo Dot are stand-alone, mains-powered, Wi-Fi-enabled speakers mainly intended for domestic use. It may be the naturalness of voice that has driven the adoption of voice assistants – along with the need to control our devices hands-free. In any case, we have already witnessed a ‘hyper-adoption’ of voice assistants. This is what my partners and I anticipated when we founded Future Of Voice, a company that develops voice interfaces and works with tech giants such as Amazon, Google and Microsoft.

There are murmurs in the news about concerns that these devices are always switched on. However, this does not mean that they’re “listening” at any given time. They are activated by voice and only begin to send recorded audio to the cloud once you utter a preset wake word such as “Alexa” or “Hey, Google”. Otherwise, the audio file of the last few seconds is automatically deleted. At a moment in the digital era when privacy issues are a growing public concern, this is an image problem that needs to be dealt with. Tech giants will do all they can to ensure user privacy; if they do not, they will see their voice assistants silenced for good.
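That wake-word behaviour can be sketched as a short rolling buffer that discards old audio unless a wake word arrives. The buffer size, wake words and text "chunks" standing in for audio are simplifications invented for illustration:

```python
# Illustrative sketch of wake-word handling: audio sits in a short rolling
# buffer and is only sent to the cloud once a wake word is detected;
# otherwise older audio silently falls out of the buffer and is lost.
from collections import deque

WAKE_WORDS = {"alexa", "hey google"}

class WakeWordListener:
    def __init__(self, buffer_seconds=3):
        self.buffer = deque(maxlen=buffer_seconds)  # one chunk per second
        self.uploaded = []                          # audio that left the device

    def hear(self, chunk):
        self.buffer.append(chunk)
        if chunk.lower() in WAKE_WORDS:
            # Only now does any audio leave the device.
            self.uploaded.extend(self.buffer)
            self.buffer.clear()

listener = WakeWordListener()
for chunk in ["nice weather", "isn't it", "dinner at eight", "see you then"]:
    listener.hear(chunk)

before_wake = list(listener.uploaded)   # nothing has been uploaded yet
listener.hear("Alexa")
after_wake = list(listener.uploaded)    # only the last few seconds are streamed
```

The dinner conversation never reaches the cloud: by the time the wake word arrives, only the most recent seconds remain in the local buffer.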

Will voice assistants change the way we interact with each other? This is currently too difficult a question to answer. One point of interest we have noticed is that this software gives better feedback when addressed “in a rude way” (that is, more directly). Of course, nobody wants their children to be rude, but currently voice assistants do not require any “please” or “thank you” – it isn’t essential information that affects their understanding. This is just food for thought, and as this technology is in its infancy we do not yet fully know what users might want from it in the future.

Even though there is still room for improvement, especially when it comes to contextual understanding, Voice Assistance technology is undergoing a “silent revolution”. This doesn’t mean screens will completely disappear, at least not in the next decade. Graphical interfaces will certainly still be present as sight is just as important as voice when it comes to humans interacting with the world. But I am positive that as the argument for this new interface gains momentum, eventually it will become the only voice in the room.

Healthcare and AI should be a force for public good

What will the future of healthcare look like? Bart De Witte believes it will centre on the symbiosis of machine algorithms and human decision makers – emboldening experts supported by artificial intelligence. However, with 70% of the world’s population still without access to healthcare, how will these new applications of technology serve the public good?

What is the AI black box problem?

Four years ago, 18-year-old Brisha Borden and a friend spotted an unlocked bicycle in the street on their way to pick up a relative from school. They were late and, as teenagers often do, acted on impulse, jumping on the bicycle and cycling down the street. Immediately, a woman came running down the road and informed the two girls that the bike belonged to her children. The girls returned the bicycle, but it was too late – the police had been called, and the girls were arrested and charged with burglary and petty theft for items worth a total of $80.

A year before, one Vernon Prater, 41 years old at the time, was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store. This was not Prater’s first rodeo: he had already been convicted of armed robbery and attempted armed robbery (for which he served five years in prison) in addition to another armed robbery charge in the past. Borden had a record, too, but it was for misdemeanours committed when she was a juvenile.

You may have heard this story before, as it was brought to light by ProPublica’s award-winning report Machine Bias back in 2016. If you haven’t, you are probably wondering why these cases are worthy of a mention. What’s so odd about being arrested for criminal activity, as unfortunate and ambiguous as the circumstances may be?

The reason is that this story has a strange conclusion with sinister consequences. When the two were in jail, a computer programme was tasked with predicting the likelihood of each committing crimes in the future. The result? Borden, who is black, was rated as high risk. Prater, who is white, was rated as low risk. Two years later, Prater was serving an eight-year prison sentence for breaking into a warehouse and stealing thousands of dollars’ worth of electronics. Borden, on the other hand, committed no more crimes. The computer programme, and in particular its algorithms, had gotten it completely wrong.

When one makes a wrong decision in life, or perhaps comes to the wrong conclusion, more often than not, through self-reflection and reevaluation, the individual concerned can reassess what went wrong, learn how to improve and grow as a person. This is not the case with many of today’s algorithms: they act as a ‘black box’, shut away from the world, wherein we can only assess the outputs of the inputs we feed them, with no explanation given as to why they act the way they do.

Today, we put a lot of faith in these faceless algorithms. We may not be able to see them, but we know they are there and, most importantly, we believe them to be forces for good. Automated algorithms show us the most relevant products to our interests, guide us through cities, power the searches that answer our queries and even determine where we deploy our police forces. However, our deal, based on good faith, is beginning to look more and more Faustian by the day as we surrender more and more of our data to entities which may be less enlightened than we once assumed.

Back in 2015, software engineer Jacky Alciné pointed out that the image recognition algorithms in Google Photos were classifying his black friends as “gorillas.” Three years later in 2018, Google ‘fixed’ its racist algorithm by removing gorillas completely from its image-labelling tech. This may, of course, be down to Google not putting any resources into fixing this, but when dealing with such a highly sensitive area such as race, and with as progressive a company as Google, it seems reasonable to believe that the reason they did not solve the problem was because they were unable to understand why the problem was occurring in the first place.

Algorithms, then, run the risk of reinvigorating historical discrimination, encoding and reinforcing it once more into our societies. The fact that Google, seen as a forerunner in the AI sphere, cannot overcome such a problem illustrates the deep complexity of machine learning and how little we understand it. There is a saying in the coding world: garbage in, garbage out. If you input bias, even unconsciously, you will output bias at the other end.
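A toy example makes "bias in, bias out" concrete. The model below does nothing but learn label frequencies from invented, historically biased records; it faithfully reproduces the bias even though group membership says nothing about actual behaviour. This is a deliberately minimal sketch, not how COMPAS or any real risk tool works:

```python
# Toy demonstration of "bias in, bias out": a frequency model trained on
# biased historical labels reproduces that bias. All data is invented.
from collections import defaultdict

# Historical records: (group, was_labelled_high_risk). The labels encode
# past human bias, not real reoffending rates.
biased_history = ([("A", True)] * 80 + [("A", False)] * 20
                  + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    counts = defaultdict(lambda: [0, 0])   # group -> [high_risk_count, total]
    for group, high_risk in records:
        counts[group][0] += high_risk
        counts[group][1] += 1
    return {g: hr / total for g, (hr, total) in counts.items()}

model = train(biased_history)

# Two people with identical behaviour get very different risk scores,
# purely because of the group label in the training data.
risk_a = model["A"]
risk_b = model["B"]
```

No line of this code mentions race, intent or prejudice, yet the output is discriminatory: the bias lives entirely in the training data, which is exactly why it is so hard to spot from the outside of a black box.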

Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, quoted in Ed Finn’s What Algorithms Want, states that “the automated systems claim to evaluate all the individuals in the same way, avoiding discrimination”, yet “prejudices and human values are incorporated into every comma of the development phase. Computerization can simply transfer discrimination further upstream”. So what can we do to mitigate these malgorithms? How can we shine a light into these black boxes and reveal their secrets?

Currently, it is almost impossible to determine whether or not an algorithm is fair, as in many cases they are simply too complex to fathom. Furthermore, they are often considered proprietary information, with laws in place that protect their owners from having to share the intricacies of the programmes they use.

In 2016, Wisconsin’s highest court denied a man’s request to review the inner workings of COMPAS, a law enforcement algorithm. The man in question, Eric L. Loomis, was sentenced to six years in prison after being deemed high-risk by the algorithm. Loomis contended that his right to due process was violated by the judge’s reliance on an opaque algorithm. In an attempt to understand how states use scoring in their criminal justice systems, two law professors probed the algorithms for a year, only to discover that this information is well hidden behind staunch nondisclosure agreements.

But there is hope: a team of international researchers recently taught an AI to justify its reasoning and point to evidence when it makes a decision. This form of AI is able to describe, through text, the reasoning behind its conclusions, and is one of the few developments in the progress of ‘Explainable AI’. According to the team’s recently published white paper, this is the first time an AI system has been created that can explain itself in two different ways. The model is the first “to be capable of providing natural language justifications of decisions as well as pointing to the evidence in an image.”

The researchers developed the AI to answer plain-language queries about images. It can answer questions about objects and actions in a given scene, providing answers that require the intelligence of a nine-year-old child. It doesn’t always get the answers right (it mistook someone vacuum-cleaning a room for painting one, for example), but that is precisely why this development is important: it gives us a glimpse of, and insight into, why it got the question wrong.

It is not only in the lab where there is growing concern about the unintended consequences of AI. Last April, Harvard Kennedy School’s Belfer Center for Science and International Affairs and Bank of America announced the formation of The Council on the Responsible Use of Artificial Intelligence. A new effort to address critical questions surrounding this far-reaching and rapidly evolving application of data and technology, the Council focuses on issues ranging from privacy, the workforce, rights, justice and equality to transparency.

“It is difficult to overstate AI’s potential impact on society,” said Ash Carter, Director of the Belfer Center and former Secretary of Defense (2015-2017). “The Council will leverage Harvard’s unmatched convening power to help ensure that this impact is overwhelmingly on the side of public good.” As more and more money pours into AI, it is paramount that we understand the processes behind the processes, and build awareness of the risks misunderstood algorithms bring.

Even so, until this technology is finessed, many will continue to find their lives determined by the damning sentences and evaluations these black box algorithms inflict upon them. As we have seen, it is almost impossible for citizens to bring a case against the algorithms’ creators, and these unseen forces continue to remain unaccountable for the consequences of their actions. Algorithms are certainly one of the keys to the world of tomorrow, but what that world looks like, and what values it is built upon, remains to be seen.

How the future of mobility will take many forms

Until recently, the transportation industry had rarely been exposed to major change, opting instead for smaller tweaks and enhancements focussed on 20th-century notions of what a car should be. Now, however, it is undergoing deep disruption, becoming the testing ground for different technologies and much broader approaches.

The transport industry then, is an extremely competitive one, with both startups and traditional companies fighting to be the main protagonists in this new chapter of an old story.

Today, owning a car is normal and, at least in many countries, more of a necessity than a luxury: in the future, it won’t be like this. As technology shifts its focus away from customised vehicles, the private automotive sector will become more and more conservative: cars will become a status symbol concerned primarily with design and the driving experience, an object for enthusiasts – much as horse riding is viewed today. At the same time, the sharing economy will continue to gain a foothold in the market – making public transportation more sustainable, efficient and low-cost.

Contrary to the views of many, autonomous driving will not be the sole driving force in the mobility sector. Instead, it is being shaped today by three main dynamics: electric propulsion, usage models inspired by the sharing economy, and self-driving technology. However, there is a fourth force which is going to have a huge impact on the field: modularity. As the mobility of people becomes more and more fluid, this characteristic should apply to vehicles as well. Next Future Transportation, the company I founded in 2014 after studying Physics and Innovation Design, is based on this vision.

The modules can drive autonomously on regular roads, join together and detach even while in motion; when joined, the doors between modules fold away.

Imagine a car or a bus that is ‘composable’: made of self-driving, electric modules that attach to each other and can freely assemble and disassemble with the help of a robotic arm and an optical alignment system. Each of the modules we’re working on is made of aluminium, is two and a half metres wide and can transport up to ten people as well as goods. Passengers can also redistribute themselves between vehicles according to their destinations and needs, reducing traffic congestion, fuel consumption, commute times and running costs. The modules would not only function as a means of transport: they could offer “services” too, such as toilets, shops and restaurants.

The ‘modular’ system is completely scalable and versatile and, even though it can be applied to private use, would find its best applications in the public transport sector, with different opportunities shaping up from market to market. In the Middle East, for example, an area we are currently working in, difficult climatic conditions push people to use taxis rather than public transport: Next vehicles could therefore find their best application as a shared taxi that goes where needed, combining with more modules when necessary – all without any form of human driver.

In the same area, there are many ongoing projects to create new cities from scratch: these can be designed and built to match the needs of this new ‘modular’ system. A public transport bus could, for example, be provided with more modules at peak hours, with under-used modules returning to the main station, thereby saving space and energy.
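That peak-hour logic reduces to a small capacity calculation. The sketch below is hypothetical – the module capacity, demand figures and function names are invented for illustration, not Next's actual scheduling software:

```python
# Hypothetical sketch of modular capacity rebalancing: keep enough
# ten-person modules to meet expected demand, send the surplus to the depot.
import math

MODULE_CAPACITY = 10  # passengers per module (assumed figure)

def modules_needed(expected_passengers):
    """Smallest number of modules that covers expected demand."""
    return max(1, math.ceil(expected_passengers / MODULE_CAPACITY))

def rebalance(current_modules, expected_passengers):
    """Return (modules kept on the route, modules sent back to the depot)."""
    needed = modules_needed(expected_passengers)
    if needed >= current_modules:
        return current_modules, 0          # keep everything (or request more)
    return needed, current_modules - needed

peak = modules_needed(87)            # morning rush: 87 expected passengers
kept, returned = rebalance(9, 23)    # off-peak: demand drops to 23
```

Because capacity is added and removed in ten-person increments rather than whole buses, the fleet tracks demand far more closely than a fixed timetable can.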

An application in Italy, by contrast, wouldn’t be so easy: it’s a country with completely different infrastructure issues – just think of the way its cities are shaped, wherein cars and parking lots are a big component. Including a modular system in the public services of the entangled structure characteristic of many Italian cities is complex, but still possible in the future. Here, we could seal retail deals and partnerships within the tourism sector, using Next vehicles as alternatives to, say, trains, especially in the many under-connected areas of the peninsula.

Business-wise, that is how I envision the future of mobility: a dance between traditional players, who will continue to make the vehicles, and innovative companies such as Google, who will provide the technological solutions to integrate into them. Volvo will provide the vehicles, and Google will sell a kit to make them self-driving, for example. This is also our company’s aim: to integrate the modular system into the pre-existing public transportation space. In the long run, this means that the automakers’ strategy will increasingly be to work on acquisitions and partnerships with startups and innovative companies, combining the expertise of both.

We’re all wondering when the “year zero” of self-driving cars will be: we’ll probably have to wait less time than we expect, as what typically follows digital exponential technologies is sudden adoption on a massive scale. Once this happens, an integrative system will let us accelerate the process by applying our idea of modularity to the shared mobility world of tomorrow. This, I believe, is what will come Next.

R+ is the future of architecture

Virtual Reality has long been perceived as a deluxe toy for avid gamers who want to cross the line between “living the game” and “living in the game”. Numbers never lie, and according to a March 2018 study from the international law firm Perkins Coie, the 2018 Augmented and Virtual Reality Survey Report, the gaming industry will continue to absorb the majority of AR/VR-related investment for the near future. Its dominance, however, is weakening, as the attractiveness of other industries – such as Education, Healthcare and Medical Services, and Real Estate – grows rapidly. Yet there is also another promising and partially unexplored area for both tech companies and investors interested in R+ technologies: AEC (Architecture, Engineering and Construction).

The numbers reveal that, in 2016, 78% of all the money poured into the R+ sector was related to the gaming industry. Two years later, it accounted for only 59% of invested resources, whilst Real Estate (a subsector of AEC) grew from 18% to 21%. This slice will become bigger in the coming years as R+ revolutionises the AEC world. Today, we are able to re-enact one of Mary Poppins’ best tricks and step straight into a drawing. R+ will breathe life into a 2D plan and make possible things which have so far existed solely in our imaginations.

VR can now provide a real preview of a project well before a single wall is erected or the foundations are laid. These are not necessarily just 3D scale models: they can also be holographic representations that create an immersive experience, allowing architects and designers to see the project from within with the highest degree of realism. One can appreciate the decor’s elegance, observe the curtains’ motif, wander (and wonder) through rooms, peer out of an upper-storey window and enjoy the landscape from the rooftop terrace of a building that doesn’t exist.

It’s a turning point for everyone involved, from architects to engineers, designers to buyers. Some gains are pretty obvious. For instance, communication of the overarching vision is sped up. When it comes to architecture, design and construction, it’s not always easy to convey such a vision. Plans and drawings may be perceived differently by different viewers; sometimes they may say nothing at all. But by bringing these people into the same virtual space, this problem is solved.

The technological revolution is also pushing architecture firms to create roles that didn’t exist before. As ArchDaily reported, until a few years ago it would have been unthinkable for an architecture company to search for Chief Technology Officers, Immersive Reality Modelers, Virtual Simulation Designers, Haptic Interface Designers and Data Scientists. Now they do, and they are even in the market for Digital Development Managers and Building Information Modeling Directors as well.

In fact, to make a 2D plan become a 3D space that can be explored, an impressive amount of data collection and programming is needed. It’s not just a matter of choosing the right software and hardware. Firms need people to make them work and to teach users how to move in the virtual world once they don their R+ headsets. Samsung Gear VR has been among the first devices to feature both AR and VR functionality. In architecture, these two versions of immaterial reality are equally precious, since they have different strengths as well as weaknesses.

Whilst VR guarantees a totally immersive experience in a wholly fabricated world, AR offers something a bit less captivating, as it is restricted to creating virtual spaces on top of real ones. The former is the obvious choice if an architect wants the client to fully comprehend what she/he has in mind. It gives an extremely realistic and engaging experience which, on the other hand, is individual and not easily sharable (right now, anyway). Furthermore, making the brain come to terms with the idea of movement without movement is easier said than done. It’s not rare for people to take off their headsets and feel queasy (but progress is being made here too). Augmented Reality, instead, doesn’t isolate the user and can be experienced by many people at the same time. It doesn’t require headsets and can be accessed through apps such as ARkiPair and Smart Reality – a simple smartphone or tablet is often all that is required.

As said, AR and VR can speed up communication and therefore boost sales: here lies part of their economic potential. Real Estate giant Sotheby’s has recently launched Curate, a visual staging AR app developed by roOomy that helps buyers figure out how their homes will look once furnished. Ikea has also launched its own AR app, Ikea Place, which allows customers to visualize how their living rooms, bedrooms and kitchens will look filled with whichever furniture they might be interested in.

Prevention is another keyword for those who are heavily investing in R+ in AEC. By allowing a team of engineers to enter a building or a facility before it is built, flaws and weaknesses become immediately evident whilst there is still time to fix them. The same technology can be used to test a structure’s resistance to gale-force winds, floods or extensive fires. Engineers and architects, working side by side, can design hospitals, health care facilities and emergency rooms specifically intended to help medical staff save time and, most importantly, lives. They can do this by simulating routine activity, emergencies and even evacuations. In such an intense environment, speed and planning are everything.

R+ can help determine whether the balcony of a theatre might obstruct the view from the last few rows by recreating the view from each seat. It can also help us see through things. DAQRI, a company which developed AR hardware that overlays virtual gauges onto physical spaces, uses sensors to gather data on the temperature and pressure within pipe systems in order to spot anomalies or malfunctioning components that could damage the plant and put workers’ lives in danger.

Another factor driving investment into R+ is that the size of the market is growing at an astonishing pace. According to the report ARCore and ARKit: The Acceleration of AR mobile, by 2020 there will be 4.2 billion AR-compatible mobiles (up from approximately 500 million in 2017). The sector is also becoming more profitable, and its value is expected to reach $560 billion by 2025 and $2.16 trillion by 2035, according to the worldwide financial institution Citi.

However impressive and groundbreaking R+ seems, these technologies are far from perfect. The constant need for programming is a hindrance, and the scope for interaction within a virtual space is limited. Visitors can look around, but they cannot touch. Everything has to be preprogrammed, which means that when a flaw is spotted it can’t be corrected immediately. The user has to take the headset off, return to the real world, ask a reality modeller to modify the project and then re-enter the virtual space at a later date to check the result.

But this is a minor problem typical of any emerging technology, and, as the technology keeps evolving and progressing, it won’t be long before this is solved. In the near future we will be able to spend much more time in virtual worlds. Worlds in which we can interact with tangible objects, move freely and enjoy physically-responsive design.

12 common mistakes startups make


Startups think too small and do not consider the implications of becoming global at their inception. They are too focused on regional or national markets and have little knowledge of how to do business internationally. Consequently, they miss out on huge global business potential.

Recommendation: Think global and enter the market where your potentially biggest customer is located. Focus on getting a global deal first from a tough early adopter.


Startups often lack confidence about their own abilities, although they have all of the ingredients for success. The founders are very well educated and have developed great technologies or products but do not fully believe in their own big success.

Recommendation: You need to want to conquer the world and be super confident, even if you fail in the details – never lose sight of the “north star”. (In Silicon Valley, however, the reverse condition is the more common mistake.)


Many European founders want to become rich and be recognized as serial entrepreneurs whilst maintaining their current pace of life, believing they can keep the same energy and lifestyle. They want to become famous and super rich, but such motivations will limit their success: others will see selfishness.

Recommendation: Forget about becoming rich and famous; the reality is that you will work like crazy and have little or no time for anything else. Prepare yourself for this ongoing battle – it will last several years. If you are not ready for this fight, do not start a company.


Many startups are created by friends who come together around a “great idea”. At the time of founding, most of these friends do not know each other’s true character or ability to execute and deliver, and often forget the “ego issue”. Many startups fail because of personal problems between founders.

Recommendation: Do not found your startup with friends or family members; choose professionals who know their field, and sign written agreements. Shaking hands is not enough. You can still have a lot of fun, and the chances of success are much, much higher.


Many startups believe their idea is so great that they need to be careful not to share too much of it. The (sad) truth is that almost every idea already exists somewhere in the world. Having an idea is not a business!

Entrepreneurs who would rather “stay in the garage” and keep developing their technology in isolation are often surprised when customers or investors do not want/need it.

Recommendation: Go to market as soon as possible, meet potential customers and experts, and get their valuable feedback. Forget about NDAs in the first phase, as they might make access to the right people difficult.


Many startups completely underestimate the competition. Even worse, they often have not done a proper competitive analysis, which limits their ability to learn from the competition’s mistakes.

Recommendation: Take a deep dive into the competition, try to meet competitors and discuss common market trends and issues, try not to replicate their mistakes and constantly watch your competition.


Many startups do not know how to make money, and they are not clear about who their customers are. They also lack a clear value proposition and risk not capturing the full value from their customers.

Recommendation: Get support from industry experts who help you to find the right business model and pricing. Do not rely on your own thinking.


Almost all startups create a hockey-stick revenue plan to attract investors or partners without thinking twice about why the third year will always be “the big year” with profitability. The Excel sheet is always right, but the thinking behind the spreadsheet often is not, and that can destroy your startup.

Recommendation: Be very realistic about any revenue projections beyond year 2; no reputable investor believes those numbers anyway and will discount them. Do not be surprised. Focus on the next 18–24 months and have a solid bottom-up revenue planning process.


As many startups are started with friends, leadership is often absent because the founders feel that they each have equal rights in leading a company. This might work at the beginning, but as soon as the company grows, this becomes a critical limiting factor. Employees need strong leadership – not a “feel good” atmosphere. Weak leadership affects all areas and can jeopardize future growth and capital funding.

Recommendation: Have a clear role and responsibility matrix amongst founders, and one leader must be the CEO with all of the power to make things happen. Customers and investors do not like two-headed startups; they want clear structures that are accountable.


Many startups fail at the first customer phase (innovators and early adopters) as they don’t have the energy to cross the chasm and reach the early majority in specific markets. The main issue is that startups tend not to focus on selected markets, or have selected the wrong market.

Recommendation: Spend enough time selecting the right target market and then really focus all of your resources on cracking it – reach a good number of customers so you can state that you have reached an early majority in this market, and don’t give up until you have achieved it!


A lot of startups find out in a later stage of their life cycle that the investor they have chosen is the wrong one, e.g. different expectations on business or finances.

Many startups are not careful enough when selecting their first or second round investors—they treat all money the same, when investment capital for startups is most influenced by the quality of the investor writing the check.

Recommendation: Be very careful when selecting investors; you can ask a lot of questions of a business angel, corporate investor or venture capitalist – don’t be shy. The key is to align expectations, because every investor has a horizon for their return on investment… and be aware that these horizons might change due to events outside of your control.


Almost all startups raise too little capital, especially at the beginning in the Seed or A phase, as they think too optimistically and cannot foresee all of the execution problems at this early stage.

Founders tend to look too much at the cap table and worry about potential dilution – they forget the benefits of having a smaller slice of a very large pie.

Recommendation: In the early stage you should raise enough money for at least 18 months without revenues, and be aware that things will happen that could force you to go back to investors too early. You want to avoid a down-round. Dilution of founders is inevitable if they need to raise more money. You should do everything possible to have control over this process and not be forced to take new money on other people’s terms.

Built on trust: Distributed Ledger Technology

In the same way that most people who use the internet seem to be more interested in what it does than how it does it, the technological genius behind Bitcoin and other cryptocurrencies often goes unexplored. While understanding the way Distributed Ledger Technology works isn’t essential to one’s ability to use applications built on it, having a basic knowledge of how it works will help you understand why it’s considered revolutionary and why the hype surrounding DLT is well-founded.

What’s a ledger and why are we distributing it?

Ledgers are the foundations of accounting – a system by which people establish who owns what, who has what, and who owes what to whom. While the concept has remained the same, the medium used to record transactions has varied over time and through technological advances. From cowry shells to papyrus, books to computers – the goal has always been to keep records as efficiently and effectively as possible.

Humans have been maintaining ledgers for thousands of years, and while the medium and methods have changed over time, one element of ledger-keeping has not. From Mesopotamia to McGladrey, a third party has always had to register and oversee transactions and maintain accounts. This makes sense as it provides a basis for validation, and allows people conducting a trade of value to trust one another. The growth of global trade and commerce has led to the creation of a vast network of ledger systems, which are vulnerable to downtime, misinterpretation and fraud, the repercussions of which can be catastrophic and far-reaching – just think back to the 2008 Financial Crisis.

Distributed Ledger Technology is the first form of ledger to eliminate the need for a third party. It makes it possible for a ledger to be distributed among all those using it, who share the responsibility of maintaining and validating it. The result is a decentralized system of data registry where transactions are transparent, reliable and incorruptible. It’s often referred to as “the trust protocol”, as DLT is the first system to bypass the need for parties to trust one another when conducting transactions of value – the implications of which will be profound and far-reaching.
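The core mechanism that makes this possible can be illustrated with a toy hash-chained ledger. The sketch below is a deliberately simplified, hypothetical illustration (real systems add consensus, digital signatures and peer-to-peer replication); it only shows why anyone holding a copy of the ledger can detect tampering for themselves, without a third party:

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def is_valid(chain):
    """Every block must reference the hash of the block before it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
assert is_valid(chain)

# Tampering with history breaks every later link in the chain.
chain[0]["transactions"][0]["amount"] = 500
assert not is_valid(chain)
```

Because each block commits to the hash of the block before it, altering any historical entry invalidates every subsequent link, which is what lets participants validate the ledger themselves rather than trusting an intermediary.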

What’s the difference between blockchain and DLT?

The rise of Bitcoin and other cryptocurrencies has brought the word ‘blockchain’ to headlines, book titles and dinner table discussions all over the world. The word has become synonymous with the idea of tokenization and cryptocurrencies, so much so that the crypto movement is often referred to as the ‘blockchain movement’. Few of us take the time to understand blockchain or the way it works, and fewer still dive deep enough to understand the distinction between blockchain and Distributed Ledger Technology.

Distributed Ledger Technology is an umbrella term used to describe technologies which distribute records or information (the kind you might find on accounting ledgers) among users, either privately or publicly. Blockchain was the first fully functional Distributed Ledger Technology and the only one people knew about for close to a decade. This likely led people to conclude that it was, and would forever be, the only form of DLT, making it acceptable to use the two terms interchangeably. Blockchain is a type of DLT, a subcategory of a broader definition, much like how the word “car” falls under the umbrella term “vehicles” and “Satoshi Nakamoto” falls under “geniuses”.

As the cryptoworld continues to grow and change, we’re seeing an abundance of interesting projects eager to test, tune and tamper with our idea of DLT. This has led to the creation of several variations of the original Bitcoin blockchain, but also DLT systems which have ditched the idea of a blockchain altogether, such as IOTA and the Tangle Network, Hashgraph, RaiBlocks (now NANO) and the peaq project. As entrepreneurs become more aware of their options, we could see less reliance on traditional blockchain systems and a shift to other forms of Distributed Ledger Technology. Knowing the difference between blockchain and DLT could prove invaluable.

What impact will DLT have?

Distributed Ledger Technology (DLT) is poised to disrupt industries that have long held sway over society and signal the beginning of new ones based on a distributed, decentralized future. While the technology behind Distributed Ledgers is complex, its benefits are very real and easy to grasp on both an industrial and societal level.

There are plenty of articles circulating about the effect DLT will have on the world of finance; it’s a widely discussed topic. So let’s consider another industry: manufacturing. The manufacturing industry is often referred to as ripe for disruption, yet in comparison to other industries it has been relatively slow to embrace change. Practical barriers, heavy regulation and the complexity of the work are some of the issues holding the industry back – but the tide is turning.

DLT’s most evident value proposition for the industry lies in how the technology can streamline processes. Businesses can achieve a simpler, less costly, and more efficient way to establish trust in manufacturing value chains. In doing so, it can slash the trust tax on manufacturers and suppliers, thanks to its ability to create, validate and audit contracts and agreements. We can expect blockchain based systems to simplify policies, monitoring capabilities, and control mechanisms, leading to more flexible control over operations. DLT could be used to automate interactions with external parties and internal processes, dramatically reducing costs and complexity. It could also be used to create new ways to track the flow of materials, contracts, and payments, as goods are transported around the world, and since it has no single point of failure, it would also boost overall security.
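To make the idea of automated, self-executing agreements concrete, here is a deliberately minimal, hypothetical sketch of the escrow-style logic a smart contract could encode. All names are illustrative, and a real smart contract would run on a DLT platform rather than as a local Python class:

```python
class SupplyContract:
    """Toy escrow: payment releases automatically once delivery is confirmed."""

    def __init__(self, buyer, supplier, price):
        self.buyer, self.supplier, self.price = buyer, supplier, price
        self.escrow = 0        # funds locked by the buyer
        self.delivered = False
        self.paid = False

    def deposit(self, amount):
        """Buyer locks funds up front, visible to both parties."""
        self.escrow += amount

    def confirm_delivery(self):
        """Delivery confirmation triggers settlement automatically."""
        self.delivered = True
        self._settle()

    def _settle(self):
        # The "self-executing" step: no third party decides when to pay.
        if self.delivered and self.escrow >= self.price and not self.paid:
            self.escrow -= self.price
            self.paid = True

contract = SupplyContract("oem", "supplier", price=100)
contract.deposit(100)
assert not contract.paid      # nothing happens until delivery is confirmed
contract.confirm_delivery()
assert contract.paid and contract.escrow == 0
```

The point of the sketch is the settlement rule: because the conditions for payment are written into the shared agreement itself, neither party needs to trust the other (or a bank) to honour it.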

This is just the tip of the iceberg for the manufacturing industry and similar benefits can be seen across several other industries, both traditional and more recent. We often seem to think that it’s only long-established industries and practices which are subject to disruption by emerging technologies, but the truth is new tech can often permeate deep into the nooks and crannies of society. In fact, one could argue that the more recent an industry is, the more likely it is to be built on digital foundations, and therefore the more likely it is to be affected by emerging technologies such as DLT.

The Internet of Things industry, or IoT, is one such example. The Internet of Things is based on the notion that in the near future just about everything will have some sort of computer chip or sensor in it, laying the foundations for a digital era in which everything communicates and transacts with everything else, either semi- or fully autonomously. One of the biggest hurdles in the way of the IoT is DLT’s main selling point – a verifiable, transparent, secure, digital trust system. Indeed, in an unlikely turn of events, security, trust and identity are the pillars upon which the Internet of Things is set to be built.

For over two decades now, the internet has lacked a system which could verifiably prove identity or origin. It has lacked an accountability feature, a transparency mechanism. DLT is that missing building block of the Internet. DLT is the trust protocol. It gives us a way to be sure of our information and data, such that we can now think in terms of adding our identities to blockchains that eliminate the possibility of all kinds of fraud, writing self-executing smart contracts in computer code, or sending value of all kinds directly to one another.
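The idea of verifiably proving origin can be sketched with a message authentication code from Python’s standard library. This is a simplification: real DLTs use public-key signatures, where anyone can verify a message but only the key holder can sign it, whereas an HMAC relies on a shared secret. The key and message below are purely illustrative:

```python
import hashlib
import hmac

def sign(secret: bytes, message: bytes) -> str:
    """Produce a tag that only the holder of the key could have computed."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(secret: bytes, message: bytes, tag: str) -> bool:
    """Check the tag in constant time; any change to the message fails."""
    return hmac.compare_digest(sign(secret, message), tag)

key = b"alice-secret-key"               # hypothetical key material
msg = b"alice sends bob 5 tokens"
tag = sign(key, msg)

assert verify(key, msg, tag)                                 # origin checks out
assert not verify(key, b"alice sends bob 500 tokens", tag)   # tampering detected
```

The ledger analogue is that every transaction carries such a proof, so participants can check for themselves who authorised what, instead of relying on an intermediary’s records.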

This is where the idea of DLT as the “New Internet” comes from. The addition of a trust layer, of a consensus feature, represents a whole new way of interacting online. Given how profoundly the Internet has changed the way we live and work, one would not be blamed for thinking that the New Internet might do the same – that the trust protocol may change the way we interact on a fundamental level.