Automation is the application of artificial intelligence to boost productivity. Its importance to human progress is such that automation is widely billed as the third great revolution, after the Industrial Revolution of the 18th and 19th centuries and the production revolution of the 1950s.
And, excitingly, we’re living through it now. AI is getting ever better at automating tasks that humans do.
Automation is already allowing us to shop more quickly with self-checkouts, work out the perfect temperature for our homes and much more.
In addition, by introducing greater automation into the workplace, AI is empowering employees to spend more time on more profitable and enjoyable activities. Such automation is driving efficiencies in day-to-day operations. Repetitive, low-skill chores such as document processing and simple aspects of customer service are often the first areas that businesses experiment with.
Early adopters are also trialling automation for more highly skilled tasks like fraud prevention and copywriting. For those companies yet to consider a degree of automation, there is a very real risk they will soon be losing out to competitors on superior customer experience and value creation.
Arguably one of the most advanced companies in the business, Boston Dynamics cause a viral sensation every time they post a new video showing off their robots’ skills in running, jumping, even doing backflips. Established in 1992, they are a robotics lab that develops smart robotic solutions for a variety of fields and verticals. They were acquired by Google in 2013, before being sold to SoftBank in 2017.
You might have seen some of their more popular robots, including ‘Atlas’, the world’s most dynamic humanoid; ‘Spot’, a robotic dog from which Black Mirror no doubt drew inspiration; and ‘BigDog’, a larger quadruped that drew the attention of the American military for its ability to carry loads over rough terrain.
Boston Dynamics are at the forefront of the robot revolution. Their use of AI to create systems capable of movement and context detection is cutting edge. Their robots exemplify Narrow AI, in that each focuses on being highly capable at one specific task. For instance, ‘Spot Mini’ is able to open a door in one video, using image recognition and a series of specifically programmed movements. However, Spot has no real concept of what a door is and is focussed only on that task.
In a more day-to-day example of robot adoption, Amazon bought Kiva Systems in 2012 and has since introduced a fleet of its robots as stock pickers throughout its warehouses. Amazon have now augmented this process with robots selecting products and humans reviewing their selections. This has subsequently been extended with pilots for Prime Air – automated drone delivery – and Scout – robot delivery.
Conversational AI refers to the use of messaging apps, speech-based assistants and, most prominently, chatbots. Chatbots are seen by some as a bit of a fad. Initially regarded as a creative outlet for marketing teams, they have had a great impact on the efficiency of business-as-usual operations for many companies. Gartner predicts that AI bots will power 85% of customer service interactions by 2020 and will drive up to $33 trillion of annual economic growth.
Conversational interfaces have existed in nascent forms for some time and are only becoming more popular. Devices such as the Amazon Echo and Google Home use technologies such as applied machine learning and natural-language processing to enable a voice interface. Sales of the Amazon Echo in its first year were comparable to those of the first iPhone, and with the advent of Sherpa (a predictive virtual assistant) we believe that we’ve only scratched the surface of how conversational AI can transform businesses and their services.
Conversational technology will be empathetic by nature and able to use real-time data analysis to create the sort of context-aware, memory-based conversations we have with each other. That means your customers can be treated on a completely individual basis by a machine that draws on reams of data to enrich the experience. From providing simple facts to answering questions and imparting knowledge, conversational AIs will be able to do everything a human customer-service agent can, but faster and with immediate access to much more information.
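To make this concrete, here is a minimal sketch of the pattern behind such context-aware conversations: intents matched by keywords, plus a per-customer memory used to personalise the reply. All names, intents and data below are invented for illustration; production chatbots use far richer language models.

```python
import re

# Invented per-customer "memory" a real system would populate from CRM data
MEMORY = {"alice": {"last_order": "running shoes"}}

# Each intent is triggered by a small set of keywords
INTENTS = {
    "order_status": {"order", "delivery", "parcel"},
    "greeting": {"hello", "hi", "hey"},
}

def reply(user, message):
    words = set(re.findall(r"[a-z]+", message.lower()))
    if words & INTENTS["order_status"]:
        item = MEMORY.get(user, {}).get("last_order", "your item")
        return f"Your {item} are on their way."        # context-aware answer
    if words & INTENTS["greeting"]:
        return f"Hi {user.title()}! How can I help?"
    return "Let me pass you to a human colleague."     # graceful fallback

print(reply("alice", "Where is my parcel?"))  # -> Your running shoes are on their way.
```

Real conversational AI swaps the keyword sets for statistical language models, but the pattern of intent plus memory remains the same.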
There is a lot of excitement about the future of AI, and deep learning is the technology that underpins much of it. Deep learning is part of the broader family of machine learning methods; however, it goes a step further than task-specific algorithms, using neural networks and deep belief networks to learn from the patterns and inputs it experiences, continuing to learn as it runs.
Though narrow in principle, deep learning is the closest we have come so far to creating machines that exhibit human behaviour. Such systems are still a long way from our own capabilities.
Deep learning solutions are already being implemented in a number of commercial ways from aiding detection of dialects in translations and image classification to creating better customer service experiences.
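As a flavour of what this learning looks like in code, here is a minimal sketch, assuming a classic toy problem: a small neural network learning the XOR function, a pattern no model without a hidden layer can capture. scikit-learn is used purely for brevity.

```python
from sklearn.neural_network import MLPClassifier

# XOR: output is 1 only when exactly one input is 1
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# a network with one hidden layer learns the pattern from the examples
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
net.fit(X, y)

print(net.predict([[0, 1], [1, 1]]))  # typically [1 0] once converged
```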
One of the most notable uses of deep learning is in autonomous vehicles, though this is still at the prototype stage. Rather than relying on a single model to get from point A to point B, these vehicles use many: some deep learning models specialise in street signs, while others are trained to recognise pedestrians. As a car navigates down the road, it can be informed by a multitude of individual AI models that together allow the car to act.
Empathy is the ability to put ourselves in someone else’s shoes. With the rise of Artificial Intelligence and smarter products that can complete more complex tasks, the ability to handle empathy will be crucial. As the complexity of a problem increases, so does the range of requirements different humans may have. Artificial Intelligence services will need to understand that humans are multidimensional, or they will not be able to transform and differentiate themselves from the products and services of today.
If you own an Alexa, I am sure at some point you have muttered something completely unrelated, only for Alexa to chime in and play Coldplay as a suggestion. We’re now surrounded by hyper-connected smart devices that are autonomous, conversational and relational, but they’re completely devoid of any ability to tell how annoyed, happy or depressed we are. The problem is such services do not understand your emotions and so cannot react to your shouting, no matter how much you hate Coldplay.
Artificial Empathy could be the answer to this. What if systems and products sensed nonverbal behaviour in real time? Your car might notice that you’re tired and take the wheel. Your fridge may work with you on a healthier diet. Your wearable fitness tracker and TV might team up to get you off the sofa, and other products would start to sense changes in your mental health.
The basis of these technologies is already here. Facial tracking can detect whether you are smiling or frowning. Image recognition can estimate your body mass index. That extra understanding of what these signals mean to people, and of what will suit each individual best, is the final piece of the puzzle.
Making recommendations based upon the analysis of data sets is the premise of Artificial Intelligence, so it’s not a surprise that it’s being utilised for the forecasting of everything from the likelihood of natural disasters to the performance and evolution of financial markets.
Some of the world’s most destructive earthquakes - China in 2008, Haiti in 2010 and Japan in 2011 among them - occurred in areas that scientists had deemed relatively safe, showing that understanding where earthquakes are likely to strike has never been a sure science. Scientists are hoping to bridge this uncertainty by using machine learning to scan ground-motion measurements, predicting more accurately both the likelihood of an earthquake and when one is most likely to occur - hopefully saving hundreds of thousands, maybe millions, of lives.
If successful, these AI tools could demonstrate a system that can perform more effectively than human experts.
Financial market movement is also on the verge of being transformed by AI. Until now, most firms have focussed on using technology to cut costs and make efficiencies, but AI is showing the potential to create value for organisations by automating market prediction and financial investment. So-called robo-advisors have been adopted by a number of leading investment companies including Merrill Lynch and Fidelity.
The World Trade Organization, however, has warned that new, deep learning-based financial systems will instead provide a back door for large institutions to influence markets in unscrupulous ways.
Artificial General Intelligence (AGI) is a level of intelligence where a machine can successfully perform any intellectual task that a human being can. It’s the objective of artificial intelligence research and a common, often chilling topic in science fiction.
Fortunately (or unfortunately, depending on how you view it), we are far from building robots with AGI, though the pace of AI progress has increased dramatically in the last few years, and so the debate is being renewed.
Will robots with the capability of human intelligence be a good thing for us? Or as Stephen Hawking and more recently Elon Musk have stated, will this level of artificial intelligence threaten our very existence?
We at Nimbletank are optimistic and feel that the creative world’s tendency to paint machine intelligence as a negative advance doesn’t help. AI entrepreneurs see reality differently and many of them are creating future-facing solutions that will benefit the lives of people around the world in great ways.
When it comes to health and matters of life and death, the possibilities that Artificial Intelligence can deliver are more than intriguing. And in a time of recent austerity, the potential for efficiency and advanced treatments has been a shining light in a somewhat gloomy decade for the NHS. The use of AI in the health industries has been more prominent and more advanced than most, in areas including detection of diseases, administration, training, diagnosis and recovery.
In detection and diagnosis, a growing suite of AI-powered applications that can spot cancers or the early signs of eye disease are being used by doctors around the world. Recently, researchers have created a system that can diagnose early-onset Alzheimer’s disease in young people.
Though often overlooked in favour of more ‘creative’ and ‘sexy’ applications of technology, the impact AI has had on medical administration has been massive. Electronic medical records, according to some studies, represent a turning point in improving quality of care while also increasing productivity. From the patient’s point of view, they mean shorter treatment times, both during appointments and over a course of treatment. They also allow doctors and nurses more face-to-face time with patients.
Some of the most amazing and impactful technologies may not be ready yet, but their implications could be breathtaking. One is precision medicine, an ambitious discipline that uses deep genomics algorithms to scan through a patient’s DNA, looking for mutations and anomalies that could be linked to diseases such as cancer. People like Craig Venter, one of the fathers of the Human Genome Project, are currently working on a new generation of computational technologies that can predict the effects of any genetic alteration, paving the road to individualised treatments and early detection of many preventable diseases.
IoT is a prime example of how far Artificial Intelligence has come in recent times. Its impact can be seen in a recent Gartner study predicting that there will be 20 billion IoT devices by 2020. The Internet of Things is the phrase coined to group smart products and services that use data and connect with each other in some way to become, or perform... smarter. IoT products have become increasingly popular over the last few years and are driving smart home and indeed smart city possibilities - a series of connected devices reacting to each other and the environment around them.
There is a clear intersection between IoT and AI. IoT is about connecting machines and making use of the data generated from those machines. AI is about simulating intelligent behaviour in machines of all kinds.
AI is most commonly implemented in IoT in the following three ways (a minimal code sketch of the adaptive case follows the list):
Prescriptive analytics - ‘What should we do?’ - Think of your morning brief from Alexa. A connected weather app tells your Alexa that it’s going to rain today, and so in turn, Alexa recommends you take an umbrella.
Predictive analytics - ‘What will happen?’ - Think of an autonomous vehicle. If an incident happens in front of you, your vehicle system can analyse what will happen if it continues on the same path, and adjust.
‘Adaptive/continuous’ analytics - ‘How should the system adapt to the latest changes?’ - Think of Nest, the smart thermostat. It learns about your home - for instance how long it takes to warm up and how draughty it is. It considers the weather and adjusts accordingly and can even sense when you’re on holiday and reduce wastage.
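As a simple illustration of the adaptive case, here is a minimal sketch of a thermostat that learns how quickly a home warms up. The class and its learning rule are hypothetical, and far simpler than what Nest actually runs.

```python
class AdaptiveThermostat:
    """Hypothetical thermostat that learns a home's warm-up rate."""

    def __init__(self):
        self.warmup_rate = 0.5  # initial guess: degrees C gained per minute

    def observe(self, degrees_gained, minutes):
        # blend each new observation into the learned rate
        # (a simple exponential moving average - 'continuous' analytics)
        observed = degrees_gained / minutes
        self.warmup_rate = 0.8 * self.warmup_rate + 0.2 * observed

    def minutes_to_reach(self, current, target):
        return (target - current) / self.warmup_rate

stat = AdaptiveThermostat()
stat.observe(degrees_gained=3.0, minutes=10)      # the home warmed 3C in 10 min
print(round(stat.minutes_to_reach(17.0, 21.0)))   # when to start heating: ~9 min
```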
Driving in and around cities has become one of the largest problems of our time. The varying nature of busy roads and cities is difficult to predict: it fluctuates with changes in human behaviour, road conditions, time of day, weather and traffic accidents, producing myriad conditions and many possible outcomes.
But because of its ability to analyse vast amounts of information quickly, AI is being tested to keep traffic flowing more smoothly by taking control of traffic signals, predicting accidents and forecasting potential snarl-ups. New ways of responding to crashes, controlling traffic lights and creating diversions are being researched to keep traffic moving.
With recent work on Smart motorways taking place throughout the UK, we have seen highway agencies using drones, sensors and predictive AI technology in a bid to avoid bottlenecks and keep traffic flowing nationwide.
As this technology and that of autonomous vehicles develops and becomes more integrated, traffic jams could genuinely become a distant memory.
Knowledge engineering is a field of AI that tries to emulate the judgment and behaviour of a human expert in a given field.
Expert systems involve a large and expandable knowledge base, integrated with a rules engine that specifies how to apply information to each particular situation. The systems may also incorporate machine learning so that they can learn from experience in the same way that humans do. Expert systems are used in various fields including healthcare, customer service, financial services, manufacturing and law.
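A minimal sketch of that knowledge base plus rules engine idea, assuming an invented toy triage domain; real expert systems hold thousands of rules and far richer inference machinery.

```python
# Each rule pairs a set of required facts with a conclusion
RULES = [
    ({"fever", "cough"}, "suspect flu"),
    ({"fever", "rash"}, "suspect measles"),
    ({"chest_pain"}, "urgent referral"),
]

def infer(facts):
    """Return every conclusion whose conditions all appear in the facts."""
    return [conclusion for conditions, conclusion in RULES if conditions <= facts]

print(infer({"fever", "cough", "headache"}))  # ['suspect flu']
```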
One famous example of successful knowledge engineering was built by DeepMind, Google’s AI lab, and defeated the world’s best ‘Go’ player in 2017. ‘Go’ is an abstract strategy game, invented in China, and is arguably humankind’s most complicated board game.
AlphaGo, DeepMind’s AI, was designed to study vast libraries of past games, adjusting its strategy after every move. It won one game by a narrow margin of 0.5 points. DeepMind say AlphaGo was designed to maximise its chance of winning rather than its margin of victory, so half a point was all it needed. The project was an example of how Artificial Intelligence can be programmed to think like humans.
Using algorithms to emulate the thought patterns of a subject matter expert, knowledge engineering tries to take on questions and issues as a human expert would. Looking at the structure of a task or decision, knowledge engineering studies how the conclusion is reached. A library of problem-solving methods and a body of collateral knowledge are used to approach the issue or question. Depending on the task and the knowledge that is drawn on, the virtual expert may assist with troubleshooting, solving issues, assisting a human or acting as a virtual agent.
Spoken language, or voice, is fast becoming the main focus for research into how we interact with our tech. Natural Language Processing and Natural Language Understanding, usually shortened to NLP and NLU, are branches of artificial intelligence that deal with the interaction between computers and humans using natural language.
A typical human-computer interaction based on NLP might go as follows (a minimal code sketch follows the list):
1. The human says something to the computer
2. The computer captures the audio
3. The captured audio is converted to text
4. The text’s data is processed
5. The processed data is converted to audio
6. The computer plays an audio file in response to the human
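Here is a minimal, runnable sketch of those six steps. Every stage is a stub standing in for real speech-recognition and text-to-speech services, so only the control flow is meaningful.

```python
def capture_audio():                        # steps 1-2: human speaks, audio captured
    return b"raw microphone bytes"

def speech_to_text(audio):                  # step 3: audio converted to text
    return "what is the weather today"      # (stubbed transcription)

def process(text):                          # step 4: the text's data is processed
    if "weather" in text:
        return "It looks like rain today - take an umbrella."
    return "Sorry, I didn't catch that."

def text_to_speech(reply):                  # step 5: reply converted to audio
    return reply.encode()

def play(audio):                            # step 6: computer responds to the human
    print(audio.decode())

play(text_to_speech(process(speech_to_text(capture_audio()))))
```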
Virtual assistants like Amazon’s Alexa and Apple’s Siri use NLP to understand our voice requests. The ultimate aim here is for AI to understand language as successfully as humans do – not just the words but also the context-based meaning of them. Most NLP techniques also rely on machine learning to derive meaning from human languages, even adopting, and mirroring, users’ speech patterns, use of slang, and accents.
With these advances, it’s no surprise that eConsultancy is predicting that 50% of all search traffic will be driven by voice in 2020.
In short, machine learning (ML) is based on algorithms that learn from and make predictions on data without the need for continued programmer input. These systems have an objective (a pre-programmed goal or function) that defines the desired outcomes within certain parameters, and, just like humans, machine learning systems will perform a task, assess the outcome, and endeavour to find the most optimised route to achieving it next time.
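In code, that loop of ‘learn from examples, then predict’ can be remarkably small. A minimal sketch using scikit-learn, with invented numbers relating hours of study to exam scores:

```python
from sklearn.linear_model import LinearRegression

# toy examples: hours of study (input) vs exam score (output)
X = [[1], [2], [3], [4], [5]]
y = [52, 57, 61, 68, 71]

model = LinearRegression()
model.fit(X, y)               # learn the pattern from the data

print(model.predict([[6]]))   # predict an unseen case - roughly 76
```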
When machine learning has been applied to health, science and manufacturing, the results have been outstanding. ML has helped find treatments for illnesses in a fraction of the time and budget of previous human-led research. The automotive industry has seen its production lines augmented with robots that learn from their experiences and share them with their counterparts throughout the world to optimise and develop manufacturing. Scientists have created digital environments based on real data that allow ML to explore likely outcomes before humans have even encountered the ‘physical’ environment in the real world.
The ability to learn from real-time data, be present in live environments and drive manufacturing optimisation is proving instrumental in the reimagining of the workplace and our roles within it.
Narrow AI is artificial intelligence that operates on a single, narrow task set and has a limited, pre-defined range of actions. Examples would be Siri, Alexa, heating systems and website recommendations based on previous purchases. They all perform their main purposes well but would fail at even the simplest of tasks outside their immediate specialism. Sorry, the computer says ‘no’.
They have been developed to augment our lives: they ‘supercharge’ the human and empower us with the right data at the right time, working constantly on our behalf, 24/7, 365 days a year. Narrow AI removes tedious, repetitive tasks and generally performs them with a better level of consistency and efficiency.
What groups all these Narrow AIs together is their reliance on pre-programmed responses and actions: a prescribed interpretation of data output. Narrow AI is generally regarded as the first of three levels of AI. The second level is General AI, the level achieved when machines can perform any task just as well as a human. The third level is Super AI, which goes well beyond our own limitations and surpasses us on every level. But we are nowhere near that… just yet.
Artificial Intelligence has played a large role in recent years in improving the efficiency and capability of the police service. Considering that the number of active police officers fell by approximately 20% between 2010 and 2018, advances in AI have been timely for the UK.
It’s being used in a number of different ways. Most notably, advanced facial recognition and video technology is helping police forces with crowd control and surveillance of heavily populated areas such as sports venues, train stations and protest marches. In the US, facial recognition is being put to more positive use, helping to find missing persons and to process suspected sightings from members of the public, creating efficiencies in active searches.
The use of AI and machine learning is slowly spreading into police work, though it remains controversial in areas such as predictive policing. Durham Police have been experimenting with AI to assess the suitability of suspects for release on bail. Elsewhere, a system called the National Data Analytics Solution (NDAS), a prototype program intended to reduce serious crime, has been met with concern. The system uses a combination of AI and statistics to try to assess the risk of someone committing, or becoming a victim of, gun or knife crime, as well as the likelihood of someone falling victim to modern slavery.
Experiences with Facebook, Google, Netflix and Amazon have led to a big shift in user expectations when it comes to personalisation. A recent study found that more than 83% of customers expect brands to personalise experiences for them.
Personalisation allows companies to deliver individualised content through data collection, behavioural and predictive analysis, and the use of automation technology. This is no longer just about right message, right time, it’s about brands being able to offer more curated and contextual experiences, recommendations, propositions and indeed products, in real-time on a genuinely one-to-one basis.
Personalisation and improved customer experience have also been shown to increase customer loyalty, advocacy and value, purely through the feeling of ‘Hey, this brand gets me!’ The continued quest for creating a single customer view through omni-channel opportunities is fast becoming a reality and, for some, an expected outcome.
Netflix have claimed that their investment in AI is saving them $1b a year, and personalised recommendations are sending engagement and box-set binge-watching through the roof. Our client Lumesse were able to increase engagement with their elearning product by creating a simple but robust recommendations engine that learns from the interaction preferences of users.
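To give a feel for how a simple recommendations engine can learn from interaction preferences, here is a minimal item co-occurrence sketch (‘users who viewed X also viewed Y’). The data is invented and this is a generic illustration, not the engine described above.

```python
from collections import Counter
from itertools import permutations

# each inner list is one (invented) user's viewing history
histories = [
    ["intro_python", "data_basics", "ml_101"],
    ["intro_python", "ml_101"],
    ["data_basics", "sql_basics"],
]

# count how often each pair of items appears in the same history
co_views = Counter()
for history in histories:
    for a, b in permutations(history, 2):
        co_views[(a, b)] += 1

def recommend(item, k=2):
    """Items most often viewed alongside `item`."""
    scores = {b: n for (a, b), n in co_views.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("intro_python"))  # ['ml_101', 'data_basics']
```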
To better understand the difference between a regular computer and a quantum computer, Eric Ladizinsky, co-founder of quantum computing company D-Wave, offers a vivid real-world analogy. Imagine you have only five minutes to find an ‘X’ written on a page of a book among 50 million books. You’d never find it. But if you had 50 million parallel realities (the quantum computer) and could look at a different book in each of those realities, you would.
Common digital computing requires that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). Quantum computation uses quantum bits, or qubits, which can exist in superpositions of 0 and 1 - so a register of n qubits can represent 2^n states at once.
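The scale of that difference is easy to show: describing the state of n qubits takes 2^n amplitudes, which is the exponential blow-up behind Ladizinsky’s ‘50 million parallel realities’ analogy. A minimal sketch:

```python
# A classical n-bit register holds one of 2**n values at a time;
# a quantum state over n qubits is described by 2**n amplitudes at once.
for n in (1, 2, 10, 30):
    print(f"{n:>2} qubits -> {2**n:,} amplitudes")

# e.g. 30 qubits -> 1,073,741,824 amplitudes (over a billion numbers)
```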
But what does that really mean? By entering into this quantum area of computing we will be able to create processors that are significantly faster (a million or more times) than the ones we use today, using less power, and working on many levels and tasks simultaneously.
Possible applications include quantum encryption methods for increased security and data protection. In medicine, quantum computing would allow for a person’s genes to be sequenced and analysed much more rapidly than the methods we use today and would allow for personalised drug development. Meteorologists, through massive real-time data analysis will have a much better idea of when bad weather will strike, enabling them to alert people and ultimately save lives, anguish and money.
Which brings us back to AI. Information processing is critical to improving machine learning. Quantum computers can analyse large quantities of data to provide artificial intelligence machines with the feedback required to improve performance, shortening the learning curve.
The word ‘robot’ first appeared in 1921, in Karel Čapek’s play ‘Rossum's Universal Robots’. It comes from the Czech for “forced labour”. In the play the robots looked like humans, were far more efficient and eventually caused the extinction of the whole human race. Inspiring stuff!
Cut to 2019, and our main fear about robots is that they are going to steal our jobs. Take San Francisco, for instance, which is exploring the idea of a robot tax, forcing companies to pay up when they displace human workers. In reality, you may be more likely to work alongside a robot in the near future than have one replace you. This idea of multiplicity sees robots working in tandem with us, allowing people to focus on the more rewarding human elements of their jobs.
For decades, robots remained largely confined to factories and labs, where they either rolled about or were stuck in place lifting objects. It wasn’t until the 1980s that Honda started its humanoid robotics programme, which went on to develop P3, a robot that could shake hands, wave, bow and walk pretty well. The world was captivated by the possibilities.
What humanity has done, essentially, is invent a new species. Increasingly sophisticated machines may populate our future world, but for robots to be really useful, they’ll have to become more self-sufficient. Crucially, they will need to learn on their own, and advances in artificial intelligence are bridging this gap. Until then, we will have to continue doing the less rewarding elements of our jobs ourselves.
The singularity is the hypothesis that the invention of artificial superintelligence will trigger runaway technological growth, possibly resulting in humans being superseded on the evolutionary food chain by their own invention.
Futurists like the author Vernor Vinge and the inventor Ray Kurzweil have argued that the world is rapidly approaching this tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities.
They believe that once these machines exist, they will possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are much smarter than us, in the same way that we are smarter than our pet dogs and cats. How will our sentient masters treat us? What will we have taught them about our relationship with living creatures of lesser intelligence?
Kurzweil, a bit more optimistically, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever.
The convergence of human and tech is fast becoming a reality, but what that reality looks like is still unclear.
The Turing test was designed as a way of determining whether or not a computer counts as "intelligent". It was created by the computing pioneer, and arguably the godfather of AI, Alan Turing, in 1950.
The test is simple. On one side of a computer screen sits a human judge, whose job it is to chat to a number of people via the terminal. One of the ‘people’ in the chat sessions will be a computer program (essentially a chatbot) created for the sole purpose of tricking the judge into thinking that it is the real human.
Each of the judges has five minutes to talk to each ‘person’ communicating through a machine (terminal), and the computer program passes if more than 30% of the judges think that it was a human.
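The pass rule itself is simple arithmetic; a minimal sketch:

```python
def passes_turing_test(judge_verdicts, threshold=0.30):
    """judge_verdicts: one boolean per judge, True = 'I spoke to a human'."""
    fooled = sum(judge_verdicts) / len(judge_verdicts)
    return fooled > threshold

# 2 of 6 judges fooled = 33%, just over the 30% bar
print(passes_turing_test([True, True, False, False, False, False]))  # True
```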
In 2014, at a test set up by the University of Reading, a program passed the test for the first time.
A Russian-designed program called Eugene convinced 33% of the judges "he" spoke to of his humanity. Though obviously impressive, this is still a long way off the gold standard of modern Turing tests, which follows rules laid out in 1990 by the inventor Hugh Loebner and requires a much longer test of 25 minutes.
“The rise of the useless class”, as historian and author Yuval Noah Harari calls it, could be a possible outcome of our eternal quest for ever-improving technology. He believes that as AI gets increasingly smarter, more humans will not only be pushed out of employment but will also lose their place in society.
“I choose this very upsetting term, ‘useless’, to highlight the fact that we are talking about ‘useless’ from the viewpoint of the economic and political system, not from a moral viewpoint,” says Harari. For centuries, political and economic structures were built on humans being useful as workers, scholars and soldiers for instance. But with those roles taken on by machines, will we simply stop attaching so much value to humans as their potential diminishes?
So how do we prepare for this world if most of what people learn in school or college will probably be irrelevant by the time they are 40? And what is the point of attending college or university to learn skills, knowing they may be obsolete before the final grade is even achieved? In this post-work world, what then is our purpose? What gets us up in the morning? Does this actually render us ‘useless’?
With visions of happiness and purpose controlled by leisure and virtual reality, are we walking into a world of anxiety and a need for universal basic income? Does the future of the useless class spell an end to humanity as we know it, or the birth of Human 2.0?
A virtual assistant is essentially a software agent that can perform tasks or services for an individual. The first such assistant was the IBM Shoebox, a digital speech recognition tool able to recognise 16 spoken words and the digits 0-9. Assistants have come a long way since then, with the most prominent releases of recent years being Apple’s Siri in 2011 and Amazon’s Alexa in 2014. As of early 2019, 18% of UK households have a smart speaker (with Alexa powering most of them).
Virtual assistants have got to where they are through advances in AI and speech technology, and their future is bright. As the world becomes more connected, enterprise and consumer solutions will begin to merge, as users gain the option to manage the events of their entire day via an AI assistant.
As the number of devices increases, we’ll find different contexts for their use. A user could start their morning at home and ask their voice assistant what meetings they have that day. That assistant may then be picked up via a different device to automatically transcribe a meeting, before being utilised to automatically turn on the heating as the children of the user begin to walk home from school.
Although voice is the most well-known application of virtual assistance, the real beauty of their evolution will be found in the devices, services and connections built around them. Sherpa.ai, a conversational interface, is bringing that connection to life and really pushing what assistance can do. Sherpa will integrate your interests, news, places and restaurants to give you informed recommendations and predictions of what you need, and can be applied in many different contexts, having recently been integrated into Porsche vehicles.
Developed by IBM, Watson is an AI question-answering computer system capable of answering questions posed in natural language. It was named after IBM's first CEO, Thomas J. Watson.
The system was initially developed to answer questions on the quiz show Jeopardy!, eventually winning first place against previous champions of the show.
In recent years, Watson’s capabilities have been extended and the way in which it works has changed. It has gained machine learning capabilities and optimised hardware, made available to developers and researchers. It’s no longer purely a question-answering (QA) computing system designed around Q&A pairs but can now 'see', 'hear', 'read', 'talk', 'taste', 'interpret', 'learn' and 'recommend'.
The Royal Bank of Scotland has used Watson to create its own digital assistant, able to answer over 5,000 customer queries a day (completing over 70% of these interactions without human intervention), whilst an online therapy platform uses Watson’s cognitive computing and self-learning capabilities to support the decision-making capacity of its therapists.
In 2017, an anonymous Reddit user posting under the pseudonym "Deepfakes" uploaded several pornographic videos that appeared to feature celebrities including Emma Watson, Katy Perry, Taylor Swift and Scarlett Johansson. All were eventually debunked as fake.
The origins of this phenomenon can be traced to 2014 and a graduate student, Ian Goodfellow, who invented a way to algorithmically generate new types of data out of existing data sets. Three years later, software built on his approach could source video content on the web and match faces onto pretty much any pre-shot footage. The combination of the existing and source videos results in a fake video that shows a person or persons performing an action at an event that never occurred in reality.
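Goodfellow’s invention was the generative adversarial network (GAN): two models trained against each other, one generating candidates and one judging them. Below is a minimal PyTorch sketch on toy one-dimensional data; deepfake tools apply the same adversarial idea to faces at vastly greater scale.

```python
import torch
import torch.nn as nn

def real_batch(n=64):
    # toy "existing data set": samples clustered around 4.0
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # train the discriminator to tell real samples from generated ones
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # train the generator to fool the discriminator
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should drift towards ~4.0
```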
The creator of FakeApp, a face-swapping video app, says he wants to improve the app to the point where users can simply select a video on their computer, download a neural network correlated to a certain face, and swap that face into the video at the press of a button. Scary stuff!
This technology has now spilled over into the even more worrying territory of ‘fake news’. With the power to skew information, manipulate beliefs and push extreme ideologies using a whole library of politicians and celebrities, both alive and dead, now a reality, how can we protect ourselves?
The next generation are coming of age just in time for the AI revolution. This is especially true of millennials and now Gen Z, the true digital natives, who have grown up in an entirely digital world from birth. There is a huge opportunity for organisations to harness the power of the younger generation to play a guiding role in how AI and tech are used and developed.
Research suggests that the biggest perceived driver of change to future work is technological innovation. There’s tremendous anxiety about the future workforce, and our education systems are struggling to cope with rapid changes in what is required of them. Artificial intelligence is set to change not only what teachers teach, but also how they teach it and, possibly in the future, whether they teach it at all.
Firms that provide mentorships, apprenticeships and entrepreneurship schemes for young people often find that the learning happens both ways, with managers and executives gaining greater exposure to emerging digital behaviours and AI-powered solutions.
Zero UI is a term coined by the designer Andy Goodman. By his definition, “Zero UI refers to a paradigm where our movements, voice, glances, and even thoughts can all cause systems to respond to us through our environment.” This concept changes how we communicate and behave with our tech, and also how our tech augments us.
The main goal of Zero UI is to eliminate, as far as possible, the need for a user to focus on a screen to complete tasks such as setting up a direct debit, arranging a calendar invite or ordering food. This is achieved by having machines understand users in their own natural words, behaviours, gestures and even emotions.
User interfaces of the future are going to be integrated with the physical world. By pulling the experience away from screens, design is providing users with a more natural and human way of communicating with devices. Soon, we’ll all find it common to talk to our devices as if they are our own personal assistants.
While this guide is by no means an exhaustive survey of AI, its applications and its advances, it does start to paint a picture of what’s possible, both now and in the future - and of how you might start to integrate this sort of intelligence into aspects of your business, whether by driving automation or putting your customer experience on steroids.
The importance of adoption is echoed by a recent Deloitte report, ‘The State of AI in the Enterprise’, whose findings include that 42% of executives believe AI will be critical within three years, and that 88% plan to increase investment in cognitive technologies in the next 12 months.
Netflix are just one of a growing number of businesses who invested early and are now reaping the benefits, claiming to save $1b a year through AI technologies.
Adoption also raises questions around ethics, and we believe that these, along with rules governing bias, are important considerations when plotting your AI strategy and framework. Just because you can, doesn’t mean that you should. As such, certain ‘red lines’ should be defined early on.
These considerations are certainly front and centre for Nimbletank when working with clients to conduct Intelligence Audits, to map where AI could enhance business and customer experience.
Nimbletank is an award-winning AI and service design consultancy. We create exceptional customer experiences to help transform businesses and set them up for future success.
We drive efficiencies
Defining new systems to increase augmentation, automation and self-serve, freeing up our clients’ resources for more revenue-driving activity.
We increase revenues
Achieving customer experience excellence by offering genuine utility and personalisation, increasing value and advocacy.
Our Fusion process, which utilises our proprietary Ask, Learn, Try methodology, allows us to work collaboratively to rapidly gain alignment on ambition, possibilities, requirements, solutions and measures of success.
We map the end-to-end customer journey. Next we identify where AI automation and machine learning can be deployed/integrated to enhance the customer experience.
Business case development
We action this in conjunction with an AI audit, to help define the investment and payback scenarios for AI implementation, efficiencies and increased revenue per customer.
We carry out full service design delivery, from research and requirements gathering through rapid prototyping, brand development, UX, UI and tech development.
We choose and integrate the right AI partner ecosystems, products and solutions to drive business success.
We track and assess the impact of our interventions, delivering ROI reports and optimisation recommendations.
For more information about this report or Nimbletank, or to discuss your own Intelligence Audit please contact Paul Vallois, Managing Director.
020 3828 6440