
Everything you need to know about Artificial Intelligence and Machine Learning in 2022


Over the past 70 years artificial intelligence (AI) has gone from being the muse of science fiction to a must-have for any ambitious business. And given that we track all of the UK’s ambitious businesses, it’s no surprise that AI is a sector of increasing interest to us here at Beauhurst. In this article, we’ve compiled an introduction to AI research and how it feeds into today’s high-growth landscape. 

From the Turing Test in the 1950s to the companies pioneering a new age of machine learning and innovative algorithms for AI systems, we’ve got it all. Don’t worry if you didn’t get an A* in science class: this guide is for people with an interest in the field rather than oodles of knowledge (although we can’t promise we won’t nerd out at points).

What is artificial intelligence (AI)?

A useful definition of artificial intelligence (AI) comes from computer scientist John McCarthy, who coined the term in his seminal 1955 proposal for what became the Dartmouth workshop:

‘the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.’

Today, artificial intelligence systems rely heavily on the use of machine learning, which refers to the automated detection of meaningful patterns in data through algorithms set up by humans. Machine learning is inspired by the way in which human intelligence is built on ‘learning’ from past experience.

How we define an artificial intelligence business at Beauhurst

To us at Beauhurst, an artificial intelligence business is one that openly and widely makes use of AI technology in its operations. For instance, an academic spinout that carries out a lot of research projects in the field of deep learning or natural language processing (NLP) would be tagged as an AI company, whereas one that uses some AI in its internal operations would not be tagged as such. Each of our classifications is done by our Data Research team, meaning that human discretion is used.

How different types of artificial intelligence work

Narrow AI (or weak AI) vs artificial general intelligence (strong AI)

It might be a good time to separate AI into two broad types: narrow AI (or weak AI) and artificial general intelligence (strong AI or AGI). Narrow AI is concerned with building AI algorithms that perform specific tasks (e.g. speech recognition) extremely well. They may outperform a human being at this very specific task, but they are operating under many constraints—it is impossible for them to take a step back and see the ramifications of their actions as a human would.

On the other hand, strong AI is achieved when machine intelligence can be applied to any problem, not just the narrowly defined ones. This type of AI remains contained within science fiction novels for the time being. 

It’s also worth noting that most AI research is aimed at developing narrow AI, concerned with automation or problem solving rather than building all-round machine intelligence. However, some AI researchers at the forefront of the industry have made it their mission to build artificial general intelligence—most notably Demis Hassabis, co-founder of DeepMind, whose company’s mission is to “solve intelligence” and then use intelligence “to solve everything else”.

Machine Learning and Deep Learning

The dominant technology in artificial intelligence today is machine learning, and within this discipline, deep learning in particular. 

Traditionally, programmers write code that gives a computer an exact set of instructions that, when followed, will solve a task. However, it is not always straightforward to put together this set of instructions. For instance, most of us can draw an elephant, but it’s quite another thing to write code that tells a machine how to do it. To bypass this issue, machine learning uses an additional program, a machine learning algorithm, that works out the instructions for us by identifying statistical patterns in the data and making inferences from them.
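To make that concrete, here’s a minimal sketch of the idea in Python with scikit-learn (the fruit data is invented purely for illustration): rather than hand-coding a rule to tell apples from oranges, we give a learning algorithm labelled examples and let it infer the rule itself.

```python
# A minimal sketch of "letting the machine write the rules": instead of
# hand-coding an if/else to tell apples from oranges, we hand a learning
# algorithm labelled examples and let it infer the boundary itself.
# (Illustrative only -- the data below is invented.)
from sklearn.tree import DecisionTreeClassifier

# Each example is [weight in grams, surface bumpiness on a 0-10 scale]
features = [[150, 1], [170, 2], [140, 1],   # apples: lighter, smoother
            [180, 8], [200, 9], [190, 7]]   # oranges: heavier, bumpier
labels = ["apple", "apple", "apple", "orange", "orange", "orange"]

model = DecisionTreeClassifier()
model.fit(features, labels)                 # the "instructions" are learnt here

print(model.predict([[160, 2], [195, 8]]))  # -> ['apple' 'orange']
```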

Deep learning is the most common type of machine learning, inspired by neurological observations taken from the human brain—although in practice, AI algorithms work very differently from human synapses. Applications of deep learning include speech recognition, health diagnostics, and facial recognition. 
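To give a rough flavour of the ‘layers of neurons’ idea, the toy sketch below (scikit-learn again, rather than a production deep learning framework) stacks two small hidden layers and learns the XOR function, something no single linear rule can capture.

```python
# A toy "deep" network: two hidden layers of artificial neurons learning
# the XOR function. Illustrative only -- real deep learning systems use
# far larger networks and specialised frameworks.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                              # XOR of the two inputs

net = MLPClassifier(hidden_layer_sizes=(8, 8),  # two hidden layers
                    activation="tanh",
                    solver="lbfgs",             # works well on tiny datasets
                    max_iter=2000,
                    random_state=1)
net.fit(X, y)
print(net.predict(X))                          # ideally [0 1 1 0]; another seed may be needed
```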

Natural language processing (NLP)

A good example of an adaptive AI application is natural language processing (NLP). Most often, NLP uses machine learning to develop ways of understanding human language. For instance, an AI system could use NLP to scan documents and understand their content.
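A very simplified sketch of that document-understanding step might look like the following (Python with scikit-learn; the sentences and topic labels are invented): the text is converted into word-frequency features, and a classifier learns to route documents by topic.

```python
# Minimal sketch of NLP-style document understanding: turn text into
# word-frequency features and learn to label documents by topic.
# (The sentences and labels below are invented for illustration.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["invoice attached please arrange payment",
        "payment overdue on your account",
        "team meeting moved to friday afternoon",
        "agenda for friday's project meeting"]
topics = ["finance", "finance", "scheduling", "scheduling"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(docs, topics)

print(classifier.predict(["please settle the outstanding invoice"]))
# -> ['finance']
```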

NLP technology was first used to translate Russian sentences into English in the 1950s. Machine translation remains a hard problem that is heavily researched to this day, which makes it all the more remarkable as an advancement of that era.

Other NLP applications today are in speech recognition and other text-to-speech functions. For instance, electronic personal assistants such as Alexa make extensive use of NLP in their daily functioning.

The history of artificial intelligence

The Turing Test 1950

As with most disciplines, it’s hard to pinpoint an exact start to AI research. But for many people, AI began in 1950 with the Turing Test, originally called the “Imitation Game”, in which Alan Turing asked the important question “Can machines think?”. His investigation led to the formalisation of the Turing Test, which still stands as a method of judging whether a machine is intelligent. The test involves having a machine answer questions, observed by an interrogator. If the interrogator can be fooled into believing that the answers came from a human being, then that is considered enough to call the computer program “intelligent”.

The Golden Years 1956–1974

Turing’s question sparked a wave of scientific interest in whether computers could imitate the human brain. The next pivotal point in the history of AI is the 1956 Dartmouth workshop. In his 1955 proposal for the event, John McCarthy suggested a two-month, ten-man study of what he described as “artificial intelligence”, thus coining the term.

John McCarthy went on to co-found the MIT Artificial Intelligence Project with computer scientist Marvin Minsky in 1959, which became the focal point for AI researchers at the time. Minsky was one of the “founding fathers” of artificial intelligence, developing early advancements in robotics and artificial neural networks. He was even an advisor to the 1968 Stanley Kubrick film “2001: A Space Odyssey”, brought on to predict the state of artificial intelligence in 2001. 

The First AI Winter 1974–1980

The mid-to-late 70s saw a pessimistic outlook from AI researchers, followed by reduced funding and, in turn, reduced academic interest. DARPA had stopped its $3m-a-year funding in 1970, perhaps fuelled by public concern that not enough practical AI developments were happening. In the UK, government funding plummeted following the 1973 Lighthill report, which similarly outlined a lack of progress.

There was also a growing perception that AI algorithms weren’t tractable and could only be applied to “toy” problems rather than real-life ones. Nevertheless, some researchers, such as Marvin Minsky and Roger Schank, continued their work and survived the winter, warning that, as with most multi-billion-pound industries, such boom and bust cycles would be inevitable in the future of AI.

AI Resurgence 1980–1987

Interest in the field of AI spiked again with the proliferation of expert systems (computer programs executing specific tasks) across universities and in business. In 1981, the first IBM PC was launched and by the early 80s, two thirds of Fortune 500 companies were implementing this technology in their daily business activities. As a result, funding for AI picked up in Japan and then spread to Europe. 

Additionally, insights from other sciences, such as physics and biology, made advancements possible in connectionism, which aims to explain cognitive patterns using artificial neural networks (ANNs). Geoffrey Hinton and David Rumelhart were early developers of deep learning through a method called “backpropagation”. This technique was later used in speech recognition, medical diagnosis and data mining, among other applications.
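For readers who want a feel for the mechanics, the sketch below strips the idea down to a single artificial neuron trained by gradient descent on toy data (plain NumPy, illustrative only, and not faithful to Rumelhart and Hinton’s original formulation); full backpropagation applies the same chain-rule error correction through many layers of neurons.

```python
# Bare-bones flavour of learning by error correction: a single sigmoid
# neuron trained with gradient descent on a toy OR-like task. Full
# backpropagation pushes the same chain-rule logic through many layers.
import numpy as np

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])            # target: logical OR

rng = np.random.default_rng(0)
weights = rng.normal(size=2)
bias = 0.0
learning_rate = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    prediction = sigmoid(X @ weights + bias)
    error = prediction - y                    # how wrong are we?
    # Gradient of the squared error w.r.t. weights and bias (chain rule)
    grad = error * prediction * (1.0 - prediction)
    weights -= learning_rate * (X.T @ grad) / len(X)
    bias -= learning_rate * grad.mean()

print(np.round(sigmoid(X @ weights + bias), 2))   # approaches [0 1 1 1]
```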

Second AI Winter 1987–1993

The first sign of a new AI winter was a collapse in the market for specialised LISP computer systems, accompanied by growing enthusiasm for simpler and cheaper alternatives like Apple and IBM systems. 

There was also an increasing belief that robotics was the ‘next best thing’, based on the idea that intelligence can only be achieved in a human-like form. In his seminal paper, ‘Elephants Don’t Play Chess’, Rodney Brooks outlined an approach rooted more in physical reality than traditional AI tools. This thinking also reinforced a tradition of portraying artificial intelligence in science fiction as machines in human bodies.

Consistent progress 1993–2011

The slow reintroduction of artificial intelligence into mainstream academic discourse post-1990 was built on a series of practical applications of AI developments. In 1997, IBM’s Deep Blue computer beat world chess champion Garry Kasparov, and consistent progress was made in the field of self-driving cars throughout the 90s and 00s.

Geoffrey Hinton’s groundbreaking research into deep learning was another turning point in the history of AI. In 2012, in a then relatively obscure research contest called the ImageNet challenge, teams were asked to design computer programs that could recognise 1,000 categories of object. Hinton’s team won by a landslide, using breakthrough deep learning techniques for computer vision. Many consider this the start of the modern AI revolution.

In the 2010s, there was also increased interest in chatbots and virtual assistants such as Siri (2011) and later Alexa (2014). They inspired a growing artistic interest in the emotional impact of AI systems on our daily lives, and were key themes in Hollywood films such as “Her” and “Transcendence”.

Increased funding post-pandemic 

The pandemic unsurprisingly accelerated the role of technology in our day to day lives. Since March 2020, we’ve tracked £17.9b worth of funding into the AI space, more than double the figure of £8.56b for the corresponding period before the pandemic.

Not only was AI more widely adopted across companies, AI research was also instrumental in the development of a COVID-19 vaccine, using machine learning to detect potentially effective molecule combinations. And AI interest is still on the rise, with numerous applications in a post-pandemic world, ranging from solutions for remote working to eHealth and cybersecurity.

Key benefits of artificial intelligence

Many consider the rise of artificial intelligence to be the latest technological revolution. In certain cases, the processing capabilities of a computer program are so much greater than a human’s that it’s not hard to see the massive potential improvements AI could bring to society.

Speed & Scale

A key benefit of artificial intelligence compared to human intelligence is simply the speed at which AI systems operate. Take something as common as the Netflix suggestions list: an AI can scan hundreds of films and rank them in real time, based on what it has learnt so far about your preferences.
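A toy version of that ranking step might look like the sketch below (plain Python; the preference weights and film list are invented stand-ins for what a real system would learn from viewing history).

```python
# Toy version of a recommendation ranking step: score each film against a
# viewer's learnt genre preferences and sort. A real system would learn
# these preference weights from viewing history; here they are invented.
viewer_preferences = {"sci-fi": 0.9, "documentary": 0.6, "romance": 0.1}

films = [
    {"title": "Space Colony",    "genres": ["sci-fi"]},
    {"title": "Deep Sea Life",   "genres": ["documentary"]},
    {"title": "Paris in Spring", "genres": ["romance"]},
    {"title": "Mars Diaries",    "genres": ["sci-fi", "documentary"]},
]

def score(film):
    return sum(viewer_preferences.get(genre, 0.0) for genre in film["genres"])

for film in sorted(films, key=score, reverse=True):
    print(f"{film['title']}: {score(film):.1f}")
# Mars Diaries: 1.5, Space Colony: 0.9, Deep Sea Life: 0.6, Paris in Spring: 0.1
```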

Not only can we often complete tasks more quickly using AI, artificial intelligence is also often used for automations that solve problems at a far larger scale than a human brain could manage.

Different to natural intelligence

Apart from the obvious efficiency gains, artificial intelligence is simply different from natural intelligence. It processes data sets differently to how a human would, often taking an unexpected path to problem solving. 

AI systems are better than humans at pattern recognition in some cases, making them very well suited to tasks such as image recognition—but it’s worth noting that humans still outperform AI at most image recognition tasks. 

Human brains also have a limited attention span. In contrast, machines have no limit to the length of time they can focus for or any aversion to ‘boring’ tasks. 

Potential for Artificial General Intelligence

Although this is a scenario that doesn’t seem feasible for at least 70 more years, constructing Artificial General Intelligence (AGI) could lead to massive leaps in socio-economic conditions. Applying superhuman intelligence to problems such as world hunger, poverty, and unemployment could result in a massive reduction in human suffering. Whilst it’s true that only a minority of artificial intelligence researchers today are focusing on achieving AGI, the massive implications for humanity can’t go unmentioned.

Ethical concerns of artificial intelligence

The topic of artificial intelligence has always been a controversial one, be it on the silver screen or in real life. The critiques range from AI posing an existential risk, to it giving rise to inequality and other socially inefficient outcomes.

Intelligence explosion

On the extreme end of the scale, the most obvious objection to developing artificial general intelligence is that it would create the kind of superhuman intelligence that we can no longer control. A common misconception comes up in sci-fi films, where the AI suddenly becomes ‘evil’ and turns against its creators. In reality, this is one of the least convincing scenarios: an intelligent machine does not possess a moral compass, nor can it have emotional motivations. Instead, some scientists and philosophers argue that a more realistic scenario (yet one holding an equally big threat to humanity) is one where the artificial intelligence is programmed to achieve a goal in a way that misaligns with human intentions. Confusing? Bear with us.

One of the main names associated with this problem is Nick Bostrom, who talks about “technological singularity”: the point where a machine achieves a kind of exponential superintelligence that could destroy humanity, whether deliberately or as an unintended consequence. He calls this an “intelligence explosion”, whereby an AI will develop ways of problem solving that are unavailable and unpredictable to humans. This unforeseen path could be destructive to humans: to the AI, the most important thing is to reach its goal, regardless of the means through which it gets there. The most common analogy for this existential threat is that of a building site located over a field that’s home to a large ant population. The builders didn’t set out to kill the ants; in fact, they have no aversion to ants, yet the ants are still crushed in the process. To put it very bleakly, in Bostrom’s scenario we are the ants.

Bias adopted from humans

Whereas the “superintelligence” objection predicts that the destructive root is the AI surpassing human thinking patterns, the other main objection to relying too widely on AI is that it might adopt our own problem-solving biases. Notoriously, an experimental AI recruitment tool developed at Amazon was used to screen applicants’ CVs after ‘observing’ previous successful, human-selected examples. It became obvious that the AI had adopted a bias against women, penalising CVs that mentioned women’s colleges or the word ‘women’s’.

Issues with establishing causality

This problem gives rise to another thorny issue: establishing causality. If machine learning is based on spotting patterns and correlations between events in order to learn and discover better ways of doing things, then a machine might become reliant on correlations that are coincidental rather than causal (a problem closely related to overfitting). A famous experiment, known as the pigeon superstition experiment, is sometimes quoted to illustrate this. Pigeons were fed at regular intervals with no reference to their behaviour, yet after a number of repetitions the pigeons were found to spend more time in the positions they happened to be in when they were first fed, as if their actions had led to them being fed. This is also a self-fulfilling prophecy, since spending more and more time in that initial position makes it more likely that they will be found there when the food does arrive. The fear, then, is of machines latching onto spurious correlations that could lead to disastrous results. A prominent name in computer science who has worked to address this issue is Judea Pearl, whose research applies probability theory and causal inference to exactly this question.
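The worry about coincidental patterns is easy to demonstrate: give a flexible model random data with no real signal and it will still ‘learn’ something, scoring perfectly on the examples it has memorised and no better than chance on anything new. A minimal sketch with Python and scikit-learn (all of the data here is random noise):

```python
# Demonstrating the overfitting worry: a flexible model "learns" patterns
# from pure noise, scoring perfectly on the data it has memorised and no
# better than a coin flip on fresh data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X_train = rng.normal(size=(200, 5))            # random features
y_train = rng.integers(0, 2, size=200)         # random labels: no real signal
X_new = rng.normal(size=(200, 5))              # fresh, equally random data
y_new = rng.integers(0, 2, size=200)

model = DecisionTreeClassifier().fit(X_train, y_train)

print("accuracy on memorised data:", model.score(X_train, y_train))  # ~1.0
print("accuracy on new data:      ", model.score(X_new, y_new))      # ~0.5
```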

Sectors that AI is transforming

With artificial intelligence becoming more and more essential to any ambitious modern-day business, its uses have adapted to a very wide range of industries and needs. We’ve used the Beauhurst platform to identify the top four sectors that artificial intelligence is transforming today. 

Big Data

Big data and artificial intelligence are, in many ways, a match made in heaven. Understanding and interpreting big data is something humans are naturally very bad at. Think of companies like Google, which hold an extensive amount of data: it makes a lot of sense for them to use artificial intelligence algorithms to make sense of it. As mentioned earlier, AI systems are much better than people at spotting patterns in big data sets and processing them in real time.

Today, most search engines use artificial intelligence to prioritise website rankings and to spot ‘spam’ content. These are self-improving tools that use learning algorithms to become more and more efficient. 

Fintech

Unless you’ve been living under a rock, you’ll be familiar with the rising popularity of fintech (financial technology), but if you’d like to brush up on the topic, make sure to read our ultimate guide to the fintech sector. There are a few ways in which artificial intelligence is aiding fintech today. 

AI can help process financial data sets to determine creditworthiness, or to detect fraudulent behaviour. As with big data, artificial intelligence systems can spot patterns across large sets of data in real-time, and thus have potential to prevent financial crime. 
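One common pattern-spotting approach here is anomaly detection: flag transactions that look unlike the bulk of normal activity. Below is a hedged sketch using scikit-learn’s IsolationForest with invented transaction data; real fraud systems use far richer features and models.

```python
# Sketch of fraud-style anomaly detection: an IsolationForest learns what
# "normal" transactions look like and flags outliers (-1 = suspicious).
# The transaction features here are invented: [amount in GBP, hour of day].
from sklearn.ensemble import IsolationForest

normal_transactions = [[12.50, 9], [30.00, 13], [8.99, 18], [45.20, 12],
                       [22.10, 10], [15.75, 17], [60.00, 14], [9.40, 19]]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_transactions)

suspects = [[25.00, 11],       # looks like the rest
            [4999.99, 3]]      # a huge amount at 3am
print(detector.predict(suspects))   # e.g. [ 1 -1 ]
```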

Another main area in which AI overlaps with fintech is in customer service, with the rise of chatbots helping move financial advice away from the high street and into the realm of online technology. 

Lastly, AI technologies can scan across historical data to learn more about consumer behaviour and where our spending goes. For instance, challenger bank Monzo makes extensive use of artificial intelligence to scan across millions of potential outlets to classify people’s spending into neat categories. 

Digital Security

Artificial intelligence is now used extensively by government departments and other watchdogs to prevent cybercrime. Since the pandemic in particular, investment in digital security has skyrocketed, following an increase in online fraud linked to more people working from home.

Similarly to other use cases, AI is applied to large data sets and learns to recognise patterns in criminal activity in ways that humans couldn’t, not just because of the scale on which they’d have to operate, but also because no matter how many times we watch Sherlock, we will never be able to spot and trace criminal activity like a machine would. 

Healthcare

eHealth is one of those industries that always comes up whenever we write about the latest developments in technology. Allowing a computer program to diagnose a health condition would have terrified us a decade ago, yet it is now willingly adopted by a growing proportion of people who are dissatisfied with more traditional health systems—and AI has of course made it all possible.

One surprising discovery has been that AI systems not only learn how to diagnose certain diseases through supervised learning from human-labelled examples; in some cases, using computer vision, the AI is able to pick up on problems that go undiagnosed by the human eye. In one experiment, where an intelligent machine was asked to identify cancerous moles among a large number of images provided by e-patients, it outperformed doctors looking at the same images: a significant number of early-stage cancers were detected by the AI but not by the humans.
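The computer-vision side of such systems is typically a convolutional neural network trained on labelled medical images. The sketch below (TensorFlow/Keras, using random placeholder data rather than real scans, and not the architecture of any particular study) shows the general shape of a benign-vs-malignant classifier.

```python
# General shape of an image classifier for a benign-vs-malignant task:
# a small convolutional neural network. The data here is random placeholder
# noise -- a real system would train on thousands of labelled images.
import numpy as np
import tensorflow as tf

images = np.random.rand(100, 64, 64, 3).astype("float32")   # placeholder "photos"
labels = np.random.randint(0, 2, size=100)                   # 0 = benign, 1 = malignant

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),          # probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=2, verbose=0)

print(model.predict(images[:3]))   # three probabilities between 0 and 1
```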

The biggest AI companies

DeepMind

DeepMind was founded in 2010 by AI researcher Demis Hassabis, together with Shane Legg and Mustafa Suleyman. The company was acquired by Google in 2014 for a reported $500m.

Its main focus is on building sophisticated AI systems such as AlphaGo, the first machine to beat a professional Go player. Go is a 3,000-year-old Chinese board game requiring several layers of strategic thinking, which makes it far harder for AI systems to play than games such as chess.

DeepMind is also expected to spark a medical revolution after launching its AI system AlphaFold, which accurately predicts the 3D structure of proteins.

Amazon 

There’s no surprise that the e-commerce giant Amazon is one of the biggest adopters of AI technology. 

The acquisition of Boston-based Kiva Systems in 2012 enabled Amazon to launch warehouse robots. It now has more than 200,000 robotic vehicles in its warehouses, which it relies on heavily for improved efficiency.

Apart from internal processes, the main area in which Amazon uses AI is customer service. Its algorithms scan consumer queries to better understand what people are looking to buy, in order to make the best recommendations. Its highly successful personal assistant, Alexa, launched in 2014 and is built around a voice-activated interface.

More recently, Amazon Web Services (AWS) has introduced machine learning (ML) and artificial intelligence (AI) services for businesses, helping customers with no formal training implement artificial intelligence technologies to improve their operations.

Apple

Although virtual assistants had been predicted and researched as far back as the late 80s, Apple made its first mark in the AI space through the acquisition of Siri for a reported $200m in 2010 (one of the last projects Steve Jobs was involved in), followed by the release of Siri on the iPhone 4S in 2011, one of the first mainstream AI-enabled virtual assistants.

AI and machine learning are at the centre of Apple’s activities with the user experience at the forefront of decision making. The AI interface and human-like support system enabled Apple to compete with Google whilst also enhancing the user experience, improving efficiency and making everyday tasks easier through features such as shortcuts. 

Apple’s AI work didn’t stop there, with various functionalities added to its portfolio, including Face ID, handwriting recognition, sleep tracking and various recommendation features. Research carried out by GlobalData identified Apple as the company that acquired the most AI companies between 2016 and 2020, highlighting its dominance in the global AI ecosystem.

OpenAI

OpenAI was founded in 2015 by Elon Musk and Sam Altman, with the aim of developing “friendly AI” that benefits society. Responding to criticism from figures such as Stephen Hawking, who had expressed concern about the existential risk posed by strong AI, OpenAI started out as a non-profit organisation researching how AI could make a positive long-term impact.

OpenAI’s research focuses on reinforcement learning, with applications ranging from motor skills to music and gaming. In 2019, Microsoft invested $1b in a partnership with OpenAI; the collaboration supported the development of GPT-3, a natural language processing model with novel capabilities, such as supporting humans in creative pursuits like writing and composition.

Google 

Google makes extensive use of artificial intelligence, from its search engine algorithm, to its own R&D projects. Known for investing in AI startups, Google is clearly committed to remaining a massive player in the future of AI. 

Google’s search engine has made several leaps of progress since its launch in 1998, using learning algorithms to spot scraping sites and more general spamming behaviour online. Today, it continues to develop and improve its offering in terms of user experience, seeking to provide the most relevant search results for users’ queries. Google Translate is also powered by natural language processing systems that keep evolving to become more sensitive to context and nuance.

Alongside its primary search engine offerings, Google also plays a huge role in the development of AI technology. Its free, open-source tool, TensorFlow, is aimed at making machine learning and deep neural networks more accessible to everyone.

Facebook

It would have been hard to anticipate what a big role social media would play in our lives when Facebook was first founded in 2004. With over 2.8b users, the platform holds an unbelievable amount of personal data. Of course, with privacy limitations in place, Facebook understands the potential this data has and invests in AI tools accordingly. 

DeepText, Facebook’s own NLP system, is able to process comments, analyse behaviour and read subtext to predict people’s preferences for their newsfeed. Facebook’s AI systems were also revised and improved to detect inaccurate information following the “fake news” controversy surrounding the 2016 US elections. The algorithm is now also able to pick up on hate speech and suicide triggers.

Nevertheless, there is still a huge amount of controversy about the use of AI in social media. In “The Age of Surveillance Capitalism”, Shoshana Zuboff criticises this application of AI, arguing that it is an invasion of privacy that we never signed up for.

DJI

DJI is the global leader in drones and aerial imaging technology, with more than 70% market share; both France’s Parrot and China’s Yuneec are struggling to keep up with the pace of DJI’s innovations.

In 2018, DJI announced a strategic partnership with Microsoft, which enabled it to integrate Microsoft Azure’s AI and machine learning capabilities into its drone production. Through this partnership, DJI could take advantage of cutting-edge AI, allowing many businesses across the world (especially those in agriculture and construction) to benefit from better aerial imagery and video data: with a drone, farmers can effectively be taken into the sky to analyse their crops and terrain.

DJI invests heavily in AI research for its new models, including the DJI Mavic Air 2, which introduced features such as image recognition. Utilising AI has benefited users by automating routine inspections and capturing consistent results.

Rumours suggest that DJI is expanding into new markets including robotics and autonomous vehicles. 

IBM

One of the original companies involved in AI, dating back to the 1950s, IBM remains an active player in developing expert systems such as its Watson enterprise AI system. IBM Watson’s business applications range from IT operations, to risk and compliance, to post-pandemic “return to work” processes.

In 2017, IBM joined forces with MIT to set up the MIT-IBM Watson AI Lab, which carries out research into artificial general intelligence, computer vision and graph deep learning, among other themes.

Emerging AI companies

Using the Beauhurst platform, we’ve compiled a list of five emerging artificial intelligence companies that have raised over £20m since the pandemic. These companies are all at the ‘seed’ or ‘venture’ stages of evolution.

Palta 

Palta develops a range of health and wellness applications, such as period trackers. Its products aim to deliver early detection of health problems through AI systems. For instance, Flo.Health (the number one period-tracking app on the US Apple App Store) works by building algorithms based on users’ cycles to spot irregularities.

Palta received $100m (£72.0m) of equity funding in August 2021 from investors including VNV Global and Target Global. Per Brillioth, CEO of VNV Global, added that “Mobile and preventative health services are the future of the health industry”.

Constellation AI

Constellation AI develops software aimed at improving AI conversational abilities for use in customer service centres. Its technology uses natural language processing tools to make it possible for humans to communicate with machines. Its AI system constantly evolves and adapts to each user to capture emotions, intentions and behaviours. 

Constellation AI has so far received four rounds of funding, the biggest of which took place in January 2021, when it secured £62.1m from undisclosed sources.

Ultromics

Ultromics uses machine learning to compare ultrasound images taken during echocardiograms with heart images in an existing database. Its latest technology, EchoGo, enables physicians to diagnose heart failure and coronary disease earlier and more quickly, through less invasive methods.

Ultromics has received £31.7m of R&D funding from Google Ventures and Oxford Science Innovations, alongside two other investors, to support the development of AI-backed diagnostic tools for echocardiograms.

Harbr

Harbr develops an internet-based platform designed to make data more accessible. Its aim is to make better use of the world’s data, treating it more as a product with a particular lifecycle. AI technology is, of course, essential in managing and processing this data exchange in real time. Gary Butler, Co-Founder and CEO, believes: “Data is an increasingly important asset. Our work to transform how we realize its value, as well as facilitate data innovation through collaboration, will literally change how businesses, governments and society interact.”

In November 2020, Harbr received $38.5m (£29.1m) of equity funding from Backed VC, Seedcamp, Crane Ventures, Dawn Capital and Tiger Global Management, as well as several business angels, topped up by management participation.

Cervest

Cervest develops software that allows users to measure risk in industries such as manufacturing, city planning and real estate. In particular, it focuses on calculating climate risk to assets in real time, using its proprietary platform, EarthScan.

Cervest has developed technology by combining artificial intelligence and machine learning practices with statistics and physical insights. The AI systems are useful in gathering and processing insights from historical data in real time, whilst the statistical models help forecast future outcomes. 

In May 2021, the climate intelligence company secured a £21.2m funding round from a range of investors, including Draper Esprit and Future Positive Capital.
