Understanding the recurring challenges faced by AI scaleups

Posted 30 Oct 2018

At Digital Catapult we have the privilege to meet lots of leading-edge companies every week and, after a while, the things they have in common start to surface and become more noticeable. In the last few months, a few recurring challenges among early stage deep tech AI companies have caught our attention.

One of those challenges concerns deep tech companies that want to develop platforms to license their AI solutions. Like many SaaS companies, they find that they need early foundation clients to help prove out their concepts and generate those all-important early revenues. Interestingly, AI businesses rooted in deep tech tend to be quite diverse in the range of verticals they enter, frequently because the solutions their algorithms offer can be applied equally well across many domains. Finding that first customer with a dataset they are willing to share is often arbitrary and very time consuming. That first client is what gets the company kick-started, and it is only when a client with data to train on shows up that the business is led into a particular domain specialisation. So one of the challenges is how to retain focus within one domain or industry sector, rather than being diverted into various different sectors which don’t complement each other and which, although they might happen to yield data-rich clients, will ultimately make it difficult for a small AI company to scale.

The argument is often made that the early customers of small tech companies will essentially pay to build the product, which the company can then roll out to everyone else and scale that way. This is a tried and tested method, and a sensible approach to pursue. Inevitably, however, finding those early clients takes a long time, and when you do sign them up, they want the work to be a pilot and to pay heavily discounted fees. They then swamp the development team with very particular, granular demands about how the integration with their legacy systems will work and what kind of APIs need to be created – in other words, lots of requirements that are bespoke to that customer rather than generic and applicable across the board. So although useful cashflow revenue is generated, the company becomes almost entirely focussed on the needs of one or two early customers at the expense of developing a scalable solution.

Inevitably, these challenges, which are familiar to many platform businesses, get more complicated in the context of AI companies. One reason is that the development team is almost invariably made up of highly motivated, extremely well-qualified, sometimes academic folk, who love getting their brains around gnarly, intellectually challenging problems. Clients, of course, love that they are getting groups of PhDs or postdocs focussed on their problems. The challenge for a company that wants to scale up is that building out the “rinse and repeat” product which will enable it to scale requires an entirely different set of skills in a development team. Very often the platform development team needs to be a totally different group of people, not the group around which the company was originally formed. The skills needed are very different from those of the algorithm design and training team, the data cleaning team, and indeed the data science team (who at the beginning are, of course, all the same people). The product development team requires strong user experience know-how, great workflow design, and rigorous organisation of the information architecture – generally things that machine learning PhDs are less interested in. So, in fact, very often a product development team does not exist at all on the org chart, and the company will require further capitalisation in order to hire one.

Few of these considerations figure large in the minds of founding teams, which very often form around eminent academics or groups of graduating doctoral students. Yet if they stopped to consider how to scale the high-grade output of their AI dev team, they would recognise that other, more established skills, at equally challenging levels of accomplishment, may also be required to make scaling that output viable.