How to grow with AI? (October 2024)

This month I propose two topics: companies' growth prospects with AI, and testing AI engines.

1/ How could digital and AI drive companies' growth in France and Europe?

The size of an economy is the product of productivity and the number of hours worked: when either productivity or hours worked increases, the economy grows. That is the supply side of the equation. On the demand side, there are the quality of goods and their prices; prices and costs determine the profit margin and therefore the ability to invest.
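Written out as a simple identity (a formalisation of this supply-side point, taking output per hour worked as the measure of productivity):

```latex
\text{Economic output}
\;=\;
\underbrace{\frac{\text{Output}}{\text{Hour worked}}}_{\text{productivity}}
\times
\text{Hours worked}
```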

The promise of digital is to increase companies' productivity, and so is the promise of the rising wave of AI. Over the past few years, companies have spent a lot of money on it, increasing their operational costs as they move to cloud services.

Companies are investing heavily in digital and AI

These increased costs flow straight into the pockets of digital service providers, whose margins are impressive.

The real question, therefore, is whether digital transformation and AI will generate enough profit to cover these costs. Daron Acemoglu, the MIT economist who recently won the Nobel Prize, predicts that annual productivity gains from AI will be only 0.07% over the next decade. One of his arguments is that beyond a certain threshold of automation, it is better to employ a human.

So the real challenge for companies seems to be making better use of their employees' brains, rather than replacing them with an artificial one. AI, seen as a means of augmentation, can compensate for gaps in employees' training on ancillary skills, such as mastering the written language or a foreign one, or closing a deal, while the employee's own experience covers the risks.

Why not consider the human brain as the company's real human capital, and AI as its fertilizer? Isn't that a strategy companies should consider to get real value from AI?

2/ Will testing AI add a new burden to all AI user organizations?

Traditional deterministic software is tested with trial data sets that represent its expected target behavior. AI engines, by contrast, are not deterministic, so they are evaluated statistically, typically through their rate of false positives (measured by specificity) and false negatives (measured by sensitivity).
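To make these two metrics concrete, here is a minimal Python sketch (the function name and toy data are illustrative, not tied to any particular testing tool) computing them from a labelled evaluation set:

```python
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate: few false negatives
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate: few false positives
    return sensitivity, specificity

# Toy evaluation set of 8 cases
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 1, 0, 0]
print(sensitivity_specificity(y_true, y_pred))  # (0.75, 0.75)
```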

Ad hoc testing methods must therefore be applied to meet these new requirements, and most of them revolve around data, in particular:

Coverage, since training and test datasets do not always represent the real behavior of the target user population, especially for bots, LLMs, videos…

Biases, which result from the divergence between the expected target population and usages, and the ones actually observed.

Real-time testing, to track how changes to AI engines during use affect performance (specificity and sensitivity), coverage, and biases; a minimal drift-monitoring sketch follows this list.
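Here is that sketch. It uses the Population Stability Index, a common drift metric (an assumed approach, not one prescribed in this article), to flag when production inputs drift away from the data the engine was tested on:

```python
import math

def psi(train_counts, prod_counts):
    """Population Stability Index over matching bins; > 0.2 is a common rule-of-thumb alarm."""
    total_train = sum(train_counts)
    total_prod = sum(prod_counts)
    score = 0.0
    for t, p in zip(train_counts, prod_counts):
        expected = max(t / total_train, 1e-6)  # share of this bin in training/test data
        actual = max(p / total_prod, 1e-6)     # share of this bin in production traffic
        score += (actual - expected) * math.log(actual / expected)
    return score

# Toy example: age-bucket counts at test time vs. after six months in production
train_buckets = [400, 300, 200, 100]
prod_buckets  = [150, 250, 350, 250]
print(round(psi(train_buckets, prod_buckets), 3))  # ≈ 0.48, well above the 0.2 alarm level
```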

Even if AI itself can help with testing, for example by generating synthetic data that sidesteps the privacy complexities so common in AI projects, testing AI is a major new challenge for qualification teams.
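To illustrate the synthetic-data idea, here is a deliberately naive sketch (real projects would rely on stronger guarantees, such as differential privacy): test records are built by resampling each field independently from the observed values, so no real record is reproduced as a whole.

```python
import random

real_records = [
    {"age_band": "25-34", "region": "IDF",  "churned": 0},
    {"age_band": "35-44", "region": "PACA", "churned": 1},
    {"age_band": "25-34", "region": "IDF",  "churned": 0},
    {"age_band": "55-64", "region": "ARA",  "churned": 1},
]

def synthetic_sample(records, n, seed=0):
    """Draw n synthetic records by sampling each field independently from real values."""
    rng = random.Random(seed)
    fields = records[0].keys()
    return [{f: rng.choice([r[f] for r in records]) for f in fields}
            for _ in range(n)]

for row in synthetic_sample(real_records, 3):
    print(row)
```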

Some companies have already outsourced the complex testing of traditional software. The proliferation of AI engines will amplify this trend, and the burden of testing will fall not only on the companies that develop AI but also on the companies that use it, since most AI problems come from their data.

The cost of testing should therefore not be forgotten when assessing the overall cost of AI. Of course, it will be up to data managers to handle all these critical aspects of data use, and all of this will require them to upgrade their best practices.
