8 November 2021 / Jan Klesla
Jan Klesla: Let's not suffocate artificial intelligence with regulation
In the last newsletter, David asked how you would regulate Matilda, an artificial intelligence. I'm glad, because this topic is truly #crucial for the future of the Czech Republic and the EU and, as the aforementioned pre-election debate between party leaders and Matilda showed, it is not exactly at the centre of attention here, to put it diplomatically. So how should AI be regulated in Europe? The short answer: not at all. Or, better yet: not at all.
The regulation the European Commission has come up with is simply redundant, and it will stifle the competitiveness of European companies in a sector that should instead be the engine of the global economy, especially in a time of rapid post-covid digitalisation. How is that possible? Take Rossum.ai, named in honour of Karel Čapek: a great Czech startup focused on automatic invoice reading, saving office work, money and people. On Wednesday they announced a $100 million investment, and they are on their way to becoming the first Czech AI unicorn. An outstanding achievement; one just wonders whether their investors counted on having their money in a heavily regulated industry within two years.
What nonsense, you say, who would regulate the reading of invoices? But beware: Rossum also offers similarly automated CV reading. Handy if you have hundreds of candidates applying. Only now we are in the territory of HR systems, and the European Commission considers those "high-risk". All Rossum has to do is let their system pre-screen whether job applicants have at least basic English, and they will literally have to slap a CE sticker on it. Seriously, a CE mark on a neural network. Plus the conformity assessment that is done for, say, electrical products or flammables. That means translating probably all the documentation into German or French and paying hundreds of thousands of euros to some big authority in the west to do for a neural network what it does for an electric motor.
On top of that, they must hire a bunch of expensive people just to keep an eye on it, fulfil a whole list of other obligations, and perhaps let the competition look at their source code. Or the investors will simply strongly advise them not to develop in this direction, and recruiters will keep sorting CVs by hand. Why? Because someone in the US exploited a flaw in an AI to discriminate against African-Americans. Never mind that in Europe this is a rather unlikely problem, and that discrimination by race and even age is already covered by our strict anti-discrimination law. The same goes for fintech, where Twisto, with its AI named Nikita, is again the star. Never mind that they have long operated in a regulated industry and their every move is watched by the CNB: the Commission will impose further obligations on them. It's not Čapek any more, it's Kafka.
But perhaps the worst thing is the change in mentality. Every new AI start-up will invest in lawyers first, instead of in the product itself, just to find out whether it makes sense to proceed at all. Or the best ones will simply pack up and move to the US or Britain, which has already announced outright that it will not adopt anything like the Brussels rules. That is a very high price to pay for regulating, ex ante, a potential threat that has not yet materialised. Or rather, it has not materialised in the EU precisely because existing regulations already cover it. Wait, you may object, but what about protecting our rights so that we do not end up like China? Well, we already have not two but three systems of protection: two European (the EU Charter and the Convention) and one national (the Czech Charter), plus a pile of case law to go with them. The last time Home Credit tried, years ago, to bring a simple algorithm of Chinese origin to the Czech Republic, it earned a Big Brother Award and quietly backed out again. I really don't think that such an unwanted post-Brexit gift to Boris Johnson, in the form of our best companies, is worth a French gentleman in Brussels building himself a completely unnecessary monument.
Yes, there are plenty of areas we should regulate. Data, for example, the input to any AI. But GDPR and other forthcoming regulations already cover that. Similarly, the internet is regulated through its inputs, such as telecoms. There is not even a definition of the internet as such in the legal system, and that does not seem to hamper its development in any way. The same should apply to AI. Liability is also important. Not a sexy topic, but all the more important, because a Tesla on autopilot can actually crash into you today. Only, the Commission keeps pushing that off while fighting illusory threats that make for better headlines. When we have real evidence of real holes in our already rather overstretched legal system, let's plug them. Until then, it is better to invest the time and money in making the Matildas a little smarter.