Pausing AI development? As if we could...
When we hear of a new way in which artificial intelligence (AI) could put humanity at risk, or even just move someone’s cheese, the debate often shifts to whether we should pause AI development until some impact analysis is carried out or some regulation kicks in.
I do not think that pausing AI development is even a plausible option. History has shown that technology never stops. We have tried pausing it in areas far riskier for humanity, without much success. We should accept that humanity is insufficiently consensus-driven, and that the stakeholders in AI technologies have too many easy ways to bypass bans and too many good reasons to do so. Yes, there is serious lobbying promoting this pause, but I would argue that most of it aims at pausing productization rather than development, and that it is, to a notable extent, led by players who are simply late to market and need time to catch up.