THE CRIMINAL LAW TO COME. BRIEF CONSIDERATIONS ON THE POSSIBLE USE OF AI (AND BIG DATA) TO PREVENT THE RISK OF UNINTENTIONAL DISASTERS.

The paper dwells on the impact of new technologies, in particular artificial intelligence, from a somewhat original perspective: one related not to the theme of predictive justice, but to the forthcoming "reflections" on criminal fault. The author draws inspiration from some (more or less) recent cases of unintentional disaster to examine the possible concrete effects of artificial intelligence on criminal fault, referring to the "weak" meaning of artificial intelligence. It is desirable that, in the near future, the use of artificial intelligence will reduce the incidence of human conduct in the occurrence of unintentional disasters.

The prosecutor's office in Verbania, which opened investigations for disaster and wrongful death against the chief executive, the manager and the head of service of the cableway, also requested confirmation of the provisional arrest and the application of the coercive measure of pre-trial detention in prison for the three suspects; this request was rejected by the judge, who instead applied the precautionary measure of home detention only to the head of service of the cableway 1 . One of the key points of the investigation concerns the placing, by the head of service, of brake-lock devices that inhibited the normal operation of the emergency brake system of the cable car, located on the highest (supporting) cable. As stated in the arrest decree, the head of service of the cableway admitted having «deliberately and repeatedly placed the brake-lock devices (forchettoni), deactivating the emergency brake system», a conduct «that was widely known» also to the other two people under investigation, who «agreed with this choice and did not take steps to allow the necessary maintenance works, which would have required the shutdown of the infrastructure, with a financial impact» 2 . Pending the investigation, the company responsible for ordinary and extraordinary maintenance of the Stresa cableway disclosed the latest checks, carried out at the request of the operator, which did not highlight technical problems; moreover, this company announced that it would join the proceedings as a civil party, because «the tampering with security systems that led to the death of 14 people is an extremely serious act. The use of the "forchettoni" is expressly prohibited with people on board. Any compensation will be donated to the families of the victims» 3 .
(continued) The relevance of human conduct with regard to unintentional disaster and manslaughter.
The tragic accident of the Stresa cableway raises a preliminary question, related to the conceptual distinction between the fortuitous event (caso fortuito) and the relevance of human conduct with regard to unintentional disaster. Two co-present factors could have led to such an event: the anomalous breach (or detachment) of the traction cable and the brake failure on the load-carrying cable. The sudden break of the traction cable could appear "strange", because cable degradation is a "slow" phenomenon: steel cables are regularly monitored and their breaking is certainly not unexpected, but is the result of a slow process of wear and corrosion that has to be foreseen by the technician who monitors the infrastructure, unless some distorting event occurred, such as a strong mechanical action, a lightning strike, or an object falling very violently and heavily (which did not emerge in this case, almost from the early stages of the investigation). Among the periodic cable checks, the magnetic-inductive checks deserve mention for their importance: they consist in subjecting the traction and supporting cables, along their extension, to a magnetic field; the locally magnetized cable responds with a signal proportional, even if with possible error margins, to the area subjected to the magnetic field and, in this way, the technician can understand the situation inside the cable, even without seeing broken wires. These checks have been used for decades and constitute the most established technique; although further investigations are possible (such as, for example, the opening of the cable or a prolonged mechanical action), the magnetic-inductive checks give a picture of the situation.
From a quick look at the list of checks carried out by the company responsible for plant maintenance, it seems that the magnetic-inductive check of the traction cables (and of all cables of the plant) was successfully performed on 5 November 2020 (in accordance with the periodicity imposed by the law). Among the precedents of the Stresa cableway accident one can mention, for example, that of Monte Bianco, where on 29 August 1961 the supporting cable of the cableway between Punta Helbronner and the Aiguille du Midi was sheared by a French fighter-bomber flying above Monte Bianco: three cabins fell and six passengers died; the remaining passengers spent the night suspended in the void before being rescued. That accident happened 24 and a half years after the installation of a cable that had to be replaced within 25 years: since then, the cable replacement obligation has been set at intervals of less than twenty years. One of the problems that high-altitude structures have to face is temperature variation: on summer days, cables are bombarded by sunlight and their temperature increases, even up to around 70-80 °C; at night, at the highest altitudes, the temperature drops by several degrees and the humidity inside the cables condenses and accumulates towards the bottom; therefore, it is possible that in the period of non-use of the cableway there was an accumulation of humidity and a formation of water drops towards the bottom. This phenomenon, known as "fretting fatigue" (well known to technicians), implies that in the inner layers of the traction cable there could have been breaks of wires, so-called "flute-mouth" breaks. These first (summary) observations seem to exclude the relevance of the fortuitous event and point towards the incidence of human behaviour, since the fortuitous event must be an unforeseeable and exceptional occurrence that suddenly interposes itself in the action of the subject (pursuant to Article 45 of the Italian criminal code).
Further confirmation to that effect (as if proof were needed) could be drawn from previous cases of unintentional disaster that have marked the most recent history of our country. Consider, for example, the train wreck of 12 July 2016, when a frontal collision between two trains on the Andria-Corato route of Ferrotramviaria caused the death of 23 people and injuries to 51 other passengers; in the proceedings before the Court of Trani, Ferrotramviaria s.p.a. 4 and 17 other natural persons (employees and managers of the Apulian transport company and of the Ministry of Infrastructure) are accused, in various capacities, of train wreck, manslaughter and serious negligent injuries, intentional omission of precautions, violation of safety norms and forgery. During a trial hearing -as we learn from the judicial chronicles -the obsolescence of the so-called telephone-block system was reportedly challenged, i.e. the system of phone calls authorizing train departures (dating from the middle of the 19th century), which regulated the safety of rail traffic on the line where the accident occurred 5 . The technological inadequacy of the line -according to the technical consultants of the Public Prosecutor's Office -had moreover created the potential conditions for a frontal collision between two trains 146 times before the tragic accident. Another predictable and avoidable train wreck, according to the Prosecutor of Milan who concluded the investigations (as we learn from the judicial reporting), was that of Pioltello on the Milano-Venezia line where, on 25 January 2018, the derailment of a passenger train caused three deaths and 46 wounded; this accident -caused, according to a report by the consultant of the Public Prosecutor's Office in Milan, by the now well-known 23-centimetre "piece of rail" cracked at the so-called "zero point" -would not have occurred had the maintainers' reports been taken into account and had there not been a long series of safety omissions, carried out for the exclusive purpose of saving maintenance costs for the company RFI 6 . A further example of unintentional disaster is the collapse of the Polcevera viaduct in Genoa, also known as the "Morandi" bridge, which on 14 August 2018 caused the death of 43 people. The Public Prosecutor's Office of Genoa, as we know from the judicial reporting, recently closed the investigation against several parties (71 people and two companies, Autostrade per l'Italia -Aspi -and Spea), involved in various capacities for disaster and manslaughter, as well as for attempt on transport security, malicious removal of precautions against accidents at work and vehicular homicide.
From the investigations a worrying fact emerged, namely that «despite many warning signs, no one took decisions to secure the cables, which are the most critical parts of the viaduct»; since the inauguration of the viaduct in 1967, «the cables of the collapsed pile were not the subject of any substantial maintenance work» 7 . In the accident of the Mottarone cableway -as in the other previously mentioned unintentional disasters -it is undisputed that there was no use of Artificial Intelligence (known under the acronym AI) in the monitoring of possible criticalities of the infrastructure. But what would happen if, in the near future, there were a concrete "transition" to promising new technologies such as, for example, AI? The question which this paper aims to answer is whether someday soon AI could at least "reduce" the incidence of human behaviour in relation to unintentional disasters such as, for example, those set out above. Perhaps that day could mark, so to speak, a "new beginning" for criminal fault, in which human behaviour could have as its "parameter" the super-model agent of the AI.
4 The Court of Trani, by order of 7 May 2019, revoked the previous order issued by the Preliminary Hearing Judge of the same Court and admitted the establishment of a civil party against the charged institution (both orders are available on www.giurisprudenzapenale.com, 8 May 2019). 5 It is useful to recall the full statement of the Infrastructure and Transports Minister Graziano Del Rio, made at the Italian Chamber of Deputies for the government communication in relation to the tragic train crash on the Andria-Corato line: «The safety of the section involved in the accident is regulated by telephone consent, which leaves its management entirely to human beings, and is among the least advanced and most risky systems for regulating rail traffic. Unfortunately, a system such as telephone consent, which leaves the possibility of intervention entirely to human beings, is today considered at greater risk, even when used on operating sections compatible with the system. The safety of rail traffic on the section where the accident occurred is regulated through the mechanism of telephone consent: under the telephone-block regime, the stationmaster may not send a train to the next station unless he has requested and obtained, from the stationmaster of that station, consent to send that particular train. The line section is thus normally considered blocked, and is released for traffic each time by means of the consent to forward the train; under this procedure, only one train at a time may be present on the line section. The signalling system based on telephone consent, while safe, is certainly among the least advanced compared with the technologies available for regulating rail traffic: in fact, the system relies entirely on human beings, in this case on the operations of the stationmasters, as described above. The technologies available today are manifold, and are adapted to the various operating regimes in relation to the characteristics of the network, the frequency of trains and the operating speed. In the present case, on the single-track section in question, the telephone consent system has been in use for over sixty years: the current frequency of trains has remained practically unchanged for about ten years, during which no problems in the application of the system have emerged. The system, I repeat, is the complete responsibility of Ferrotramviaria, the operating company» (the text, in Italian, is available on www.agi.it). 6 It should be added that the Preliminary Hearing Judge of the Milan Court, with an order lodged on 2 February 2021, declared
A technological view: AI to make infrastructures "smart".
AI, meant as the use of computers to simulate the cognitive functions of human beings 8 , is based on algorithms, i.e. sets of precise instructions and mathematical expressions to face a given problem and solve it 9 ; speaking metaphorically, it could be said that an algorithm «expresses with formulas the complexity of human behavior» 10 . Suffice it to mention the self-learning algorithms that «identify strict relationships in the observed data, without explicit pre-programmed rules and models» (so-called machine learning), or the algorithms based on a simplified model of the human brain, the so-called neural networks, which recently, thanks to ever more powerful computers, have become capable of deep learning, an expression that refers to the «many layers of units involved in the algorithm» 11 . This last kind of learning does not only apply to images, but has applications «in the most challenging cases, such as predictions» 12 . With some emphasis, it can be asserted that having at one's disposal large amounts of data from which to extract information, together with algorithms, means today having a gift similar to that of the Delphic Oracle in ancient Greece: «they are the digital soothsayers of today. As in ancient times, they are at the heart of huge economic and power interests and become objects of adoration or fear. Always mysterious, both when they solve our problems and when they indicate inconvenient perspectives. Terrifying when they make us realize a world where the algorithm not only reads the future, but also defines it» 13 . No less important for AI algorithms are Big Data 14 , fundamental economic goods in the age of digital capitalism, which can be understood in the sense of mega-data (as established by the EU), namely «large amounts of different kinds of data produced by several sources, among which people, cars and sensors» 15 .
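The contrast drawn above between explicitly programmed rules and machine learning can be made concrete with a deliberately minimal sketch (the data and model below are invented for illustration and are not taken from the works cited): the rule linking input and output is never written into the program; a simple gradient-descent procedure infers it from observed examples.

```python
# Minimal illustration of "learning from data": the relationship
# y = 2x + 1 is never hard-coded; the model infers the coefficients
# from example pairs by gradient descent on the squared error.

def learn_linear(samples, lr=0.01, epochs=5000):
    w, b = 0.0, 0.0                      # start with no knowledge
    n = len(samples)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in samples:
            err = (w * x + b) - y        # prediction error on one example
            grad_w += 2 * err * x / n
            grad_b += 2 * err / n
        w -= lr * grad_w                 # adjust toward the observed data
        b -= lr * grad_b
    return w, b

data = [(x, 2 * x + 1) for x in range(-5, 6)]   # observed examples
w, b = learn_linear(data)
print(round(w, 2), round(b, 2))   # values close to 2 and 1
```

The same mechanism, scaled up to millions of parameters and layered units, is what the quoted passages describe as machine learning and deep learning: the "rule" lives in the learned coefficients, not in the source code.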
The amount of this huge mass of data, collected and processed by the global digital platforms (over the top) 16 , serves to improve the efficiency of the algorithms and, «in turn, the use of the algorithm by each of us generates new data and so on, teaching the algorithm how to improve and, even, how to learn better» 17 . The algorithmic efficiency of AI, supplemented by the huge amount of Big Data -in addition to corollary technologies such as the "Internet of Things" (which virtually extends connection to all things) and advanced robotics -appears to have started the "fourth industrial revolution".
8 The conceptual basis of AI was laid by the famous British mathematician Alan Turing, in one of his writings named Computing Machinery and Intelligence, in Mind, 1950, 49, 433 ff. On the concept of artificial intelligence, see BODEN, L'intelligenza artificiale, trad. it., Bologna, 2016, 7 ff. 9 For a definition of algorithm, see TOFFALORI, Algoritmi, Bologna, 2015, 7. 10 VESPIGNANI, L'algoritmo e l'oracolo. Come la scienza predice il futuro e ci aiuta a cambiarlo, Milan, 2019, 65. 11 VESPIGNANI, L'algoritmo e l'oracolo, cit., 66-67; on machine learning, neural networks and deep learning, see also CRESCENZI, PAGLI, Problemi, algoritmi e coding, Bologna, 2017. On machine learning algorithms, see funditus DOMINGOS, L'algoritmo definitivo. La macchina che impara da sola e il futuro del nostro mondo, trad. it., Turin, 2016. 12 VESPIGNANI, L'algoritmo e l'oracolo, cit., 76. 13 VESPIGNANI, L'algoritmo e l'oracolo, cit., 19. 14 DELMASTRO, NICITA, Big data, Bologna, 2019, to which reference should be made also for further bibliographic details. 15 DELMASTRO, NICITA, Big data, cit., 10 ff. 16 The English expression "over the top" means that «digital platforms develop services that are hierarchically above the physical fixed and mobile telecommunications infrastructures thanks to which we can access the network» (DELMASTRO, NICITA, Big data, cit., 48).
The expression "fourth industrial revolution" was coined by the economist Klaus Schwab 18 , founder of the World Economic Forum. Among the various applications of AI in the foreseeable future 19 -for what is of interest in this paper -there is also the monitoring of infrastructures: in fact, AI could "intercept" those "suffering indicators" of infrastructures that are initially undetectable, but tend to grow to huge proportions, endangering the integrity of a structure. To achieve high security standards thanks to the new technologies, the key word of the near future seems to be "predictive" maintenance, instead of "corrective" maintenance: each infrastructure could be equipped with sensors suitable for "smart" monitoring. AI could analyze the parameters detected automatically and continuously on the infrastructure, in order to elaborate predictions about its state and indications about the maintenance to be carried out; in this way, AI would help the technicians of this sector to find and solve complex problems.
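The "predictive" (rather than "corrective") maintenance logic just described can be sketched in a deliberately simplified form; the sensor readings, the wear indicator and the safety threshold below are invented for illustration only: a trend is fitted to periodic measurements so that intervention can be scheduled before the threshold is crossed, rather than after a failure.

```python
# Hypothetical sketch of predictive maintenance: fit a linear trend
# to periodic sensor readings of a wear indicator, then estimate
# when the safety threshold will be reached.

def linear_fit(times, values):
    """Ordinary least-squares slope and intercept."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    slope = sum((t - mt) * (v - mv) for t, v in zip(times, values)) / \
            sum((t - mt) ** 2 for t in times)
    return slope, mv - slope * mt

def predicted_crossing(times, values, threshold):
    """Estimated time at which the wear indicator reaches the threshold."""
    slope, intercept = linear_fit(times, values)
    if slope <= 0:
        return None                     # no degradation trend detected
    return (threshold - intercept) / slope

# Invented monthly readings of a wear indicator (arbitrary units).
months = [0, 1, 2, 3, 4, 5]
wear = [0.10, 0.14, 0.17, 0.22, 0.25, 0.30]
THRESHOLD = 0.60                        # hypothetical safety limit

t_cross = predicted_crossing(months, wear, THRESHOLD)
print(f"maintenance advisable before month {t_cross:.1f}")
```

A real "smart" monitoring system would of course rely on far richer models (and on the machine learning techniques discussed earlier) over many sensors at once, but the design choice is the same: act on the predicted trend, not on the failure.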

(continued) The EU proposal for an "AI Regulation".
In any case, it should be noted that the European legislator introduced for the first time a systematic legal framework with the proposal for an "AI Regulation" published by the European Commission on 21 April 2021 20 . In this regard, the proposal provides a pioneering definition of AI system, which includes software developed with the techniques or approaches indicated in Annex I, among which are machine learning algorithms. Even more interesting is the classification of AI practices, which are allowed or prohibited depending on the risks they pose for fundamental rights and security: for what is of interest in this paper, the AI systems related to essential public infrastructures are qualified as "high-risk", since their use is allowed only in the presence of specific security controls, according to a classification model based on the risks associated with the product. In particular, a first category comprises the AI systems, listed in Annex II of the Regulation, which are used as safety components of products (or which are themselves products). A second category concerns the stand-alone AI systems listed in Annex III, the use of which can affect fundamental rights. The AI Regulation provides for several checks to be carried out on high-risk AI systems, in order to ensure high transparency and security standards. As a corollary of this complex "net" of checks, important duties are imposed on the provider (as, for example, the traceability and verifiability of the outputs of high-risk AI systems for their entire life cycle), together with a special risk management system, practices of data governance ensuring, among other things, statistical properties appropriate to support the use of the AI system and, last but not least, an effective human supervision.
18 SCHWAB, La quarta rivoluzione industriale, trad. it., Milan, 2019, according to whom the fourth industrial revolution started in 2016 (after the third industrial revolution, which began in 1960 and is also known as the "digital revolution"), thanks above all to AI machine learning algorithms and to the growth of Big Data. 19 On the main applications of AI, which cover a wide range of sectors of society, see in general LONGO, SCORZA, Intelligenza artificiale, Milan, 2020, 60-61. 20 For a better understanding of the articulated discipline of the AI Regulation proposal, see LUSARDI, Regolamento UE sull'Intelligenza Artificiale: uno strumento articolato per gestire il rischio, in Quot. giur., 3 June 2021.
The Regulation also provides for the establishment of a European "Board" with advisory and assistance functions for the European Commission, and for the power to impose pecuniary administrative sanctions in case of non-compliance with this complex discipline.
Brief critical criminal-law considerations on the "strong" meaning of AI.
Faced with a future scenario of smart infrastructures which could help to forecast risks of unintentional disasters (and the related fatalities), the criminal law scholar must preliminarily dwell on the "strong" meaning of AI, which conceives it as an autonomous legal entity, and then on the "weak" one, which instead asserts the ancillary (serving) nature of AI with respect to human behaviour 21 . Unlike traditional software, an AI system is not based on computer programming (that is, on the work of developers who write the operating code of the system), but on learning techniques: algorithms are created to elaborate a huge amount of data, from which the system derives its ability to understand and reason. Therefore, AI decisions are based on decision-making processes that cannot be understood by external observers: this is the well-known issue of the so-called black box, which refers to the fact that «many algorithms take an incoming piece of data and produce another one in output, passing through a learning process that is the "black box", which cannot be interpreted from the outside» 22 . Moreover, machine learning has allowed algorithms to circumvent the Polanyi paradox, which can be summed up in the common-sense sentence «we know more than we can explain»: «implicitly, we know much about the way in which the world around us works, but we cannot explain this knowledge […]; machine learning algorithms and, in particular, neural networks are the right instruments with which computers acquire a type of implicit knowledge through illustrative inputs, without having the ability to explain the reason for their results» 23 . Moreover, in many cases AI can learn not only from its own experiences, but also "from its own kind", through cloud computing technologies, which allow the level of learning of intelligent machines to increase, since they can pool their "operating experiences" in various fields 24 .
However, the above-mentioned "strong" meaning would require a "robotic" criminal law 25 , in which the AI would be subject to a judgment of fault for unintentional disasters that it could have foreseen and avoided, in the light of a model agent. This is a suggestive but (very) problematic conception 26 , since AI has no "inner reality" even remotely "comparable" to (human) consciousness, nor can it understand, for example, the "negative value" of an unlawful behaviour, which constitutes a social "structure" before a juridical one. Until machines are able to think, it seems "premature" to raise the problem of the "machine-man", namely the attempt to describe machines as human beings: according to the well-known Turing test, a machine will be able to think only when it can converse with a human being without the latter being able to tell whether he is speaking with an autonomous machine or with a "hidden" programmer 27 . In the same way, the similarly insidious "perspective" of the "man-machine" (coined by the Enlightenment philosopher La Mettrie) seems destined to fail: this is the attempt to describe human beings as if they were machines 28 .
In this regard, it should be noted that human intelligence is not the «brain in a vat» envisioned by the philosopher Putnam in the 1980s 29 , but an "embodied" intelligence, which is not only rational (cold, detached), but also "emotional" (corporeal, instinctive); as assumed by Kahneman (a famous scholar of behavioral psychology), two integrated "systems" of thought can be identified, which together connote human intelligence: «fast thinking», corresponding to «System 1» ("automatic"), and «slow thinking», related to «System 2» ("rational") 30 . Therefore, in order to have an AI system that can imitate human beings, not only the "mental" part of the machine needs to be developed, but also the "corporeal" one. As written by the famous scientist Federico Faggin, «today we read that AI based on neural nets may soon outperform human capabilities. These predictions are supported by impressive results such as, some years ago, the defeat by a computer of the world champion of chess and, more recently, of the world champion of Go (a Chinese game considered even more imaginative and therefore more "human" than chess). Moreover, some scientists predict that in less than 40 years computers will be conscious. These results and predictions go against the common sense of many people and challenge our intuitive understanding of life, consciousness, and the nature of reality. Science must urgently address these issues, which in the past were studied almost exclusively by philosophy. What distinguishes us from machines? Are we simply biological computers? According to most scientists, yes. But, based on my experiences, I do not think so» 31 . The cited passage is significant, since it expresses the point of view of a scientist who is "in the minority" with respect to the "materialistic" conception that, instead, seems to equate the human brain to a computer.
In this writer's opinion, the lack of any form of "conscience" in AI, as well as of a "corporeal" intelligence, militates against the "strong" meaning, with the result that the suggestion of a "robotic" criminal code seems destined to remain just that, a suggestion. To invoke an interesting contribution by Burchard, Professor of Criminal Law at the Goethe University of Frankfurt am Main, the question whether AI, and the ensuing algorithmic transformation of society, implies the end of criminal law can be answered in the negative 32 .
(continued) The "weak" meaning of AI and its criminal implications for fault.
27 Alan Turing describes his famous test in Computing Machinery and Intelligence, cit. 28 LA METTRIE, L'uomo macchina, trad. it., Milan, 2015. 29 For the philosophical discussion of the "brain in a vat" scenario, see PUTNAM, Ragione, verità e storia, trad. it., Milan, 1985. 30
The "weak" meaning of AI seems more "amenable" to criminal law: it holds that AI operates as a system in aid of mankind, without having the capability to commit crimes or to be punished (machina delinquere et puniri non potest) 33 . This does not mean underestimating the promises of AI, which consist in a "better" protection of the legal assets of public security and integrity, as well as in the efficiency of controls (with particular regard to infrastructures). More realistically, it means taking note that AI could be an instrument to reduce the risk of hazards to the safety and integrity of infrastructure users, who in Italy suffer the effects of "atavistic" delays in controls (as shown, for example, by the collapse of the "Morandi" bridge). It is clear that an overall risk reduction, also thanks to the use of AI, would potentially lead to a reduction of the incidence of culpable (commissive or omissive) human behaviour in infrastructure maintenance. This statement does not claim that there would be no more trials for disaster and manslaughter, but that there would be a reduction in the occurrence of these offences, that is, a "contraction" of the frequency with which they occur. This could also help to reassure the community, whose requests for controls on infrastructure criticalities are often ignored because of the lack of maintenance by the authorities.
Suffice it to mention, for example, the collapse of a bridge over the river Magra, in Lunigiana near Aulla, which also caught the interest of the mass media; the collapse occurred on 8 April 2020, involving two drivers: one was unharmed and the other was hospitalized for injuries, though his life was not in danger. The disaster could have been much worse had the movement restrictions due to the SARS-CoV-2 health emergency not been in force in that period. Beyond this further aspect, the worrying fact for the criminal law scholar is that, despite several reports from the citizens of Aulla and its mayor on the need for further checks by Anas, given the criticalities of the bridge, in 2019 a report by that company (the subject of a parliamentary question) claimed that the viaduct (already under observation and monitored by Anas employees) had no issues compromising its static functionality and that, for this reason, emergency measures for the viaduct were not justified 34 . AI could integrate (rectius: substitute) the parameter of the model agent, with the consequence that the conduct of the real agent who complies with it would make any alternative conduct non-demandable, unless the predictive model outlined by the AI on the security of the infrastructure turns out to be unreliable, due to machine errors at the learning stage (through machine learning algorithms). In such an event, the fault of a human being could be considered "slight", if not totally excluded, in the light of the reliance placed on the better technical and computational abilities of the machines 35 , which would be the "new" model agents; however, a margin of "serious" misconduct would remain when the AI error is determined by human malpractice (for example, poor statistical knowledge) in the interpretation of forecasting models.
These possible "reverberations" on criminal fault might lend themselves to critical remarks, on the ground that the above-mentioned "weak" meaning would not be able to justify a "qualification" of the AI in terms of super-model agent, because the benchmark of the real agent must be "human" -even if, in certain areas, "idealised" by the case-law itself to the point of appearing "unapproachable" for the human being 36 . Anyway, these are objections that move from a formal argument rather than a substantial one and, ultimately, do not seem insurmountable for the criminal law to come, in which human conduct will be measured against the better technical capabilities of AI. Through the cooperation, so to speak, between (creative) human activity and "intelligent machines" 37 , the standard of caution in infrastructure management and control appears destined to increase. The outcome of this cooperation seems oriented to the exclusive benefit of humanity, as already sensed by the famous science fiction writer Isaac Asimov in his three famous "laws" of robotics 38 .

Conclusions.
At the end of these brief observations, which have considered AI from a perspective "different" from the (by now) well-known and well-worn question of predictive justice 39 , it does not seem out of place to recall some interesting considerations of a French mathematician who, in a renowned popular science book, wrote: «computers, provided with increasingly complex and performing algorithms, seem able today to surpass human beings in most of their competencies. They drive cars, participate in surgical operations, can create music or draw original pictures. It is difficult to imagine a human activity that, technically, could not be performed by a machine equipped with an adequate algorithm. […]. It is difficult today to predict what the machines of tomorrow will be able to do. It would be surprising if they did not surprise us» 40 . There is no doubt that, among technical activities, infrastructure controls could in the near future constitute an important field of application of AI, with the probable resulting reduction of the incidence of culpable human conduct, as mentioned above.
Moreover, it cannot be excluded that the criminal law to come could be less "total" than it appears today to criminal law scholars 41 , because the use of the enormous computing capacity of algorithms in monitoring infrastructures could, in some way, "mitigate" the conviction that in criminal law must (necessarily) be found the "solution" in the face of a foreseen
36 Symbolic, for example, is the reference to the field of medical-surgical activity, where a doctrinal orientation highlighted the risk that, behind the model agent invoked by the jurisprudence, there would hide «a claim of diligence shaped on the extreme parameter of the gottähnlich, hyperbolic doctor, unattainable for the concrete doctor», for which reason the model agent of the homo eiusdem professionis et condicionis «would repeat the limit of unattainability» (CAPUTO, Colpa penale del medico e sicurezza delle cure, Turin, 2017, 92). 37 On the relations between human intelligence and artificial intelligence, cf. MCAFEE, BRYNJOLFSSON, La nuova rivoluzione delle macchine, Milan, 2015, spec. 90 ff. 38 ASIMOV, Trilogia della fondazione, trad. it., Milan, 2004, where the three well-known "laws" of robotics are set out: 1. A robot may not injure a human being or, through its inaction, allow a human being to come to harm. 2. A robot must obey the orders of human beings, unless such orders conflict with the first law. 3. A robot must protect its own existence, as long as this protection does not conflict with the first or second law. 39 For an examination of the criminal profiles of AI, with particular reference to the different matter of predictive justice, see ex multis, F.