Public Lecture
In Commemoration of the 68th Anniversary of the Faculty of Law, Padjadjaran University
By Prof. Dr. Yusril Ihza Mahendra, S.H., M.Sc., Coordinating Minister for Legal Affairs, Human Rights, Immigration, and Corrections of the Republic of Indonesia
Professor of Constitutional Law, Faculty of Law, University of Indonesia
Bandung, August 27, 2025
In the name of Allah, the Most Gracious, the Most Merciful,
Peace be upon you and Allah’s mercy and blessings,
First and foremost, let us offer our praise and gratitude to Allah SWT, the Almighty God, for His abundant grace that has allowed us all to gather here this morning for this public lecture. I would like to take this blessed opportunity to express my deepest thanks and appreciation to the Dean of the Faculty, who has granted me the opportunity to deliver a lecture before the academic community of the Faculty of Law at Padjadjaran University. The topic requested for this public lecture is “Challenges in Law Enforcement in the Era of Artificial Intelligence and Digital Transformation.” Although this topic is not within my expertise as a professor of constitutional law, I have accepted the invitation, as I believe we should never cease learning about the new phenomena emerging in our time.
I also view this request as an intellectual and academic challenge to contribute to thinking about and seeking answers to this critically important issue.
We now live in a postmodern era with immense epistemological challenges. We must reexamine our existence as humans with all our intellectual capabilities to create things based on the knowledge we have achieved, but then question the implications of what we create for the welfare of humanity. These philosophical questions indeed lead us to endless answers, like sailing across a vast, boundless ocean. Every time we provide an answer to a question, that answer raises new questions.
However, at the very least, intellectually, we must provide answers to the questions arising in our time, even if not entirely satisfactory, but sufficient to quench our intellectual thirst, which will guide us in determining what we should do and what we should avoid, along with all its implications for human existence. The German philosopher Immanuel Kant (1724-1804) referred to this as du sollst und du sollst nicht—what is obligatory and what should be abandoned. This all relates to ethical and legal implications, as human life cannot be separated from these ethical questions. Only humans, as stated by Imam Al-Ghazali (d. 1111 CE), are capable of distinguishing ethical issues and questioning them, unlike other creatures. This theme is highly relevant and strategic given the rapid advancements in digital technology and artificial intelligence that are transforming the landscape of our lives, including the field of law enforcement.
Distinguished audience,
Before delving into the core discussion on law enforcement, allow me to begin with a philosophical reflection. Why is artificial intelligence (AI) so extraordinary that it sparks profound discourse among legal experts, philosophers, and the wider public? Artificial intelligence is not merely an ordinary technology; it possesses unique characteristics. Philosophers of technology remind us that AI “penetrates deeper into humanity than previous technologies.” In essence, the AI project aims to create machines with fundamental traits that define human identity: the ability to feel and think, to learn and be intelligent. Unlike past technological inventions such as the steam engine, electricity, and communication devices, AI challenges our understanding of what intelligence and even consciousness mean. This makes AI not merely a tool, like other inventions based on scientific concepts, but a mirror that prompts us to reconsider what distinguishes humans—as natural beings—from intelligent machines called AI.
Since ancient Greek and Chinese times, thinkers have imagined “artificial beings” that could mimic humans. Emperors of the Han Dynasty, long before the Common Era, created statues dressed in full military uniforms complete with weapons. Although these military figures were linked to mystical beliefs about the metaphysical realm, their existence was not natural, as in the story of Prophet Solomon’s jinn army, but an artificial human creation, believed to be animated by mystical powers. Humanity’s dream of artificial beings resembling itself has only become reality in this postmodern century. The question then arises: even though these artificial creations can move physically like humans and “think” mathematically like the complex algorithms of Al-Khwarizmi, do they truly possess consciousness like natural humans? The twentieth-century philosopher John Searle, through his Chinese Room Argument, argued that a computer merely processing symbols according to program rules does not truly “understand” meaning or possess any consciousness.
A computer might simulate intelligent conversation or behavior, but it is like someone in a closed room following a manual (program), able to respond to Chinese messages without ever truly understanding their meaning. The core of Searle’s argument awakens us to the fact that human intelligence is not merely a sequence of algorithms. Instead, there are qualities of consciousness, intent, insight, and conscience that are uniquely human. Abu Ja’far Muhammad bin Musa al-Khwarizmi (780-850 CE), the father of modern mathematics from Khwarazm, in what is now Uzbekistan, who formulated algorithms, was a philosopher grounded in faith. He believed that God created the universe not haphazardly but based on highly mathematical concepts that humans could study. However, human consciousness is not mathematical. It involves ethical considerations that determine responses when faced with ethical choices: whether something is good or bad, useful or not, beneficial or harmful, and so on. Such matters are typical of humans. Because humans possess awareness of ethical choices, they are accountable for their decisions. Based on this philosophical thinking, the question is: how should we treat AI within moral and legal frameworks? Is AI merely an object or tool, or an entity that may one day need to be recognized with rights and responsibilities?
To date, artificial intelligence is still regarded as a tool created and controlled by humans, not an autonomous entity that can itself be held morally and legally accountable. Indonesia’s positive law, like that of other countries, does not yet have provisions designating AI as a legal subject. This has also been a topic of discussion between us, representing the Indonesian Government, and the United States Government, which recognizes that AI’s existence has broad legal implications, while the U.S. Government itself is not fully prepared to face this new challenge. Several lawsuits in other countries against companies operating AI for alleged violations of copyright, broadcasting rights, and other intellectual property rights have drawn global attention to how the law will respond to this new challenge.
The consequence of viewing AI as not accountable ethically or legally for errors or violations is that legal accountability remains with humans—whether the creator, owner, or user of the AI. Similarly, in aviation incidents, flight accidents are not attributed to the aircraft, which is merely a human invention, but to the pilot and crew, the airline owning the aircraft, the aircraft manufacturer, ground staff managing air traffic, or even passengers whose actions caused the accident. This is consistent with the legal agent principle in our criminal law, where only persons or corporations can be subjects of criminal acts. Algorithms or machines are not the “whoever” referred to in criminal statutes anywhere.
Thus, from the outset, we affirm the principle that behind every algorithm there is a human responsible. Artificial intelligence may be sophisticated, but it arises from sequences of code composed by humans and trained with human-made data, lacking any ethical or legal consciousness. This recalls the classical theological (kalam) debates between the Mu’tazilah and the Jabariyah over whether humans, as God’s creations, are responsible for their own actions. Whether the Quranic verse wallahu khalaqakum wama ta’malun (Surah As-Saffat, verse 96) is interpreted as God creating humans along with their actions, or as God creating humans who autonomously decide their actions, it carries implications for personal human accountability in this world and the hereafter according to Islamic belief.
However, although AI currently holds the status of a “tool,” its implications for humans are real and multifaceted. This is where philosophical and ethical studies must play a role. Epistemologically in the philosophy of science, technology is never entirely neutral, as reminded by philosopher Martin Heidegger. Heidegger viewed technology not merely as a means to an end, but as a way of revealing the world. Technology carries a certain vision of the world and humans within it. In the context of AI, this means algorithms are built on assumptions by their creators: which data is deemed important, what goals to optimize, and what definitions of success are set. If we treat technology only as a tool for efficiency without deep thought, Heidegger warns of the greatest risk: that humans and values could be reduced to mere resources or objects. With increasingly sophisticated AI, we must not be deceived into viewing everything, including human dignity, through the narrow lens of algorithmic calculation alone. This is the philosophical challenge to ensure technological dominance does not diminish our humanity.
In the history of technological development, optimism and pessimism always emerge, and so it is with AI: some see it as a savior of civilization through automation and intelligent solutions, while others fear it will bring catastrophe. Artificial intelligence is like a double-edged sword: on one side bringing great benefits, on the other posing new challenges and even threats to humanity. Optimists see AI as aiding heavy work and solving complex problems: in accounting, for example, where it surpasses human memory, or in medicine, where it can provide more accurate diagnoses and treatments than human doctors, who sometimes lose caution through natural traits like fatigue, boredom, and divided attention. Such traits do not exist in machines operating on programs, unless a machine errs because of programming mistakes or the wear of its parts and devices.
Distinguished audience,
The massive use of AI in our time has raised various ethical dilemmas challenging our core values, including several fundamental issues. First, privacy and surveillance. AI amplifies our capacity to collect and analyze individual data. From simple CCTV recordings that anyone can own and operate, to fingerprints, biometrics, facial recognition, and big data algorithms, AI not only stores and recognizes personal data but also uncovers personal information, from the mundane to tracking someone’s habits in cyberspace. The collection and analysis of personal data can predict what someone is likely to do based on that data. This aggregation of personal data makes it easier for criminals to commit the crimes they intend against individuals. On the other hand, law enforcement authorities can utilize this data collection technology for security, crime prevention, and law enforcement. However, all this leads to the threat of a “panopticon state” that endangers citizens’ privacy: as if no personal secrets remain unknown to others.
The threat of losing privacy makes humans feel the loss of their freedom and independence to act as individuals. The world becomes increasingly uncomfortable for humans because every movement can be “captured” and “known” by the “state” or any organization with “big data.” A world uncomfortable for humans will breed anxiety and despair. Then, what is the point of living in such a world?
In the legal world, the right to privacy is part of human rights that cannot be ignored. Only humans have privacy. If privacy no longer exists, what is the meaning of humanity itself? The amendments to the 1945 Constitution recognized this by inserting a new article on personal data protection as part of human rights in Article 28G paragraph (1), which states: “Every person has the right to protection of their personal self, family, honor, dignity, and property under their control, as well as the right to a sense of security and protection from threats of fear to do or not do something that is a human right.” Without ethical norms to restrain the state or any other organization from overstepping, and without legal norms to limit them, the euphoria of digital transformation has the potential to erode human privacy. In the postmodern era, a state that uses all its information technology and AI devices for excessive surveillance and control can turn citizens from loving their country to resenting it. Can humans one day live without legal norms and without power organizations authorized to enforce those norms?
This seems impossible based on human historical experience, but excessive control over citizens, possible only with AI, is happening now and will grow in the future. However much Adolf Hitler (1889-1945) might have desired, during his rule as der Führer, to have the behavior of all German people under his control, the technology of his time could not aggregate big data as ours can. We can scarcely imagine what would happen if Hitler lived today. With his ability to agitate and propagandize through speeches alone, he managed to sway the thinking of the German nation in his time. Imagine if that propaganda capability were supported by today’s AI; it is hard to fathom what would occur.
Facing such issues, Indonesia, like many other countries, has taken positive steps by enacting the Personal Data Protection Law, namely Law No. 27 of 2022. This law aims to protect citizens’ rights over personal data and prevent misuse by anyone. Clearly, it regulates rights and obligations that must be obeyed by citizens, the state, and other organizations collecting personal data, both specific and general. However, the regulatory content in this law clearly cannot govern the collection, aggregation, and analysis of personal data if AI use becomes massive and operable by anyone.
Second, bias and even algorithmic discrimination. AI learns from historical data. If the data or model contains bias, AI can perpetuate or exacerbate it. Studies in various countries show that predictive policing algorithms tend to over-target minority areas due to past over-policing practices. In the United States, for example, the COMPAS software for assessing recidivism risk has been heavily criticized for bias against Black defendants, more often wrongly predicting them as high-risk compared to White defendants. This reminds us that algorithmic justice must be pursued. Otherwise, AI, which we assume is objective, could unwittingly sharpen social injustices. Therefore, AI system creators must ensure inclusivity and fairness, while regulations need audit mechanisms for publicly used algorithms to avoid worsening bias and stereotypes or labeling certain groups in society, even globally. The bias and stereotyping that led people with Arabic names, or Muslims, to be classified as “terrorists” two decades ago is a real example of this phenomenon.
Third, transparency and accountability. Many AI algorithms are closed (proprietary black boxes). This challenges accountability principles. In law enforcement, we have the due process of law principle for justice, fairness, and legal certainty in criminal procedures, including the right of suspects and defendants to know the evidence and charges against them. In inquiry, investigation, and prosecution, suspects and defendants stand equal and balanced with the law enforcement officials facing them, as citizens and as part of the people who hold sovereignty in the state. What if detention decisions are based on recommendations from an AI system whose workings are not understood? What if advocates also use AI to analyze accusations and prepare defenses? Lack of algorithmic transparency can erode public trust and complicate accountability. Therefore, ethically and legally, we need to promote algorithmic transparency. Every use of AI in law enforcement must be testable and accountable, like evidence or expert testimony in court.
Fourth, responsibility and safety, which relate to AI’s lack of legal subject status. For example, if a self-driving car powered by AI causes an accident, who is legally responsible? The car manufacturer, the algorithm programmer, the vehicle owner, or the vehicle itself? Currently, our law requires accountability to fall on the human or corporate parties behind the technology. However, as AI becomes more autonomous, the causal links required by the conventional legal proof we have long understood and applied become blurrier. The precautionary principle needs to be applied to AI innovations that put public safety at risk. Here, regulations must proactively set safety standards and clear liability mechanisms before technologies are released to society.
Fifth, social and humanitarian impacts. Beyond legal and rights aspects, AI has broad effects on social order and humanity. Automation with AI can replace many human jobs, from simple to skilled, causing economic and employment disruptions. The question is, are we prepared with workforce reskilling and social safety nets for those affected? Furthermore, existential questions arise: if in the future AI surpasses human intelligence in many respects, where lies human uniqueness and dignity? It even raises theological questions: if humans, with their varying levels of intelligence, must be ethically and legally accountable for every action in this world and the hereafter, then how can AI, lacking an eternal soul as religious doctrine teaches, be held accountable, in this world or the next, for its “actions”?
Some extreme futurists imagine a “singularity” scenario where AI becomes super-intelligent and escapes human control, even ending human dominance on earth. Indeed, such apocalyptic scenarios are still in philosophical debate and the realm of science fiction, but these concerns imply that human control over technology must remain the primary principle. We must not succumb to technological determinism, as if AI development is an unstoppable current. Instead, humans through social and legal institutions must hold ethical control over AI’s direction to align with humanitarian interests.
Distinguished audience,
From the philosophical and ethical exposition above, the common thread is clear: artificial intelligence demands our intelligence in responding to it. Ultimately, AI is a reflection of human intelligence embodied in technology. If we want AI to bring benefits, our values must guide its development. This is where the importance of humanities perspectives (philosophy, ethics) lies in tandem with data science and computer science. Without a humanitarian perspective, technology will veer off course.
As an illustration, in the 1940s the science fiction writer Isaac Asimov proposed the Three Laws of Robotics, a kind of “ethical law” for intelligent robots to avoid harming humans. The first law: a robot may not injure a human or, through inaction, allow a human to come to harm. The second law: a robot must obey orders given by humans, except where such orders conflict with the first law. The third law: a robot must protect its own existence as long as such protection does not conflict with the first or second law.
Although fictional, the popularity of Asimov’s (1920-1992) ideas shows our collective awareness that without moral guidelines, artificial intelligence could become a threat rather than a benefit. Of course, in reality, Asimov’s three laws are too simplistic to apply directly, but the real world is now striving to formulate actual “AI laws” through various legal instruments and ethical standards. The European Union, for example, has enacted the AI Act, which regulates AI usage by risk level, while UNESCO in 2021 issued the Recommendation on the Ethics of Artificial Intelligence as a global guide. All these initiatives stem from the understanding that we must frame this digital transformation within universal humanitarian values, such as human rights, justice, and the common good.
Challenges of Digital Transformation to Law Enforcement
After understanding AI’s philosophical and ethical implications, let us proceed to the core challenges of law enforcement in the AI and digital transformation era. Digital transformation means integrating information and communication technology, including AI, big data, Internet of Things, cloud computing, into all aspects of governance and societal life. For law enforcement officials (police, prosecutors, judges, as well as correctional and immigration institutions), digital transformation brings great opportunities alongside heavy challenges. On one hand, digital technology can empower law enforcement.
For example, it facilitates electronic evidence collection, accelerates cross-agency information access, and enhances real-time border surveillance. We see, for instance, the Indonesian National Police now has a Cyber Division actively monitoring and addressing cybercrimes. The Supreme Court implements e-court and e-litigation, even pioneering AI use in case administration. Immigration has transformed with integrated immigration information systems and automated tools at airports. All this is part of adaptive digital law enforcement to technological advances.
However, on the other hand, the emerging challenges are numerous. I will outline some concrete challenges to law enforcement in this AI and digital era. First, the emergence of new technology-based crimes.
Advances in AI are also exploited by criminals. We face increasingly complex cybercrime phenomena, from phishing, hacking, ransomware, online gambling, to deepfake fraud. Deepfake is a clear example where AI technology can manipulate videos or audio to appear convincingly as someone’s statement or action, when it is all fake. Imagine the implications: fake news using deepfakes can spread rapidly, defame reputations, incite riots, and even threaten national security. Law enforcement must develop robust digital forensic capabilities to detect and prove such manipulations in court.
Furthermore, AI-powered cyber attacks can adapt and evolve quickly. Experts note that attack modes are becoming more sophisticated, such as adaptive cyber attacks that change patterns to breach traditional security systems. In Indonesia, Kaspersky has reported tens of millions of cyber attack attempts annually, while other data indicate an average of 13 million cyber attacks per day in early 2024. These figures show the scale of the threat, to say nothing of conventional crimes that exploit technology. Criminals can now hack internet-connected CCTV to disable home security alarms or use drones to smuggle drugs over prison walls. Criminal law and enforcement processes must continually innovate to keep up with digital deceptions and crimes.
A crucial problem is lagging regulation. Many forms of AI-based crime are not yet accommodated in positive law. Law No. 11 of 2008 on Electronic Information and Transactions, as amended by Law No. 19 of 2016 and, for the second time, by Law No. 1 of 2024, for example, focuses more on electronic transactions and general cybercrimes; it does not cover the complexities of AI-based crime such as adaptive cyber attacks, big data manipulation, or deepfake misuse. Similarly, at the global level, the 2001 Budapest Convention on Cybercrime, the international reference, does not explicitly regulate AI threats. This means there is a regulatory gap that must be bridged immediately. The Indonesian Government recognizes this. The Draft Law on Cyber Security and Resilience is being prepared and prioritized, including efforts to adjust cybercrime definitions to include new modes. However, whatever we do, written legal norms will always lag behind changes in crime patterns and modes in the real world.
As Coordinating Minister for Legal Affairs, Human Rights, Immigration, and Corrections, I encourage cross-sectoral synchronization to accelerate the birth of these new regulations. Of course, we need to involve academic experts; Padjadjaran University is one of the higher education institutions with significant concern for AI-related law. We need a forward-looking perspective in addressing the very rapid changes in the AI world. We face a dilemma: adaptive, progressive legal norms give law enforcement the interpretive space to keep pace with rapid change, yet those same norms risk creating legal uncertainty.
Second, the readiness of law enforcement human resources. No matter how good the regulations, their spearhead is the people implementing them. Our law enforcement officials must enhance their technical capacities and digital literacy. This is a real challenge because technology evolves faster than conventional education and training curricula. Limited technical capacity of officials remains an obstacle in handling cybercrimes and cases involving digital evidence. Therefore, the Police, Prosecutor’s Office, Judiciary, and even prison officers need upskilling. Specialized training in digital evidence, forensic data analysis methods, and AI ethics must be intensified.
Furthermore, digital infrastructure in law enforcement institutions must be adequate. Digital transformation requires investment, including hardware, software, secure networks, integrated databases, and cybersecurity protections. For example, the Supreme Court has initiated e-filing and AI utilization to assist judges’ assistants, such as automatically summarizing cases with natural language processing. This is a commendable step forward. However, the Supreme Court also identifies challenges like infrastructure limitations and the need for digitally literate human resources. Without reliable infrastructure and human resources, digital transformation in law enforcement will not be optimal. Therefore, the government is committed to supporting this digital transformation.
We also need to encourage strategic partnerships with campuses and industry. As recommended by the Supreme Court, collaborating with universities and research institutions will help develop AI systems suited to Indonesia’s legal needs. For example, Padjadjaran University as a cyber studies center could collaborate with the Police in designing more accurate hate speech detection algorithms while respecting freedom of expression. With collaboration, law enforcement does not walk alone but is supported by a broader innovation ecosystem.
Third, the need for ethics in the use of AI in law enforcement processes. This is a very important point: the use of AI in the judiciary and in investigations must not sacrifice the principles of justice. The Supreme Court has emphasized, “AI must not replace the legal assessment function, which is the exclusive domain of humans.” That is, no matter how smart the system, final legal decisions must remain in the hands of human judges or investigators who can consider moral aspects, substantive justice, and each case’s uniqueness. AI can provide recommendations or analyses, but judgment must not be wholly delegated to machines. Why? Because AI is not yet capable—and in my opinion, never will be—of understanding socio-cultural contexts and substantive justice values as humans do.
Law is not merely formal logic or data patterns. There is legal philosophy, a sense of justice, and conscience in it. Take a simple illustration. An algorithm might read a statute literally and conclude that a certain punishment must be imposed, but a human judge can see mitigating circumstances, such as a defendant who stole out of hunger, warranting a more humane consideration of substantive justice. In conventional criminal law, there are mitigating and aggravating factors that judges weigh in deciding cases. This is the human advantage that must be preserved in the judicial system: the ability to empathize and to consider the values of justice and propriety, not just to calculate from past data as AI does. We must not, tempted by efficiency, lose the spirit of justice in law enforcement itself.
Besides the level of judicial decisions, AI ethics are also relevant in policing. For example, predictive policing technology that forecasts potential crime locations or perpetrators based on data can aid patrol allocation. However, its use must be supervised to avoid racial or social profiling. An NAACP policy brief in the US, for instance, highlights concerns that unregulated predictive policing increases racial bias and privacy violations, eroding public trust. We must learn from that experience. In Indonesia’s plural society, AI use in law enforcement must be even more sensitive to diversity and citizens’ rights. Every innovation must be tested for alignment with non-discriminatory rule-of-law principles that uphold human rights. If it fails that test, the design must be improved, or implementation even delayed, until it is ready.
Fourth, optimal international legal cooperation is required. Cybercrimes and high-tech crimes are often transnational. Perpetrators can act from outside Indonesia’s jurisdiction, with servers in country A and victims in country B. This challenges traditional territorial-bound law enforcement models. Thus, strengthening international cooperation is needed. Cross-country coordination in cyber investigations, extradition treaties, mutual legal assistance, and harmonization of electronic evidence rules are crucial. In the AI era, we may need a new global framework, like an “AI Convention,” regulating information exchange and enforcement against AI misuse.
Indonesia, as part of the global community, must proactively engage in such forums. Currently, we follow the Budapest Convention on Cybercrime de facto, though we have not acceded to it, and in ASEAN there are plans for cyber security cooperation. Going forward, AI challenges must enter the agenda. We must prevent criminals from exploiting regulatory differences between countries as golden opportunities to escape justice. For example, if a form of AI crime is unregulated in Indonesia but regulated elsewhere, or vice versa, jurisdiction gaps arise. Gradual global harmonization will close those gaps.
Distinguished audience,
Law enforcement in the AI and digital era is no easy or light task. There are regulatory aspects, human resource capacity, ethics of use, and cross-country coordination that all need to be addressed simultaneously. Of course, the government cannot proceed alone. Support from academics, industry, and civil society is needed. I am grateful that Padjadjaran University has been a strategic partner of the government in formulating digital-related legal policies, for example by actively contributing to studies on the ITE Law and the Draft Cyber Security Law. We will continue to strengthen such campus-government synergies, as directed by President Prabowo Subianto, so that our regulations do not lag behind technological disruptions. I see today’s public lecture as a strategic moment to convey, together, the directions of digital law enforcement policy to the academic community and wider society.
Toward an Adaptive and Humanist Legal Framework
Having outlined the challenges, how should we respond to them in this era of AI? I will conclude this presentation with some forward-looking ideas, open for further discussion.
First, as mentioned, the ITE Law and related regulations need updating. Academic recommendations include, for example, adding definitions and classifications of AI-based crimes, regulating liability mechanisms when AI is involved, and sanctions for AI misuse. Beyond the criminal realm, the civil and administrative fields must also prepare, with rules on AI product liability for autonomous vehicle accidents, AI diagnostic errors in healthcare, consumer protection against algorithm-based services, and labor law to anticipate automation. All require flexible yet firm legal frameworks. The principle is that law must provide certainty without excessively hindering innovation. This is the art of regulation: achieving balance between fostering innovation and protecting the public interest.
In drafting such regulations, interdisciplinary perspectives must be involved. Technology experts explain what is possible, legal experts formulate norms, and philosophers and ethicists ensure those norms align with ethical values. This holistic approach is vital: if we view the matter through a purely technical lens, we may neglect the humanities dimension; conversely, if we are too normative without understanding the technical side, the rules we create may be inoperable. I invite universities such as Unpad to continue taking an active part in this process, whether through academic studies, public hearings, or policy drafting teams.
Second, enhancing the capacity and digital literacy of the law enforcement apparatus. This is non-negotiable: the largest investment must be in human resource development, from higher education through advanced training in the police, the prosecution service, and the judiciary. The law curriculum needs to be expanded to include Cyber Law, Digital Forensics, AI Ethics, and the like. I understand that the Faculty of Law at Unpad already offers concentrations or courses on cyber law, which may develop into AI law; this is commendable and worth supporting. In the future, we may need more hybrid experts, for example prosecutors who also understand basic coding, or police officers who specialize in data analysis. The apparatus must also be equipped with practical training through simulations of digital cases: investigating crimes on the dark web, extracting evidence from encrypted phones, and understanding how bias can emerge in AI systems. I am optimistic that in the next 5-10 years, the quality of our digital law enforcement will improve markedly along with human-resource regeneration and knowledge transfer.
Third, building legal infrastructure compatible with the digital era. This means two things: technical infrastructure, and legal infrastructure in the sense of culture and institutions. Technically, law enforcement institutions must update their equipment and systems. Digital forensic laboratories in the police need to be expanded and equipped with AI to quickly sift through thousands of digital traces, such as analyzing millions of log files to track cyber attacks. Courts can utilize AI-based jurisprudence search systems so that judges more easily find relevant precedents. Correctional officers can apply automated monitoring via smart cameras to prevent riots. All of this requires network and hardware support, which certainly demands investment.
Legal infrastructure in the sense of soft infrastructure, meanwhile, includes improved SOPs, ethical standards, and specialized units. Each institution needs to draft ethical guidelines for the use of AI. For example, the Supreme Court could adopt a code of ethics stipulating that AI may serve only as a decision-drafting aid and must not make legal assessments. The National Police could issue SOPs on facial recognition that prohibit automated identification from serving as the sole basis for arrest without field verification. Internal oversight units may also be needed to ensure that AI use stays on track. The point is that we need checks-and-balances mechanisms over the use of technology in legal institutions, so that efficiency is achieved without sacrificing integrity.
Fourth, a humanist, human-centered approach to digital law enforcement. I emphasize this because it must not be forgotten amid the clamor of technology. Digital transformation changes our tools and methods, but the ultimate goal of law enforcement remains the same: upholding justice and protecting people. Therefore, all digital strategies must take a human-centered approach. For example, AI development in court administration should aim to accelerate services for justice seekers, not merely to cut workload. E-court exists so that society gains easier, more transparent access to justice, not merely to follow digitalization trends. The indicator of success is whether the rights of the parties in court are better fulfilled with the aid of technology. Similarly in policing, digital systems should support community policing that brings officers closer to society, with data used to prevent crime and protect citizens, not to exercise excessive control. The values of Pancasila and the 1945 Constitution must be our reference, especially the second principle, Just and Civilized Humanity. The justice we seek to uphold in the digital era must remain grounded in civilized humanity. What is the point of advanced technology if law enforcement ignores human dignity?
This is where legal scholars and humanities scholars come in as balancers. Interdisciplinary dialogue between computer scientists and engineers on one side, and philosophers, legal experts, and social scientists on the other, will ensure that AI develops responsibly (responsible AI). Philosophy teaches us to think clearly about fundamental matters, including ethics; law provides frameworks of rules; the socio-cultural sciences help us understand societal impacts. All these perspectives are essential; if one is neglected, the edifice of our digital transformation will be lopsided. I personally believe that Indonesia, with its philosophy of gotong royong and its wealth of cultural values, can lend its own color to the global development of AI ethics, namely an ethics that prioritizes humanitarian and communal values. Rather than merely following the current, we can be a nation that contributes ideas. For example, the concept of restorative justice could be integrated into sentencing-prediction algorithms so that they aim not merely to punish but also to rehabilitate offenders. There are many potential humanist insights from local wisdom and national thought that can be blended with technology.
Distinguished audience,
Although our law enforcement faces increasingly complex challenges in this era of artificial intelligence and digital transformation, the situation is not without hope. We stand, as it were, at a crossroads: one path leads to a future in which technology and law work in synergy to maintain order and justice effectively, while the other potentially leads to chaos if we fail to govern technology wisely. The key to avoiding that failure is balance: harnessing the benefits of AI for law enforcement while controlling its risks through appropriate ethical and legal norms. In our state founded on Pancasila, where Belief in the One Almighty God stands as the first principle, it is fitting that the faith taught by religions should underlie a person's ethical actions. Secular moral philosophy is hardly an option in a state based on Pancasila.
Faith underlies ethics because of the belief that life is not confined to this world alone: there will be an afterlife that brings accountability, and it is this faith that grounds a person's ethical choices before acting. In that context, AI should be positioned as a partner that makes our work faster, more accurate, and more efficient, not as a replacement for humanity's moral and intellectual role. Judges' common sense, police officers' conscience, and prosecutors' sense of justice remain the primary determinants in every legal process. Technologies come and go, but the rule of law, animated by a spirit of justice, must endure.
I continue to encourage synergy between government and academic institutions such as Unpad in facing the legal challenges of this digital era. I have outlined the strategic steps that are being and will be taken, but ultimately the success of this effort also depends on the support and participation of us all. Let us welcome digital transformation with critical optimism: optimistic that technology can be a tool for advancing the nation, yet critical in ensuring that every step of the transformation proceeds within the correct ethical and legal corridors.
To the students of the Faculty of Law at Padjadjaran University, I leave this message: equip yourselves with a spirit of faith, ethical awareness, and multidisciplinary knowledge. The world is increasingly complex and full of uncertainty, and it demands such a perspective. The legal world of the future requires lawyers and legal scholars who understand cutting-edge technology yet remain humanist, prioritizing humanitarian considerations in every policy and decision. Keep honing your analytical abilities and moral judgment, because amid floods of data and artificial intelligence, this nation will always need human wisdom. Artificial intelligence may someday store millions of legal provisions and precedents, but it will never replace the wisdom and integrity of a true law enforcer.
Thus concludes what I am able to convey on this occasion. May this exposition be beneficial and spark constructive discussion. I am confident that, with collaboration and good intentions from all parties, the challenges of law enforcement in this AI and digital era can be turned into opportunities to advance justice for us all.
Thank you. And Allah knows best what is right. Peace be upon you and Allah's mercy and blessings.