Rozina Akter Nishu
AI is no longer a hypothetical concept. It is already reshaping how people in Bangladesh live, work, and communicate. AI is being used in banking, in farming, and even in police work, through tools such as facial recognition. But as AI becomes more common, it also brings new dangers, and Bangladesh's current laws are not prepared for them.
Governments around the world are moving quickly to regulate how AI is used. The European Union's AI Act is the first of its kind, creating clear obligations for companies that develop or deploy AI systems. It categorizes AI by risk, from "minimal" to "unacceptable", and prohibits systems that threaten people's rights or safety. Countries such as the United States, China, and Canada are also drafting their own rules, seeking to balance innovation with public safety. Bangladesh, however, currently has no specific AI law. It relies instead on outdated legal frameworks, including the Digital Security Act, 2018 (now replaced by the Cyber Security Ordinance 2025) and some provisions of the Information and Communication Technology Act, 2006 (ICT Act). These laws mainly address online crimes such as hacking or spreading false information, not AI-related harms such as deepfakes, biased algorithms, or data manipulation.
Experts warn that abuse of AI is already becoming a serious problem in Bangladesh. Generative AI tools such as ChatGPT or image generators can easily be misused to spread false information, fabricate videos, or even impersonate real people. False political information and manipulated images are spreading rapidly on social media, particularly before elections, a write-up published on LinkedIn suggests. A striking example is deepfake videos or voice recordings that look and sound completely real. Bangladesh has already seen several cases in which fake videos were used to harass people or ruin their reputations. Such attacks can devastate a person's life and career, yet legal action is difficult because current cyber laws do not clearly define or regulate how deepfakes are made or shared, reports The Daily Star. Besides deepfakes, AI algorithms have the potential to deepen inequity. Automated systems used in hiring or banking might inadvertently favour certain groups over others based on gender, income, or where someone lives. Algorithms are now driving a new type of crime, from online fraud to manipulation, yet victims have few avenues for redress, according to The Daily Star.
The Cyber Security Ordinance 2025, which replaces the controversial Digital Security Act, is a key move toward better digital safety, The Business Standard reports. It reduces penalties for certain online offences and tries to promote transparency in law enforcement, according to Bangladesh Post. But it does not directly deal with AI-generated content, data privacy, or responsibility for decisions made by automated systems. For example, if an AI system incorrectly labels someone as a criminal or spreads false information, it is not clear who should be held responsible: the developer, the person using the system, or the AI itself. This lack of clarity leaves both citizens and officials uncertain about how to handle such situations, reports The Business Standard.
Why Bangladesh Needs Its Own AI Law
Bangladesh needs a dedicated legal framework for artificial intelligence, not just to prevent harm, but to help the country develop AI in a safe and fair way. An AI Governance Act could clearly define which AI-related actions are illegal, such as creating or sharing deepfakes, spreading disinformation through automation, or using biased algorithms in hiring, lending, or public services. These are not problems waiting in the future; they are already appearing online in Bangladesh, and there is often no clear law to deal with them.
A law like this should also include strong data protection rules so that AI systems cannot use personal information without permission. Right now, many algorithms collect and analyse private details such as voice recordings, faces, or search histories without people even realising it. A sound legal framework could ensure this data is handled properly and that people know how their information is being used.
It is also important that the law holds developers and companies accountable for the AI they create and deploy. This means requiring them to be transparent about how their algorithms work, what data they use, and what risks could arise. It would help prevent AI from making unfair or biased decisions that affect people's lives without their knowledge. Finally, the law should encourage ethical AI research and make it mandatory to publicly disclose information about high-risk systems, such as those used in policing, surveillance, healthcare, or education. This would help Bangladesh strike a balance between advancing technology and keeping people safe. A clear and accountable AI framework would protect citizens and build trust in digital tools, pushing both government and businesses to use technology more responsibly.
A study by the Tech Global Institute finds that existing digital laws are too vague and outdated to handle the challenges of AI. It recommends a new legal framework with oversight bodies to monitor AI, transparency requirements, and strict penalties for misuse. AI is already involved in new kinds of crime in Bangladesh. Criminals are using fake identities and voice cloning to lure people into scams or blackmail, making it hard for victims to prove what happened. Employers and banks are being deceived by job and loan applications built on AI-generated documents. Cyberbullying has also worsened, with AI generating fake personal or harmful content to intimidate people or damage their reputations. On online marketplaces, bots and algorithms are being used to manipulate prices, post fake reviews, and distort competition, leaving customers and small businesses at a disadvantage. Without clear rules to govern these tools, such crimes will likely grow worse and become harder to detect, leaving ordinary people at risk and the legal system lagging behind.
To address these risks, Bangladesh could look to the European Union's AI Act as a model, which regulates AI according to how risky it is and requires transparency about how systems work. Schools and media should also teach people about AI so they can spot fake videos, automated scams, and other harmful uses. Lawmakers might create a National AI Regulatory Commission to ensure AI is used ethically, oversee how data is handled, and guide technology development safely. Collaboration among government, businesses, and universities can ensure AI is built and used in a way that keeps people safe and helps the country grow.
Laws need to keep up with technology. Bangladesh has already taken steps to adapt to digital changes, and it needs to do the same with AI now. Getting regulations in place early can help protect people from harm, build trust with other countries, and support innovation that helps society. If action is delayed, AI misuse could become one of the biggest legal and ethical problems the country faces.
Rozina Akter Nishu studies Law at Bangladesh University of Professionals (BUP), Mirpur Cantonment, Dhaka.