On January 12, the UK communications regulator Ofcom opened an investigation into the platform X over what it described as “deeply disturbing” AI-generated sexual images produced using tools such as Grok.
Although the probe is taking place in the UK, similar concerns are emerging in Rwanda. As artificial intelligence gains ground in education, entertainment, and content creation, questions about its misuse are becoming more urgent.
So, what does Rwandan law say? If someone uses AI to create sexualised deepfake images without consent, could that amount to a criminal offence?
Existing laws already apply
According to Mary Musoni, a human rights lawyer who researches technology-based violence, Rwanda’s current legal framework is sufficient to address such conduct, even without AI-specific provisions.
She points to Article 34 of the Cybercrimes Act and Article 135 of the Penal Code.
“In Rwanda, the unauthorised creation or distribution of sexually explicit or manipulated images, especially those that violate a person’s privacy or dignity, can constitute a criminal offence,” Musoni said.
She explained that when someone uses AI to generate or share explicit images without consent, the act may amount to cyber harassment, invasion of privacy, or dissemination of offensive material. In such cases, intent plays a decisive role.
“The intention to harm, exploit, or violate digital privacy is what transforms a digital act into a cybercrime,” she added.
Intent and action matter
For a case to succeed in court, prosecutors must prove two elements: intention and action, according to Innocent Muramira, founder of Innocent Law.
“Liability arises from intention,” Muramira said. “If you digitally design a person without clothes and then share the image, that shows both mens rea, the guilty mind, and actus reus, the guilty act.”
Although Rwanda has not yet enacted AI-specific criminal regulations, Muramira stressed that existing laws remain enforceable.
He also noted that Rwanda may eventually follow countries such as the United Kingdom and Australia by adopting stricter AI-focused guidelines.
A cultural dimension
Beyond legality, many Rwandans view the issue through a cultural lens.
Joseline Uyisabye, a university graduate and content creator, said online behaviour should reflect local values.
“In Rwandan culture, public nudity is generally discouraged,” she said. “When I create content, I am very intentional. I focus on educational material and avoid anything that could be seen as undignified.”
Jean Paul Ibambe, a lawyer specialising in media law, highlighted the gap between global platforms and local norms.
“Most platforms apply standards developed in the United States,” he said. “However, they need contextualisation. Platform operators must engage with local authorities to ensure their rules align with Rwanda’s cultural and legal environment.”
Technology versus misuse
Others argue that responsibility lies with users, not the technology itself.
Daniel Twayinganyiki, a recent university graduate, said AI should not be blamed for harmful behaviour.
“AI is just a tool,” he said. “The real problem is the person who uses it to harm others. For me, AI remains far more useful than dangerous.”
Legal accountability remains clear
As AI tools become more accessible, the debate continues. However, legal experts agree on one point: using AI to create or share explicit images without consent can already trigger criminal liability in Rwanda.
Even without AI-specific laws, existing cybercrime and penal statutes provide a clear basis for accountability when technology crosses the line into abuse.
Source: The New Times