
Cybersecurity and the impact of AI: Four key elements


October is Cybersecurity Awareness Month in Canada, though for BCIT Digital Forensics and Cybersecurity faculty member Dr. Maryam Raiyat Aliabadi, cybersecurity is on her mind every day. She recently gave a presentation to BCIT colleagues titled Navigating the Digital Age: How AI is Reshaping Digital Transformation.

Maryam outlined the many applications where AI has strong potential to benefit society, such as increasingly accurate health care diagnoses, lowered manufacturing costs, synthetic biology, and accelerated drug discovery.

AI: Not without drawbacks and risks

But Maryam emphasized wide-ranging concerns, from AI video generators making increasingly convincing fakes, to more people relying on “advice” from ChatGPT and other services.

“AI systems like ChatGPT are still in the ‘Proof of Concept’ stage, and there are many deficiencies: it lacks ethics, it lacks empathy and situational awareness,” she explained. “It’s not a therapist, doctor, or lawyer – and misplaced trust in it can have critical consequences.”

AI versus AI in cybersecurity

Similarly, AI is having a mixed effect on Maryam’s field of cybersecurity. Attackers are using AI for automated scams; impersonation and fake voice calls; sextortion through deepfake videos; phishing and identity theft; password cracking; malware and ransomware; and misinformation. Maryam points out that all of these are, unfortunately, on the rise.

“The old sloppy errors we’d see in email phishing could be easy to spot,” according to Maryam. “But AI has increased the quality of attack attempts, making it much harder to identify suspicious elements.”

In one example, a company lost hundreds of thousands of dollars after its CEO’s voice was impersonated and staff followed the criminals’ instructions to transfer funds. Maryam offers other examples of worrying AI applications, from misleading troops in the war in Ukraine to fake job offers that lure people into handing over private information.

But defenders are using AI as well. “Threat detection accuracy is improving and speed of response is increasing,” says Maryam. “Through prevention and firewalls, incident response and mitigation, we’re seeing that AI can be phenomenal, and these are the skills we are teaching in our classrooms.”

Safe and responsible AI: Four key elements

1. Human oversight and human values

Maryam believes the “Human-in-the-Loop” principle is crucial for keeping humans safe in applications such as autonomous systems and robotics, smart medical devices, transportation, and defense. “This means there must always be a person in control – human intelligence remains the winning factor.”

2. Transparent AI decision-making

Maryam says that humans must understand how AI was trained and know how it makes decisions. “We need to train AI to be aligned with human values.”

3. Robust cybersecurity

“With cyberattacks becoming increasingly powerful, it’s critical that both organizations and individuals understand their level of risk to build resilience. Ultimately, cybersecurity is a shared responsibility.”

4. Regulatory system compliance and cooperation

“Governments, companies, and the tech sector need to collaborate to build frameworks for responsible AI,” emphasizes Maryam.

Spotlight on cybersecurity at home

While much of the cybersecurity conversation focuses on protecting organizations and individuals, Maryam emphasizes that it also needs to include children, and it should start early. According to the Global Cybersecurity Forum, 72% of children worldwide have experienced a cyberthreat.

“One single mistake can impact a whole family or community,” says Maryam.

She experienced this risk to families firsthand when her son, then only nine years old, ran into a cybersecurity threat while playing a multiplayer game.

The concerning encounter led Maryam to found Kids’ Shield Services, a gaming platform that provides cybersecurity education to children.

Above: Kids’ Shield Services game PassX, free in App Store and Google Play

“Through gamification and AI-driven tools, we are bridging education and digital safety,” she explains. “We offer fun and educational cybersecurity games, as well as an AI-based AntiBully System for enhanced online safety against cyberbullying.”

Maryam recently appeared on both CTV and Global News to talk about the importance of cybersecurity education for children and the need to start early in raising cybersmart children.

Sign up for the twice-yearly BCIT Forensics Investigator newsletter to keep up with the latest.

Feature photo: Dr. Maryam Raiyat Aliabadi, right, on CTV Your Morning Vancouver with host Keri Adams – photo by BCIT

Dr. Maryam Raiyat Aliabadi is a Digital Forensics and Cybersecurity faculty member at BCIT and a sessional teaching faculty member in Computer Science at UBC. She completed her PhD jointly at Shahid Beheshti University in Iran and UBC, focusing on applied machine learning for cyber-physical systems security, and subsequently completed two postdoctoral fellowships at UBC, furthering her research and teaching expertise. Maryam is passionate about cybersecurity education and is the CEO and founder of Kids’ Shield Services.