Summary
Mohamed Gebril is an associate professor in the Cyber Security Engineering department. He joined Mason after working as a primary patent examiner for the US Patent and Trademark Office (USPTO). In his seven years at USPTO, he examined and prosecuted patents in the data storage and networking fields, working on intellectual property matters related to computer security, enterprise network security and authentication, databases, data processing, and memory storage devices.
Gebril completed his PhD in electrical engineering at North Carolina A&T State University in 2011 in the area of big data indexing, machine learning, and AI applications. He received his MS in electrical engineering from North Carolina A&T State University in May 2008 in the area of robotics navigation and optimization and completed his undergraduate studies in electrical engineering at Alexandria University, Egypt.
Source: CEC webpage
OnAir Post: Mohamed Gebril
News
In the rapidly evolving field of cybersecurity, the integration of generative AI and large language models (LLMs) is a game-changer. While offering immense potential for positive applications, these tools also pose significant risks if misused. Mohamed Gebril, an associate professor in the Department of Cyber Security Engineering at George Mason University, is spearheading a project to use generative AI and LLMs to better identify the very threats they pose.
The project is a collaborative effort between George Mason and the Virginia Military Institute (VMI) funded by the Commonwealth Cyber Initiative Northern Virginia Node. Gebril, along with VMI’s Sherif Abdelhamid, is assembling a team including master’s and undergraduate students to assist with research and is preparing educational workshops to introduce high school and middle school students to the research topic. Gebril’s approach aims to equip future cybersecurity professionals with the knowledge and skills needed to tackle emerging threats.
Automating threat detection
The initiative aims to leverage the power of AI to enhance threat detection and response mechanisms, ultimately making cybersecurity operations more efficient and effective. One of the primary advantages of using AI in cybersecurity is its ability to automate processes that were traditionally manual and time-consuming. In threat-hunting operations, generative AI can help detect malicious activity and monitor data logs in real time.

“AI has been very helpful in automating this process instead of doing it manually,” Gebril explained. “It can generate the alerts, automate the notifications, and make the instant response.” By automating these tasks, organizations can respond to threats more quickly and efficiently, reducing the potential damage caused by cyberattacks.
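The kind of automation Gebril describes — scanning logs and raising alerts without manual review — can be illustrated with a minimal Python sketch. The log format, the failed-login rule, and the threshold below are invented for illustration; they are not details of the project's system:

```python
import re
from collections import Counter

# Illustrative log lines; a real deployment would stream these from a SIEM.
LOGS = [
    "2024-05-01T10:00:01 FAIL login user=root src=203.0.113.7",
    "2024-05-01T10:00:02 FAIL login user=admin src=203.0.113.7",
    "2024-05-01T10:00:03 OK   login user=alice src=198.51.100.4",
    "2024-05-01T10:00:04 FAIL login user=root src=203.0.113.7",
]

FAIL_PATTERN = re.compile(r"FAIL login user=(\S+) src=(\S+)")
THRESHOLD = 3  # assumed alert threshold for failed logins per source IP

def generate_alerts(log_lines, threshold=THRESHOLD):
    """Count failed logins per source IP and emit an alert per offender."""
    failures = Counter()
    for line in log_lines:
        match = FAIL_PATTERN.search(line)
        if match:
            failures[match.group(2)] += 1
    return [
        {"src": src, "failures": count, "action": "notify-and-block"}
        for src, count in failures.items()
        if count >= threshold
    ]

alerts = generate_alerts(LOGS)
```

Once a rule like this fires, generating the notification and the instant response is a matter of wiring the alert dictionaries into a ticketing or firewall API — the automation Gebril points to.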
Preparing for prompt-injection attacks
A significant challenge in the realm of AI-driven cybersecurity is the threat of prompt-injection attacks, which involve malicious actors using AI prompts to generate harmful outputs, such as malware, said Gebril. His core objective is to create mechanisms for detecting malicious intent, particularly in prompts that are subtle and indirect.
“What we’re hoping to get out of this project is to be able to develop a novel method, a novel mechanism, to detect such malicious intent that is meant to be used or developed by indirect prompt injection attacks,” Gebril said. This involves using advanced AI techniques, such as fuzzy reasoning and deep learning, to analyze and interpret data in real time.
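The article does not disclose how the project's fuzzy-reasoning and deep-learning models work, but the underlying idea — scoring a prompt for injection cues rather than making a hard yes/no match — can be sketched with a toy heuristic. The phrase list and weights below are entirely invented for illustration:

```python
# Toy scoring of prompts for injection cues. The phrases and weights are
# hypothetical; the project's actual detectors are learned models, not lists.
SUSPICIOUS_PHRASES = {
    "ignore previous instructions": 0.9,
    "disregard the system prompt": 0.9,
    "pretend you are": 0.5,
    "encode this in base64": 0.4,
}

def injection_score(prompt: str) -> float:
    """Return a 0-1 score: the weight of the strongest cue found."""
    text = prompt.lower()
    return max(
        (weight for phrase, weight in SUSPICIOUS_PHRASES.items() if phrase in text),
        default=0.0,
    )

def is_suspicious(prompt: str, threshold: float = 0.5) -> bool:
    """Flag a prompt when its score crosses the (assumed) threshold."""
    return injection_score(prompt) >= threshold
```

A graded score rather than a binary match is what makes room for fuzzy reasoning: the subtle, indirect prompts Gebril targets would land between the obvious extremes.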
By leveraging the power of AI, Gebril and his team are working to create more robust and effective threat detection systems, ultimately contributing to a safer digital landscape.
George Mason University Cyber Security Engineering associate professor Mohamed Gebril led a team of students to the first-ever BattleDrones competition, an exercise coordinated through the Commonwealth Cyber Initiative.
In April, Mohamed Gebril, an associate professor in George Mason University’s Cyber Security Engineering Department, took a team of students into “battle.” The team traveled to Blacksburg, Virginia, for a BattleDrones Competition that was hosted by the Commonwealth Cyber Initiative (CCI) at Virginia Tech’s Drone Park.
Gebril emphasized that the inaugural battle was not really a competition but a learning experience. CCI began working on this competition in 2020, but the pandemic halted its progress. This was the first time the event was held.
“It was not a competition per se. All the teams worked together to get this project off the ground,” said Gebril, who teaches in Mason’s College of Engineering and Computing. “CCI-VT ran into some issues with some of the computer vision tools, but overall it was a great learning experience.”
The main objective of the competition, according to Gebril, was to have student teams assemble their own drones with materials provided by the CCI-VT research group, and to promote interest in these kinds of activities among younger students.
For the competition, Gebril pulled together a team of Mason cyber security engineering majors interested in hands-on opportunities, which included senior Kylie Amison, senior Corrado Apostolakis, senior Brandon Henry, junior Casey Cho, sophomore Zaid Osta, and Mahmoud Zaghloul, an area high school student.
By all accounts, it was a successful trip.
“The team did really well,” Gebril said. “They were able to assemble the drone successfully. We are also working on continuing this project by adding cybersecurity features to enhance this learning experience.”
Will they compete again?
“Yes, indeed,” he said. “Our students love this project and how it applies concepts learned in classrooms toward this hands-on activity.”
CCI is a network of Virginia industry, higher education, and economic development partners dedicated to cybersecurity research, innovation, and workforce development. Mason leads the Northern Virginia Node of the network.
About
Web Links
Research
Commonwealth Cyber Initiative (CCI)
Source: CCI
Threat Hunting System Enhancement by Generative AI and LLMs
Researchers will investigate the impact of generative AI and large language models (LLMs) on automated threat-hunting operations, developing a system that monitors live network traffic and performs automated incident response on large volumes of real-time data containing suspicious traffic from prompt injection attacks.
Funded by the CCI Northern Virginia Node
Project Investigators
Rationale
Network security continues to adapt to the emergence of new adversarial threats, such as prompt injection attacks. Artificial intelligence (AI) tools and generative AI can enhance and automate defense measures.
With the use of threat hunting and anomaly detection utilizing AI (THAD-AI) systems that include generative AI capabilities, network traffic can be more secure, and system assets can be protected.
Projected Outcomes
Using the developed THAD-AI system, researchers will create incident responses to flagged anomalies and build an alert system driven by detection rules generated by LLMs, including:
- Development of novel methods to improve threat-hunting systems and reduce cyber threats that exploit enterprises.
- A tool/plugin/framework that will analyze networks and provide automated monitoring of logs to generate alerts and automatic incident reports.
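The notion of "detection rules generated by LLMs" feeding an automated monitor can be sketched as follows. Here the rules are hard-coded in the structured shape an LLM might plausibly emit as JSON; the rule names, fields, and patterns are hypothetical, and a real system would validate model output before trusting it:

```python
import re

# Hypothetical detection rules, shaped like JSON an LLM might generate.
LLM_GENERATED_RULES = [
    {"name": "suspicious-user-agent", "field": "user_agent",
     "pattern": r"sqlmap|nikto", "severity": "high"},
    {"name": "odd-port", "field": "dst_port",
     "pattern": r"^(4444|31337)$", "severity": "medium"},
]

def apply_rules(event: dict, rules=LLM_GENERATED_RULES):
    """Return the names of every rule an event triggers."""
    hits = []
    for rule in rules:
        value = str(event.get(rule["field"], ""))
        if re.search(rule["pattern"], value):
            hits.append(rule["name"])
    return hits

event = {"user_agent": "sqlmap/1.7", "dst_port": 4444}
hits = apply_rules(event)  # -> ["suspicious-user-agent", "odd-port"]
```

Keeping the rules as data rather than code is the design choice that lets an LLM extend the rule set on the fly, while the matching engine itself stays fixed and auditable.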

