Summary
Hi there! My name is Saanvi and I am currently a Junior majoring in computer science at George Mason University. I am also Aircasting Director for the Cyber onAir network of hubs. The top hub is at cyber.onair.cc.
I am passionate about exploring the intersection of technology, ethics, and societal impact. My work spans research in AI ethics, responsible AI governance, and the practical applications of machine learning, all grounded in a commitment to leveraging technology for humanity’s highest needs.
My current areas of focus are on:
– preparing for the CompTIA Security+
– researching and studying Gen AI risk mitigations as well as exploring the intersection between AI and Cybersecurity (Galexor AI)
– building a GRC portfolio: security fundamentals, CIA triad, NIST CSF basics, policy writing, risk register basics, Excel/Google Sheets for tracking, familiarity with regulations (starting with HIPAA or SOC 2)
– upskilling on Linux and programming.
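As a small illustration of the "risk register basics" mentioned above, a minimal register can be sketched as rows of risk, likelihood, and impact, scored as likelihood × impact, mirroring what one might track in Excel or Google Sheets. The column names, scales, and sample risks here are my own assumptions, not a prescribed NIST or GRC format:

```python
# Minimal risk register sketch (hypothetical entries and 1-5 scales).
# Each entry is scored as likelihood x impact, a common qualitative approach.

risks = [
    {"risk": "Phishing leads to credential theft", "likelihood": 4, "impact": 4},
    {"risk": "Unpatched server exploited", "likelihood": 3, "impact": 5},
    {"risk": "Laptop lost with unencrypted data", "likelihood": 2, "impact": 4},
]

# Compute a simple qualitative score for each risk
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest-scoring risks are reviewed and treated first
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["risk"]}')
```

The same structure translates directly to spreadsheet columns, with the score as a formula cell.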
My dream job involves doing what AI cannot: providing context, judgment, and ethical leadership.
Always eager to connect with professionals, collaborators, and innovators who share a vision for responsible technological advancement!
OnAir Post: Saanvi Munigela
About
Experiences:
- Childcare Worker at the International Sahaja Public School in Canajoharie, New York.
- Aircasting Director for the Cyber onAir network of hubs. The top hub is at cyber.onair.cc.
- Contributed to groundbreaking research on detecting deceptive interactions in Large Language Models (LLMs) under the guidance of leading experts at GMU. This research focuses on embedding space analysis to enhance the safety and robustness of AI systems.
- Founder of @InternConnect, a career incubator for CS and STEM majors in the DMV region, building a platform that bridges the gap between college and real-world applications.
- Technical expertise includes AI/ML development, UI/UX design, and software engineering, complemented by a deep interest in creating explainable, ethical, and impactful technological solutions.
Education
- George Mason University – College of Engineering and Computing: Bachelor of Science (BS), Computer Science (Junior)
- Northern Virginia Community College: Dual Enrolled, 2022 – 2023
- Battlefield High School: 2022 – 2023
Skills
- 8.00 CPE hours in “Information Technology – Technical”
- Certificate of Attendance: Future Tech DC – Find a Hiring Manager
- Communication
- Project Management
- Entrepreneurship: Student Innovator Mastermind
- Organization Skills
- Problem Solving
- Java
- Python (Programming Language)
- Oracle Database
- Database Design
- PL/SQL
- SQL
- HTML
- Cascading Style Sheets (CSS)
- User Experience Design (UED)
- UI/UX
Contact
Email: OnAir Member
Web Links
Interviews with Women in Cybersecurity
Sara Hayajneh
March 19, 2026, 1:30 PM
Shawn Purvis
Liza Durant
Interviews on AI & Cybersecurity
Coming soon…!
AI Cybersecurity Research
I am researching Gen AI risk mitigations and exploring the intersection between AI and cybersecurity, building on work I did at my previous internship at Galexor AI.
Literature Review on AI and Ethics
The rapid evolution of Artificial Intelligence has introduced profound technological, social, and ethical challenges, sparking inevitable debate over how best to manage its risks. AI tools have been used for societal control, mass surveillance, and discrimination. Predictive policing and automated decision-making systems in public sectors have undermined human rights. Facial recognition technology has targeted racialized communities, and fraud detection algorithms have disproportionately affected ethnic minorities (Amnesty Tech, 2024, para 1). From the individual to the global scale, AI poses a threat.
While the benefits are enormous, the threats posed by AI outweigh them, because there may not be a world left to enjoy those benefits if the threats are not taken with appropriate seriousness. Historically, laws and regulations were written with the limited capabilities of humans in mind. With the creation of AI, however, far greater capabilities came into view, raising the question of whether the laws and regulations formed by governmental agencies can offer sufficient protection in the near future. And if not, are there alternative solutions?
Study at this intersection of philosophy and technology has recently gained popularity and is being actively encouraged. The Stanford Encyclopedia of Philosophy argues that a thorough study of the philosophy and ethics of AI and robotics is imperative in order to keep pace with the rapid evolution of these fields, and that it has become more crucial than ever to address the ethical challenges posed by artificial intelligence so that these technologies benefit society and minimize potential harm (Müller, 2023). This literature review will do exactly that: it will explore whether laws and regulations can effectively mitigate the threats posed by the growing capabilities of AI, and if not, what alternative solutions are necessary to ensure a safer future.
Literature Review
Regulatory Challenges
Regulating AI is widely recognized as crucial for today’s world and the future. Although this determination has been made, many challenges remain. The first is the velocity with which AI is developing: its pace is rapid, advancements outstrip the federal government’s existing expertise and authority, and regulatory statutes and structures designed for industrial-era technologies are inadequate for the fast-changing landscape of AI (Wheeler, 2024, para 5). This velocity makes it challenging for regulators to keep up with the latest developments and ensure that regulations remain relevant and effective.
According to the U.S. Artificial Intelligence Policy report, “Despite the strong bipartisan interest in AI regulation — as well as support from leaders of major technology companies, and the general public — passing comprehensive AI legislation remains a challenge” (U.S. Artificial, 2023, para 4). Although over 36 hearings have been held and 30 AI-focused bills introduced in Congress, the article claims no consensus could be reached because different groups pushed their own versions of AI legislation (U.S. Artificial, 2023, paras 3–4). In addition, the Royal Society’s article on governing artificial intelligence explains that many regulations, such as the General Data Protection Regulation, do not wholly address the challenges presented by machine learning and algorithmic systems in the EU (Nguyen et al., 2018, Intro para 9).
Beyond the limitations of regulation, there is considerable complexity in deciding what and how to regulate. “AI” is a vague term encompassing numerous technological applications, and AI systems are developed and deployed across multiple sectors involving numerous stakeholders. The impact of AI therefore depends on its context, requiring regulation to consider both upstream and downstream harms (Amnesty Tech, 2024).
Beyond all this lies the bigger question of who should regulate. The question of who should be responsible for regulating AI is complex because different stakeholders, such as federal agencies, state governments, and international bodies, have varying levels of expertise and authority (Wheeler, 2024). The lack of a clear regulatory framework and the potential for conflicting regulations across jurisdictions further complicate the issue. The scenarios listed above are only a few of the many obstacles to effective AI regulation. Next, ethical frameworks and the role they play in regulating AI will be explored.
Ethical frameworks
Ethical frameworks and efforts to implement structural limitations on AI for the benefit of human beings are currently taking shape, and more of this type of regulation is being encouraged. The UK hosted an AI Safety Summit in November 2023, bringing together global leaders, industry players, and civil society to discuss AI risks, and the EU AI Act was subsequently passed (Amnesty Tech, 2024, para 5). While this was a step forward, complaints remain that it still falls short in ensuring human rights protections, particularly for marginalized groups.
According to the 2024 article 14 Dangers of Artificial Intelligence by Built In, it is extremely important to address liability and intellectual property rights in the development of new legal frameworks (Thomas, 2024). But lawmakers have struggled to ensure the responsible development of AI products because deep, difficult-to-understand learning models and the secrecy surrounding biased or unsafe decisions result in a lack of transparency (Thomas, 2024). When one examines the ethical challenges, several overarching dimensions must be addressed. Beneficence, non-maleficence, autonomy, justice, and explicability are dimensions similar to those found in bioethics, with explicability being a new dimension specific to AI (Serafimova, 2020). Beyond these, dimensions such as transparency, justice and fairness, non-maleficence, responsibility, and privacy are also emphasized (Serafimova, 2020). This shows the complexity of the situation, how much is at stake, and how much must be accounted for.
When all is said and done, Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute and a leading researcher on AI alignment, argues that current efforts to address AI safety are insufficient, as they focus largely on technical fixes and incremental improvements rather than on the existential risks posed by complex systems; he criticizes the reliance on “safety” measures that do not fundamentally alter the trajectory of AI development (Yudkowsky, 2023). The article references an open letter signed by thousands of AI researchers and industry leaders calling for a pause in the development of advanced AI systems until safety measures can be adequately implemented, yet Yudkowsky believes it is merely a first step and that more radical changes are necessary to prevent the downfall of the human race (Yudkowsky, 2023). In closing, ethical frameworks have done their best in the game of regulation, yet they ultimately prove inadequate, and there remains no guarantee that the threats posed by the growing capabilities of AI can be effectively mitigated.
Collaboration and Transparency
Naturally, this leads one to contemplate what alternative solutions exist and how they can ensure a safer future. One of the first steps is collaboration and transparency. Yudkowsky advocates for greater collaboration among AI researchers, industry leaders, and policymakers to address the risks of AI (2023). He also calls for more transparency in AI development so that the public can understand and engage with the issues at stake (Yudkowsky, 2023). Raising awareness is one of the most important steps toward change, as various movements throughout history have shown. Collaboration and transparency, practiced at many levels, can help bring about this change.
The behavioral science field also offers suggestions on how to develop a sustainable and enriching relationship between humans and intelligent machines, including understanding human behavior, preferences, and needs so that AI systems are designed to enhance human capabilities and experiences (Fenwick & Molnar, 2022). Their solution is that humanizing AI will not only make intelligent machines more efficient but will also make their application more ethical and human-centric (Fenwick & Molnar, 2022). While the approach aims to enhance human experiences and create a more symbiotic relationship between humans and machines, more can still be done in this field. Moving on, the importance of global unity and cooperation will be explored.
Global Cooperation
One of the most important solutions, as well as worries, is global cooperation. The Stanford Encyclopedia of Philosophy not only discusses the need for AI governance frameworks to ensure that AI technologies are developed and used ethically, but also highlights the importance of international cooperation in the very development of these ethical guidelines and regulations (Müller, 2023). During the formation of the EU AI Act and other global initiatives, there was great emphasis on the need to harmonize AI standards and regulations to facilitate international trade and collaboration (U.S. Artificial, 2023).
Despite the standalone importance of global cooperation, various emerging issues make it even more of an exigency. According to 14 Dangers of Artificial Intelligence by Mike Thomas, the development of autonomous weapons that operate without human oversight poses significant threats, including the potential for rogue states or non-state actors to misuse AI-driven weaponry (Thomas, 2024). Because he concludes that a global arms race in AI-powered autonomous weapons is inevitable, a calamity of this scale is no longer the problem of a single state, country, or continent, but of the whole world. Global cooperation therefore becomes an absolute necessity in preventing autonomous weaponry.
Thomas also describes severe environmental harm. In essence, AI relies on energy-intensive computations that contribute to increased carbon emissions and water consumption; training algorithms on large datasets and running complex models requires vast amounts of energy (Thomas, 2024). Global warming and climate change have always been global issues, so no matter where in the world resources are exhausted, the impact falls on the whole. While these may seem like isolated issues, recognizing that they are global problems requiring global cooperation is of utmost importance. Again, then, there must be international unity.
In the 2022 article ‘The importance of humanizing AI,’ various challenges are raised. One of the major challenges in AI is the lack of consideration for human enhancement as a cornerstone of its operationalization, and there is no universally accepted approach that guides best practices in this field (Fenwick & Molnar, 2022). This clearly portrays the urgency of international cooperation today. The world coming together and making unified decisions can be one of the greatest assets in solving the problems created by advancing AI.
Responsible AI
Moving forward, an important solution is monitoring how we put AI to use. Through what is called ‘responsible AI,’ AI can be geared toward the benefit of human beings instead of falling prey to destructive uses. Its main purpose is to benefit society and prioritize transparency and justice while minimizing harm. According to ‘What Is Responsible AI?’ by Built In, “Responsible AI is a set of practices used to make sure artificial intelligence is developed and applied in an ethical and legal way” (Glover, n.d.). Responsible AI will guide the effect that AI has on society.
This has become critical because over-reliance on AI systems may lead to a loss of creativity, critical thinking skills, and human intuition, so balancing AI-assisted decision-making with human input is crucial to preserving cognitive abilities (Thomas, 2024, para 11). “Balancing high-tech innovation with human centered thinking is an ideal method for producing responsible AI technology and ensuring the future of AI remains hopeful for the next generation” (Thomas, 2024). While there are many risks, this article explains that AI is also a powerful tool that can be enormously useful when geared in the right direction and aligned with human values.
Similarly, Yudkowsky’s article on pausing AI developments highlights the potential existential risks associated with AI, such as the possibility of AI systems becoming uncontrollable or acting in ways detrimental to humanity. He warns that if AI systems are not designed with a deep understanding of human values, they could pose a threat to human existence (Yudkowsky, 2023). While this is a stronger viewpoint, it shows how important it is to work in this direction, and responsible AI is a first step toward doing just that.
Conclusion
In conclusion, the evolution of Artificial Intelligence brings both unprecedented benefits and serious risks that demand immediate, comprehensive attention. Current regulatory frameworks struggle to keep pace with AI’s rapid advancement, and ethical frameworks (while valuable) are limited in effectively addressing the full scope of AI’s potential dangers. This literature review highlights the necessity of combining legal, ethical, and technical efforts to mitigate AI risks.
Given the inadequacies of existing regulations, alternative solutions such as global cooperation, transparency in AI development, and a strong emphasis on “responsible AI” are essential to ensure that AI technologies are aligned with human values. The complexity and scale of AI’s impact call for a unified, global approach where nations and organizations work collaboratively to address threats such as autonomous weaponry, environmental impacts, and ethical concerns.
Even then, however, aligning AI systems with human values remains a challenge. An article in Humanities and Social Sciences Communications highlights that even if AI can make decisions based on ethical principles, that does not mean it can act as an autonomous moral agent like a human, and this opens the door to a whole new set of problems, including the “Moral Lag Problem”: the gap between what humans should be morally and what they actually are (Serafimova, 2020). This problem complicates the creation of moral AI because it is hard to avoid projecting human moral imperfections onto machines (Serafimova, 2020). Boiled down, the ultimate answer to the advent of AI lies in perfecting human nature and morality, which can only be attained through the mass inner transformation of human beings, or what some may call enlightenment. Ultimately, this is what must be sought as the permanent, long-lasting solution to all the problems discussed above.
Ultimately, laws and regulations fail to effectively mitigate the threats posed by the growing capabilities of AI, and while alternative solutions such as balancing technological innovation with ethical responsibility and global cooperation are necessary to foster a safer, more sustainable future, the final solution, addressing the root of all problems, lies in discovering the path to human transformation.
References
Amnesty Tech. (2024, January 16). The Urgent but Difficult Task of Regulating Artificial Intelligence. Amnesty International. https://www.amnesty.org/en/latest/campaigns/2024/01/the-urgent-but-difficult-task-of-regulating-artificial-intelligence/
Fenwick, A., & Molnar, G. (2022). The importance of humanizing AI: using a behavioral lens to bridge the gaps between humans and machines. Discover Artificial Intelligence, 2(1), 14. https://doi.org/10.1007/s44163-022-00030-8
Glover, E. (n.d.). What Is Responsible AI? Built In. Retrieved November 9, 2024, from https://builtin.com/artificial-intelligence/what-is-responsible-ai
Müller, V. C. (2023). Ethics of Artificial Intelligence and Robotics. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Fall 2023). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2023/entries/ethics-ai/
Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B.-P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221–4241. https://doi.org/10.1007/s10639-022-11316-w
Serafimova, S. (2020). Whose morality? Which rationality? Challenging artificial intelligence as a remedy for the lack of moral enhancement. Humanities and Social Sciences Communications, 7(1), 1–10. https://doi.org/10.1057/s41599-020-00614-8
Thomas, M. (n.d.). 14 Dangers of Artificial Intelligence (AI). Built In. Retrieved November 9, 2024, from https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
U.S. Artificial Intelligence Policy: Legislative and Regulatory Developments. (n.d.). Retrieved November 9, 2024, from https://www.cov.com/en/news-and-insights/insights/2023/10/us-artificial-intelligence-policy-legislative-and-regulatory-developments
Wheeler, T. (n.d.). The three challenges of AI regulation. Brookings. Retrieved November 9, 2024, from https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/
Yudkowsky, E. (2023, March 29). The Open Letter on AI Doesn’t Go Far Enough. TIME. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
