News

Despite increasing investment, security awareness training continues to deliver marginal benefits. With a focus on actions over knowledge, AI-based human risk management (HRM) can personalize training to improve employee behavior — and ROI.

Cybersecurity guru Bruce Schneier is often quoted as saying, “People are the weakest link in the security chain.” No more accurate words have ever been spoken about cybersecurity. You can spend millions of dollars on firewalls, endpoint security tools, access controls, and data encryption, but one employee can cause a catastrophic security breach simply by downloading a malicious file or clicking on a rogue link.

Industry research indicates that 70% to 90% of breaches result from employees succumbing to social engineering, making skills-based errors, sharing sensitive data with shadow IT services, or having a privileged account compromised. And things seem to be getting worse as adversaries adopt sophisticated AI-based attacks like deepfakes.

Of course, this problem is well known. As a countermeasure, organizations spent around $6 billion on security awareness training (SAT) in 2025. While some firms did so as a best practice, most did so to comply with industry or government regulations such as HIPAA (requires a “security awareness and training program” for all workforce members per 45 CFR § 164.308), GDPR (article 39(1)(b) tasks data protection officers with “awareness-raising and training of staff”), PCI (requirement 12.6 mandates a formal program to make all personnel aware of cardholder data security), and many others.

Industry research indicates that SAT expenses will increase by an estimated 15% per year as organizations continue to invest in what Gartner calls “security behavior and culture programs.”

Who Operates the Badbox 2.0 Botnet?
Krebs on Security, Brian Krebs, January 26, 2026

The cybercriminals in control of Kimwolf — a disruptive botnet that has infected more than 2 million devices — recently shared a screenshot indicating they’d compromised the control panel for Badbox 2.0, a vast China-based botnet powered by malicious software that comes pre-installed on many Android TV streaming boxes. Both the FBI and Google say they are hunting for the people behind Badbox 2.0, and thanks to bragging by the Kimwolf botmasters we may now have a much clearer idea about that.

Our first story of 2026, The Kimwolf Botnet is Stalking Your Local Network, detailed the unique and highly invasive methods Kimwolf uses to spread. The story warned that the vast majority of Kimwolf infected systems were unofficial Android TV boxes that are typically marketed as a way to watch unlimited (pirated) movie and TV streaming services for a one-time fee.

Our January 8 story, Who Benefitted from the Aisuru and Kimwolf Botnets?, cited multiple sources saying the current administrators of Kimwolf went by the nicknames “Dort” and “Snow.” Earlier this month, a close former associate of Dort and Snow shared what they said was a screenshot the Kimwolf botmasters had taken while logged in to the Badbox 2.0 botnet control panel.

5 Predictions for AI in 2026
TIME, Harry Booth and Tharin Pillay, January 15, 2026

Last year, AI companies struck multibillion-dollar deals to build out AI infrastructure. In 2026, as this new computing power starts coming online, experts say we’ll begin to see whether that investment pays off.

1. Advancing science

In November, California startup Edison Scientific said its system, Kosmos, which combs existing scientific literature for new insights, has not only replicated human discoveries but also turned up new ones—like evidence that aging brain cells in Alzheimer’s may tag themselves with signals telling the brain’s cleanup system to dispose of them. The Trump Administration’s Genesis Mission, a Manhattan Project–style initiative, also aims to use AI to advance science. But what counts as an autonomous discovery may be contested. We may be far from a discovery that “we can very confidently say a human would not have done that,” says Edward Parker, a physical scientist at think tank Rand Corp. He expects a “messy middle ground” in which AI assists human researchers more than it discovers on its own.

2. AI shops for you

This year could see many shoppers skip not only physical stores but also websites to buy directly inside chatbots. Forecasters on the online prediction platform Metaculus put a 95% chance on a major company running an AI shopping agent that completes over 100,000 transactions by the end of 2026. “They’ll clear that very, very quickly,” says Tyler Cowen, an economist at George Mason University. The groundwork has already been laid. Last April, Amazon began testing an agent that makes purchases within the site. In September, OpenAI started allowing users to buy from U.S. Etsy sellers within ChatGPT. AI-facilitated shopping could reshape consumer behavior as e-commerce did before it, offering companies like OpenAI a new revenue stream in the process.

3. Companions go mainstream

As the year progresses, it will become better understood “that people develop real and meaningful relationships with these technologies,” predicts Kate Darling, author of The New Breed: How to Think About Robots. Robust research shows people treat machines as if they’re alive, even when they know they’re not. As adoption increases, “that’s going to explode,” she says. Dmytro Klochko, CEO of AI-companion company Replika, expects people will use one AI for productivity and another for emotional connection. Companions are distinct from models like ChatGPT, he says, because they’re designed to proactively engage people. “What we care about is people getting happier,” he says. “Whether or not it’s good, it’s happening.”

4. More political attention

AI will “play a larger, more palpable role on the world stage” this year, says Dean Ball, primary drafter of America’s AI Action Plan. Ball, who has since left the White House, predicts that AI could be a top-five issue in the midterm elections—amid concern about issues like data centers increasing electricity prices and mental-health harms.

Alex Bores, a New York State assembly member working on AI legislation, expects the technology will remain a bipartisan issue. The tech is evolving faster than political parties can create consensus, and people already feel its impacts in their communities, he says. Bores believes that 2026 will be a pivotal year for U.S. AI governance, as lobbyists angle to prevent regulation, even as the systems and the companies building them both become more powerful.

The AI Patchwork Emerges
Hyperdimensional, Dean W. Ball, January 15, 2026

Introduction

State legislative sessions are kicking into gear, and that means a flurry of AI laws are already under consideration across America. In prior years, the headline number of introduced state AI laws has been large: famously, 2025 saw over 1,000 state bills related to AI in some way. But as I pointed out, the vast majority of those laws were harmless: creating committees to study some aspect of AI and make policy recommendations, imposing liability on individuals who distribute AI-generated child pornography, and other largely non-problematic bills. The number of genuinely substantive bills—the kind that impose novel regulations on AI development or diffusion—was relatively small.

In 2026, this is no longer the case: there are now numerous substantive state AI bills floating around covering liability, algorithmic pricing, transparency, companion chatbots, child safety, occupational licensing, and more. In previous years, it was possible for me to independently cover most, if not all, of the interesting state AI bills at the level of rigor I expect of myself, and that my readers expect of me. This is no longer the case. There are simply too many of them.

It’s not just the topics that vary. It’s also the approaches different bills take to each topic. There is not one “algorithmic pricing” or “AI transparency” framework; there are several of each.

The political economy of state lawmaking (in general, not specific to AI) tends to produce three outcomes. First, states sometimes do converge on common legislative standards—there are entire bodies of state law that are largely identical across all, or nearly all, states. The second possibility is that states settle on a handful of legal frameworks, with the strictest of the frameworks generally becoming the nationwide standard (this is how data privacy law in the U.S. works). Third, states will occasionally produce legitimate patchworks: distinct regulatory regimes that are not easily groupable into neat taxonomies.

Two cyber hacks have highlighted the vulnerability of New Zealand’s digital health systems – and the vast volumes of patient data we rely on them to protect.

Following the hacking of Manage My Health – compromising the records of about 127,000 patients – and an earlier breach at Canopy Health, a concerned public is asking how this happened and who is to blame.

The most urgent question, however, is whether it can happen again.

What we know so far

Manage My Health (MMH) – a patient portal used by many general practices to share test results, prescriptions and messages – published its first public notice about a cyber security incident on New Year’s Day.

According to the company, it became aware of unauthorised access on December 30, after being alerted by a partner. It says it immediately engaged independent cyber security specialists and that the compromise was limited to its “Health Documents / My Health Documents” module.

The Office of the Privacy Commissioner confirmed it was notified on January 1 and later published guidance for those affected. The National Cyber Security Centre also issued an incident notice.

MMH has since obtained urgent High Court injunctions that restrain the use or publication of data taken. In its decision, the court described activity patterns consistent with automation, including unusually high-frequency behaviour and repeated access attempts.

While this sheds some light on how the hacker operated, it does not establish which specific technical control failed – or where responsibility ultimately lies.
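To illustrate what “patterns consistent with automation” typically look like in practice, here is a minimal, hypothetical sketch that flags clients whose request rate in an access log far exceeds what interactive human use would generate. It is not based on MMH’s systems or on any evidence in the case; the log format, field names, and threshold are assumptions chosen purely for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical illustration only: flag clients whose per-minute request
# rate suggests automated access. The tab-separated log format and the
# threshold below are assumptions, not details of the MMH incident.
THRESHOLD_PER_MINUTE = 60  # far above typical interactive use


def flag_automated_clients(log_lines):
    """log_lines: iterable of 'ISO-timestamp<TAB>client_id<TAB>path' records."""
    counts = defaultdict(int)  # (client_id, minute) -> request count
    for line in log_lines:
        ts_str, client_id, _path = line.rstrip("\n").split("\t", 2)
        minute = datetime.fromisoformat(ts_str).strftime("%Y-%m-%d %H:%M")
        counts[(client_id, minute)] += 1

    # Report each flagged client once, with its busiest minute.
    worst = {}
    for (client_id, minute), n in counts.items():
        if n > THRESHOLD_PER_MINUTE and n > worst.get(client_id, ("", 0))[1]:
            worst[client_id] = (minute, n)
    return worst


if __name__ == "__main__":
    sample = [
        "2025-12-30T03:14:05\tclient-a\t/documents/123",
        "2025-12-30T03:14:06\tclient-a\t/documents/124",
    ]
    print(flag_automated_clients(sample))  # {} -- sample volume is below threshold
```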

We have now also learned that a second provider, Canopy Health, experienced unauthorised access to parts of its administrative systems six months ago, with some patients only being notified this week.

How Many Cybersecurity Job Openings Are There? (January 2026)
Programs, January 8, 2026

Cybersecurity continues to be one of the fastest-growing sectors, with millions of job openings worldwide.

Global demand for cybersecurity professionals has surged, driven by rising threats and expanding digital infrastructure.

This report breaks down the current state of cybersecurity job openings, regional trends, and key factors contributing to the numbers.

Top Cybersecurity Job Opening Stats

  • There are approximately 4.8 million unfilled cybersecurity roles.
  • In the US alone, there are currently 514,359 cybersecurity job openings.
  • Over a quarter (26%) of cybersecurity roles in the US are currently vacant.
  • Virginia has the most cybersecurity job openings in the US (53,855).
  • The US cybersecurity workforce gap has grown each year since 2020.
  • CISSP is the most in-demand certification in US cybersecurity openings (82,494).

Number of Open Cybersecurity Jobs

Globally, there are an estimated 4.8 million unfilled cybersecurity jobs.

Source: DeepStrike

Fears Mount That US Federal Cybersecurity Is Stagnating—or Worse
WIRED, Lily Hay Newman, December 31, 2025

As the first year of the Trump administration approaches its end, government cybersecurity experts and even some United States government officials are warning that recent White House initiatives—including downsizing and restructuring of the US federal workforce—risk setting the government back on improving and expanding its digital defenses.

expired: US cybersecurity struggling
tired: US cybersecurity improving
wired: US cybersecurity backsliding

For years, the federal government was playing catch-up on cybersecurity, scrambling to replace ancient software, apply security patches to newer systems, and deploy other baseline protections across a massive and disparate population of PCs and other gadgets. With so many agencies and offices that needed upgrading, it was slow going. But as repeated government data breaches drew urgent attention to the issue, and as the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency—founded in 2018—established itself during the early 2020s, minimum standards seemed to be rising. Now, with major staffing cuts at CISA and in other key departments across the government, that incremental progress could quickly erode.

“We’ve spent a lot of time trying to encourage the government to do more, and CISA was doing, you know, a better job,” retiring comptroller general Gene Dodaro told the US Senate Committee on Homeland Security and Governmental Affairs on December 16. He added that the Government Accountability Office has “a lot of open recommendations still for them to do. But I’m concerned that we’re taking our foot off the gas at CISA, and I think we’ll live to regret it.”

Happy 16th Birthday, KrebsOnSecurity.com!
Krebs on Security, Brian Krebs, December 29, 2025

KrebsOnSecurity.com celebrates its 16th anniversary today! A huge “thank you” to all of our readers — newcomers, long-timers and drive-by critics alike. Your engagement this past year here has been tremendous and truly a salve on a handful of dark days. Happily, comeuppance was a strong theme running through our coverage in 2025, with a primary focus on entities that enabled complex and globally-dispersed cybercrime services.

In May 2024, we scrutinized the history and ownership of Stark Industries Solutions Ltd., a “bulletproof hosting” provider that came online just two weeks before Russia invaded Ukraine and served as a primary staging ground for repeated Kremlin cyberattacks and disinformation efforts. A year later, Stark and its two co-owners were sanctioned by the European Union, but our analysis showed those penalties have done little to stop the Stark proprietors from rebranding and transferring considerable network assets to other entities they control.

In December 2024, KrebsOnSecurity profiled Cryptomus, a financial firm registered in Canada that emerged as the payment processor of choice for dozens of Russian cryptocurrency exchanges and websites hawking cybercrime services aimed at Russian-speaking customers. In October 2025, Canadian financial regulators ruled that Cryptomus had grossly violated its anti-money laundering laws, and levied a record $176 million fine against the platform.

Coupang Incident
Coupang, December 29, 2025

Below are the statements Coupang published related to the recent cybersecurity incident.

—————————————————————————————–

Originally posted on Dec 29, 2025 09:50 in KST:

Coupang Announces Compensation Plan to Restore Customer Trust… Issuing 1.685 Trillion Won Worth of Purchase Vouchers

– Compensation plan implemented for all 33.7 million customers… To be provided sequentially starting January 15, next year

– Equivalent to 50,000 won per person… Purchase vouchers for all Coupang products and for Coupang Eats, Travel, and R.LUX

– Practicing ‘customer-centric principles’… We will transform into a company trusted by customers.

Fully acknowledging its responsibility for the recent personal information leak incident, Coupang announced on the 29th that it plans to implement a 1.685 trillion won customer compensation plan to restore customer trust.

Harold Rogers, Coupang Corp.’s interim CEO, stated, “All Coupang executives and employees deeply regret the significant concern and distress the recent personal data leak has caused our customers,” adding, “We have prepared a compensation plan as part of taking responsible action for our customers.”

Coupang plans to distribute purchase vouchers worth around 1.685 trillion won to customers starting January 15 next year. The plan applies to 33.7 million customer accounts who were notified of the personal information leak at the end of last November. Purchase vouchers will be provided equally to both WOW and non-WOW members. It also includes Coupang customers who had canceled their membership and were notified of the personal data leak. The company plans to sequentially notify its 33.7 million customer accounts via text message about the use of the purchase vouchers.

Coupang will provide each customer with four single-use purchase vouchers totaling 50,000 won: all Coupang products including Rocket Delivery, Rocket Overseas, Seller Rocket, and Marketplace (5,000 won), Coupang Eats (5,000 won), Coupang Travel products (20,000 won), and R.LUX products (20,000 won).

Customers can check the purchase vouchers sequentially on the Coupang app starting January 15 and apply them when purchasing products. More specific details are scheduled to be released in a separate announcement.

Harold Rogers, Coupang Corp.’s interim CEO, stated, “Taking this incident as a turning point, Coupang will wholeheartedly embrace ‘customer-centric principles’ and fulfill its responsibilities to the very end, transforming into a company that customers can trust,” adding, “We once again deeply apologize to our customers.”

—————————————————————————————–

Originally posted on Dec 26, 2025 15:00 in KST:

Coupang’s investigation was not a “self investigation.” It was an investigation coordinated on a daily basis, under the express direction of government, over a period of several weeks.

This data leak incident has caused great concern to the public and the continued misstatements that Coupang was conducting an investigation without governmental oversight are creating false insecurity. We would like to clarify facts of our coordination process with the government.

On December 1, the government approached Coupang and asked for full cooperation.

On the 2nd, Coupang received an official, written letter with regard to the incident from the government. On an almost daily basis for the next several weeks, Coupang worked with the government to locate, contact, and communicate with the leaker. At the direction of the government, Coupang secured the leaker’s full confession, recovered all devices used in connection with the leak, and received critical details about Coupang user information.  As soon as Coupang received new facts, sworn testimony, or physical materials from the leaker, Coupang turned them over to the government immediately.

On the 9th, the government suggested that Coupang contact the leaker. Coupang worked with the government on messaging and word choice in its communications. Following this, Coupang met the leaker initially on the 14th and reported this to the government. On the 16th, we completed the primary retrieval of the leaker’s desktop and hard drives as directed by the government, which was then reported. On the 17th, we provided them to the government. Coupang understands that after it delivered the hard drive to the government, the government began an immediate review. The government then requested that we recover additional devices from the leaker.

On the 18th, Coupang recovered the leaker’s MacBook Air laptop from a nearby river. Coupang used a forensics team to document and take inventory and then immediately handed the laptop over to the government. On December 21, the government allowed Coupang to deliver the hard drives, laptop, and all three sworn and fingerprinted declarations to the police. At all times Coupang obeyed the government’s order to keep the operation confidential and not disclose any details, even while governmental agencies, the National Assembly, and parts of the media falsely accused Coupang of failing to seriously address the leak.

On the 23rd, at the government’s request we provided additional briefing about the details of the investigation including details about Coupang’s cooperation with the government. Subsequently, on the 25th, we notified Coupang customers of the investigation status.

Coupang will fully cooperate with the ongoing government investigation and take all necessary measures to prevent any secondary harm.

Timeline of Government Coordination to Recover Leaked Information

—————————————————————————————–

Originally posted on Dec 25, 2025 15:35 in KST:

Coupang confirmed that the perpetrator has been identified, and that all devices used in the data leak have been retrieved.  The investigation to date indicates that the perpetrator retained limited user data from only 3,000 accounts and subsequently deleted the user data.

Based on the investigation to date:

  • The perpetrator accessed 33 million accounts, but only retained user data from approximately 3,000 accounts. The perpetrator subsequently deleted the user data.
  • The user data included only 2,609 building entrance codes. No payment data, log-in data, or individual customs numbers were included.
  • The perpetrator never transferred any of the data to others

We know the recent data leak has caused concern among our customers, and we apologize for the anxiety and inconvenience. Everyone at Coupang and the government authorities has been working tirelessly together to address this critical issue, and we are now providing an important update.

Coupang used digital fingerprints and other forensic evidence to identify the former employee who leaked user data. The perpetrator confessed everything and revealed precise details about how he accessed user data.

All devices and hard drives the perpetrator used to leak Coupang user data have been retrieved and secured following verified procedures. Starting from the submission of the perpetrator’s declaration to government officials on December 17, Coupang has been submitting all devices including hard drives to government officials as soon as we received them. Coupang has also been cooperating fully with all relevant ongoing government investigations.

From the beginning, Coupang commissioned three top global cybersecurity firms—Mandiant, Palo Alto Networks, and Ernst & Young—to perform rigorous forensic investigation.

The investigative findings to date are consistent with the perpetrator’s sworn statements: (i) that he accessed basic user data from 33 million customer accounts using a stolen security key, (ii) that he only retained user data from roughly 3,000 total accounts (name, email, phone number, address and part of order histories), (iii) that from the roughly 3,000 accounts, he only retained 2,609 building entrance access codes, (iv) that he deleted all stored data after seeing news reports of the leak, and (v) that none of the user data was ever transmitted to others.

  1. Perpetrator accessed basic user data using a stolen security key. The perpetrator stated that he was able to access limited user data—including names, emails, addresses, phone numbers—by stealing an internal security key that he took while still working at the company. Data logs and forensic investigation had already confirmed that the access was carried out using a stolen internal security key and included only the types of data the perpetrator specified (e.g., names, emails, addresses, phone numbers). He did not access any payment data, log-in data, or individual customs numbers.
  2. Perpetrator gained very limited access to order history and building entrance codes. The perpetrator stated that while accessing basic data relating to a large number of customers, he only ever accessed the order history and building entrance codes for roughly 3,000 accounts. Independent forensic analysis of data logs had already determined that building entrance codes for only 2,609 accounts were ever accessed, just as the perpetrator reported.
  3. Perpetrator used a desktop PC and MacBook Air laptop for the attack. The perpetrator stated that he used a personal desktop PC and a MacBook Air laptop to provision access and to store a limited amount of user data. Independent forensic investigation confirmed that Coupang systems were accessed using one PC system and one Apple system as the primary hardware interfaces, exactly as the perpetrator described. The perpetrator relinquished the PC system and four hard drives used on the PC system, on which analysts found the script used to carry out the attack.
  4. Perpetrator sought to erase and dispose of the MacBook Air laptop in a river. The perpetrator stated that when news outlets reported on the data leak he panicked and sought to conceal and destroy the evidence. Among other things, the perpetrator stated that he physically smashed his MacBook Air laptop, placed it in a canvas Coupang bag, loaded the bag with bricks, and threw the bag into a nearby river. Using maps and descriptions provided by the perpetrator, divers recovered the MacBook Air laptop from the river. It was exactly as the perpetrator claimed—in a canvas Coupang bag loaded with bricks—and its serial number matched the serial number in the perpetrator’s iCloud account.
  5. Perpetrator retained a very small amount of user data, never transferred any of the data, and subsequently deleted all the stored user data. The perpetrator stated that he worked alone, that he only retained a small amount of user data from roughly 3,000 accounts, that the user data was only ever stored on his personal desktop PC and MacBook Air laptop, that none of that user data was ever transmitted to a third party, and that he deleted the stored data immediately after seeing news reports of the leak. The investigative findings to date are consistent with the perpetrator’s sworn statements and found no evidence that contradicts these statements.

We will provide updates following the investigation and plan to separately announce compensation plans to our customers in the near future.

Coupang remains fully committed to protecting customer data. We will cooperate fully with the government’s investigation, take all necessary steps to prevent further harm, and strengthen our measures to prevent recurrence.

Coupang regrets the concern this incident has caused and apologizes to those affected.

Six (or seven) predictions for AI 2026 from a Generative AI realist
Marcus on AI, Gary Marcus, December 20, 2025

2025 turned out pretty much as I anticipated. What comes next?

AGI didn’t materialize (contra predictions from Elon Musk and others); GPT-5 was underwhelming, and didn’t solve hallucinations. LLMs still aren’t reliable; the economics look dubious. Few AI companies aside from Nvidia are making a profit, and nobody has much of a technical moat. OpenAI has lost a lot of its lead. Many would agree we have reached a point of diminishing returns for scaling; faith in scaling as a route to AGI has dissipated. Neurosymbolic AI (a hybrid of neural networks and classical approaches) is starting to rise. No system solved more than 4 (or maybe any) of the Marcus-Brundage tasks. Despite all the hype, agents didn’t turn out to be reliable. Overall, by my count, sixteen of my seventeen “high confidence” predictions about 2025 proved to be correct.

Here are six or seven predictions for 2026; the first is a holdover from last year that no longer will surprise many people.

  1. We won’t get to AGI in 2026 (or 7). At this point I doubt many people would publicly disagree, but just a few months ago the world was rather different. Astonishing how much the vibe has shifted in just a few months, especially with people like Sutskever and Sutton coming out with their own concerns.
  2. Humanoid domestic robots like Optimus and Figure will be all demo and very little product. Reviews by Joanna Stern and Marques Brownlee of one early prototype were damning; there will be tons of lab demos, but getting these robots to work in people’s homes will be very, very hard, as Rodney Brooks has said many times.
  3. No country will take a decisive lead in the GenAI “race”.
  4. Work on new approaches such as world models and neurosymbolic will escalate.
  5. 2025 will be known as the year of the peak bubble, and also the moment at which Wall Street began to lose confidence in generative AI. Valuations may go up before they fall, but the Oracle craze early in September and what has happened since will in hindsight be seen as the beginning of the end.
  6. Backlash to Generative AI and radical deregulation will escalate. In the midterms, AI will be an election issue for first time. Trump may eventually distance himself from AI because of this backlash.

And lastly, the seventh: a metaprediction, which is a prediction about predictions. I don’t expect my predictions to be as on target this year as last, for a happy reason: across the field, the intellectual situation has gone from one that was stagnant (all LLMs all the time) and unrealistic (“AGI is nigh”) to one that is more fluid, more realistic, and more open-minded. If anything would lead to genuine progress, it would be that.

A security review of eight popular internet toys found “widespread security and privacy weaknesses,” per a new report shared first with Axios from Mozilla Foundation and cybersecurity consultant 7ASecurity.

Why it matters: Connected toys like tablets, smartwatches, and robots now store everything from a kid’s photos to their location, raising serious privacy concerns and creating new vulnerabilities for hackers and other bad actors.

What’s inside: “Across the smart toys audited for this report, 7ASecurity found widespread security and privacy weaknesses,” the report reads.

  • “In practical terms, that means many toys marketed for children could be misused to spy on families, manipulate what kids hear or see, or expose sensitive data.”

The toys on 7ASecurity’s list are:

  1. Amazon Fire Kids Tablet
  2. Emo Robot
  3. Huawei Watch Kids 4
  4. PlayShifu Plugo Count
  5. TickTalk 5
  6. Powerup 4.0 Airplane
  7. Sphero Mini Activity Kit
  8. GoCube Edge

Security risks include hackers being potentially able to:

  • Hijack the speakers of a toy and talk back if kids are using an insecure WiFi network
  • Access location data and other personal information like names, birthdates and phone numbers
  • Use a Bluetooth-connected toy to remotely control the toy if they’re in pairing range

7ASecurity said it chose toys to research based on popularity with shoppers around the world.

The big picture: The report comes at a time when lawmakers are raising serious concerns about smart toys.

  • Sens. Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.) wrote to the CEOs of six companies this week demanding answers on safeguards for children using AI-enabled toys.
  • “Not only are these products potentially dangerous, but they also collect sensitive data on American families,” the senators wrote.

The Trump administration has pursued a staggering range of policy pivots this past year that threaten to weaken the nation’s ability and willingness to address a broad spectrum of technology challenges, from cybersecurity and privacy to countering disinformation, fraud and corruption. These shifts, along with the president’s efforts to restrict free speech and freedom of the press, have come at such a rapid clip that many readers probably aren’t even aware of them all.

FREE SPEECH

President Trump has repeatedly claimed that a primary reason he lost the 2020 election was that social media and Big Tech companies had conspired to silence conservative voices and stifle free speech. Naturally, the president’s impulse in his second term has been to use the levers of the federal government in an effort to limit the speech of everyday Americans, as well as foreigners wishing to visit the United States.

In September, Donald Trump signed a national security directive known as NSPM-7, which directs federal law enforcement officers and intelligence analysts to target “anti-American” activity, including any “tax crimes” involving extremist groups who defrauded the IRS. According to extensive reporting by journalist Ken Klippenstein, the focus of the order is on those expressing “opposition to law and immigration enforcement; extreme views in favor of mass migration and open borders; adherence to radical gender ideology,” as well as “anti-Americanism,” “anti-capitalism,” and “anti-Christianity.”

Earlier this month, Attorney General Pam Bondi issued a memo advising the FBI to compile a list of Americans whose activities “may constitute domestic terrorism.” Bondi also ordered the FBI to establish a “cash reward system” to encourage the public to report suspected domestic terrorist activity. The memo states that domestic terrorism could include “opposition to law and immigration enforcement” or support for “radical gender ideology.”

Never let a Good Sputnik Moment go to Waste
Digital Spirits, Matthew Mittelsteadt, December 18, 2025

On November 13th, Anthropic reported something truly remarkable: they disrupted an almost entirely AI-orchestrated cyber operation. A Chinese state-sponsored group had jury-rigged a framework allowing Claude to orchestrate a battery of agents and off-the-shelf attack tools against up to thirty high-profile targets including “large tech companies, financial institutions, chemical manufacturing companies, and government agencies.” Data theft was the goal and a small number of attacks, it seems, succeeded. The kicker: humans were relegated to an approval role, authorizing attacks and selecting targets, while AI drove 80-90% of actual execution.

It’s hard to overstate how significant this is. In Washington, ‘Sputnik Moment’ gets tossed around so liberally it’s lost almost all meaning. This time, however, the moniker fits. This is something special.

Just like Sputnik, this was unanticipated. Even Anthropic’s own engineers admitted surprise at how quickly AI cyber capabilities evolved at scale. Just like Sputnik, this demonstrates a powerful new military-relevant capability. Intelligent machines, uncapped by human fatigue, knowledge burdens, or labor constraints, are now truly stepping into the cyber battlefield. The result could be a massive step change in attack volume, speed, and effectiveness that many defenders are unequipped to manage.

Finally, just like Sputnik, this is an as-yet crude capability that will only improve. When Sputnik-1 launched, functions were limited to simple radio transmission while battery powered operational life was just three weeks. Anthropic suggests these AI capabilities were likewise limited. Only a small few attacks succeeded while persistent hallucinations undermined success. As was the case for satellite technology, however, this first version is the worst these capabilities will ever be. This is a floor, not a ceiling.

CYSE 587 Shark Seminar
Connor Wadlin, December 9, 2025

On December 8th, 2025, Dr. Alexandre De Barros Barreto’s CYSE 587 class presented their shark tank seminar presentations! Each team presented for twenty minutes before a panel of sharks began to ask their questions.

It was an innovative and engaging night, full of discussion, collaboration, and problem solving. Thank you to all of the amazing sharks who came out, and to all of the presenters for their solutions to real world problems!

Please look at the overview post to view each team’s presentation and videos.

Axios AI+
AXIOS, Ina Fried, December 11, 2025

I really wanted to go to last night’s launch event for the San Francisco Chronicle’s book on the Valkyries’ inaugural season, but went to Google’s holiday PR party instead.

🤖 Situational awareness: Time magazine named “the architects of AI” as 2025’s Person of the Year.

Today’s AI+ is 1,105 words, a 4-minute read.

1 big thing: New models could increase cyber risks

OpenAI says the cyber capabilities of its frontier AI models are accelerating and warned yesterday that upcoming models are likely to pose a “high” risk, in a report shared first with Axios.

Why it matters: The models’ advances could significantly expand the number of people able to carry out cyberattacks.

Driving the news: OpenAI said it has already seen an increase in capabilities in recent releases, particularly as models are able to operate longer autonomously, paving the way for brute force attacks.

  • The company notes that while GPT-5 scored 27% on a capture-the-flag exercise in August, GPT-5.1-Codex-Max was able to score 76% last month.
  • “We expect that upcoming AI models will continue on this trajectory,” the company says in the report. “In preparation, we are planning and evaluating as though each new model could reach ‘high’ levels of cybersecurity capability as measured by our Preparedness Framework.”

GMU Board of Visitors approves renaming of PhD program
BOV Minutes from Dec. 4, 2025 meeting, Cyber curators, December 4, 2025

ITEM NUMBER:

PhD in Cybersecurity Degree Program Proposal

PURPOSE OF ITEM:
The PhD in Cybersecurity degree program proposal is under consideration by the State Council of Higher Education for Virginia (SCHEV) for initiation in Fall 2026. The degree program was originally entitled, “PhD in Cyber Security Engineering.” Board action is required to approve the revised name of the degree program.

APPROPRIATE COMMITTEE:

Academic Affairs Committee

BRIEF NARRATIVE:

On September 26, 2024, the Board of Visitors approved George Mason University’s proposal for a PhD degree program in Cyber Security Engineering. The proposal was submitted to SCHEV in August of 2025. Feedback from SCHEV staff included discussion of a name change to the proposal that would eliminate unnecessary confusion between the terms “cybersecurity” and “cyber security engineering.” Faculty determined that a name change would benefit the degree program. The revised name, “PhD in Cybersecurity,” must be approved by the Board of Visitors before consideration of the degree program can resume at SCHEV.

The proposed degree program is built upon the existing bachelor’s and master’s degree programs in Cyber Security Engineering offered by the Department of Cyber Security Engineering in the College of Engineering and Computing and will create a pathway for doctoral level research and training for students in these degree programs.

The proposed program will train students to solve the next generation of engineering and research problems, educate the future workforce, and lead government agencies and industries in the domain of cybersecurity. The proposed degree program responds to the escalating challenges of an increasingly interconnected and digitized world. It will prepare students for the growing number of faculty and researcher positions in academia, industry, and government focused on cybersecurity education and research. Establishing a PhD program in cybersecurity will address the shortage of experts, foster a robust research community in Virginia, and contribute to the evolution of cutting-edge technologies and methodologies in cybersecurity.

REVENUE IMPLICATIONS:

The program at launch will be revenue neutral. The required core courses will be offered by existing faculty, and the program does not require new laboratory or other facilities. It is anticipated that the program will be revenue enhancing as it reaches maturity.

STAFF RECOMMENDATION:

Staff recommends Board approval.

I. Basic Program Information

Institution (official name): George Mason University
Degree Program Designation: Doctor of Philosophy
Degree Program Name: Cybersecurity
CIP code:
Anticipated Initiation Date: Fall 2026
Governing Board Approval Date (actual or anticipated): Anticipated December 4, 2025

II. Curriculum Requirements. Address the following using appropriate bolded category headings:

  • Core Coursework and total credit hours (include course descriptor/designator, name, and credit hour value). Indicate new courses with an asterisk.
  • Sub Areas (e.g., concentrations, emphasis area, tracks) and total credit hours. Include brief description of focus/purpose of sub area and required courses.
  • Additional requirements (e.g., internship, practicum, research, electives, thesis, dissertation) and total credit hours
  • Total credit hours for the curriculum/degree program.

Core Courses: 18 credits

CYSE 700: Research Methodology and Pedagogy in Cybersecurity (3 credits)
CYSE 710: Advanced Networks and Cybersecurity (3 credits)*
CYSE 757: Cyber Law (3 credits)*
CYSE 780: Advanced Hardware and Cyber-Physical Systems Security (3 credits)*
CYSE 788: Advanced Systems Engineering for Cybersecurity (3 credits)*
CYSE 789: Advanced Artificial Intelligence Methods for Cybersecurity (3 credits)*

Restricted Electives: 30 credits

Students select 6 credits from the following courses.
CYSE 760: Human Factors in Cyber Security (3 credits)*
CYSE 770: Fundamentals of Operating Systems (3 credits)*
ECE 646: Applied Cryptography (3 credits)

Students select 24 credits from a list of courses.
CS 530: Mathematical Foundations of Computer Science (3 credits)
CS 583: Analysis of Algorithms (3 credits)
CYSE 640: Wireless Network Security (3 credits)
CYSE 650: Topics in Cyber Security Engineering (3 credits)
CYSE 698: Independent Study and Research (3 credits)
CYSE 750: Advanced Topics in Cyber Security Engineering (3 credits)
CYSE 765: Quantum Information Processing and Security (3 credits)*
CYSE 785: Advanced Unmanned Aerial Systems Security (3 credits)
ISA 764: Security Experimentation (3 credits)
ISA 862: Models for Computer Security (3 credits)
ISA 863: Advanced Topics in Computer Security (3 credits)
OR 719: Graphical Models for Inference and Decision Making (3 credits)

Research Requirement: 12 credits

CYSE 998: Doctoral Dissertation Proposal (3-12 credits)*

Dissertation Requirement: 12 credits

CYSE 999: Doctoral Dissertation (1-12 credits)*

Total: 72 credit hours

III. Description of Educational Outcomes. Use bullets to list outcomes. (max. 250 words)

Students will learn to

  • Apply foundational knowledge of cybersecurity to engineering applications.
  • Analyze cyber-physical systems, networks, software, and hardware for vulnerabilities to various attack scenarios.
  • Integrate security fundamentals in building secure and resilient cyber infrastructure, including large-scale cyber-physical systems and networks.
  • Apply quantitative and qualitative methods to cybersecurity.
  • Construct approaches for predicting, detecting, and responding to cyber threats utilizing artificial intelligence.
  • Evaluate the principles of cyber law and how they impact cybersecurity occurrences.
  • Design curriculum and pedagogical experiences for training the next generation of cyber security engineers.
  • Lead innovative research that contributes to the cyber security engineering knowledge base.

IV. Description of Workplace Competencies/Skills. Use bullets to list outcomes. (max. 250 words)

Graduates will be able to

  • Conduct fundamental research to push the frontiers of cybersecurity defense and mitigation techniques.
  • Train and educate undergraduate and graduate students and the population in computer security fundamentals.
  • Analyze cyber security problems in critical infrastructure and design effective solutions.

V. Duplication. Provide information for each existing degree program at a Virginia public institution at the same degree level. Use SCHEV’s degree/certificate inventory and institutions’ websites.

Institution: Old Dominion University*
Program degree designation, name, and CIP code: Doctor of Engineering (DEng)/Doctor of Philosophy (PhD) in Engineering, concentration in Cybersecurity, CIP code: 140101
Degrees granted (most recent 5-yr average): 31 (unable to aggregate by concentration)

*ODU is currently developing a stand-alone PhD degree program in Cybersecurity.

VI. Labor Market Information. Fill in the tables below with relevant information from the Bureau of Labor Statistics (BLS) and Virginia Employment Commission (VEC). Insert correct years (2023 and 2033) to reflect the most recent 10-year projections. Add rows as necessary.

Labor Market Information: Bureau of Labor Statistics, 2022-2032 (10-Yr)

Occupation | Base Year Employment | Projected Employment | Total % Change and #s | Typical Entry Level Education
Computer science teachers, postsecondary | 42,000 | 44,300 | 5.3 | Doctoral or professional degree
Engineering teachers, postsecondary | 45,500 | 49,700 | 9.3 | Doctoral or professional degree
Computer and Information Research Scientists | 36,500 | 44,800 | 22.7 | Master’s degree

Labor Market Information: Virginia Employment Commission, 2020-2030 (10-Yr)

Occupation | Base Year Employment | Projected Employment | Total % Change and #s | Annual Change # | Education
Computer Science Teachers, Postsecondary | 1,523 | 1,595 | 4.73 | 7 | N/A
Engineering Teachers, Postsecondary | 1,249 | 1,357 | 8.65 | 11 | N/A
Computer and Information Systems Managers | 14,659 | 16,636 | 13.48 | 198 | Bachelor’s degree

VII. Projected Resource Needs
Cost and Funding Sources to Initiate and Operate the Program

Informational Category | Program Initiation Year 2026-2027 | Program Full Enrollment Year 2030-2031
1. Projected Enrollment (Headcount) | 8 | 22
2. Projected Enrollment (FTE) | 6 | 16
3. Projected Revenue Total from Tuition and E&G Fees Due to the Proposed Program | $228,072 | $622,152

VIII. Virginia Needs. Briefly indicate state needs for the degree program. (max. 250 words)

State Needs. This proposed program will further the state’s effort in developing a sustainable cybersecurity industry in the Commonwealth. Although bachelor’s and master’s degree programs in cybersecurity are available, there is no existing doctoral-level Cyber Security Engineering degree program in Virginia. This is a unique but timely program that will address the gap in producing doctoral-level academics and researchers in cybersecurity.

Employer Needs. The program will prepare students for international, national, and local employment in academia, government, contractors, think tanks, and non-governmental organizations. The program will provide the rigorous academic training in cybersecurity required by employers. Given George Mason’s location, the program has the potential to contribute to government needs for cybersecurity researchers. In addition, the program will address the growing need for cybersecurity academics, i.e., faculty and research scientists, as positions open throughout the country.

Student Needs. The success of the BS and MS in Cyber Security Engineering at George Mason underscores student participation and interest in higher education in cybersecurity. As noted in the BLS data, significant growth is expected in cybersecurity-related jobs, such as 31.5% growth in information security analysts over the next ten years. To rigorously train the workforce and continue innovation in cyber, students will need doctoral-level education and research experience. This program will address this unmet student demand.

Researchers question Anthropic claim
Ars Technica, Dan Goodin, November 14, 2025

Researchers from Anthropic said they recently observed the “first reported AI-orchestrated cyber espionage campaign” after detecting China-state hackers using the company’s Claude AI tool in a campaign aimed at dozens of targets. Outside researchers are much more measured in describing the significance of the discovery.

Outside researchers weren’t convinced the discovery was the watershed moment the Anthropic posts made it out to be. They questioned why these sorts of advances are often attributed to malicious hackers when white-hat hackers and developers of legitimate software keep reporting only incremental gains from their use of AI.

“I continue to refuse to believe that attackers are somehow able to get these models to jump through hoops that nobody else can,” Dan Tentler, executive founder of Phobos Group and a researcher with expertise in complex security breaches, told Ars. “Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?”

The Future of AI in Higher Education
AP News, EINPresswire, November 14, 2025

WASHINGTON, DC, UNITED STATES, November 14, 2025 / EINPresswire.com / — In the latest episode of CAIO Connect, host Sanjay Puri speaks with Dr. Amarda Shehu, the inaugural Vice President and Chief AI Officer at George Mason University. Dr. Shehu’s story is one of resilience, intellect, and vision—from studying geometry by candlelight in Albania to leading one of America’s most comprehensive university-wide AI transformation strategies. She discusses the democratization of knowledge, AI literacy, and why “Humanity First” should remain the guiding principle for AI leadership.

A Journey from Resilience to Leadership

Dr. Shehu’s early life in Albania taught her that education is both survival and liberation. Those formative experiences continue to shape her mission today: ensuring that access to AI education is not a privilege, but a right. As she notes, “Technology should empower people, not replace them.” This belief underpins her leadership philosophy—combining rigor with empathy and innovation with inclusion.

At George Mason University, Dr. Shehu’s vision has transformed how faculty, staff, and students engage with artificial intelligence. Her leadership is a reminder that AI in education isn’t just about algorithms—it’s about building equitable systems of learning that reflect human values.

Democratizing AI for All

A cornerstone of Mason’s AI strategy is accessibility. Under Dr. Shehu’s direction, the university launched “AI for All,” an introductory course designed to make AI concepts approachable for students across disciplines—even those with no technical background.

The initiative has ignited enthusiasm across the campus, equipping students with the literacy to understand, question, and collaborate with AI. It’s an embodiment of Dr. Shehu’s mission to close the gap between technical and non-technical learners, ensuring that every student—from arts to engineering—can thrive in an AI-driven world.

The Evolving Role of the Chief AI Officer

As the university’s first Chief AI Officer, Dr. Shehu stands at the crossroads of technology, policy, and education. Her role, she explains, differs from its corporate counterpart: while industry focuses on speed and efficiency, academia must balance ethics, transparency, and long-term societal impact.

Her work extends beyond strategy—it’s about building trust. The university’s secure internal platform, Patriot AI, reflects that commitment to privacy, ethical use, and institutional accountability. “Innovation without integrity isn’t progress,” she emphasizes.

AI, Critical Thinking, and the Human Mind

One of Dr. Shehu’s core concerns is preserving critical thinking in the age of AI assistance. Rather than viewing AI as a threat to intellectual rigor, she sees it as a partner in learning—a tool to enhance curiosity and deepen understanding. Her approach encourages educators to integrate AI in ways that sharpen, not soften, cognitive skills.

Leading with Humanity

As Mason continues to advance its AI ecosystem, Dr. Shehu’s philosophy remains clear: Humanity must lead technology, not the other way around. For her, the future of AI in higher education depends on compassion, community, and courage.

“AI will change how we teach and learn,” she reflects, “but it must never change who we are.”

Through this inspiring conversation on CAIO Connect, Sanjay Puri and Dr. Amarda Shehu remind us that the real goal of AI in education isn’t automation—it’s amplification of human potential.

Google Sues to Disrupt Chinese SMS Phishing Triad
Krebs on Security, Brian Krebs, November 13, 2025

Google is suing more than two dozen unnamed individuals allegedly involved in peddling a popular China-based mobile phishing service that helps scammers impersonate hundreds of trusted brands, blast out text message lures, and convert phished payment card data into mobile wallets from Apple and Google.

In a lawsuit filed in the Southern District of New York on November 12, Google sued to unmask and disrupt 25 “John Doe” defendants allegedly linked to the sale of Lighthouse, a sophisticated phishing kit that makes it simple for even novices to steal payment card data from mobile users. Google said Lighthouse has harmed more than a million victims across 120 countries.

Lighthouse is one of several prolific phishing-as-a-service operations known as the “Smishing Triad,” and collectively they are responsible for sending millions of text messages that spoof the U.S. Postal Service to supposedly collect some outstanding delivery fee, or that pretend to be a local toll road operator warning of a delinquent toll fee. More recently, Lighthouse has been used to spoof e-commerce websites, financial institutions and brokerage firms.

Google’s $32 Billion Wiz Deal Clears DOJ Review
Yahoo Finance, Moz Farooque ACCA, November 5, 2025

Wiz, the Israeli cloud security firm being acquired by Google-parent Alphabet (NASDAQ:GOOG), just cleared a big regulatory hurdle. The U.S. Justice Department has finished its antitrust review of the $32 billion deal, giving Alphabet the green light to move one step closer to closing its biggest acquisition ever.

“This is an important milestone, but we’re still in the journey between signing and closing,” Wiz CEO Assaf Rappaport said at a Wall Street Journal event on Tuesday. His comments suggest the two companies are making steady progress but still have a few boxes to check before the deal becomes official.

The acquisition is a huge move for Alphabet as it looks to strengthen its cloud and cybersecurity capabilities, especially in a market where competition with Amazon (NASDAQ:AMZN) and Microsoft (NASDAQ:MSFT) is heating up. Bloomberg had previously reported that the DOJ was reviewing the deal to ensure it wouldn’t hurt competition in the cybersecurity space.

With that review now behind them, Alphabet and Wiz are one step closer to sealing one of the biggest tech deals of the year.
 

Commonwealth Cyber Initiative helps launch new start-ups
George Mason News, Michele McDonald, November 4, 2025

The Commonwealth Cyber Initiative Accelerator (CCI+A) program is a jam-packed, five-month innovation program that’s fueling cybersecurity entrepreneurship across Virginia. It’s teaching professors and start-ups what it takes to bring technology to market, including how to pitch to a panel of investors and industry leaders. 

“Supporting cybersecurity start-ups and bringing ideas out of universities and into the marketplace are an essential part of the CCI’s mission,” said Luiz DaSilva, executive director of CCI. “We’re excited to help these start-ups reach their milestones and create new opportunities and jobs in the commonwealth.” 

The program began at George Mason University and has expanded statewide. It’s co-funded by the CCI Northern Virginia Node and the CCI Hub. 

“Since its launch at George Mason University in 2022, CCI+A has accelerated 34 technologies from across Virginia,” said Liza Wilson Durant, CCI Northern Virginia Node director, associate dean of strategic initiatives and community engagement at George Mason, and College of Engineering and Computing professor. “Each year we’ve seen the teams increase their customer engagement and elevate their competitive pitch performance. It’s exciting to see several entities undergo acquisition and see the impact of the program on Virginia economic development.”

The CCI+A program works on two tracks—the CATAPULT Fund supports teams emerging from Virginia public research universities and the ASCEND Fund focuses on start-up teams collaborating with faculty subject matter experts. The teams receive up to $75,000 to help them commercialize their technology. 

CCI funded 10 projects this year. On October 16, teams from both tracks competed to pitch their start-ups to a panel of investors and industry leaders. The winning team from each category received an additional $5,000 in funding. 

  • The CATAPULT winning team is DeepScan, led by Rui Ning, an assistant professor in computer science at Old Dominion University (ODU). The team also includes Maia Lin and Yao Wang from ODU. 

  • The ASCEND winning team is Glacier21, a start-up led by CEO Ren McEachern, a former FBI agent. Team members include Robert Appleton, Mike Borowski, and Neil Alexander, along with George Mason professor Foteini Baldimtsi. 

The CCI+A program is coordinated by Gisele Stolz, senior director of entrepreneurship and innovation at George Mason. 

Meet the Winners 

DeepScan 

While artificial intelligence is quickly becoming a requirement on our smartphones and other devices, security is lagging, said Ning, DeepScan team lead. Hidden triggers can hijack behavior, and many apps ship models that are easy to tamper with. DeepScan promises to protect on-device AI without slowing down performance. First-time winner Ning is a veteran of CCI+A and put what he learned from a past program to good use. 

“The program taught me how to communicate technical ideas in a way that connects with broader audiences,” Ning said. “I’ve always been curious about entrepreneurship because it feels like a natural way to translate research into something that can make a difference. It’s another path to extend our impact beyond papers and grants.” 

Glacier21 

Glacier21 is focusing on combating illicit cryptocurrency activities. The platform integrates data from such sources as social media, data leaks, and the deep/dark web to uncover complex connections between crypto wallets, businesses, and individuals. 

CCI+A helped the team with the execution, McEachern said. “When we were lucky enough to be selected into the CCI+A program, it didn’t just open up some resources; it opened up access to people and vast networks that totally changed our strategy and our thinking.” 

Foteini Baldimtsi, an associate professor of computer science at Mason, worked with Glacier21 to provide technical expertise. “It’s very nice to see how your academic work can help a start-up and give technical knowledge,” Baldimtsi said. “But I think it’s also very interesting as an academic to learn from this project and from the CCI+A program about how you can bring your own ideas and how you can participate in one of the next cohorts and be on the other side as the founder of the company. I think this experience has been very, very valuable.” 

AI is changing who gets hired
The Conversation, Murugan Anandarajan, October 27, 2025

The consulting firm Accenture recently laid off 11,000 employees while expanding its efforts to train workers to use artificial intelligence. It’s a sharp reminder that the same technology driving efficiency is also redefining what it takes to keep a job.

And Accenture isn’t alone. IBM has already replaced hundreds of roles with AI systems, while creating new jobs in sales and marketing. Amazon cut staff even as it expands teams that build and manage AI tools. Across industries, from banks to hospitals and creative companies, workers and managers alike are trying to understand which roles will disappear, which will evolve and which new ones will emerge.

I research and teach at Drexel University’s LeBow College of Business, studying how technology changes work and decision-making. My students often ask how they can stay employable in the age of AI. Executives ask me how to build trust in technology that seems to move faster than people can adapt to it. In the end, both groups are really asking the same thing: Which skills matter most in an economy where machines can learn?

To answer this, I analyzed data from two surveys my colleagues and I conducted over this summer. For the first, the Data Integrity & AI Readiness Survey, we asked 550 companies across the country how they use and invest in AI. For the second, the College Hiring Outlook Survey, we looked at how 470 employers viewed entry-level hiring, workforce development and AI skills in candidates. These studies show both sides of the equation: those building AI and those learning to work with it.

AI is everywhere, but are people ready?

More than half of organizations told us that AI now drives daily decision-making, yet only 38% believe their employees are fully prepared to use it. This gap is reshaping today’s job market. AI isn’t just replacing workers; it’s revealing who’s ready to work alongside it.

Our data also shows a contradiction. While many companies now depend on AI internally, only 27% of recruiters say they’re comfortable with applicants using AI tools for tasks such as writing resumes or researching salary ranges.

In other words, the same tools companies trust for business decisions still raise doubts when job seekers use them for career advancement. Until that view changes, even skilled workers will keep getting mixed messages about what “responsible AI use” really means.

In the Data Integrity & AI Readiness Survey, this readiness gap showed up most clearly in customer-facing and operational jobs such as marketing and sales. These are the same areas where automation is advancing quickly, and layoffs tend to occur when technology evolves faster than people can adapt.

At the same time, we found that many employers haven’t updated their degree or credential requirements. They’re still hiring for yesterday’s resumes while tomorrow’s work demands fluency in AI. The problem isn’t that people are being replaced by AI; it’s that technology is evolving faster than most workers can adapt.

Fluency and trust: The real foundations of adaptability

Our research suggests that the skills most closely linked with adaptability share one theme: what I call “human-AI fluency.” This means being able to work with smart systems, question their results and keep learning as things change.

Across companies, the biggest challenges lie in expanding AI, ensuring compliance with ethical and regulatory standards and connecting AI to real business goals. These hurdles aren’t about coding; they’re about good judgment.

In my classes, I emphasize that the future will favor people who can turn machine output into useful human insight. I call this digital bilingualism: the ability to fluently navigate both human judgment and machine logic.

What management experts call “reskilling” – or learning new skills to adapt to a new role or major changes in an old one – works best when people feel safe to learn. In our Data Integrity & AI Readiness Survey, organizations with strong governance and high trust were nearly twice as likely to report gains in performance and innovation. The data suggests that when people trust their leaders and systems, they’re more willing to experiment and learn from mistakes. In that way, trust turns technology from something to fear into something to learn from, giving employees the confidence to adapt.

According to the College Hiring Outlook Survey, about 86% of employers now offer internal training or online boot camps, yet only 36% say AI-related skills are important for entry-level roles. Most training still focuses on traditional skills rather than those needed for emerging AI jobs.

The most successful companies make learning part of the job itself. They build opportunities to learn into real projects and encourage employees to experiment. I often remind leaders that the goal isn’t just to train people to use AI but to help them think alongside it. This is how trust becomes the foundation for growth, and how reskilling helps retain employees.

The new rules of hiring

In my view, the companies leading in AI aren’t just cutting jobs; they’re redefining them. To succeed, I believe companies will need to hire people who can connect technology with good judgment, question what AI produces, explain it clearly and turn it into business value.

In companies that are putting AI to work most effectively, hiring isn’t just about resumes anymore. What matters is how people apply traits like curiosity and judgment to intelligent tools. I believe these trends are leading to new hybrid roles such as AI translators, who help decision-makers understand what AI insights mean and how to act on them, and digital coaches, who teach teams to work alongside intelligent systems. Each of these roles connects human judgment with machine intelligence, showing how future jobs will blend technical skills with human insight.

That blend of judgment and adaptability is the new competitive advantage. The future won’t just reward the most technical workers, but those who can turn intelligence – human or artificial – into real-world value.

AI ransomware attacks are coming
Axios+ Newsletter, Sam Sabin, October 21, 2025

Ransomware gangs are embedding AI into their workflows, allowing them to fine-tune and amplify attacks that have already stolen billions from U.S. corporations.

Why it matters: Most cases of cyber criminals using AI are still outliers, security responders say, but AI tools promise to accelerate the attacks that have wreaked havoc across industries.

The big picture: Ransomware gangs experiment with generative AI to negotiate ransoms, write code and sharpen social engineering attacks.

  • Security analysts at cybersecurity firm ReliaQuest said in a report today that 80% of the ransomware-as-a-service groups they observe are now offering automation or other AI tools on their platform.
  • NYU researchers showed in August they could use local LLMs to “autonomously plan, adapt and execute” a ransomware attack.
  • Palo Alto Networks observed criminals using AI-generated audio and video to impersonate employees and gain access before deploying ransomware.

Anonymity’s ARX nemesis
George Mason News, Nathan Kahl, October 20, 2025

A team of faculty and students from George Mason University recently discovered a vulnerability in a widely used anonymization tool. They presented their findings last week in Taiwan at the Association for Computing Machinery Conference on Computer and Communications Security (ACM CCS), one of the world’s most prestigious computer security conferences, with a very low paper acceptance rate.

The project was supported by a Commonwealth Cyber Initiative (CCI) grant under the program “Securing Interactions between Humans and Machines,” and, as the grant required, the project spanned different parts of the university. The College of Engineering and Computing collaborated with Mason and Partners (MAP) Clinics, which provided the data.

When George Mason University cyber security engineering student Noah Hinger interned at Surefire Cyber in summer 2023, his managers were so impressed with his work that they invited him back. Thanks to his previous experience, the Honors College student was able to take on more responsibility this summer at the computer security company and contribute to more projects. 

“The experience from my first summer here let me understand what was happening and contribute across the company,” said Hinger, who is a sophomore in the College of Engineering and Computing. “Surefire Cyber has an amazing internship program. They’re investing in us by having professional development meetings and providing us with the opportunity to talk to experts from different fields.” 

“I think it’s helped me a lot to be able to practice my independence and accountability,” said Hinger, who also competes in George Mason’s Chess and Competitive Cyber Clubs. “I’ve had the chance to do projects on my own without necessarily needing to report to someone, and I’ve also learned so much from all the smart people at Surefire Cyber.” 

What made you choose Surefire Cyber for your internships? 

Surefire Cyber is a digital forensics and incident response company. When an organization gets hacked, they call Surefire Cyber to conduct an investigation and restore systems so that everything’s OK. As a technology intern, I am helping develop the code that our forensic analysts use to figure out what’s happened and retrieve all the essential data. Surefire Cyber is really exceptional at using a lot of automation tools so that the forensic analysts can focus more on the big picture and the data they’re working with.  

What does a typical day interning at Surefire Cyber look like for you? 

I work under two groups: software development and information operations, or DevOps, and security engineering. For DevOps, I help to manage company infrastructure and work on a lot of coding assignments.  

Traditionally, there are software developers, and there are operations, which maintain it. DevOps is kind of a hybrid role, which means it entails coding, setting up servers, and being responsible for maintaining the infrastructure.  

How would you say George Mason has helped prepare you for this role? 

I definitely think more critically when problem-solving this time around, and I think that’s in part due to George Mason, especially the Honors College research and literature classes that I’ve taken. I’m learning about a lot of new technologies in real time, so I have to be able to research and read about them and then apply that knowledge to my work. The courses have definitely helped me when it comes to absorbing the information I’m reading about and then transferring it to my assignments at Surefire Cyber.  

Another thing about the Honors College is that I get to meet different people from different disciplines, and that’s really helped me at Surefire Cyber when connecting and networking with colleagues. I think that’s been really rewarding and something that George Mason’s helped prepare me for.  

I’ve also gotten a lot of experience in my systems engineering and digital systems engineering classes, and getting to see what goes into a lot of security minded decisions is very similar to what I’m doing in this internship.  

What’s your favorite thing about interning at Surefire Cyber? 

My favorite thing about the company is that it’s always driving innovation and new discoveries within cybersecurity and digital forensics. They’re always pushing for the best and trying to help people. The whole point of digital forensics is to be there for people and help them out on the worst day of their life.  

A one-woman team puts many eyes in the skies
George Mason News, Nathan Kahl, October 1, 2025

Fatima Majid, a George Mason University senior majoring in cyber security engineering, was not only the sole one-person team among the top 10 award winners at a recent National Defense Industrial Association (NDIA) cyber competition; she was also the only student team. Majid placed ninth out of 51 teams, most of them comprising industry experts.  

“I went in, and they were all professional teams, like from Lockheed Martin. I thought, ‘I want to go home,’” said Majid with a laugh, describing her initial cold feet. “But I told myself I could do it. It helped that it was hosted at George Mason, and I had professors there giving me support.”  

Her project focused on how the Department of Defense (DoD) can protect critical U.S. infrastructure against low-cost drone attacks at scale, informed by Ukraine’s “Operation Spiderweb,” which used 117 drones to attack Russian air bases in June 2025.  

Majid’s lightbulb moment came with a flash as bright as a Virginia speed camera catching a lead-footed driver. Considering the commonwealth’s significant network of traffic cameras, she conceived SkyEyes, which applies an artificial intelligence (AI) model to the live feed of the Virginia 511 camera network that provides real-time traffic information to citizens and transportation officials. SkyEyes demonstrated how a low-cost, AI-enabled surveillance layer could differentiate threats from non-threats, employing geofencing logic to define safe versus threat zones around sensitive sites. 

“I understand how drones work because of what I’ve done at George Mason’s MIX lab—and since I know how to build it, I also know how to jam it. I trained the AI model on a data set provided by a contest sponsor, and that data set had drone imaging and drone prototyping,” she said. “The camera feeds can find objects flying, but what if it’s an Amazon drone, for example? Then I added geofencing and threat analysis to observe the behavior of the drone—if it’s a drone at 2 a.m., for example, maybe that’s sketchy. So, the model gets smarter.”  
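
As a rough illustration of the geofencing-plus-behavior idea Majid describes, here is a minimal sketch in Python; the zone coordinates, radius, and “after hours” rule are illustrative assumptions for demonstration only, not details of the actual SkyEyes model.

    # Minimal sketch: classify a detected flying object by whether it is inside a
    # sensitive geofenced zone and whether the sighting happens at an odd hour.
    # All values below are hypothetical placeholders, not SkyEyes parameters.
    from dataclasses import dataclass
    from datetime import datetime
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class GeofenceZone:
        name: str
        lat: float
        lon: float
        radius_km: float  # detections inside this radius are treated as sensitive

    def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
        """Great-circle distance between two lat/lon points, in kilometers."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    def classify_detection(lat: float, lon: float, when: datetime,
                           zones: list[GeofenceZone]) -> str:
        """Label a detected flying object as 'threat', 'suspicious', or 'benign'."""
        inside_zone = any(haversine_km(lat, lon, z.lat, z.lon) <= z.radius_km for z in zones)
        after_hours = when.hour < 5 or when.hour >= 23  # e.g., a drone at 2 a.m.
        if inside_zone and after_hours:
            return "threat"
        if inside_zone or after_hours:
            return "suspicious"
        return "benign"

    # Hypothetical sensitive site and a 2 a.m. sighting nearby.
    zones = [GeofenceZone("substation-A", 38.83, -77.31, 1.0)]
    print(classify_detection(38.835, -77.305, datetime(2025, 6, 1, 2, 0), zones))  # -> threat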

Majid said that to access the cameras, all she had to do was make a phone call to the right person and explain her project. She cited time spent this summer at the Virginia Academy of Science, Engineering, and Medicine in the Undergraduate Policy Program (VASEM UPP) in Richmond as giving her confidence and exposure to how government works.  

K. L. Akerlof, an associate professor in the Department of Environmental Science and Policy at George Mason, said, “The VASEM UPP is a unique opportunity for undergraduates to learn about opportunities in science policy at the state level. The immersive experience of spending a week in Richmond visiting the General Assembly and state agencies, while getting a crash course in how research evidence relates to public policy, can open new doors and career pathways.” 

Majid said the strong showing gave her tremendous exposure to influential professional contacts. She fielded several questions about her simulation and future professional plans from a man she only later realized was Retired Brigadier General and NDIA Executive Vice President Guy Walsh. 

“Because he showed interest, after he walked away, a crowd of people gathered around to ask me questions. It was very validating.”  

She also had a long conversation about her project with Harley Stout, acting chief digital and AI officer at the Joint Chiefs of Staff. 

Majid is still pleasantly shocked by her top-10 finish. She is working with Mohamed Gebril, an associate professor in Cyber Security Engineering, on expanding the research. She is confident that such a cost-efficient solution for critical infrastructure protection against drone attacks will attract more funding opportunities.  

She credited the supportive culture at Mason—and her family—for keeping her grounded and encouraging her throughout, saying their support made the accomplishment even more meaningful.

Cybersecurity student hopes to use his powers for good
Mason News, Shayla Brown, August 25, 2025

When George Mason University cyber security engineering major Connor Wadlin learned about ransomware attacks on organizations, such as the one on the Health Service Executive in Ireland, in his CYSE 445 System Security and Resilience class, it confirmed his commitment to dedicating his educational and professional career to protecting and preserving human lives.

“There’s nothing more important than protecting and defending others. As an engineer, I’m driven to get important work done by thinking about complex problems and finding suitable solutions,” said Wadlin, who is from Leesburg, Virginia.

Since winter 2024, the Honors College student has been interning at the Commonwealth Cyber Initiative (CCI) Northern Virginia Node, George Mason’s branch of the statewide network dedicated to excellence in cybersecurity research. CCI’s mission includes workforce development through training the next generation of cybersecurity experts.

“It’s a super exciting job because I get to work with AprilTags, which are on objects that the drone’s camera then sees and scans. Instead of sharing data, the tags utilize location information for navigation, tracking objects, or pathing purposes,” he said.
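
As a rough sketch of how such fiducial-tag detection is typically wired up, the snippet below uses the open-source pupil-apriltags detector with OpenCV; the tag family, camera source, and tag-ID-to-waypoint mapping are illustrative assumptions, not details of the CCI drone project.

    # Minimal sketch: detect AprilTags in a camera feed and map tag IDs to known
    # locations. Assumes the pupil-apriltags and opencv-python packages; the tag
    # family, camera index, and waypoint table are hypothetical placeholders.
    import cv2
    from pupil_apriltags import Detector

    WAYPOINTS = {0: "launch pad", 3: "charging dock"}  # hypothetical tag-ID mapping

    detector = Detector(families="tag36h11")  # a commonly used AprilTag family
    cap = cv2.VideoCapture(0)                 # placeholder camera source

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for det in detector.detect(gray):
            # Each detection carries the tag ID and its pixel-space center, which a
            # navigation stack could translate into a known location or waypoint.
            cx, cy = det.center
            label = WAYPOINTS.get(det.tag_id, "unknown tag")
            print(f"tag {det.tag_id} ({label}) at pixel ({cx:.0f}, {cy:.0f})")
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()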

Wadlin is also simulating drone flight with the Microsoft tool AirSim, a project he presented at the CCI Symposium in April. “I created a model with a 98% accuracy, really high F1 score—higher than what we could find on the market—detecting collisions so the drones would be able to respond to anomalous factors such as objects that get too close, environmental variables, cyber-attacks, and more,” he explained.

Wadlin learned about many of the tools he’s currently using for CCI in his classes with College of Engineering and Computing professors, such as his mentor Mohamed Gebril, an associate professor in the Department of Cyber Security Engineering. 

“George Mason supports people where they are to get them where they want to be,” Wadlin said.  

The skills Wadlin has acquired during his time at George Mason and in his work with CCI have enabled him to help other students in their studies.  

“Connor is a very skilled student and has been able to develop different programs, as well as 12 labs for sophomore- and freshman-level students at George Mason. He even assists the students during our workshops,” said Gebril. 

Wadlin is participating in George Mason’s Bachelor’s to Accelerated Master’s Program and will pursue a master’s degree, also in cyber security. Gebril said he’s looking forward to having Wadlin in his classes again as a graduate student. 

“It will be a smooth transition from the undergraduate to the graduate level because the curriculum aligns well with the CCI mission, which is to equip our students with the tools to conduct research activity and develop cutting-edge technology,” said Gebril. 

Wadlin’s team is also working to develop a first-of-its-kind cyber drone race that incorporates cybersecurity challenges and artificial intelligence for undergraduate students.  

Wadlin was diagnosed with autism at 19 and sees this diagnosis as working to his advantage by allowing him to see things from different perspectives and approach problems with his own unique ideas. 

“As an engineer, you have to ask yourself ‘how is this making the world a better place?’ That’s always got to be the end goal,” said Wadlin. 

The Rise of the Enterprising Adversary

Get actionable intelligence on these key findings:

  • 150% increase in China-nexus activity across all sectors
  • 442% growth in vishing operations between the first and second half of 2024
  • 51 seconds was the fastest recorded eCrime breakout time
  • 79% of detections were malware-free
  • 26 newly named adversaries in 2024
  • 52% of vulnerabilities observed by CrowdStrike in 2024 were related to initial access

Albanese named executive director of IDIA
George Mason News, Pam Shepherd, September 10, 2025

George Mason University has appointed Massimiliano Albanese as executive director of the Institute for Digital Innovation (IDIA)—a pivotal move as the university strengthens its position as a leader in cutting-edge research and technological advancement.

Albanese, who joined George Mason in 2011, currently serves as a professor and associate chair for research in the Department of Information Sciences and Technology within the College of Engineering and Computing. For over a decade, he has served as associate director of the Center for Secure Information Systems, where he has played a critical role in shaping the university’s research strategy in cybersecurity and information technology.

A recognized expert in cyberattack modeling and detection, optimal defense strategies, and adaptive security technologies, Albanese brings a deep understanding of digital systems to his new role. His research portfolio includes participation in projects totaling $13 million, six U.S. patents, two books, and 90 peer-reviewed publications. He is a recipient of George Mason’s Emerging Researcher/Scholar/Creator Award and earned his MS and PhD in computer science and engineering from the University of Naples Federico II, Italy.

“Dr. Albanese is an outstanding leader and researcher who understands the importance of collaboration and innovation in driving progress,” said Andre Marshall, vice president for research, innovation, and economic impact. “His depth of expertise in cybersecurity and digital systems, combined with his proven ability to foster interdisciplinary partnerships, makes him uniquely suited for this role. Under his leadership, we look forward to strengthening IDIA’s mission of advancing digital innovation, expanding cross-disciplinary collaboration across the university, and positioning George Mason as a national leader in solving complex technological challenges.”

Albanese steps into this role at a crucial moment for both George Mason and the technology landscape—particularly with the rise of artificial intelligence (AI) and other emerging technologies. His mission is clear: to drive impact through collaboration and to position George Mason at the forefront of digital innovation.

“This is a very interesting time to be in this position,” Albanese said. “By connecting digital innovation with AI and other emerging technologies, we can make a real difference—not just at George Mason, but for the nation and the world.”

AI’s rapid advancement offers tremendous opportunities as well as complex challenges, he said.

Albanese’s vision for IDIA centers on building a culture of collaboration that unites faculty, students, researchers, and external stakeholders. He said he plans to start by strengthening partnerships with the university’s other research centers and institutes.

The university’s Grand Challenge Initiative (GCI) provides opportunities to apply digital innovation to critical sectors.

“None of these solutions can be achieved without a collaborative mindset because they are inherently complex and multidisciplinary. We are at a point in time of rapid AI growth that is changing the way we approach everything: AI and digital innovation will play a critical role in advancing GCI.”

Another priority for Albanese is diversifying IDIA’s funding sources in response to tighter federal budgets. He said he intends to strengthen existing partnerships with industry and nonprofits, and develop new public-private collaborations to ensure the institute remains resilient and impactful. He notes that as funding becomes more challenging to secure, the university must become more efficient. And one way to do that is for “IDIA to work closely with other institutes and research centers on campus to increase awareness of who is doing what and join forces to have a better impact.”

Albanese sees IDIA as a critical driver in elevating George Mason’s reputation as a leading public research university. His strategy includes promoting technology transfer, supporting start-ups, and creating stronger connections between faculty and industry partners to bring innovations from the lab to the marketplace.

“There is a lot of competition to attract students and resources, and we must establish ourselves as the lead,” he said. “IDIA can help put George Mason at the forefront of research by leveraging our strengths and bringing talented people together to solve big problems.”

Looking ahead, Albanese encourages students and researchers to adopt a problem-driven approach to innovation and become problem solvers. “We should reach out to stakeholders with real-world challenges and develop solutions that truly address those needs.”

By fostering collaboration, driving interdisciplinary research, and forging strong partnerships with industry and government, Albanese aims to position IDIA—and the university—as a national leader in solving complex, real-world problems through technology.

The great divide
CNBC, CNBC Television, June 3, 2025 (04:00)

Karen Hao, author of the new book “Empire of AI”, discusses the clear dividing line between those in the tech space who believe AI can lead to utopia and those who think it will only create massive problems, and perhaps the end of the human race.

AI’s Trillion Dollar Cyber Opportunity
Digital Spirits, Matthew Mittelsteadt, July 21, 2025

Enable AI. Reduce cybercrime. Unleash abundance

Perhaps the biggest near-term AI opportunity is reducing cybercrime costs. With serious attacks unfolding almost daily, digital insecurity’s economic weight has truly grown out of control. Per the European Commission, global cybercrime costs in 2020 were estimated at 5.5 trillion euros (around $6.43 trillion). Since then, costs have only spiraled. In 2025, Cybersecurity Ventures estimates annual costs will hit $10 trillion, a showstopping 9 percent of global GDP. As Bloomberg notes, global cybercrime is now the world’s third-largest economy. This is truly an unrivaled crisis.

Thankfully, it is also an unrivaled opportunity. Given the problem’s sheer scale, any technology, process, or policy that shaves off just a sliver of these cyber costs has percentage point growth potential. Reduce cyber threats, and abundance will follow.

The immense potential of software translation is far from the only near-term AI opportunity. Already, studies have proven AI can automate vulnerability detection—that is, AI can discover serious security issues without human involvement. Soon, software could be proactively secured even before it ships. Likewise, advances in AI task completion suggest software patches could soon be automated. In a few years, software fixes could be generated and shipped just moments after insecurities are discovered. Beyond that, we find countless other possibilities in advanced cyber intelligence, threat detection, real-time response, and more.
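
As a hedged illustration of what LLM-assisted vulnerability triage can look like today, here is a minimal sketch; it assumes the openai Python package and an API key in the environment, and the model name, prompt, and vulnerable snippet are placeholders rather than anything from the article.

    # Minimal sketch: ask an LLM to review a code snippet for likely vulnerabilities.
    # Assumes OPENAI_API_KEY is set; the model name and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SNIPPET = '''
    def get_user(conn, username):
        cur = conn.cursor()
        cur.execute("SELECT * FROM users WHERE name = '" + username + "'")  # unsanitized input
        return cur.fetchone()
    '''

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("You are a security reviewer. List likely vulnerabilities in "
                         "the code, with CWE IDs and a one-line suggested fix for each.")},
            {"role": "user", "content": SNIPPET},
        ],
    )
    print(response.choices[0].message.content)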