Thursday, April 3, 2025

Can security professionals effectively use generative AI without prompt engineering expertise?

As professionals across many sectors explore generative AI for tasks such as writing data security training materials, a pressing question arises: can the technology actually deliver the desired results?

Brian Callahan, senior lecturer and graduate program director in information technology and web sciences at Rensselaer Polytechnic Institute, together with Shoshana Sugerman, an undergraduate student in the same program, presented their findings on this topic at the ISC2 Security Congress in Las Vegas last October.

What would happen if ChatGPT were used to develop cybersecurity training that helps people upskill and reskill in a fast-changing field?

What skills do security professionals need to create effective AI-generated security training, and must they become prompt engineering experts themselves to do so?

To answer these questions, the researchers gave an identical task to three groups: security experts holding ISC2 certifications, self-identified prompt engineering experts, and people with both qualifications. Each group used ChatGPT to develop a cybersecurity awareness training program. The training was then distributed to the campus community, where participants were invited to give feedback on the material's effectiveness.

The researchers hypothesized that the training would be of equal quality across all three groups. If differences emerged, they would reveal which expertise mattered most. For example, prompts written by security experts might produce training with simpler language.
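The hypothesized difference in prompt styles can be illustrated with a small sketch. All prompt text below is invented for illustration; it is not taken from the study:

```python
# Hypothetical prompt builders contrasting two styles of expertise.
# Neither template comes from the study; both are invented examples.

def security_expert_prompt(topic: str) -> str:
    """A domain-expert style prompt: plain language, heavy on subject matter."""
    return (
        f"Write a short employee training module on {topic}. "
        "Cover password hygiene, social engineering red flags, and "
        "how to report a suspected phishing email."
    )

def prompt_engineer_prompt(topic: str) -> str:
    """A prompt-engineering style prompt: explicit role, format, constraints."""
    return (
        "You are a corporate security trainer. "
        f"Produce a training module on {topic} as 5 numbered slides, "
        "each with a title, 3 bullet points, and one real-world example. "
        "Do not invent organization-specific policies or URLs."
    )

print(security_expert_prompt("phishing awareness"))
print(prompt_engineer_prompt("phishing awareness"))
```

The second style constrains format and forbids fabricated specifics, which is the kind of difference the study set out to measure.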

While trainees overwhelmingly rated the material highly, ChatGPT struggled with accuracy.

The researchers distributed the resulting training, lightly edited but predominantly AI-generated content, to the Rensselaer community, including students, faculty, and staff.

The results showed that:

  • Participants who took the training designed by the prompt engineering experts rated themselves better at deflecting social engineering attacks and securing passwords.
  • Participants who took the training designed by the security experts rated themselves significantly better at recognizing and avoiding social engineering attacks, spotting phishing attempts, and responding promptly to cyber threats.
  • Participants who took the training designed by the dual experts reported a significant boost in their self-assessed ability to identify and counter cyber threats, including phishing attacks.

Notably, Callahan found it odd that people trained with the security experts' material believed they had become better at prompt engineering. Meanwhile, the people who designed the training did not generally rate the AI-generated content very highly.

Callahan noted that no one thought their first attempt at a ChatGPT-generated presentation was good enough to share with others. “It required a series of successive revisions.”

In one instance, ChatGPT produced seemingly comprehensive and coherent guidance on identifying phishing emails. Yet every piece of information on the slide was inaccurate: the AI had invented policies and an IT process for handling email that did not exist.

Asking ChatGPT to reference RPI's security portal drastically changed the content and produced accurate guidelines. The researchers told learners there had been an error in their training material and issued a correction; none of the trainees had caught the mistake, Sugerman noted.
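Pointing the model at the real policy source rather than letting it improvise is a common mitigation for this kind of fabrication. A minimal sketch of the idea, with a placeholder URL and placeholder policy text rather than RPI's actual portal content:

```python
# Minimal sketch of grounding a generation request in verified source text,
# so the model summarizes real policy instead of inventing one.
# POLICY_URL and POLICY_TEXT are placeholders, not RPI's actual portal.

POLICY_URL = "https://example.edu/security-portal"
POLICY_TEXT = (
    "Report suspected phishing by forwarding the message to the IT help desk. "
    "Never enter credentials on pages reached from unsolicited email links."
)

def grounded_prompt(task: str) -> str:
    """Embed the authoritative text in the prompt and forbid outside claims."""
    return (
        f"Using ONLY the policy below (source: {POLICY_URL}), {task}\n"
        "If the policy does not answer something, say so instead of guessing.\n"
        f"---\n{POLICY_TEXT}"
    )

print(grounded_prompt("write three slide bullets on reporting phishing."))
```

The constraint to admit gaps rather than guess matters as much as the included source text, since the failure mode here was confident fabrication.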

Deciding how to disclose AI-generated content in training materials

Callahan suggested that ChatGPT would know an organization's security policies if prompted correctly, since all of RPI's policies are publicly available online.

The researchers disclosed that the content was AI-generated only after the training was complete. Reactions were mixed, Callahan and Sugerman reported:

  • College students tended to be unfazed, increasingly expecting that some written material will be produced by AI.
  • Some people were apprehensive or wary.
  • Some noted the irony of a training program about data security having been created by AI.

Callahan emphasized that IT teams deploying AI-generated instructional materials in practice, rather than as a research experiment, should disclose their use of AI to the people receiving the training.

“We have tentative evidence that generative AI can be a worthwhile tool,” Callahan said. “But, like any tool, it does come with risks.” Parts of the training proved to be simply inaccurate, too broad, or too generic.

Limitations of the study

The study had several notable limitations.

“There is a lot of literature suggesting that generative AI models such as ChatGPT can create a false sense of learning, leading people to believe they have mastered material when they may not have.”

Rigorously testing participants' actual skills, rather than asking about their perceived learning, would have required a longer study, Callahan noted.

After the presentation, I asked whether Callahan and Sugerman had considered a control group whose training was written entirely by humans. They had, Callahan said. However, dividing the training creators into cybersecurity experts and prompt engineering experts was a crucial element of the study, and there were too few self-identified prompt engineering experts in the university community to assemble a comparable control group.

The conference presentation drew on an initial pool of 51 test-takers and three expert evaluators. In a follow-up email, Callahan told TechRepublic that the version prepared for publication will include more participants, as this initial run was an ongoing pilot study.
