
what method did logan use to analyze his phishing email simulation

There is not enough reliable public information to identify a specific person named “Logan” or to determine the exact method he used to analyze “his” phishing email simulation.

However, in cybersecurity training and phishing simulations, people typically use a fairly standard set of analysis methods, and it is very likely that “Logan” (whoever he is in your context) used some combination of these.

Likely method Logan used

In most phishing email simulation projects, an analyst will typically:

  • Collect simulation data
    • Export CSV or similar reports from the simulation platform (e.g., KnowBe4 or other tools) showing who opened, clicked, reported, or ignored simulated phishing emails.
    • Include timestamps, campaign names, and user attributes (department, site, role).
  • Preprocess and clean the data
    • Use a script (often Python) to anonymize users, normalize campaign names, and merge multiple campaign exports into a single data frame.
    • Handle changed email addresses or duplicate records so that each user has a consistent identifier.
  • Run quantitative analysis on behavior
    • Calculate click‑through rate, report rate, open rate, and time‑to‑report for each campaign and group.
    • Compare results across campaigns to establish a baseline “band” (e.g., a typical unsafe‑click range and reporting range) instead of relying on a single campaign’s number.
  • Segment users and campaigns
    • Group users by department, seniority, or location to see where risk is highest.
    • Compare easy vs. harder templates (e.g., generic vs. highly contextual lures) to see how difficulty affects failure rates.
  • Correlate with real incidents
    • Compare simulation outcomes with real phishing incidents reported in the same time frame to validate whether the simulation reflects real‑world behavior.
  • Feed into training and improvements
    • Identify who needs extra training and which patterns (urgency, fake invoices, password resets) confuse users most.
    • Adjust future simulations and awareness content based on these insights.
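The export‑clean‑measure loop above can be sketched in plain Python. This is a minimal illustration, not any real platform’s export format: the column names (`user_email`, `campaign`, `clicked`, `reported`) and the two inline “waves” are made‑up stand‑ins for the CSV files a simulation tool would produce.

```python
import csv
import hashlib
import io

# Stand-ins for two campaign-wave exports; a real analysis would read the
# CSV files exported from the phishing-simulation platform instead.
WAVE_1 = """user_email,campaign,clicked,reported
alice@corp.example,Q1 Test,1,0
bob@corp.example,Q1 Test,0,1
"""
WAVE_2 = """user_email,campaign,clicked,reported
alice@corp.example,Q1-Phish,0,1
bob@corp.example,Q1-Phish,1,0
"""

def anonymize(email):
    """Replace an address with a stable hash so users stay comparable across waves."""
    return hashlib.sha256(email.lower().encode()).hexdigest()[:10]

def normalize_campaign(name):
    """Collapse inconsistent labels ('Q1 Test', 'Q1-Phish') into one key."""
    return name.replace("-", " ").split()[0]  # crude: keep the quarter prefix

def load_wave(text):
    """Parse one export and return cleaned per-user rows."""
    return [
        {
            "user": anonymize(row["user_email"]),
            "campaign": normalize_campaign(row["campaign"]),
            "clicked": int(row["clicked"]),
            "reported": int(row["reported"]),
        }
        for row in csv.DictReader(io.StringIO(text))
    ]

# Merge all waves into one dataset, then compute behaviour metrics.
data = load_wave(WAVE_1) + load_wave(WAVE_2)

def rates(rows):
    """Click-through and report rates over a set of per-user rows."""
    n = len(rows)
    return {
        "click_rate": sum(r["clicked"] for r in rows) / n,
        "report_rate": sum(r["reported"] for r in rows) / n,
    }

print(rates(data))  # both waves normalize to "Q1", so this is the Q1 baseline
```

The same `rates` function can be applied to any slice of the merged rows (one campaign, one department), which is what the segmentation step amounts to in practice.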

Mini story-style explanation

Imagine Logan has just finished a month‑long phishing campaign in his company. The tool spits out four CSV files: one from each wave of emails. He writes a short Python script to anonymize everyone’s email address, fix inconsistent campaign naming (“Q1 Test”, “Q1-Phish”, etc.), and combine everything into one dataset with a row per user.

Next, he calculates what percentage of people clicked, who reported the emails, and how long it took them to report. He notices that a “fake invoice” scenario causes far more clicks than a basic password‑reset phish, so he marks invoice scams as a training priority. He then compares these numbers with the real phishing alerts the SOC received that month and sees that the departments with the worst simulation performance are also the ones being hit hardest in real life.
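The comparison with real SOC alerts can be sketched as a simple rank check. The per‑department numbers here are entirely hypothetical; real figures would come from the simulation platform and the SOC’s ticketing system respectively.

```python
# Hypothetical per-department figures (illustrative only).
sim_click_rate = {"finance": 0.32, "engineering": 0.08, "sales": 0.21}
real_incidents = {"finance": 14, "engineering": 3, "sales": 9}

# Rank departments under both views, worst first; matching orderings
# suggest the simulation reflects real-world exposure.
by_sim = sorted(sim_click_rate, key=sim_click_rate.get, reverse=True)
by_real = sorted(real_incidents, key=real_incidents.get, reverse=True)

print(by_sim == by_real)  # True here: finance > sales > engineering in both
```

With more departments, a proper rank‑correlation measure (e.g., Spearman’s rho) would be a more robust version of the same idea.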

Using that picture, Logan designs targeted micro‑trainings for those users, updates his future simulation templates to be more realistic (multi‑channel, contextual lures), and plans to re‑run the simulations to see if the risky behavior actually drops.

Direct answer (concise)

So, while the exact “Logan” from your question cannot be uniquely identified, the method such a person would typically use is:

Export and clean simulation data (usually with a scripting language like Python), then perform quantitative behavioral analysis (click rates, report rates, time‑to‑report) across user segments and campaigns, and correlate the results with real incidents to guide targeted training and improved future simulations.
