Glen Sorenson gave a 40-minute talk called The High Performance Fuel for Social Engineering: Now in AI Flavours. Glen was also behind the Zero Trust to Trusted Adviser workshop earlier in the conference, which used psychology to help practitioners sell security internally. This talk was the darker side of the same coin: the same understanding of people and personal information that builds trust inside an organisation can be weaponised by someone outside it with far more damaging intent.
The central argument was direct. Privacy, as most people understand it, is largely a myth. Not because regulation does not exist — but because the regulation that exists was never designed to make your personal information disappear.
The Privacy Illusion
Most people in Europe point to GDPR when they want to feel reassured about their data. The regulation is real. The right to erasure exists. But Glen walked through the caveats, exceptions, and open questions that sit inside both GDPR and US data law, creating gaps wide enough for personal data to flow through regardless of what the regulation says on the surface.
Data erasure sounds absolute. In practice it is conditional. Legitimate interest provisions, legal-obligation carve-outs, and research and public-interest exceptions all create situations where a deletion request generates a process rather than a deletion. The burden of following up sits entirely with the individual. Most people never follow up. Most requests are never fully executed.
The United States is in an even more exposed position. Without a unified federal privacy law, data brokers operate legally. People-search sites aggregate personal information and sell it openly. The onus is placed entirely on the individual to opt out across dozens of separate platforms. For most people that is not a realistic task. So the data stays. It accumulates. It gets enriched, cross-referenced, and sold again.
What That Data Becomes in the Wrong Hands
The OSINT exercise built around a fictional target called Tyson made the point with uncomfortable speed. In a couple of minutes, a complete personal dossier came together from entirely public sources: name, date of birth, addresses, phone numbers, email addresses, family connections (spouse, parents, children), work context, role, employer technologies, and implied system access.
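To make the shape of that dossier concrete, here is a minimal sketch of the kind of record such an exercise produces. The field names and placeholder values are mine, not the talk's; they are only meant to show how loosely related public facts compose into a single, structured profile.

```python
from dataclasses import dataclass, field

@dataclass
class Dossier:
    """One target; every field assembled from public sources only."""
    name: str
    date_of_birth: str | None = None
    addresses: list[str] = field(default_factory=list)
    phone_numbers: list[str] = field(default_factory=list)
    emails: list[str] = field(default_factory=list)
    family: dict[str, str] = field(default_factory=dict)      # relation -> name
    employer: str | None = None
    role: str | None = None
    employer_technologies: list[str] = field(default_factory=list)
    implied_access: list[str] = field(default_factory=list)   # what the role suggests

# Placeholder values standing in for what the live exercise pulled together
tyson = Dossier(
    name="Tyson <surname from a people-search site>",
    addresses=["<home address from a data broker>"],
    phone_numbers=["<personal mobile from a people-search site>"],
    emails=["<personal address from a public profile>"],
    family={"spouse": "<from social media>", "parent": "<from social media>"},
    employer="<from LinkedIn>",
    role="<from LinkedIn>",
    employer_technologies=["<from job postings>"],
    implied_access=["<inferred from role and employer stack>"],
)
```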
The group behind the high-profile breaches at Okta, Nvidia, and Rockstar Games (Lapsus$) did not use sophisticated zero-day exploits. They reached out to employees on personal devices using personal contact details from public sources. Offered bribes via Signal and WhatsApp. Used home addresses and family information to make threats. Bombed personal phones with MFA push notifications at 2am until exhausted employees approved the login just to make it stop.
AI Changes the Scale of Everything
The PII weaponisation pipeline now runs end to end: target scouting → individual profile building → full dossier construction → target ranking by access level → personalised phishing content generation → campaign execution. What took a skilled human attacker hours or days of manual research now executes automatically, at a scale no manual operation could match.
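A minimal sketch of that pipeline's control flow, with every stage stubbed out. None of these functions exist as named; each stands in for a scraping, enrichment, or generation step that AI tooling can now fill in cheaply, which is exactly why the chain runs unattended.

```python
# Skeletal sketch of the pipeline. Every stage is a stub: the real versions
# are scraping, enrichment, and content-generation tools chained together.

def scout_targets(company: str) -> list[dict]:
    # Stage 1: enumerate employees of the target from public sources (stub)
    return [{"name": "tyson"}]

def build_dossier(target: dict) -> dict:
    # Stages 2-3: individual profile, then full PII dossier (stub)
    return {**target, "access_level": 3, "phone": "<personal mobile>"}

def generate_phish(dossier: dict) -> str:
    # Stage 5: personalised lure built from the dossier itself (stub)
    return f"Hi {dossier['name']}, about your manager..."

def send(dossier: dict, message: str) -> None:
    # Stage 6: delivery over personal channels (stub; prints instead)
    print(f"-> {dossier['phone']}: {message}")

def run_campaign(company: str) -> None:
    targets = scout_targets(company)
    dossiers = [build_dossier(t) for t in targets]
    # Stage 4: work through targets with the highest implied access first
    for dossier in sorted(dossiers, key=lambda d: d["access_level"], reverse=True):
        send(dossier, generate_phish(dossier))

run_campaign("example-corp")
```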
What This Means for Organisations
Security awareness training that teaches employees to spot generic phishing will not prepare them for a message that references their dog's name, their home neighbourhood, and their manager's name in the same paragraph. The attack is no longer impersonal. It is built from the employee's own publicly available life.
Glen's talk did not end with neat solutions, and that felt honest. There is no simple fix. What it argued for is a more serious organisational conversation about employee digital footprint as a security risk, not just a personal matter. The personal phone number your employee has on LinkedIn is not their problem alone. It is a potential entry point into your organisation.
Based on the session by Glen Sorenson at BSides Luxembourg 2026.