Initializing secure connection_

Building Your Ultimate Defense.

> AI Pentesting · Application Security · Offensive Security · Bug Bounty
0+ Bugs Found
0+ CVEs Published
0+ Live Hacking Events
Scroll to explore

Who Am I

Pedro Paniago - AI Pentesting & Offensive Security Expert

Pedro Paniago

Offensive Security Expert based in Brussels, Belgium

I'm an Offensive Security Manager at PwC, freelance bug bounty hunter, and passionate cybersecurity professional, also known as drop in the bug bounty community. Problem-solving is my playground, and passion drove me where I am today.

With specializations in Web, API, Mobile, and AI Penetration Testing, I've identified over 1000 vulnerabilities across major platforms and published 10+ CVEs affecting millions of users worldwide. As a HackerOne Belgium Ambassador and Synack Red Team Member, I operate at the forefront of offensive security, AI red teaming, and LLM security research.

From DEFCON to LeHack Paris, from Shopify's h1-416 to Belgium's Hack The Government, I bring real-world attack experience, including adversarial testing of AI systems, to help organizations build their ultimate defense.

Brussels, Belgium
PwC • PANIAGO
French • Portuguese • English

Offensive Security Manager

PwC — Oct 2021 to Present

  • Web, API, Mobile and AI Pentesting
  • SOC L2, L3 & Threat Hunting
  • Digital Forensics

Freelance Bug Bounty Hunter

PANIAGO — May 2023 to Present

  • AI, Web, API and Mobile focus
  • 1000+ bugs in multiple programs
  • 10+ CVEs discovered

Certifications & Training

IN PROGRESS HTB Certified Web Exploitation Expert (CWEE)
2025 AI SecureOps: Attacking & Defending GenAI (BruCON)
CERT Certified AI/ML Pentester (GenAI Pentester)
CERT Certified Mobile Penetration Tester (CMPen-Android)
BSCP Burp Suite Certified Practitioner
2023 Hacking Modern Web & Desktop Apps (BruCON)
CBBH Certified Bug Bounty Hunter (HackTheBox)
eJPT eLearnSecurity Junior Penetration Tester

Offensive Security & AI Pentesting Services

Comprehensive offensive security services to protect your organization from real-world threats.

Web Penetration Testing

In-depth security assessment of web applications, enhanced with AI-augmented testing techniques. Identifying vulnerabilities like XSS, SQLi, SSRF, IDOR, and business logic flaws before attackers do.

  • OWASP Top 10
  • Business Logic
  • Authentication

API Penetration Testing

Thorough testing of REST, GraphQL, and SOAP APIs. Uncovering broken access controls, injection flaws, and data exposure risks in your API infrastructure.

  • REST / GraphQL
  • Access Control
  • Rate Limiting

Mobile Penetration Testing

Comprehensive security testing of Android and iOS applications. From static analysis to runtime manipulation, ensuring your mobile apps are bulletproof.

  • Android / iOS
  • Static Analysis
  • Runtime Testing

AI Red Teaming

Comprehensive AI pentesting and adversarial testing of AI/ML systems and LLM-powered applications. Employing AI hacking methodology to identify prompt injection, data poisoning, RAG system vulnerabilities, and conduct thorough LLM vulnerability assessments.

  • LLM Security
  • Prompt Injection
  • RAG Attacks

Cyber Awareness Training

Hands-on training sessions, CTF organization, and developer education on secure coding for AI and web applications. Helping teams build security into their workflow from day one.

  • CTF Events
  • Secure Coding
  • AI Security

Bug Bounty Program Management

End-to-end management of your bug bounty program. From policy creation to triage, leveraging experience from HackerOne, Synack, and Yogosha platforms.

  • Program Design
  • Triage
  • Policy

Companies I've Hacked

Responsibly disclosed vulnerabilities to some of the world's most recognized brands.

Shopify
Shopify
Red Bull
Red Bull
Evernote
Evernote
WeTransfer
WeTransfer
Belgium Defense
Belgium Defense
eBay
eBay
Notion
Notion
Banco Inter
Banco Inter
Santander
Santander
Chia Network
Chia Network

Published CVEs

CVE-2024-9944 5.3 Medium WooCommerce ≤ 9.0.2 — Unauthenticated HTML Injection
CVE-2024-1289 6.5 Medium LearnPress ≤ 4.2.6.3 — Insecure Direct Object Reference
CVE-2024-0386 7.2 High weForms ≤ 1.6.21 — Unauthenticated Stored XSS
CVE-2023-7063 7.2 High WPForms Pro 1.8.4-1.8.5.3 — Unauthenticated XSS
CVE-2023-6828 7.2 High ARForms ≤ 1.5.8 — Unauthenticated Stored XSS
CVE-2024-1463 4.4 Medium LearnPress ≤ 4.2.6.3 — Authenticated Stored XSS
CVE-2023-6957 4.9 Medium Fluent Forms ≤ 5.1.9 — Authenticated Stored XSS
CVE-2024-1128 5.4 Medium Tutor LMS ≤ 2.6.0 — HTML Injection via Q&A
CVE-2024-1133 4.3 Medium Tutor LMS ≤ 2.6.0 — Missing Authorization
CVE-2023-6953 4.9 Medium PDF Generator For Fluent Forms ≤ 1.1.7 — Stored XSS
CVE-2023-6830 6.5 Medium Formidable Forms ≤ 6.7 — HTML Injection
CVE-2023-6842 4.4 Medium Formidable Forms ≤ 6.7 — Authenticated Stored XSS

Battle-Tested Achievements

Proven track record at the world's most prestigious live hacking events and security conferences.

2026

BSides Limbourg

Speaker — "Up and Down Technique - Exposing Hidden Data from RAG Systems"

Speaker
2025

HackerOne h1-416 Shopify — Toronto, Canada

2nd place with 11 bugs identified at Shopify's exclusive live hacking event.

2nd Place
2025

DEFCON 33 Bug Bounty Village

Speaker — "Up and Down Technique - Exposing Hidden Data from RAG Systems"

Speaker
2025

Hack The Government — Belgium

6th Place with 21 bugs identified in Belgian government systems.

6th Place
2025

LeHack & Rooted Con

Live hacking events in Paris (YesWeHack) and Madrid (Yogosha). Speaker at LeHack Paris.

Speaker + Hacker
2024

Hack The Government — Belgium

3rd Place with 35 bugs identified. Top performer in Belgium's government security initiative.

3rd Place
2024

LeHack Paris & Rooted Con Madrid

6 bugs identified across both live hacking events organized by YesWeHack and Yogosha.

Live Hacker
2024

Howest University — Belgium

Speaker — "An initiation to 0day research" at Howest University of Applied Sciences.

Speaker

HackerOne Belgium Ambassador

Representing HackerOne in Belgium, bridging the gap between ethical hackers and organizations seeking to improve their security posture.

Synack Red Team Member

Selected member of Synack's vetted Red Team, conducting authorized penetration testing for Fortune 500 companies and government agencies.

Yogosha Strike Force

Active member of Yogosha's elite Strike Force, participating in private bug bounty programs and live hacking events across Europe.

Public Speaking

Sharing knowledge at world-class security conferences and events.

Conference Talks

BSides Limbourg 2026

Up and Down Technique — Exposing Hidden Data from RAG Systems

DEFCON 33 Bug Bounty Village 2025

Up and Down Technique — Exposing Hidden Data from RAG Systems

LeHack Paris 2025

Up and Down Technique — Exposing Hidden Data from RAG Systems

Critical Thinking Bug Bounty Podcast

Featured guest discussing bug bounty methodologies and AI security

Howest University 2024

An initiation to 0day research

Latest AI Security & Pentesting Research

Deep dives into vulnerabilities, CTF writeups, and security research from the field.

$ curl -s https://blog.paniago.io/feed Fetching latest posts...

Frequently Asked Questions

Common questions about AI pentesting, LLM security, and offensive security services.

What is AI penetration testing?

AI penetration testing is a specialized security assessment that targets AI and machine learning systems, including large language models (LLMs), to identify vulnerabilities such as prompt injection, data poisoning, model manipulation, and insecure integrations. It combines traditional penetration testing methodologies with AI-specific attack techniques to evaluate the security posture of AI-powered applications before malicious actors can exploit them.

How do you test LLM applications for security?

Testing LLM applications involves evaluating multiple attack surfaces: prompt injection (direct and indirect), RAG (Retrieval-Augmented Generation) poisoning, data exfiltration through crafted queries, jailbreaking techniques, insecure output handling, and excessive agency vulnerabilities. The methodology follows the OWASP Top 10 for LLM Applications and includes both automated scanning and manual adversarial testing to ensure comprehensive coverage.

What is AI red teaming?

AI red teaming is the practice of simulating adversarial attacks against AI systems to uncover security weaknesses, safety failures, and alignment issues before they can be exploited. It involves testing AI models and applications for robustness against prompt injection, data leakage, harmful content generation, and business logic manipulation in real-world deployment scenarios.

Why is LLM security important?

Organizations increasingly integrate large language models into customer-facing applications, internal tools, and decision-making systems. Without proper security testing, LLMs can be exploited to leak sensitive data, bypass access controls, generate harmful content, or serve as pivot points for broader system compromise. As AI adoption accelerates, securing these systems is essential to protect business operations and user trust.

Let's Work Together

Ready to build your ultimate defense? Get in touch.

contact@paniago

$ whoami

Pedro Paniago

$ cat status.txt

Available for projects

$ cat location.txt

Brussels, Belgium

$ cat socials.txt

LinkedIn: @pedropaniago

Twitter/X: @dropn0w

$ _