Initializing secure connection_
> AI Pentesting · Application Security · Offensive Security · Bug Bounty
Offensive Security Expert based in Brussels, Belgium
I'm an Offensive Security Manager at PwC, freelance bug bounty hunter, and passionate cybersecurity professional, also known as drop in the bug bounty community. Problem-solving is my playground, and passion drove me where I am today.
With specializations in Web, API, Mobile, and AI Penetration Testing, I've identified over 1000 vulnerabilities across major platforms and published 10+ CVEs affecting millions of users worldwide. As a HackerOne Belgium Ambassador and Synack Red Team Member, I operate at the forefront of offensive security, AI red teaming, and LLM security research.
From DEFCON to LeHack Paris, from Shopify's h1-416 to Belgium's Hack The Government, I bring real-world attack experience, including adversarial testing of AI systems, to help organizations build their ultimate defense.
PwC — Oct 2021 to Present
PANIAGO — May 2023 to Present
Comprehensive offensive security services to protect your organization from real-world threats.
In-depth security assessment of web applications, enhanced with AI-augmented testing techniques. Identifying vulnerabilities like XSS, SQLi, SSRF, IDOR, and business logic flaws before attackers do.
Thorough testing of REST, GraphQL, and SOAP APIs. Uncovering broken access controls, injection flaws, and data exposure risks in your API infrastructure.
Comprehensive security testing of Android and iOS applications. From static analysis to runtime manipulation, ensuring your mobile apps are bulletproof.
Comprehensive AI pentesting and adversarial testing of AI/ML systems and LLM-powered applications. Employing AI hacking methodology to identify prompt injection, data poisoning, RAG system vulnerabilities, and conduct thorough LLM vulnerability assessments.
Hands-on training sessions, CTF organization, and developer education on secure coding for AI and web applications. Helping teams build security into their workflow from day one.
End-to-end management of your bug bounty program. From policy creation to triage, leveraging experience from HackerOne, Synack, and Yogosha platforms.
Responsibly disclosed vulnerabilities to some of the world's most recognized brands.
Proven track record at the world's most prestigious live hacking events and security conferences.
Speaker — "Up and Down Technique - Exposing Hidden Data from RAG Systems"
Speaker2nd place with 11 bugs identified at Shopify's exclusive live hacking event.
2nd PlaceSpeaker — "Up and Down Technique - Exposing Hidden Data from RAG Systems"
Speaker6th Place with 21 bugs identified in Belgian government systems.
6th PlaceLive hacking events in Paris (YesWeHack) and Madrid (Yogosha). Speaker at LeHack Paris.
Speaker + Hacker3rd Place with 35 bugs identified. Top performer in Belgium's government security initiative.
3rd Place6 bugs identified across both live hacking events organized by YesWeHack and Yogosha.
Live HackerSpeaker — "An initiation to 0day research" at Howest University of Applied Sciences.
SpeakerRepresenting HackerOne in Belgium, bridging the gap between ethical hackers and organizations seeking to improve their security posture.
Selected member of Synack's vetted Red Team, conducting authorized penetration testing for Fortune 500 companies and government agencies.
Active member of Yogosha's elite Strike Force, participating in private bug bounty programs and live hacking events across Europe.
Sharing knowledge at world-class security conferences and events.
Up and Down Technique — Exposing Hidden Data from RAG Systems
Up and Down Technique — Exposing Hidden Data from RAG Systems
Up and Down Technique — Exposing Hidden Data from RAG Systems
Featured guest discussing bug bounty methodologies and AI security
An initiation to 0day research
Deep dives into vulnerabilities, CTF writeups, and security research from the field.
Common questions about AI pentesting, LLM security, and offensive security services.
AI penetration testing is a specialized security assessment that targets AI and machine learning systems, including large language models (LLMs), to identify vulnerabilities such as prompt injection, data poisoning, model manipulation, and insecure integrations. It combines traditional penetration testing methodologies with AI-specific attack techniques to evaluate the security posture of AI-powered applications before malicious actors can exploit them.
Testing LLM applications involves evaluating multiple attack surfaces: prompt injection (direct and indirect), RAG (Retrieval-Augmented Generation) poisoning, data exfiltration through crafted queries, jailbreaking techniques, insecure output handling, and excessive agency vulnerabilities. The methodology follows the OWASP Top 10 for LLM Applications and includes both automated scanning and manual adversarial testing to ensure comprehensive coverage.
AI red teaming is the practice of simulating adversarial attacks against AI systems to uncover security weaknesses, safety failures, and alignment issues before they can be exploited. It involves testing AI models and applications for robustness against prompt injection, data leakage, harmful content generation, and business logic manipulation in real-world deployment scenarios.
Organizations increasingly integrate large language models into customer-facing applications, internal tools, and decision-making systems. Without proper security testing, LLMs can be exploited to leak sensitive data, bypass access controls, generate harmful content, or serve as pivot points for broader system compromise. As AI adoption accelerates, securing these systems is essential to protect business operations and user trust.
Ready to build your ultimate defense? Get in touch.
$ whoami
Pedro Paniago
$ cat status.txt
Available for projects
$ cat location.txt
Brussels, Belgium
$ cat socials.txt
LinkedIn: @pedropaniago
Twitter/X: @dropn0w
$ _