CVE Hunting Methodology: This is where the training wheels come off



Intro

In this article, I’ll walk you through my CVE hunting methodology, from how I find vulnerabilities to how I go about getting them assigned official CVE IDs. If you’ve ever wondered how to go from poking around a web app to contributing to the global vulnerability database, you’re in the right place.

Back in February 2025, I was invited to the SchlopShow, where I talked about this exact topic. We even recorded a second video where I demonstrated live bug hunting on Vvveb, a web app where I found eight vulnerabilities, all of which I responsibly reported to the vendor & MITRE for CVE assignment.

Unfortunately, that second video was tragically stolen by Canadian raccoons. Yes, really. You can blame wildlife for the fact that no one got to see the hands-on part. 

But don’t worry, this write-up is your replacement for that missing footage, and frankly, it’s way more detailed. In fact, it’s so comprehensive you probably won’t need to enroll in one of those shiny, overhyped “CVE Hunting Masterclass” courses that keep popping up. You know the ones, where someone discovers one bug, repackages public information with a new buzzword, and sells it back to you. 
 

Alright, now let’s get to it.

 

What’s a CVE?

CVE stands for Common Vulnerabilities and Exposures. It's a standardized identifier for publicly known cybersecurity vulnerabilities in software or hardware. Each CVE entry includes:

  • A unique ID (like CVE-2024-12345)
  • A brief description of the vulnerability
  • References to more detailed information (e.g., vendor advisories, patches)

     

Why it's important:
CVE IDs help security professionals and organizations track, discuss, and fix vulnerabilities consistently across different tools and platforms.
 

Why should you get a CVE?

Because unlike the thousand other people flexing their OSCP, CEH, or CompTIA certs, a CVE proves you’ve actually found something broken in the real world, not just memorized which port number is used by FTP or how to copy-paste reverse shells under exam pressure.

A CVE is public recognition that you didn’t just read about vulnerabilities, you discovered one. It means someone thought your finding was important enough to be cataloged alongside vulnerabilities that have brought down major companies and sent incident response teams into panic mode. Not bad, right?

While a CVE won't teach you how to “pivot in a segmented network under time constraints” (whatever that means this week), it will make you stand out in a crowd of people who list “security researcher” in their bio but have never actually submitted a single bug.

Bottom line: You can’t fake a CVE. And that makes it a lot more impressive than whatever multiple-choice test you passed last fall.
 

How to get your own CVE?

A CVE (Common Vulnerabilities and Exposures) is simply a publicly disclosed vulnerability in software or hardware. If you discover a vulnerability, especially in open-source code, you can potentially be assigned your own CVE, provided you report your findings through the appropriate channels. 

The process is context-dependent: some open-source projects manage their own CVE assignments, while others do not. For example, Apache has its own CVE assignment process and handles disclosures internally. In contrast, for projects that don't issue CVEs themselves, you would need to go through an external CNA (CVE Numbering Authority) like MITRE or GitHub.

If you're aiming to obtain your first CVE, Apache’s program can be a great starting point due to its structured and transparent vulnerability reporting system.

Ultimately, a CVE is just a way of cataloging a security issue you’ve discovered. As a security researcher, you can obtain a CVE ID either by reporting the vulnerability directly to a vendor that assigns CVEs automatically, or by submitting your report to MITRE for evaluation and assignment.

Here is what a CVE looks like:

https://www.cve.org/CVERecord?id=CVE-2025-29868
 

This one was identified and reported by me and my friend Luke Smith. 


 

Who is the MITRE Corporation?

The MITRE Corporation is a not-for-profit organization founded in 1958, tasked with providing technical guidance to the U.S. Air Force and later expanding to support pretty much every U.S. government agency that realized they needed someone smart to handle all the stuff they don’t understand: cybersecurity, defense, aviation, healthcare, homeland security, and, of course, writing acronyms no one can remember.

They’re best known in cybersecurity circles for operating the CVE program, you know, that small thing the entire industry relies on to track vulnerabilities, coordinate disclosures, and keep the internet from setting itself on fire.

Naturally, because we live in a golden age of strategic brilliance, MITRE recently found itself facing a funding crisis, as its primary contract channeled through CISA was set to expire on April 16, 2025. This apparently came as a surprise to some decision-makers, who must’ve assumed CVEs just assign themselves via divine intervention.

The situation wasn’t helped by a particularly efficient move (if your definition of “efficiency” is “let's break important things”) from the Trump-era Department of Government Efficiency (DOGE), which axed $28 million in MITRE contracts and triggered 442 layoffs in Virginia. Because, clearly, the best way to improve national cybersecurity is to fire the people actually doing the work.

With the expiration date looming and the industry awkwardly side-eyeing the potential collapse of the vulnerability tracking backbone of the internet, CISA swooped in just in time to extend MITRE’s contract for another 11 months. Phew! Crisis postponed. For now.

So while the CVE program lives to fight another day, its long-term survival seems to depend on a mix of temporary extensions, government calendar roulette, and the assumption that someone, somewhere, eventually remembers how essential this all is.

 

Should you still report your findings to MITRE?

You can report your findings to MITRE if you're in no particular rush and enjoy watching calendars collect dust. Personally, I’ve got over 30 CVE requests sitting in their inbox like forgotten leftovers in the fridge. It’s been over 90 days (yes, ninety), which is a bit longer than their supposed response window.

At this point, relying solely on MITRE for CVE assignment is kind of like mailing a letter and hoping it gets delivered by carrier pigeon: technically possible, just not exactly prompt. So if you want your vulnerability acknowledged sometime before the next Ice Age, maybe keep some alternative disclosure paths in mind. 


 

Alternatives? Yes, They Exist.

GitHub, for example, is a CVE Numbering Authority (CNA) and can issue CVEs through its Security Advisories feature. It’s actually a pretty streamlined process, provided the project is hosted on GitHub and the maintainers are still active (and by “active,” I mean they’ve logged in sometime since the last presidential election).

First, you’ll need to convince the maintainer to create a security advisory. This acts as a formal entry point for your vulnerability report. Sounds simple enough unless the maintainer’s idea of triage is to ghost issues for six months and then close them with “won’t fix” and a smiley face.

Once your vulnerability is submitted, the developer can resolve the issue and request a CVE from GitHub. Compared to MITRE’s bureaucratic timewarp, this process is refreshingly fast.

But a quick heads-up: this does require basic communication skills. Yes, that means sending more than just a one-line “u have bug,” and being prepared to wait a few days without having a meltdown on Twitter. In short, both parties need to act like functioning adults, an increasingly rare commodity, even in cybersecurity.
 

Let’s Talk About the Developer Side

Open-source, as you probably know, often runs on a potent blend of caffeine and unpaid labor. Most maintainers are volunteers, underpaid, or simply too exhausted to triage issues that aren’t immediately on fire. You've heard the stereotype that developers love coding and coffee, but you may have missed the part where they also have jobs, families, burnout, and, occasionally, the audacity to log off.

So yes, security is usually an afterthought. It’s “nice to have,” but rarely urgent unless your bug breaks production and generates a support ticket with all-caps subject lines.

And here’s the kicker: on GitHub, whether you get a CVE has nothing to do with how polished your report is, how exploitable the issue is, or whether it could bring down half the internet. It depends entirely on whether the person reading your report is having a good day, had their coffee, or feels like clicking “Request CVE” that morning.

Now for the Plot Twist

In some corners of GitHub, the maintainers are the bug hunters. Yes, you read that right. In projects like Contao CMS, the same people who write the code are also the ones “finding vulnerabilities” in it and assigning themselves CVEs like it’s a productivity badge.

Nobody bats an eye. Nobody asks questions. It's the same handful of names in every advisory, playing musical chairs with exploit discovery and fix credit.

So, if you're an employer and someone proudly flaunts a few GitHub-issued CVEs on their résumé, maybe just double-check the context. The GitHub CVE system is faster than MITRE’s, sure, but it's also a little more... community-driven. And by “community,” I mean “whoever had commit access and a free afternoon.”



 

Are CVEs and zero-days the same thing?

Short answer: No, they are not.
Long answer: It depends on your understanding of basic definitions, which unfortunately many “experts” lack.

A zero-day (or 0day) is a vulnerability that is unknown to the software vendor or relevant defenders at the time of discovery. In contrast, a CVE (Common Vulnerabilities and Exposures) is simply an identifier assigned to a known vulnerability, often long after it was first discovered. A zero-day can later become a CVE, but a CVE is not necessarily a zero-day. That distinction seems to confuse more people than it should.

Per Wikipedia (because apparently some need citations for definitions they claim to work with daily):

A zero-day (also known as a 0-day) is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it.

So yes, if you discover a vulnerability (even if it's “just” clickjacking or self-XSS) and nobody else knew about it before you, then congratulations: that was a zero-day at the time you found it. It's not about the exploit being sexy; it's about being first.

But here’s where things get amusing:
There’s a subset of self-proclaimed “infosec professionals” who scoff at calling anything a 0day unless it's a remote code execution exploit on Windows or Chrome. These are usually the folks who treat CVE numbers like trading cards and think a zero-day must come with a logo and press release. If you're among them, here's a quick literacy test: look up the definition of “zero-day,” read it slowly, and try not to project your lack of nuance onto the rest of us.

If you insist a vulnerability isn't a zero-day because it's "only" in an abandoned open-source project or doesn’t result in total pwnage, you might not be elitist, you might just be confused. It’s not illegal to be wrong, it just gets embarrassing when you choose to stay that way.

So yes, you're a 0day hunter if you’re the first to discover a previously unknown vulnerability, whether it’s in a billion-dollar product or a dusty GitHub repo with three stars. Zero-days aren’t about flashiness; they’re about timing.

Oh, and no, zero-days are not exclusive to open-source software. They exist anywhere code does: in proprietary platforms, embedded hardware, third-party plugins, and yes, even in your favorite closed-source “enterprise-grade” solutions.

If no one knew about it before you did, it’s a 0day.
It’s not rocket science, it’s just InfoSec 101. Try to keep up.


 

What’s CVSS scoring?

If you’ve ever browsed public CVEs, you’ve likely seen a “severity” rating or a number labeled CVSS. That’s the Common Vulnerability Scoring System, a standardized way to assess the severity of a vulnerability. Here is a good example:
https://nvd.nist.gov/vuln/detail/CVE-2024-57602 

Here’s the thing: when people see “Remote Code Execution” (RCE), many instantly assume it must be a CVSS 10.0. But that’s not always the case. Why? Because CVSS doesn’t just care about what the exploit can do, it cares about how easily it can be done. If the RCE requires super-admin access, root privileges, or a PhD in astrophysics to pull off, the score might be significantly lower.

On the flip side, sometimes a vulnerability that looks “harmless” on the surface, like an XSS, can lead to full account takeover or even RCE. Ever heard of session hijacking, CSRF chaining, or injecting payloads into vulnerable endpoints that trigger backend SQL queries? Yeah, that “low-severity” XSS might end up burning the whole house down.

Yet, every now and then, someone strolls into the conversation, usually armed with a vague gut feeling and no understanding of attack vectors or impact metrics, and declares, “That’s not how CVSS should work.” To which the rest of us say: we're not scoring feelings, we're scoring facts.

That’s exactly why tools like the official CVSS calculator exist. You don’t need to manifest severity through vibes or intuition, just fill in the fields:

https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator 

Shocking, right?
Turns out, applying objective, decades-old frameworks from NIST and MITRE works better than Twitter debates and Reddit headcanons.

So next time you see a vulnerability and feel the urge to declare its severity based on how “cool” the exploit sounds, take a deep breath, open the CVSS calculator, and let actual methodology guide your judgment. 
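If you want to see why that RCE isn't automatically a 10.0, the base-score arithmetic is simple enough to sketch yourself. Here's a minimal Python version of the CVSS v3.1 base score for the common scope-unchanged case, using the metric weights from the official specification (the full standard also covers changed scope, plus temporal and environmental metrics, which this sketch omits):

```python
import math

# CVSS v3.1 metric weights (scope unchanged), from the official specification
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality/Integrity/Availability

def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: smallest number, to 1 decimal place, >= value."""
    int_input = round(value * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000
    return (math.floor(int_input / 10000) + 1) / 10

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Classic unauthenticated network RCE: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
# Same impact, but high privileges and user interaction required:
print(base_score("N", "H", "H", "R", "H", "H", "H"))  # → 6.4
```

Same total compromise in both cases, but demanding high privileges and user interaction drags the score from 9.8 down to 6.4, which is exactly the point: CVSS scores how easily the damage happens, not just how bad the damage is.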


 

How to find a target?

By now, you've probably realized that finding CVEs is just a more glamorous way of saying bug hunting, same game, fancier scoreboard. But that leads to the next question: where do you actually find targets worth poking at?

One of the easiest and most overlooked methods is using DockerHub.

Developers love Docker because it lets them package up their entire app, code, dependencies, configs, and occasionally a bug or two, into a neat little container that “just works everywhere.” It’s like a virtual machine, but lighter, faster, and somehow even easier to forget to update.

Docker was created to eliminate the classic developer excuse of “it works on my machine.”
Now, it works on every machine... including yours, where you get to run it, dissect it, and see just how many things also work for attackers.

So, if you're looking for something juicy to audit, dig through DockerHub. You'll find public containers for CMS platforms, legacy projects, abandoned side hustles, and enterprise software someone probably should’ve secured two years ago.
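Standing a candidate target up locally is usually one command. As a hedged sketch, here's a tiny helper that builds the `docker run` invocation for auditing a container (the image name below is hypothetical; substitute whatever project you actually pulled off DockerHub):

```python
import shlex

def docker_run_cmd(image: str, host_port: int, container_port: int = 80) -> str:
    """Build a `docker run` command that exposes a target web app locally.

    Runs detached (-d) with a throwaway name, so cleanup is just
    `docker rm -f <name>` when you're done poking at it.
    """
    name = image.split("/")[-1].split(":")[0] + "-audit"
    args = [
        "docker", "run", "-d",
        "--name", name,
        "-p", f"{host_port}:{container_port}",
        image,
    ]
    return shlex.join(args)

# Hypothetical image name -- search DockerHub for the project you're auditing.
print(docker_run_cmd("somevendor/somecms:2.1", host_port=8080))
# → docker run -d --name somecms-audit -p 8080:80 somevendor/somecms:2.1
```

Once the container is up, the app is yours on localhost:8080: proxy it through Burp, read its source with `docker exec`, and break it at your leisure.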

Watch my video, starting at this exact moment to see how you can find targets:

https://youtu.be/8MwYXp6jvqw?t=2950 

 

How to determine the impact of software?

Sometimes, identifying vulnerabilities in widely used software isn’t just a good idea, it’s a strategic one. Why? Because the more people and systems it touches, the more meaningful your discovery becomes. In other words, if you’re going to break something, break something that matters (ethically, of course).

On January 11th, we set out on a mission to improve global cybersecurity by securing 250,000 websites, one vulnerability at a time. For example, discovering a server-side request forgery (SSRF) in a CMS used by 1,541 sites doesn’t just make for a cool write-up; it contributes directly to that goal. Just like that: 1,541/250,000 secured. Progress.

But that brings us to a very practical question:

  • How do you actually determine how many systems are affected by the software you just found a bug in?

Two simple ways to check impact:

  • BuiltWith.com – a lovely tool that tells you exactly how many sites are running a particular CMS or technology. It’s like recon-as-a-service (RaaS; similar to another RaaS acronym, eh?).
  • Project download or usage stats – some developers (the well-organized ones) track installs using a “call-home” beacon or just publish usage data.

 

Why does this matter?

Well, flip the perspective for a second. Imagine you’re an attacker and you’ve found an RCE in a CMS. What’s your next move? Figure out who’s using it, where they’re located, and which targets are easiest to hit. Fortunately (or unfortunately), there are tools that make this incredibly simple. You can pull data by CMS, geography, tech stack, and more. Like shopping for victims but for research purposes, obviously.

As vulnerability researchers, our job is to identify what matters. Whether developers actually fix their code is, well, outside our scope of work (read: not our circus, not our monkeys).

BuiltWith

Curious how many websites use Microweber?
BuiltWith Microweber Stats
 

Project Usage Data

Some projects publish their own install stats.
For example, Open Journal Systems reports over 52,320 active journals using their platform:
OJS Usage Data

So next time you find a bug, don’t just stop at “cool.” Check the blast radius. The impact is what turns a finding into a headline or better yet, a well-earned CVE.


 

What do you do after finding a vulnerability?

Let’s say you’ve found a vulnerability. Congrats! You’re now holding something that’s part technical achievement, part ethical dilemma, and part radioactive material.

At this point, you have a few options, each with its own level of moral ambiguity and paperwork:

  • You could sell it on the dark web to the right buyer and enjoy a short, lucrative career followed by an exciting conversation with law enforcement
  • You could report it responsibly and earn a CVE, the gold star sticker of the vulnerability world
  • You could submit it to Trend Micro’s Zero Day Initiative for some recognition, a CVE, and potentially a payout
  • You could even sell it to your government and tell yourself you're “protecting national interests”
  • Or, of course, you can toss it in your private dungeon of unreported 0days like a true digital dragon hoarding exploits

If you’ve decided to be responsible (good for you), follow along to get your CVE ID. 

 

Getting a CVE through MITRE (works for all projects):

  1. Go to cveform.mitre.org
  2. Click dropdown field and choose “Report Vulnerability/Request CVE ID”
  3. Fill the fields and hit submit
  4. Wait a few weeks/months/years and you will receive an email from MITRE; they will quote your report and tell you to use a CVE ID, mine was like this: “use CVE-2024-12345”
  5. At this moment, you can publish the vulnerability along with its CVE ID
    1. You can publish it anywhere, including your own website; just don’t publish it on BreachedForums
  6. After that, go to cveform.mitre.org again, click on dropdown and choose “Notify CVE about a publication”
  7. Fill the fields and hit submit
  8. Wait a few weeks 

 

Getting a CVE through GitHub (works only for GitHub projects):

  1. Report the vulnerability through the repository’s “Security” feature
  2. Wait for acknowledgement from the devs
  3. You will be assigned a CVE
  4. Ask if you can publish it


 

Getting a CVE through Apache (works only for Apache projects):

  1. Report the vulnerability to [email protected]
  2. Wait for acknowledgement email
  3. Wait for a member of the security team to get back to you
  4. Communicate your finding & ask if your finding is eligible for a CVE
  5. You will be assigned a CVE (through email)
  6. Ask for permission to publish it
  7. Await permission & publish it 
     

     

Bug hunting demonstration

Okay, I am not going to call this “CVE Hunting” but I am going to call it what it is: bug hunting. 

The following is a demonstration that showcases identifying vulnerabilities in Vvveb simply by interacting with the web app:

https://www.youtube.com/watch?v=QYEcVZ4uNRk 

You should understand that most web apps are CRUD. They allow you to:

  • Create
  • Read
  • Update
  • Delete

Vvveb is a CMS with drag-and-drop functionality. Being a CMS, it allows you to create content and manage it. The functionality used for creating content involves some sort of data in the form of text, HTML, or markdown.

Think of posts, comments, search functionality; these are very common features of every CMS and you can target them just like I did in my demo.
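That targeting approach can be sketched in a few lines: generate a unique, greppable probe, submit it through the app's create/update features (posts, comments, search), then check whether it comes back unencoded in the rendered page. The submission step is app-specific, so the sketch below simulates the two possible responses; the "search endpoint" behavior shown is hypothetical:

```python
import html
import uuid

def make_probe(prefix: str = "xzq") -> str:
    """Build a unique, greppable marker wrapped in a harmless tag payload."""
    marker = f"{prefix}{uuid.uuid4().hex[:8]}"
    return f'<i id="{marker}">{marker}</i>'

def reflected_unencoded(probe: str, page: str) -> bool:
    """True if the probe came back verbatim (worth a closer look for XSS);
    False if the app encoded or stripped it."""
    return probe in page

# Simulated responses from a hypothetical search endpoint:
probe = make_probe()
safe_page = f"<p>Search results for {html.escape(probe)}</p>"  # app encodes output
vuln_page = f"<p>Search results for {probe}</p>"               # app reflects raw input

print(reflected_unencoded(probe, safe_page))  # → False
print(reflected_unencoded(probe, vuln_page))  # → True
```

The unique marker matters: when you spray probes into every post, comment, and search box of a real CMS, the marker tells you exactly which input ended up on which page, including pages rendered much later (stored XSS), not just the immediate response.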

The underlying hacking methodology isn’t necessarily complex, but real-world applications can be. The size, structure, and depth of modern web apps often make it hard to navigate or test thoroughly.

That’s why success in bug hunting often comes down to grinding, meticulous exploration, trial and error, and a sharp eye for anything that doesn’t behave as expected. 

 

Testing your newly acquired “Hacker Powers”

Alright, let’s see whether you’ve actually learned something useful or if you’ve just been collecting buzzwords and congratulating yourself for solving trivia-level crypto challenges while calling it “real-world hacking.”

I’m handing you a real web app to dig into. This is not a HackTheBox CTF. There are no carefully planted breadcrumbs, no “Intro to Web Exploitation” puzzles, and definitely no flags conveniently tucked behind base64 encoding. This isn’t the kind of setup where every bug politely waits its turn to be exploited in exactly the way the creator intended.

This is a real-world application, flawed, unpredictable, and wonderfully uncooperative. The bugs weren’t crafted to teach you something. They exist because someone made architectural decisions under pressure and never came back to fix them. You’ll need to deal with actual misconfigurations, inconsistent logic, and weird behaviors that don’t fit into tidy CTF categories.

In short: welcome to software as it truly is, not how it looks in competitions with cute challenge names and expected outcomes. 

The web app is called Open Journal Systems (OJS), developed by the folks at Simon Fraser University. I didn’t build it, and more importantly, I didn’t plant the bugs. This isn’t a puzzle designed for your entertainment. This is just good, old-fashioned vulnerable code… straight from the real world.

To give you a taste of what’s inside, here’s what I’ve personally found:

  • Brute-forceable login (because apparently rate limiting is optional)
  • Data export misconfiguration allowing guest editors to export admin data (great for violating GDPR on a Tuesday)
  • Arbitrary code execution as the Journal Manager (feature or flaw? your guess is as good as theirs)
  • Guest Editors accessing plugin endpoints they shouldn’t even know exist
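As a taste of how simple the first finding is to demonstrate, here's a hedged sketch of a brute-force loop with the actual HTTP attempt factored out into a callable. The stubbed `fake_login` below stands in for a real `requests`-based POST to the login endpoint; field names and success detection are app-specific, and you should only point this at targets you're authorized to test:

```python
from typing import Callable, Iterable, Optional

def brute_force(username: str,
                wordlist: Iterable[str],
                attempt: Callable[[str, str], bool]) -> Optional[str]:
    """Try each candidate password; return the first one `attempt` accepts.

    `attempt(username, password)` should return True on a successful login.
    With no server-side rate limiting, nothing slows this loop down.
    """
    for password in wordlist:
        if attempt(username, password):
            return password
    return None

# Stub for demonstration; a real attempt() would POST credentials and
# inspect the response (status code, redirect target, session cookie).
def fake_login(user: str, password: str) -> bool:
    return (user, password) == ("admin", "hunter2")

found = brute_force("admin", ["123456", "password", "hunter2"], fake_login)
print(found)  # → hunter2
```

Separating the loop from the transport is also how you'd measure the bug: if ten thousand attempts sail through without a lockout, a delay, or a CAPTCHA, you've demonstrated the missing rate limiting without needing to actually crack anyone's account.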
     

You can grab the affected version of OJS here:
https://github.com/pkp/ojs/releases/tag/3_4_0-8 

Don’t just focus on the vulnerabilities I listed. Why?

Because those are just the things I found. I usually time myself during CVE hunting; I don’t spend more than a week on any web app. There might be a lot of other bugs, even very simple things that I could have missed and you could find. The bugs I listed exist in version 3.4, but you could target the latest version and find more important things.

Now, remember how easy it was to get Vvveb up and running with just a few commands? Yeah… don’t get used to that. 

Some dev teams still treat containerization like a weird new trend, and OJS is one of those cases. Sure, they do offer a Docker container, but it’s more of a “demo” setup than a fully functional deployment. If you want the real deal, you’ll need to install it properly.


 

How, you ask?

Simple: read the documentation, study the repo, pray a little, and maybe ask the community for help. This is part of the process. Learning how to install web apps is… well, part of learning how to break them. I know, shocking, hacking involves actual programming and setup skills. It’s wild how OSCP doesn’t teach you that. 

And no, I’m not going to hand you a roadmap to the bugs. That’s your job. You’ve got the app, you’ve got the clues, and hopefully you’ve got some curiosity. This is where you graduate from “script kiddie” to someone who actually understands how things break.

If you get stuck or just need to know whether you're suffering in the right direction, you can reach out on Discord: 0xHamy. I might give you a nudge. But if you’re looking for a full walkthrough, sorry, this isn’t TryHackMe. We focus on real hacking here. 

Remember: real vulnerabilities don’t come with flashing signs or voiceover hints. This isn’t Dora the Exploiter. This is real-world security work, where half the job is reading code, and the other half is figuring out who didn’t.

Good luck. You’re going to need some.


 

Bug hunting resources

Let’s get something out of the way: the resources I’m about to recommend are genuinely useful. I’m not affiliated with any of these companies, courses, or training providers, and if you've been paying attention, you’ll notice I’ve already poked fun at HackTheBox and TryHackMe. That alone should tell you I’m not here to sell you anything.

That said, despite roasting HTB’s breadcrumb-laced, over-gamified challenges, I’ll still say this: their CPTS course is excellent. In fact, it’s one of the best out there, and yes, I’d put it ahead of OSCP. Could HTB’s challenges be more realistic? Absolutely. If they ever decide to think truly outside the (virtual) box, they might stop designing labs like a hacker-themed amusement park. But the CPTS content? Solid.


 

How to find your first vulnerability?

To find your first vulnerability, you should begin by identifying what type of software you want to analyze. Different targets require different skill sets. Some common areas you can focus on include:

  • Web Applications
  • Libraries (e.g., written in C, Python, C++, Go, Rust)
  • Binaries (e.g., CLI tools, desktop applications)
  • APIs

…and many more.

 

Virtual Hacking Labs

I earned two certificates from VHL. Unlike most platforms, they are not proctored. Instead, they ask you to compromise a bunch of machines, capture flags, and submit a final report with at least 20 writeups for the VHL+ certificate or 10 hard boxes for the Advanced+.

VHL gave me a book, realistic challenges, and access to a Discord community where I could ask questions and get help. It’s not exactly buzzing with activity, but it was just right for me; I learned a lot and met some genuinely great folks.

Would I recommend it? I would have, if a better option hadn’t come along. At the moment, HTB’s CPTS course and labs are just on another level. They’re more comprehensive, more engaging, and, ironically, less expensive than VHL.
 

PortSwigger Academy

This is a must-use resource for anyone serious about web app security. PortSwigger offers a wide range of labs, tutorials, and guided exercises that don’t just throw challenges at you, they actually teach you how the bugs work.

How good is it?
Well, I earned a CVE for discovering a brute-force vulnerability on a login panel. And guess where I learned how brute-forcing actually works (outside of CTF fantasyland)? Right here.

Also, for those still clinging to the “brute-force is unrealistic” myth, no, it isn’t. Especially not in real-world pentests where rate-limiting is treated more like a suggestion than a standard.

Their SSRF labs alone helped me find real SSRF vulnerabilities in apps like NukeViet, Microweber, Vvveb, and more. If you skip this platform, you’re doing yourself a disservice.

Get started:

https://portswigger.net/web-security 
 

Zeroday Factory

Zerodayf, short for Zeroday Factory, is an automated, context-aware code analysis tool I built and battle-tested on multiple Flask web applications to find actual 0days. It uses SAST (Semgrep) and AI APIs such as Anthropic, OpenAI, and Hugging Face to identify vulnerabilities in code.

It doesn’t make false promises to identify P1 bugs on Coinbase, but it can analyze code just fine and find some clever vulnerabilities that you might otherwise miss with SAST alone. I have also created a demo that showcases how it found a critical account takeover and multiple IDOR vulnerabilities in a “URL shortener” web app.

Get started:

https://www.youtube.com/watch?v=vonOzedeN5M 

 

Offensive Bug Bounty Hunter 2.0

This course may not be plastered all over LinkedIn like OSCP, but it’s a practical, grounded introduction to real-world bug bounty hunting. I took the first version and used what I learned to find an IDOR vulnerability, which led to my first bug bounty reward. It's especially good for understanding recon, methodology, and low-hanging fruit that actually exists in production apps.

Get started:

https://www.udemy.com/course/offensive-bug-bounty-hunter-20/ 

 

Certified Penetration Testing Specialist (CPTS)

Why CPTS? Because it doesn’t just teach you how to run tools, it teaches you how to think like an actual attacker, with a focus on methodology and mindset.

I’ve completed the course (not the exam, I’ll get to it eventually), and even without the piece of paper, the material directly contributed to bugs I’ve found and CVEs I’ve earned. That’s more than I can say for some certs that test whether you can type fast under stress.

The course is modular, covering everything from web and CMS exploitation to Active Directory and network services. It includes hands-on exercises, assessments, and unlike OSCP’s 24-hour crunch marathon, CPTS gives you 10 days to complete your exam. You know, like an actual pentest. What a concept.

Get started:
https://academy.hackthebox.com/preview/certifications/htb-certified-penetration-testing-specialist 

 

Flask Mega Tutorial - Miguel Grinberg

Want to break web apps? Learn to build them first.

Miguel Grinberg’s Flask Mega Tutorial will teach you how Flask works, how templates are rendered, how databases are connected, basically how the stuff you’re breaking was put together in the first place. That insight is invaluable for deeper bug hunting. 

The course will also teach you basic DevOps, so you will learn about things like Docker.

The Flask Mega Tutorial – Miguel Grinberg

 

Flask tutorials - Tech with Tim

Prefer videos over text? Tech With Tim breaks down Flask concepts in a way that’s beginner-friendly without being watered down. His series pairs nicely with Miguel’s for a more complete understanding.

 Tech With Tim Flask Series (YouTube)
 

That’s it: You Don’t Need Fancy Labs, a Degree, or an RGB Keyboard

Let’s put things in perspective. I’ve found seven CVEs using a machine with 8GB RAM, 128GB of storage, and a strong aversion to Chrome tabs. If my laptop freezes when I open Burp, Discord, and YouTube at the same time, then yours is more than capable.

You don’t need expensive labs, overpriced certs, or a cybersecurity degree to get started. You need curiosity, persistence, and a willingness to deal with actual code, not just CTF candy.

That said... if someone wants to sponsor me with an Alienware, I’m not above accepting donations.


 


Conclusion

So here we are. If you’ve made it this far, congratulations. You’re ahead of most people who never move beyond tutorial videos and blog skimming.

We’ve walked through what CVEs are, how to discover vulnerabilities, how to get them assigned (eventually, depending on how many moons it takes MITRE to reply), and which resources actually teach you something beyond finding flags behind base64-encoded breadcrumbs.

Along the way, you’ve seen the difference between practical, real-world security research and the gamified fantasyland some platforms promote. You’ve also seen how, in some cases, GitHub maintainers are assigning CVEs to their own code, because, apparently, being both author and vulnerability reporter is a productivity hack now.

But here’s what matters most: you don’t need a degree in cybersecurity, thousands of dollars in gear, or a wall full of certificates to contribute meaningfully to security. I don’t say that rhetorically, I say that as someone who holds a college certificate in plumbing. Not a metaphor. Real, pipe-wrench-and-sinks kind of plumbing.

And yet, with a mid-range laptop, curiosity, and an unreasonably high tolerance for digging through bad code, I’ve reported vulnerabilities, earned multiple CVEs, and helped secure systems used by thousands. Now I’m also teaching other aspiring security researchers. 

So no, your lack of formal credentials isn’t the issue. A lack of action usually is.

Before we wrap up, a bit of honest advice:
Yes, learn from the platforms I’ve shared. But don’t let anyone convince you that you must collect a stack of certifications before you’re allowed to participate. You don’t need to finish five Udemy courses, memorize port numbers, or “start with CompTIA A+ and CCNA” because someone on Reddit said so. That kind of advice often comes from people with the fewest achievements, the least empathy, and the most rigid opinions.

You'll learn what matters as you get stuck. You’ll spend hours troubleshooting, reading docs, and, let’s be real, asking modern friends like ChatGPT, Claude, or Grok for help. That’s normal. That’s learning.

If you understand vulnerabilities like SSRF and XSS, you’re already capable of finding real bugs. You don’t need to know everything. You just need to start.

I’m not selling you a course. I’m not here to act like an expert. If you’ve watched my demos or read my reports, you’ll see plenty of things I don’t know. But I show up, keep learning, and keep submitting. That’s all I ask you to do too.

Now get to it. You’ve got everything you need to start making an impact.


 


Posted on: May 20, 2025 02:28 AM