1. Supercharging Ghidra: Using Local LLMs with GhidraMCP via Ollama and OpenWeb-UI
Link:
Summary:
The article explains how to enhance Ghidra (a popular open-source reverse engineering tool) by integrating it with local Large Language Models (LLMs) using a tool called GhidraMCP. This integration leverages Ollama and OpenWebUI to make LLM-powered analysis and automation possible directly on your machine, without sending data to the cloud.
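For a sense of what the local setup looks like in practice, the sketch below sends a decompiled function to a locally running Ollama instance for summarization. It is a minimal illustration of the kind of request a GhidraMCP-style integration brokers, not the tool's actual implementation; the model name and prompt are assumptions.

```python
import json
import urllib.request

# Minimal sketch: ask a local Ollama model to summarize decompiled code.
# Assumes Ollama is listening on its default port (11434) and that the
# "llama3" model has been pulled; both are illustrative choices.
OLLAMA_URL = "http://localhost:11434/api/generate"

def summarize_function(decompiled_c: str) -> str:
    payload = {
        "model": "llama3",
        "prompt": "Explain what this decompiled function does:\n\n" + decompiled_c,
        "stream": False,  # return one JSON object instead of a token stream
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(summarize_function('int check(char *s) { return strcmp(s, "secret") == 0; }'))
```

Nothing here ever leaves the machine, which is the point the article makes about keeping analysis data out of the cloud.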
2. Using LLMs as a reverse engineering sidekick
Link: here
Summary:
This research explores how large language models (LLMs) can complement, rather than replace, the efforts of malware analysts in the complex field of reverse engineering.
LLMs may serve as powerful assistants to streamline workflows, enhance efficiency, and provide actionable insights during malware analysis.
We will showcase practical applications of LLMs in conjunction with essential tools like Model Context Protocol (MCP) frameworks and industry-standard disassemblers and decompilers, such as IDA Pro and Ghidra.
Readers will gain insights into which models and tools are best suited for common challenges in malware analysis and how these tools can accelerate the identification and understanding of unknown malicious files.
We also show how common hurdles encountered when using LLMs can affect the results, such as cost increases driven by heavy tool usage and the limited input context size of local models.
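As a rough illustration of how an MCP server exposes disassembler functionality to a model, the sketch below registers a single tool with the Python MCP SDK's FastMCP helper. The tool body is a stand-in; a real GhidraMCP or IDA Pro setup would call into the running disassembler rather than return a canned string.

```python
from mcp.server.fastmcp import FastMCP

# Minimal sketch of an MCP server exposing one reverse-engineering tool.
# In a real setup the tool would query Ghidra or IDA Pro; here it returns
# placeholder text so the example stays self-contained.
mcp = FastMCP("re-assistant")

@mcp.tool()
def decompile_function(address: str) -> str:
    """Return pseudocode for the function at the given address."""
    # Placeholder: a GhidraMCP-style server would bridge to the decompiler here.
    return f"// decompilation of the function at {address} would appear here"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an LLM client can call it
```

An LLM client connected to such a server can then request decompilations on demand, which is the workflow the research evaluates across different models and tools.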
3. Agentic RAG in Cyber Security: Transforming VAPT, Malware Analysis, Cyber Forensics, and Reverse Engineering for the C-Suite
Link: here
Summary:
The article explains how Agentic Retrieval-Augmented Generation (Agentic RAG) is transforming cybersecurity by combining AI’s retrieval abilities with autonomous decision-making. Applied to VAPT, malware analysis, cyber forensics, and reverse engineering, it automates complex tasks, learns from threat intelligence, orchestrates security tools, and delivers insights in both technical and executive-friendly formats. The result is faster threat detection, improved compliance, and more efficient use of cybersecurity resources.
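Stripped to its skeleton, an agentic RAG loop retrieves context, lets a model decide on an action, executes it, and feeds the result back into the next step. The sketch below is a deliberately simplified illustration of that control flow with stubbed retrieval, decision, and tool functions; none of it comes from the article itself.

```python
# Simplified agentic RAG control flow: retrieve, decide, act, repeat.
# The retriever, decision step, and tools are stubs for illustration only.

def retrieve(query: str) -> list[str]:
    # Stand-in for a vector-store lookup over threat-intelligence documents.
    return [f"threat report snippet relevant to: {query}"]

def decide(query: str, context: list[str]) -> tuple[str, str]:
    # Stand-in for an LLM call that picks the next action from the context.
    return "port_scan", query

def run_tool(name: str, arg: str) -> str:
    # Stand-in for orchestration of security tools (scanner, sandbox, ...).
    return f"result of {name}({arg})"

def agentic_rag(query: str, max_steps: int = 3) -> list[str]:
    findings: list[str] = []
    for _ in range(max_steps):
        context = retrieve(query) + findings  # earlier results inform the next step
        tool, arg = decide(query, context)
        findings.append(run_tool(tool, arg))
    return findings

if __name__ == "__main__":
    for line in agentic_rag("suspicious lateral movement on host-17"):
        print(line)
```

The same findings list could feed two renderers, one technical and one executive-facing, which is how the article frames the dual reporting output.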
4. All You Need Is MCP - LLMs Solving a DEF CON CTF Finals Challenge
Link: here
Summary:
Wil Gibbs’s blog post describes how an LLM-powered setup, using GPT-5 with an IDA MCP server, helped solve a DEF CON CTF Finals challenge called “ico” with minimal human intervention. The challenge involved reversing a large binary that hosted a network service. After partial manual analysis, the LLM generated scripts to interact with the service, uncovering that the “Author” field in the file metadata was an MD5 hash of the flag. Guided prompts led it to craft an exploit that changed the comment field to “/flag,” causing the server to return the flag in plaintext. The LLM then produced a Python patch that fixed the vulnerability by altering a single byte, blocking the path-based exploit. Gibbs called the result a “perfect storm” of powerful tools, prior partial reversing, and a simple flaw, noting that while this showed LLMs can be highly effective, CTF challenges will evolve to resist such automation.
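The single-byte patch the post describes translates naturally into a few lines of Python; the sketch below shows the general shape, with the offset and replacement byte as placeholders rather than the actual values from the challenge binary.

```python
# General shape of a single-byte binary patch, as described in the post.
# PATCH_OFFSET and PATCH_BYTE are placeholders, not the real "ico" values.
PATCH_OFFSET = 0x1234
PATCH_BYTE = 0x00

def patch_binary(in_path: str, out_path: str) -> None:
    data = bytearray(open(in_path, "rb").read())
    data[PATCH_OFFSET] = PATCH_BYTE  # overwrite the single offending byte
    with open(out_path, "wb") as f:
        f.write(data)

if __name__ == "__main__":
    patch_binary("ico", "ico.patched")
```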