GitHub is reviewing its exploit posting policy and wants to discuss a series of changes to the site rules with the information security community. These rules govern how GitHub staff handle malware and exploits uploaded to the platform.
Under the proposed changes, GitHub would establish clearer rules about what counts as code used to research vulnerabilities and what counts as code that attackers abuse in real attacks. The problem is that this line is currently blurred: anyone can upload malware or an exploit to GitHub labeled “for security research,” and GitHub staff will most likely allow it to stay.
GitHub now asks project owners to clearly state the purpose of their code and whether it can be used to harm others. GitHub also wants the ability to intervene in certain cases, in particular to restrict or remove code intended for information security research if it is already being used in real attacks.
GitHub Chief Security Officer Mike Hanley and the company are asking the community to provide feedback on this initiative, in order to work out together where the line between security research and genuinely malicious code lies.
What is happening is a direct consequence of the scandal that began last month. Let me remind you that in early March 2021, Microsoft, which owns GitHub, disclosed the ProxyLogon series of vulnerabilities, which hacker groups were using to attack Exchange servers around the world.
Microsoft then released patches, and a week later a Vietnamese cybersecurity researcher reverse-engineered those fixes and built a PoC exploit for ProxyLogon based on them, which was then uploaded to GitHub. Within hours of the code appearing on GitHub, Microsoft’s security team stepped in and removed the researcher’s PoC, sparking industry outrage and criticism of Microsoft.
Although Microsoft was simply trying to protect Exchange server owners from attacks at the time, and GitHub eventually allowed the researcher and others to re-upload the exploit code to the site, GitHub now wants to remove any ambiguity from its platform policies so that such situations do not happen again.
It is unclear whether GitHub plans to act on the feedback it receives, or whether the company will approve the proposed changes anyway, thereby gaining the ability to intervene whenever it believes that certain code could be used for attacks.
The company’s proposal has already sparked a heated debate online, and opinions are divided. Some agree with the proposed changes, while others prefer the current state of affairs, in which users can report malicious code to GitHub for removal, but the platform allows PoC exploits to be posted even if they are already being abused.
The fact is that exploits are often re-posted on other platforms, so removing a PoC from GitHub does not mean that attackers will be unable to take advantage of it.