
URL: https://github.com/topics/ai-robustness

ai-robustness · GitHub Topics · GitHub


# ai-robustness

Here are 3 public repositories matching this topic...

VISION is a framework for robust and interpretable code vulnerability detection using counterfactual data augmentation. It leverages GNNs, LLM-generated counterfactuals, and graph-based explainability to mitigate spurious correlations and improve generalization on real-world vulnerabilities (CWE-20).

  • Updated
  • Jupyter Notebook

Investigating the "Gradient Noise Paradox" in AI Safety: a study of the conflict between Differential Privacy (DP-SGD) and Adversarial Training. Uses a custom "Shadow Model" pipeline to synchronize Opacus with PGD attacks, demonstrating how privacy-preserving noise systematically degrades model robustness.

  • Updated
  • Python
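The repository itself wires Opacus into a PGD adversarial-training loop; as a dependency-free illustration of the two ingredients in tension, here is a hypothetical numpy sketch of a PGD attack and a DP-SGD-style update (per-example gradient clipping plus Gaussian noise) on a linear classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(w, x, y, eps=0.3, alpha=0.05, steps=10):
    """Projected Gradient Descent: push x toward higher logistic loss for
    classifier w, staying inside an L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        grad_x = (sigmoid(x_adv @ w) - y) * w      # dLoss/dx for logistic loss
        x_adv = x_adv + alpha * np.sign(grad_x)    # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the ball
    return x_adv

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0):
    """DP-SGD-style update: clip each per-example gradient to L2 norm <= clip,
    average, then add Gaussian noise scaled by sigma * clip. The noise that
    buys privacy is exactly what perturbs the adversarial-training signal."""
    per_example = []
    for xi, yi in zip(X, y):
        g = (sigmoid(xi @ w) - yi) * xi
        g = g / max(1.0, np.linalg.norm(g) / clip)   # gradient clipping
        per_example.append(g)
    noise = rng.normal(0.0, sigma * clip / len(X), size=w.shape)
    return w - lr * (np.mean(per_example, axis=0) + noise)

# Adversarial training under DP noise: attack each batch, then update privately.
w = np.zeros(3)
X = rng.normal(size=(32, 3))
y = rng.integers(0, 2, 32).astype(float)
for _ in range(20):
    X_adv = np.array([pgd_attack(w, xi, yi) for xi, yi in zip(X, y)])
    w = dp_sgd_step(w, X_adv, y)
```

This is a sketch of the general mechanism, not the repository's Shadow Model pipeline: the interaction it makes visible is that the injected Gaussian noise randomizes the direction of each update, so the model cannot cleanly fit the PGD-hardened examples, which is the robustness degradation the study measures.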
