
AdverRobust

Sep 13, 2024 · Kejia Zhang · 1 min read

An open-source adversarial training framework for enhancing and evaluating the adversarial robustness of deep neural networks (DNNs).
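
The repository's own API is not reproduced on this page, so the following is only a rough sketch of the kind of routine such a framework automates: a minimal PGD-based adversarial training step in PyTorch. The function names (`pgd_attack`, `adversarial_training_step`) and the hyperparameters are illustrative placeholders, not AdverRobust's actual interface.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-infinity PGD adversarial examples for a batch (x, y) in [0, 1]."""
    # Random start inside the eps-ball around the clean inputs.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()     # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                        # keep valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One Madry-style adversarial training step on a single mini-batch."""
    model.eval()                      # freeze BatchNorm statistics while crafting examples
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice this step would run once per mini-batch inside a standard training loop; for the framework's real attack configurations and evaluation tools, see the project repository.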

Last updated on Sep 13, 2024
Tags: Artificial Intelligence, Robust
Author: Kejia Zhang (Master's student)

