Yue Zhao
Assistant Professor
Thomas Lord Department of Computer Science
School of Advanced Computing

University of Southern California

Los Angeles, CA, USA
Email:

Lab Openings. We warmly welcome new members to the FORTIS Lab!

Ph.D. Students (1 Ph.D. student for Fall 2026):
  • Due to the large number of interested candidates, prospective Ph.D. students should have profiles comparable to those of our current junior Ph.D. students -- see the FORTIS Lab page.
  • For Fall 2026, I am only considering Ph.D. applicants interested in crypto systems or distributed systems, especially when combined with AI/ML.
Research Interns (Any Time, All Year Round):
  • We welcome both undergraduate and graduate interns from USC and other institutions.
  • Preferred candidates are located in North America for time zone compatibility.
  • I do not hire in-person summer interns -- I enjoy my summer and work remotely :)
Application Process: To apply for either opportunity, complete the Application Form, email me after submitting it, and review the FORTIS Lab website for more information before reaching out.

Collaboration with Me. I am open to external opportunities for invited talks, research collaborations, and employment (part-time, advising, or visiting roles only). Let's chat by email. I frequently visit major cities, e.g., Seattle, NYC, Chicago, Boston, Atlanta, and the Bay Area, to meet people, give talks, and host social events.

Research Interests: My research aims to build trustworthy, robust, and scalable AI that advances science and benefits society. I focus on rigorous algorithmic foundations, open-source system development, and high-impact applications in both human-centric and scientific domains.

  1. Robust & Trustworthy AI: Detecting the Unexpected.
    I design core algorithms to detect anomalies, out-of-distribution (OOD) data, and outliers across diverse modalities (including graph-structured data). These methods reinforce AI systems against rare or unseen scenarios, enhancing reliability, security, and interpretability.
    Keywords: Anomaly Detection, OOD Detection, Trustworthy AI, Graph Anomaly Detection
  2. AI for Science & Society: Foundation Models in Action.
    By pairing robust detection with large language models (LLMs) and generative AI (GenAI), I tackle interdisciplinary challenges—from scientific discovery to political forecasting and computational social science. This approach bridges algorithmic research with real-world decision-making and public policy.
    Keywords: AI for Science, Generative AI, LLMs, Political Forecasting, Computational Social Science
  3. Scalable, Automated & Open-source ML Systems.
    To ensure widespread adoption, I build reproducible and efficient tools, most notably PyOD (27M+ downloads) for anomaly detection, along with PyGOD, ADBench, and other libraries with 20K+ GitHub stars (top 800 worldwide). My work emphasizes automated model selection, distributed inference, and user-friendly designs, democratizing advanced ML across academia and industry; see the usage sketch after this list.
    Keywords: ML Systems, Automated ML, Open-source AI, Distributed Computing
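
As a minimal sketch of the unified, scikit-learn-style interface these libraries expose (the detector choice and toy data below are illustrative assumptions, not taken from this page; assumes a recent PyOD release):

    import numpy as np
    from pyod.models.iforest import IForest  # one of PyOD's many detectors

    # Toy data: Gaussian inliers plus a handful of shifted outliers.
    rng = np.random.default_rng(42)
    X_train = rng.normal(size=(200, 2))
    X_test = np.vstack([rng.normal(size=(95, 2)),
                        rng.normal(loc=6.0, size=(5, 2))])

    clf = IForest()                         # every PyOD detector shares this API
    clf.fit(X_train)                        # unsupervised fit: no labels needed
    scores = clf.decision_function(X_test)  # higher score = more anomalous
    labels = clf.predict(X_test)            # 0 = inlier, 1 = outlier

Because all detectors follow the same fit / decision_function / predict pattern, swapping algorithms (and automating model selection over them) is a one-line change.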

Biography.


✈ News and Travel

[Jun 2025] We have a new paper accepted to ECML PKDD 2025 on leveraging LLMs for few-shot graph OOD detection; see our Preprint!

[Jun 2025] We have a new paper, “SocialMaze,” introducing a benchmark to evaluate social reasoning in LLMs across games, interactions, and online platforms. See our Preprint!

[May 2025] We have a new paper on benchmarking personalized conversational reasoning for LLMs (PersonaConvBench). See our Preprint!

[May 2025] We have a new paper introducing AD-AGENT, a multi-agent LLM framework for anomaly detection. See our Preprint!

[May 2025] Our paper "AD-LLM: Benchmarking Large Language Models for Anomaly Detection" has been accepted to ACL 2025 Findings! Congrats to Tiankai Yang! See the Preprint.

[May 2025] Our survey paper "From Selection to Generation: A Survey of LLM-based Active Learning" has been accepted to ACL 2025 main conference! See the preprint.

[May 2025] Our tutorial "A Survey on Model Extraction Attacks and Defenses for Large Language Models" was accepted to KDD 2025 as a Lecture-Style Tutorial! Congrats to Kaixiang Zhao, Lincan Li, Kaize Ding, Neil Gong, and Yushun Dong!

[May 2025] We have a new paper on zero-shot graph OOD detection using foundation models (GLIP-OOD); see our Preprint!

[May 2025] We have a new paper introducing GOE-LLM, a framework using LLMs to generate synthetic OOD nodes for graph OOD detection without requiring real OOD data. See our Preprint!

[Apr 2025] We have a new paper on privacy risks in image-based multimodal reasoning (Doxing via the Lens); see our Preprint!

[Apr 2025] Our paper on label-efficient graph open-set learning (LEGO-Learn) has been accepted to TMLR! Read the final version on OpenReview.

[Apr 2025] We have a new paper on mitigating hallucination in LLMs via logical reasoning and retrieval-based verification; see our Preprint!

[Apr 2025] We have a new paper on adversarial prompt optimization to manipulate LLM ranking systems (StealthRank); see our Preprint!

[Apr 2025] We have a new paper on jailbreak detection for MLLMs—JailDAM proposes adaptive memory updates for generalizing to unseen jailbreaks. See our Preprint!

[Apr 2025] DPU: Dynamic Prototype Updating for Multimodal Out-of-Distribution Detection is accepted to CVPR 2025 as a highlight paper; see our Preprint!

[Mar 2025] We have a new paper on hierarchical cross-modal alignment for decoupled multimodal representation learning (DecAlign); see our Preprint!

[Mar 2025] We have a new paper exploring a causal approach to mitigating hallucinations in Vision-Language Models (VLMs); see our Preprint!

[Mar 2025] We have a new paper on secure and efficient on-device OOD detection without backpropagation (SecDOOD); see our Preprint!

[Mar 2025] I joined the newly established ACM Transactions on AI for Science (TAIS) as an Associate Editor!

[Mar 2025] We have a new paper, “TRUSTEVAL: A Dynamic Evaluation Toolkit on Trustworthiness of Generative Foundation Models,” accepted to the NAACL 2025 Demo Track; preprint coming soon!

🏅 Awards and Grants

As Principal Investigator (August 2023 onwards)
Prior to the Principal Investigator Role (before August 2023)