I am a fourth-year Ph.D. student at the University of Chinese Academy of Sciences (UCAS) and the Institute of Automation, Chinese Academy of Sciences (CASIA), advised by Liang Wang.
Previously, I received my B.Eng. degree from Tsinghua University in 2021.
My research interests include LLM reasoning and AI for Drug Discovery (AIDD).
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
Xiangxin Zhou*, Zichen Liu*, Anya Sims*, Haonan Wang, Tianyu Pang, Chongxuan Li, Liang Wang, Min Lin, Chao Du (* equal contribution)
Preprint. 2025
VeriFree is a verifier-free method that bypasses answer verification and instead uses RL to directly maximize the probability of generating the reference answer.
Xiangxin Zhou*, Mingyu Li*, Yi Xiao, Jiahan Li, Dongyu Xue, Zaixiang Zheng, Jianzhu Ma, Quanquan Gu (* equal contribution)
International Conference on Machine Learning (ICML) 2025
CpSDE is a generative algorithm capable of generating diverse types of cyclic peptides given 3D receptor structures.
Xiangxin Zhou*, Yi Xiao*, Haowei Lin, Xinheng He, Jiaqi Guan, Yang Wang, Qiang Liu, Feng Zhou, Liang Wang, Jianzhu Ma (* equal contribution)
International Conference on Learning Representations (ICLR) 2025
DynamicFlow is a full-atom (stochastic) flow model that learns to transform apo pockets and noisy ligands into holo pockets and corresponding 3D ligand molecules.
Xiangxin Zhou, Jiaqi Guan, Yijia Zhang, Xingang Peng, Liang Wang, Jianzhu Ma
Conference on Neural Information Processing Systems (NeurIPS) 2024
DualDiff generates dual-target ligand molecules via compositional sampling based on single-target diffusion models.
Xiangxin Zhou*, Dongyu Xue*, Ruizhe Chen*, Zaixiang Zheng, Liang Wang, Quanquan Gu (* equal contribution)
Conference on Neural Information Processing Systems (NeurIPS) 2024
Direct energy-based preference optimization guides the generation of antibodies with both rational structures and considerable binding affinities to given antigens.
Xiangxin Zhou, Liang Wang, Yichi Zhou
International Conference on Machine Learning (ICML) 2024
Policy gradients in data-scarce regions are ill-defined, leading to instability. Enforcing consistency via score matching allows the policy gradients to be correctly estimated from sufficient data that can be efficiently sampled from the forward SDE (i.e., perturbation).
Xiangxin Zhou*, Xiwei Cheng*, Yuwei Yang, Yu Bao, Liang Wang, Quanquan Gu (* equal contribution)
International Conference on Learning Representations (ICLR) 2024
DecompOpt is a structure-based molecular optimization method based on a controllable and decomposed diffusion model.
Jiaqi Guan*, Xiangxin Zhou*#, Yuwei Yang, Yu Bao, Jian Peng, Jianzhu Ma, Qiang Liu, Liang Wang, Quanquan Gu# (* equal contribution, # corresponding author)
International Conference on Machine Learning (ICML) 2023
DecompDiff is a diffusion model for SBDD with decomposed priors over arms and scaffold, equipped with bond diffusion and additional validity guidance.