Poster
DeepDRK: Deep Dependency Regularized Knockoff for Feature Selection
Hongyu Shen · Yici Yan · Zhizhen Jane Zhao
Model-X knockoffs, among various feature selection methods, have received much attention recently due to their guarantee of false discovery rate (FDR) control. Subsequent to its introduction for parametric designs, the knockoff framework was extended to handle arbitrary data distributions using deep learning-based generative modeling. However, we observe that current implementations of the deep Model-X knockoff framework exhibit limitations. Notably, the "swap property" that knockoffs require frequently fails at the sample level, leading to diminished selection power. To overcome this, we develop Deep Dependency Regularized Knockoff (DeepDRK), a distribution-free deep learning method that strikes a balance between FDR and power. In DeepDRK, we propose a novel formulation of the knockoff model as a learning problem under multi-source adversarial attacks. With a novel perturbation technique, we achieve lower FDR and higher power. Our model outperforms other benchmarks on synthetic, semi-synthetic, and real-world data, especially when the sample size is small and the data distribution is complex.
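For readers unfamiliar with how knockoffs turn feature statistics into an FDR-controlled selection, the sketch below implements the standard knockoff+ threshold from the original Model-X framework (not DeepDRK's deep generative model). The `W` statistics shown are hypothetical values; in practice each `W[j]` contrasts the importance of feature `j` against its knockoff copy.

```python
import numpy as np

def knockoff_plus_threshold(W, q):
    """Return the knockoff+ threshold: the smallest t > 0 such that
    (1 + #{j : W_j <= -t}) / max(1, #{j : W_j >= t}) <= q.
    Selecting {j : W_j >= t} then controls the FDR at level q."""
    candidates = np.sort(np.abs(W[W != 0]))
    for t in candidates:
        fdp_estimate = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_estimate <= q:
            return t
    return np.inf  # no threshold works: select nothing

# Hypothetical feature statistics (large positive W_j suggests a true signal).
W = np.array([5.0, 4.0, 3.0, 2.5, -1.0, 0.5, -0.2, 6.0])
t = knockoff_plus_threshold(W, q=0.3)
selected = np.where(W >= t)[0]
print(t, selected.tolist())  # -> 2.5 [0, 1, 2, 3, 7]
```

The ratio inside the loop is a conservative estimate of the false discovery proportion, exploiting the sign-symmetry of null `W_j` that a valid swap property guarantees; this is exactly the property whose sample-level failure the abstract identifies as the source of power loss.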