
1. School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China
2. Department of Cryptography Science and Technology, Beijing Electronic Science and Technology Institute, Beijing 100070, China
3. School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230026, China
4. School of Cyber Science and Technology, Beihang University, Beijing 100091, China
Received: 2 December 2025
Revised: 1 March 2026
Accepted: 2 March 2026
Published: 20 April 2026
Yue Ziyan, Xu Shengwei, Wang Zhiqiang, et al. Fine-grained model capability access control mechanism based on machine unlearning[J]. Journal on Communications, 2026, 47(4): 80-96. DOI: 10.11959/j.issn.1000-436x.2026066.
A fine-grained model capability access control mechanism named Model-Guard was proposed to address the lack of capability access control in deployed artificial intelligence models, which may lead to unauthorized misuse of model capabilities. Without retraining, sensitive task-related parameters were identified by the selective synaptic dampening (SSD) algorithm and attenuated to disable sensitive capabilities by default. An authorization factor calculation method was designed to restore model capabilities for authorized users. To ensure secure distribution of authorization factors, a hybrid scheme combining symmetric encryption and ciphertext-policy attribute-based encryption (CP-ABE) was adopted, and a Bloom filter was introduced to reduce verification overhead. Experimental results demonstrated that Model-Guard achieved precise capability isolation and restoration in image recognition tasks while significantly reducing deployment and maintenance costs.
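
The SSD step named in the abstract can be pictured with a short sketch. The following is a minimal illustration, assuming a PyTorch model and two data loaders (the full training distribution and the sensitive task to be disabled); the function names, the importance estimate, and the hyperparameters `alpha` and `lam` are illustrative stand-ins, not the paper's exact implementation.

```python
# Minimal sketch of SSD-style selective dampening (illustrative, not the
# paper's code): parameters far more important to the sensitive task than
# to the model overall are shrunk toward zero.
import torch

def importances(model, loader, loss_fn, device="cpu"):
    """Diagonal-Fisher-style importance: mean squared gradient per parameter."""
    imps = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                imps[n] += p.grad.detach() ** 2
    return {n: v / max(len(loader), 1) for n, v in imps.items()}

@torch.no_grad()
def dampen(model, imp_full, imp_sensitive, alpha=10.0, lam=1.0):
    """Dampen weights whose sensitive-task importance dominates their
    overall importance; beta <= 1, so selected weights are only ever shrunk."""
    for n, p in model.named_parameters():
        sel = imp_sensitive[n] > alpha * imp_full[n]
        beta = torch.clamp(lam * imp_full[n] / (imp_sensitive[n] + 1e-12), max=1.0)
        p[sel] *= beta[sel]
```

On one plausible reading of the mechanism, the per-parameter correction needed to undo this dampening (the selected indices and their original values) is the kind of payload an authorization factor would carry; the paper's actual factor computation is not reproduced here.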
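For the distribution step, the hybrid pattern from the abstract (symmetric encryption for the bulk payload, CP-ABE for the key) can be sketched as follows, assuming the Python `cryptography` package for AES-GCM. The `cpabe_encrypt`/`cpabe_decrypt` helpers are hypothetical placeholders for a real CP-ABE scheme (e.g., BSW07 via charm-crypto), and the package layout is invented for illustration.

```python
# Minimal sketch of hybrid distribution of an authorization factor:
# fresh AES-256-GCM key encrypts the (potentially large) factor, and only
# the short key is wrapped under the CP-ABE access policy.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def cpabe_encrypt(key: bytes, policy: str) -> bytes:
    # Placeholder: a real deployment would wrap `key` under `policy` with a
    # CP-ABE scheme; returned as-is here only so the sketch runs end to end.
    return key

def cpabe_decrypt(wrapped: bytes, user_attribute_key=None) -> bytes:
    # Placeholder: in a real CP-ABE scheme, decryption succeeds only if the
    # user's attribute set satisfies the ciphertext policy.
    return wrapped

def package_factor(auth_factor: bytes, policy: str) -> dict:
    """Encrypt the factor symmetrically; wrap the key under the policy."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    return {
        "nonce": nonce,
        "ciphertext": AESGCM(key).encrypt(nonce, auth_factor, None),
        "wrapped_key": cpabe_encrypt(key, policy),
        "policy": policy,
    }

def unpack_factor(pkg: dict, user_attribute_key=None) -> bytes:
    key = cpabe_decrypt(pkg["wrapped_key"], user_attribute_key)
    return AESGCM(key).decrypt(pkg["nonce"], pkg["ciphertext"], None)
```

The split mirrors standard hybrid encryption: attribute-based operations are expensive and size-limited, so only the short symmetric key goes under the attribute policy while the bulk payload travels under fast AES-GCM.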
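Finally, the Bloom filter's role in cutting verification overhead can be shown with a self-contained sketch: the deployer inserts fingerprints of valid authorization factors, and each request is probed against the filter before any costly CP-ABE decryption or parameter restoration is attempted. The sizes, hash count, and fingerprinting scheme below are illustrative assumptions.

```python
# Minimal Bloom filter sketch: constant-time probabilistic membership check
# with no false negatives (a rare false positive is still caught by the
# full cryptographic verification that follows).
import hashlib

class Bloom:
    def __init__(self, m_bits=8192, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _indices(self, item: bytes):
        for i in range(self.k):  # k independent hashes via a salt byte
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for j in self._indices(item):
            self.bits[j // 8] |= 1 << (j % 8)

    def maybe_contains(self, item: bytes) -> bool:
        return all(self.bits[j // 8] & (1 << (j % 8)) for j in self._indices(item))

# Usage: pre-screen a submitted factor before expensive verification.
bf = Bloom()
bf.add(hashlib.sha256(b"authorized-factor-task-A").digest())
assert bf.maybe_contains(hashlib.sha256(b"authorized-factor-task-A").digest())
```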
Gursoy D, Cai R Y. Artificial intelligence: an overview of research trends and future directions[J]. International Journal of Contemporary Hospitality Management, 2025, 37(1): 1-17.
Apicella A, Isgrò F, Prevete R. Don't push the button! Exploring data leakage risks in machine learning and transfer learning[J]. Artificial Intelligence Review, 2025, 58(11): 339.
Anderljung M, Hazell J, von Knebel M. Protecting society from AI misuse: when are restrictions on capabilities warranted?[J]. AI & Society, 2025, 40(5): 3841-3857.
Rakin A S, Chowdhuryy M H I, Yao F, et al. DeepSteal: advanced model extractions leveraging efficient weight stealing in memories[C]//Proceedings of the 2022 IEEE Symposium on Security and Privacy (SP). Piscataway: IEEE Press, 2022: 1157-1174.
Tang Q, Su C, Tian Y, et al. YOLO-SS: optimizing YOLO for enhanced small object detection in remote sensing imagery[J]. The Journal of Supercomputing, 2025, 81: 303.
Panigrahi A, Saunshi N, Zhao H, et al. Task-specific skill localization in fine-tuned language models[C]//International Conference on Machine Learning. New York: PMLR, 2023: 27011-27033.
Si C, Shi Z, Zhang S, et al. Unleashing the power of task-specific directions in parameter efficient fine-tuning[C]//The Thirteenth International Conference on Learning Representations. Vancouver: ICLR, 2024: 1-24.
Li Z T, Meng X F, Wang L X, et al. Survey on machine unlearning[J]. Journal of Software, 2025, 36(4): 1637-1664.
He L S, Yang Y. Review of machine unlearning[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(11): 2872-2886.
Chodey M D, Gouthami E, Rao K, et al. Privacy-preserving machine learning models[C]//Proceedings of the 2025 International Conference on Multi-Agent Systems for Collaborative Intelligence (ICMSCI). Piscataway: IEEE Press, 2025: 1521-1527.
Schelter S. Amnesia: towards machine learning models that can forget user data very fast[C]//Proceedings of the 1st International Workshop on Applied AI for Database Systems and Applications (AIDB19). Saarland: DBLP, 2019: 1-4.
Zanella-Béguelin S, Wutschitz L, Tople S, et al. Analyzing information leakage of updates to natural language models[C]//Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM Press, 2020: 363-375.
Shokri R, Stronati M, Song C Z, et al. Membership inference attacks against machine learning models[C]//Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP). Piscataway: IEEE Press, 2017: 3-18.
Cao Y Z, Yang J F. Towards making systems forget with machine unlearning[C]//Proceedings of the 2015 IEEE Symposium on Security and Privacy. Piscataway: IEEE Press, 2015: 463-480.
Serra J, Suris D, Miron M, et al. Overcoming catastrophic forgetting with hard attention to the task[C]//International Conference on Machine Learning. New York: PMLR, 2018: 4548-4557.
Lee J, Mai Z, Yoo J, et al. Continual unlearning for text-to-image diffusion models: a regularization perspective[PP]. V2. (2025-11-11)[2025-12-02]. arXiv: 2511.07970.
Chen M, Zhang Z K, Wang T H, et al. When machine unlearning jeopardizes privacy[C]//Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM Press, 2021: 896-911.
Schelter S, Grafberger S, Dunning T. HedgeCut: maintaining randomised trees for low-latency machine unlearning[C]//Proceedings of the 2021 International Conference on Management of Data. New York: ACM Press, 2021: 1545-1557.
Bourtoule L, Chandrasekaran V, Choquette-Choo C A, et al. Machine unlearning[C]//Proceedings of the 2021 IEEE Symposium on Security and Privacy (SP). Piscataway: IEEE Press, 2021: 141-159.
Tarun A K, Chundawat V S, Mandal M, et al. Fast yet effective machine unlearning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(9): 13046-13055.
He K, Wang J H, Yu D, et al. Adaptive sampling-based machine unlearning method[J]. Netinfo Security, 2025, 25(4): 630-639.
Graves L, Nagisetty V, Ganesh V. Amnesiac machine learning[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(13): 11516-11524.
Ginart A, Guan M, Valiant G, et al. Making AI forget you: data deletion in machine learning[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. Massachusetts: MIT Press, 2019: 3518-3531.
Mirzasoleiman B, Karbasi A, Krause A. Deletion-robust submodular maximization: data summarization with "the right to be forgotten"[C]//International Conference on Machine Learning. New York: PMLR, 2017: 2449-2458.
Wang Y T, Shi B J, Zhang H. TSP: task-specific pruning for personalized image classification on edge devices[C]//Proceedings of the ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE Press, 2025: 1-5.
Mishra A K, Chakraborty M. Does local pruning offer task-specific models to learn effectively?[C]//Proceedings of the Student Research Workshop Associated with RANLP 2021. [S.l.: s.n.], 2021: 118-125.
Wang G R, Yang J, Sun Y R. Task-oriented memory-efficient pruning-adapter[PP]. V2. (2023-04-06)[2025-12-02]. arXiv: 2303.14704.
Zhou J X, Bao W D, Wang J, et al. CUT: pruning pre-trained multi-task models into compact models for edge devices[C]//International Conference on Intelligent Computing. Berlin: Springer, 2025: 164-177.
Reda W, Jangda A, Chintalapudi K. How many parameters does your task really need? Task-specific pruning with LLM-Sieve[PP]. V2. (2025-10-04)[2025-12-02]. arXiv: 2505.18350.
Chen T Y, Huang S H, Xie Y, et al. Task-specific expert pruning for sparse mixture-of-experts[PP]. V2. (2022-06-02)[2025-12-02]. arXiv: 2206.00277.
Lu X D, Liu Q, Xu Y H, et al. Not all experts are equal: efficient expert pruning and skipping for mixture-of-experts large language models[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2024: 6159-6172.
Chowdhury M N R, Wang M, El Maghraoui K, et al. A provably effective method for pruning experts in fine-tuned sparse mixture-of-experts[PP]. V3. (2024-05-30)[2025-12-02]. arXiv: 2405.16646.
Han Z Y, Liu X T, Zhou R T, et al. Faster, smaller, and smarter: task-aware expert merging for online MoE inference[PP]. V2. (2026-01-23)[2025-12-02]. arXiv: 2509.19781.
Pochinkov N, Schoots N. Dissecting language models: machine unlearning via selective pruning[PP]. V2. (2024-07-24)[2025-12-02]. arXiv: 2403.01267.
Liu Z Y, Dou G Y, Yuan X C, et al. Modality-aware neuron pruning for unlearning in multimodal large language models[C]//Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2025: 5913-5933.
Foster J, Schoepf S, Brintrup A. Fast machine unlearning without retraining through selective synaptic dampening[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2024, 38(11): 12043-12051.
Zhang J, Chen D D, Liao J, et al. Deep model intellectual property protection via deep watermarking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(8): 4005-4020.
Huang W, Wang Y G, Cheng A D, et al. A fast, performant, secure distributed training framework for LLM[C]//Proceedings of the ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE Press, 2024: 4800-4804.
Lloret-Talavera G, Jorda M, Servat H, et al. Enabling homomorphically encrypted inference for large DNN models[J]. IEEE Transactions on Computers, 2022, 71(5): 1145-1155.
Ji Z L, Lipton Z C, Elkan C. Differential privacy and machine learning: a survey and review[PP]. V1. (2014-12-24)[2025-12-02]. arXiv: 1412.7584.
Li L, Fan Y X, Tse M, et al. A review of applications in federated learning[J]. Computers & Industrial Engineering, 2020, 149: 106854.
Wang Z Q, Du H H, Wang J Y, et al. SECNeuron: reliable and flexible abuse control in local LLMs via hybrid neuron encryption[PP]. V1. (2025-06-05)[2025-12-02]. arXiv: 2506.05242.
Tang Y H, Li X S, Liu F C, et al. Pangu Pro MoE: mixture of grouped experts for efficient sparsity[PP]. V2. (2025-05-28)[2025-12-02]. arXiv: 2505.21411.
Krizhevsky A, Hinton G. Convolutional deep belief networks on CIFAR-10[J]. Unpublished Manuscript, 2010, 40(7): 1-9.
Wang A, Singh A, Michael J, et al. GLUE: a multi-task benchmark and analysis platform for natural language understanding[C]//Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. [S.l.: s.n.], 2018: 353-355.