CIL-LLM: Incremental Learning Framework Based on Large Language Models for Category Classification
Blog Article
To enhance classification accuracy in class-incremental learning (CIL) for text classification and to mitigate catastrophic forgetting, this paper introduces a CIL framework based on large language models (CIL-LLM). The CIL-LLM framework selects representative samples through sampling and compression, and leverages the strong in-context learning ability of the LLM to distill key skills, which serve as the basis for classification, thereby reducing storage costs. Keyword matching is used to select the optimal skills, which are then formulated into prompts that guide a downstream weak LLM in classification, improving accuracy.
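To make the skill-selection and prompting step more concrete, here is a minimal sketch in Python. It assumes a skill is stored as a (category, keywords, guideline) record; all names (Skill, SKILL_REPOSITORY, select_skills, build_prompt) are illustrative assumptions, not identifiers from CIL-LLM.

```python
# Minimal sketch (not the paper's code): keyword matching selects distilled
# "skills" from a repository, and the selected skills are wrapped into a
# prompt for a downstream weak LLM.
from dataclasses import dataclass


@dataclass
class Skill:
    category: str        # class label this skill was distilled for
    keywords: set[str]   # keywords extracted from representative samples
    guideline: str       # distilled classification rule in natural language


SKILL_REPOSITORY: list[Skill] = [
    Skill("sports", {"match", "league", "coach"}, "Texts about games, teams, or athletes."),
    Skill("finance", {"stock", "market", "bond"}, "Texts about trading, prices, or investment."),
]


def select_skills(text: str, top_k: int = 2) -> list[Skill]:
    """Rank skills by keyword overlap with the input text (the matching step)."""
    tokens = set(text.lower().split())
    scored = sorted(SKILL_REPOSITORY,
                    key=lambda s: len(s.keywords & tokens),
                    reverse=True)
    return scored[:top_k]


def build_prompt(text: str) -> str:
    """Compose the selected skills into an instruction prompt for the weak LLM."""
    rules = "\n".join(f"- {s.category}: {s.guideline}" for s in select_skills(text))
    return (f"Use the following classification skills:\n{rules}\n\n"
            f"Classify the text into one of the listed categories.\n"
            f"Text: {text}\nLabel:")


if __name__ == "__main__":
    print(build_prompt("The coach praised the team after the league match."))
```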
Through skill fusion based on knowledge distillation, the framework effectively expands and updates the skill repository while ensuring the learning of both new and old categories (a sketch of this fusion step follows this paragraph). Comparative experiments on the THUCNews dataset show that the CIL-LLM framework improves average accuracy by 6.3 percentage points and reduces the performance degradation rate by 3.1 percentage points compared with the existing L-SCL method. In addition, in the ablation experiments, the SLEICL model enhanced by the CIL-LLM framework shows an increase in average accuracy of 10.4 percentage points and a reduction in performance degradation rate of 3.3 percentage points compared with the original model. These results further validate that sample compression, keyword matching, and skill fusion all contribute to improving accuracy and reducing performance degradation in the model.
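The following is a minimal sketch of the skill-fusion idea, reusing the hypothetical Skill record from the earlier snippet. The merge_skills helper is an assumption for illustration; in the framework itself the fusion of guidelines is performed via knowledge distillation by the strong LLM rather than by simple concatenation.

```python
# Minimal sketch (not CIL-LLM's actual API): fuse newly distilled skills into
# the repository, updating existing categories (old knowledge) and appending
# unseen categories (new knowledge).
def merge_skills(repository: list[Skill], new_skills: list[Skill]) -> list[Skill]:
    by_category = {s.category: s for s in repository}
    for new in new_skills:
        old = by_category.get(new.category)
        if old is None:
            # Brand-new category: simply add its skill to the repository.
            by_category[new.category] = new
        else:
            # Existing category: fuse old and new skills.
            by_category[new.category] = Skill(
                category=new.category,
                keywords=old.keywords | new.keywords,
                # Placeholder for the distillation-based fusion of guidelines.
                guideline=f"{old.guideline} {new.guideline}",
            )
    return list(by_category.values())
```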