- Updated: 2023-08-24 13:56:53
- First published: 2023-08-23 23:21:29
Objectives of Supervised Fine-Tuning:
- Enhance Specific Task Performance: Align the model's instruction-following behavior with particular tasks.
- Domain Adaptation: Adapt the model to specialized domains.
- Improve Interpretability and Controllability: Make the model easier to understand and to steer.
Overall, the goal is to improve robustness, meaning the model's ability to respond reliably to varied and unexpected inputs.
Core Considerations:
- Diversity: To prevent overfitting, the data must be diverse. Diversity improves not only generalization but also reasoning ability, and it means covering many functional categories, not just many knowledge categories. Keep the data volume per category as balanced as possible; otherwise the model becomes oversensitive to over-represented categories and undersensitive to the rest. Diversity can also be raised through prompt-template construction or data augmentation, for example expanding a Chinese-to-English translation instruction into many phrasings (see the sketch after this list).
- Avoid Mistaking SFT for Data Supplementation: SFT is not simply a way to feed the model more facts; it may memorize some of the data, but that is not the main purpose.
- Few-Shot and CoT (Chain-of-Thought) Data Integration: Mixing such data into training helps the model understand instructions and improves its multi-turn dialogue ability.
- Quality over Quantity: In SFT, data quality matters far more than volume; roughly 10,000 carefully annotated examples are typically enough for good results. Expanding the data volume without adding diversity brings sharply diminishing returns, whereas improving data quality yields clear gains.
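As a minimal sketch of the template-based augmentation mentioned in the Diversity point, the snippet below expands one Chinese-to-English translation pair into several differently phrased instruction records; the templates and the `augment` helper are illustrative assumptions, not from any particular library.

```python
import random

# Hypothetical prompt templates for a Chinese-to-English translation task;
# varying the instruction's surface form while keeping the task fixed is
# one inexpensive way to add diversity.
TEMPLATES = [
    "Translate the following Chinese text into English: {text}",
    "Please render this Chinese sentence in English: {text}",
    "What is the English translation of: {text}",
]

def augment(pairs):
    """Expand each (chinese, english) pair into several SFT records
    whose instructions are phrased differently."""
    records = []
    for zh, en in pairs:
        for template in random.sample(TEMPLATES, k=2):
            records.append({"instruction": template.format(text=zh),
                            "output": en})
    return records

print(augment([("你好，世界", "Hello, world")]))
```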
Data Quality Requirements:
- Length Constraints: Neither the question nor the answer should be excessively long or short; ideally a question-answer pair stays under 4k tokens (see the filtering sketch after this list).
- No Incorrect Answers: Select only high-quality data with verified answers.
- Special Industry Requirements: For domains demanding strong reasoning, gather as much CoT data as possible.
- Diverse NLP Abilities Required: Include classification, structured output, creative writing, multi-turn dialogue, ancient Chinese translation, keyword recognition, reading comprehension, idiom explanation, text correction, sentiment analysis, entity recognition, programming, text matching, copywriting, song reviews, open questions, composition writing, storytelling, structured extraction, summarizing, closed questions, CoT, objective test questions, brainstorming, etc. (avoid using only vertical-domain data).
- Vertical Domain Data Proportions: Keep the share low; domain knowledge is learned better through secondary pre-training (PT), and vertical-domain data can even be left out of the SFT set entirely.
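To enforce the length constraint above mechanically, records can be filtered by token count. A sketch assuming a Hugging Face tokenizer; the model name is a placeholder for whichever model you are fine-tuning.

```python
from transformers import AutoTokenizer

MAX_TOKENS = 4096  # the ~4k-token budget suggested above

# Placeholder model name; use the tokenizer of the model you fine-tune.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def within_budget(record):
    """Keep a record only if question plus answer fit the token budget."""
    text = record["instruction"] + record["output"]
    return len(tokenizer.encode(text)) <= MAX_TOKENS

dataset = [{"instruction": "Summarize the following article: ...",
            "output": "..."}]
filtered = [r for r in dataset if within_budget(r)]
print(f"kept {len(filtered)} of {len(dataset)} records")
```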
Examples:
Good Dataset: Question: Xiao Ming's mother has three children; the first is named Yi Mao and the second Er Mao. What is the third child's name? Answer: The question says these are the children of Xiao Ming's mother, and the first two are Yi Mao and Er Mao, so by the premise the third child must be Xiao Ming himself.
Poor Dataset: Question: Same as above. Answer: Xiao Ming. (The direct answer lacks a reasoning process; CoT-style answers like the one above are preferred.)
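Written out as training records, the contrast above might look like the sketch below; the `instruction`/`output` field names are an assumption, so adapt them to whatever schema your training framework expects.

```python
# The same question as two SFT records; only the supervised answer differs.
question = (
    "Xiao Ming's mother has three children. The first is named Yi Mao "
    "and the second Er Mao. What is the third child's name?"
)

good_record = {
    "instruction": question,
    # CoT target: the reasoning chain is part of the supervised answer.
    "output": ("The question says these are the children of Xiao Ming's "
               "mother, and the first two are Yi Mao and Er Mao, so the "
               "third child must be Xiao Ming himself."),
}

poor_record = {
    "instruction": question,
    # Bare answer: trains the model to skip its reasoning process.
    "output": "Xiao Ming.",
}
```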
Q & A
Why include coding ability in SFT? Teaching the model to write code trains it to decompose problems and assemble solutions, which markedly strengthens its reasoning and structured-output capabilities. Research supports this; similarly, gains in translation ability have been observed to boost problem-solving and other seemingly unrelated skills.
Why do I not recommend doing SFT without doing PT?
If you do not need secondary pre-training, most models already ship a Chat version; just use that directly. SFT demands very high data quality, and fine-tuning a Base model on low-quality data can easily make it worse rather than better. Raising data quality is itself costly.
How do you judge whether SFT was effective?
This is a very complex question, but you can try decomposing your problem by scenario and letting an AI assist with the analysis (see the figure in the original post). You can then follow up with your specific question and have the AI analyze it step by step.
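One concrete way to follow this advice is to score a held-out test set per scenario so weak categories stand out instead of being averaged away; in this minimal sketch, `model_answer` and the exact-match scoring rule are placeholders to be swapped for your own model call and a task-appropriate metric.

```python
from collections import defaultdict

def model_answer(prompt):
    """Placeholder for a call to your fine-tuned model."""
    raise NotImplementedError

def evaluate(test_set):
    """Report accuracy per scenario; each case is a dict with the keys
    "scenario", "prompt", and "reference"."""
    hits, totals = defaultdict(int), defaultdict(int)
    for case in test_set:
        totals[case["scenario"]] += 1
        # Exact match is a crude stand-in; for open-ended answers,
        # consider a task-specific metric or an LLM-as-judge comparison.
        if model_answer(case["prompt"]).strip() == case["reference"].strip():
            hits[case["scenario"]] += 1
    return {s: hits[s] / totals[s] for s in totals}
```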
Thanks for the reply! Clang reuses GCC's version identifier when generating output. Can I take that to mean Clang 18.1.4 generates in the GCC 4.8 format, so that using gcc 9.4's gcov afterwards will run into incompatibility problems?
> Sorry, I am not too clear on this part either; try asking an AI for help.
I ran into all sorts of problems along the way; right now I am stuck at "UDC core: g_serial: couldn't find an available UDC". Do you have any suggestions, or did I get one of the earlier settings wrong?
> This requirement is quite unusual. It is possible, but fairly difficult; you would need to modify the driver configuration.
Great idea!!
As for hex editors, I could not find a particularly good one online (I am a beginner). In the end I searched for "hex" in the VS Code extension marketplace, installed the first result, and could then do hex editing directly in VS Code.