2023-12-01
Administrator (kosenadmin)
Large language models appear promising for handling reasoning problems, but their underlying solving mechanisms remain unclear. Large language models are expected to establish a new paradigm in artificial intelligence and in society as a whole. However, a major challenge of large language models is the massive amount of resources they require for training and operation. To address this issue, researchers are actively exploring compact large language models that retain the capabilities of full-scale models while notably reducing the model size. These research efforts focus mainly on improving pretraining, instruction tuning, and alignment. Chain-of-thought prompting, in turn, is a technique aimed at enhancing the reasoning ability of large language models: given a problem, the model produces an answer through a series of intermediate reasoning steps. By guiding the model through a multi-step problem-solving process, chain-of-thought prompting can improve the model's reasoning skills. Mathematical reasoning, a fundamental aspect of human intelligence, has played a crucial role in advancing large language models toward human-level performance and is therefore being widely explored in the context of large language models. Such research extends to various domains, including geometry problem solving, tabular mathematical reasoning, and visual question answering, among other areas.
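To make the chain-of-thought idea concrete, the following minimal Python sketch assembles a few-shot prompt that pairs a sample question with explicit intermediate reasoning steps before the final answer. The `generate` function and the exemplar text are hypothetical stand-ins, not part of the original article; any real large-language-model completion API could be substituted.

```python
# Minimal sketch of chain-of-thought (CoT) prompting.
# `generate` is a hypothetical placeholder for an LLM completion call;
# plug in a real API or local model to actually run inference.

def generate(prompt: str) -> str:
    """Placeholder for a large language model completion call."""
    raise NotImplementedError("substitute a real LLM backend here")

# One worked exemplar whose answer spells out intermediate reasoning steps,
# so the model is nudged to imitate the same step-by-step pattern.
COT_EXEMPLAR = (
    "Q: A shop sells pens at 3 dollars each. How much do 4 pens cost?\n"
    "A: Each pen costs 3 dollars. 4 pens cost 4 * 3 = 12 dollars. "
    "The answer is 12.\n\n"
)

def cot_prompt(question: str) -> str:
    """Prepend the exemplar and an explicit step-by-step cue."""
    return COT_EXEMPLAR + f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    # Print the assembled prompt; pass it to generate(...) with a real model.
    print(cot_prompt(
        "A train travels 60 km per hour for 2 hours. How far does it go?"
    ))
```

The exemplar plus the "Let's think step by step" cue is the core of the technique: the model is steered toward emitting intermediate steps rather than jumping directly to a final answer.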
Ⅰ. Introduction
Ⅱ. Recent Trends in Large Language Models
Ⅲ. Research Trends in Mathematical Reasoning
Ⅳ. Conclusion
Glossary
Abbreviations