Accuracy Improves When the Same Question Is Entered Twice

On December 17, 2025, a research team at Google DeepMind reported a groundbreaking method: simply entering the same prompt twice can significantly improve the answer accuracy of large language models. Dramatic improvements were observed particularly in tasks such as information retrieval, extraction, and classification; in some tests, accuracy jumped from 21% to 97%. By contrast, complex reasoning tasks that rely on Chain-of-Thought prompting showed almost no change (5 wins, 1 loss, 22 draws), reportedly because the models already repeat information internally during their reasoning process.

Source: "LLMs Become More Accurate When the Same Question Is Entered Twice — Google Researchers Report the Effect of Prompt Repetition in a Short Paper"
https://ledge.ai/articles/prompt_repetition_improves_llm_accuracy

I asked generative AI to investigate this method, and then used NotebookLM to turn the results into an infographic and slide materials.

Please note that the investigations and analyses produced by generative AI are based solely on publicly available information, may not necessarily reflect actual conditions, and may contain errors. Please keep this in mind when consulting the material.
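For readers who want to try the idea themselves, the mechanics reduce to duplicating the question before it reaches the model. Below is a minimal sketch in Python; the blank-line separator between the two copies and the `call_model` function are assumptions of mine, not details from the DeepMind report, which may format the repetition differently.

```python
def repeat_prompt(prompt: str, copies: int = 2) -> str:
    """Return the prompt concatenated with itself `copies` times,
    separated by blank lines, so the model reads the question twice."""
    return "\n\n".join([prompt] * copies)


def ask_with_repetition(call_model, question: str) -> str:
    """Send the duplicated prompt to an LLM.

    `call_model` is a hypothetical stand-in: any function that takes a
    prompt string and returns the model's text response.
    """
    return call_model(repeat_prompt(question))


# Example usage (with your own model-calling function):
# answer = ask_with_repetition(my_llm_client, "Which element has atomic number 26?")
```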