In an experiment by LifePrompt Co., Ltd. (株式会社LifePrompt), the 2024 Common Test for University Admissions (five subject areas, seven subjects), held on Saturday, January 13 and Sunday, January 14, was administered to three generative AIs: GPT-4 (OpenAI), Bard (Google), and Claude 2 (Anthropic). Comparing their performance, the correct-answer rates were 66% for GPT-4, 43% for Bard, and 51% for Claude 2, with GPT-4 the clear winner across all five subject areas and seven subjects. (For reference, the average correct-answer rate among human test takers was 60%.)
So GPT-4 really is that impressive.
【2024年最新】共通テストを色んな生成AIに解かせてみた(ChatGPT vs Bard vs Claude2) ("[2024 Latest] We Had Various Generative AIs Take the Common Test (ChatGPT vs Bard vs Claude2)")
https://note.com/lifeprompt/n/n87f4d5510100#a8a9dde2-da26-4460-be5b-d46ef283a7d1
Author: 萬秀憲