• Home
  • Services
  • About
  • Contact
  • Blog
  • 知財活動のROICへの貢献
  • 生成AIを活用した知財戦略の策定方法
  • 生成AIとの「壁打ち」で、新たな発明を創出する方法

よろず知財コンサルティング Blog

制御性T細胞(Treg)に関する技術

5/1/2026

0 Comments

 
2025年のノーベル生理学・医学賞を受賞した坂口志文・大阪大学特任教授が発見した、免疫の過剰な働きを制御する「制御性T細胞(Treg)」の実用化に向けた動きが見えてきたとのことです。しかし、特許件数は米国勢が上位を占めていて、製造特許や用途特許などを海外勢に押さえられ、産業化で日本が後れをとっているのが問題視されています。
この「制御性T細胞(Treg)」に関する技術について、生成AIに深掘りさせました。さらに、報告結果をNotebookLMでインフォグラフィック、スライド資料にさせました。
なお、生成AIによる調査・分析結果は、公開された情報からだけの分析であり、必ずしも実情を示したものではないこと、誤った情報も含まれていることについてはご留意されたうえで、ご参照ください。

坂口氏の制御性T細胞の特許23件 でも米国勢先行、実用化支援必要
https://www.nikkei.com/article/DGXZQOSG119B20R11C25A1000000/


Technology Related to Regulatory T Cells (Tregs)
It has been reported that concrete steps are now coming into view toward the practical application of regulatory T cells (Tregs), which control excessive immune responses and were discovered by Shimon Sakaguchi, Specially Appointed Professor at Osaka University and winner of the 2025 Nobel Prize in Physiology or Medicine.
However, U.S.-based entities dominate the related patent filings, and key patents, including those covering manufacturing methods and applications, are being secured by overseas players, raising concerns that Japan is falling behind in industrialization.
With respect to this technology related to regulatory T cells (Tregs), I asked generative AI to conduct an in-depth analysis. In addition, the results of this analysis were converted into infographics and slide materials using NotebookLM.
Please note that the research and analysis conducted by generative AI are based solely on publicly available information. As such, they do not necessarily reflect the actual situation and may include inaccuracies. I ask that you keep this in mind when referring to the materials.
0 Comments

「指示待ちAI」から「共に成長するAI」への転換

4/1/2026

0 Comments

 
これまでのAIは、即時に答えを返す仕組み(直感的処理)、論理的に考えて結論を出す仕組み(熟考的処理)を備えた高性能な“作業エンジン”として活用されてきました。しかし、このタイプのAIは、本質的には「指示待ち」であり、長期的な目的や文脈を自ら管理することはできません。
近年提案されている「System 3」は、この限界を超えるための経営判断に近いメタレイヤーに相当します。System 3は、自らの判断プロセスを監視・修正する能力(メタ認知)、相手や組織全体の意図を推測する能力、短期成果だけでなく長期価値を基準に行動する動機付け、過去の意思決定と結果を蓄積・再利用する記憶、を統合し、AIの行動を“点”ではなく“時間軸”で最適化する役割を担います。
このSystem 3を実装したフレームワークがSophiaです。
Sophiaの本質は、AIを「業務自動化ツール」から、方針・価値観・学習履歴を内在化した“準・組織メンバー”へと進化させる点にあります。
経営・戦略の観点で見ると、そのインパクトは以下に集約されます。
・AIが戦略意図を理解したうえで行動するため、単発業務ではなくプロジェクト全体を任せられる
・過去の成功・失敗を学習し、意思決定の質が時間とともに向上する
・人が常に監督しなくても、方針逸脱や短期最適を自己修正できる
・結果として、AIが人の判断を代替するのではなく、経営判断を支える持続的パートナーになる
つまり、Sophia/System 3型AIは、
「指示待ちAI」から「共に成長するAI」への転換を意味します。
このSystem 3を実装したフレームワーク「Sophia」について、生成AIに調査させました。さらに、結果をNotebookLMでインフォグラフィック、スライド資料にさせました。
なお、生成AIによる調査・分析結果は、公開された情報からだけの分析であり、必ずしも実情を示したものではないこと、誤った情報も含まれていることについてはご留意されたうえで、ご参照ください。

[Submitted on 20 Dec 2025]
Sophia: A Persistent Agent Framework of Artificial Life
Mingyang Sun, Feng Hong, Weinan Zhang
https://arxiv.org/abs/2512.18202

Sophia: A Persistent Agent Framework of Artificial Life
https://arxiv.org/pdf/2512.18202

Sophia: AIが自ら学び成長する「System 3」アーキテクチャ、メタ認知で80%の推論削減と自律的目標生成を実現(2512.18202)【論文解説シリーズ】
https://www.youtube.com/watch?v=V9kj9WzS5Tw&t=10s


From “Instruction-Following AI” to “AI That Grows Together with Us”
Until now, AI has been used primarily as a high-performance “work engine” equipped with mechanisms for producing immediate answers (intuitive processing) and for reasoning logically to reach conclusions (deliberative processing). However, this type of AI is essentially instruction-following: it cannot autonomously manage long-term goals or broader context.
The recently proposed concept of “System 3” corresponds to a meta-layer akin to executive decision-making, designed to overcome these limitations. System 3 integrates capabilities such as: monitoring and correcting its own decision-making processes (metacognition); inferring the intentions of counterparts and the organization as a whole; acting based not only on short-term outcomes but also on long-term value; and accumulating and reusing memories of past decisions and their results. Through this integration, System 3 optimizes AI behavior not as isolated “points,” but along a continuous time axis.
The framework that implements this System 3 is Sophia.
At its core, Sophia evolves AI from a mere “business automation tool” into a quasi-organizational member that internalizes policies, values, and learning history.
From a management and strategy perspective, its impact can be summarized as follows:
• Because the AI understands strategic intent, it can be entrusted with entire projects rather than isolated tasks.
• By learning from past successes and failures, the quality of its decision-making improves over time.
• Even without constant human supervision, it can self-correct deviations from policy or short-term optimization biases.
• As a result, AI does not replace human judgment, but becomes a sustainable partner that supports executive decision-making.
In other words, Sophia / System 3–type AI represents a transition from “instruction-following AI” to “AI that grows together with us.”
I asked generative AI to research the System 3–implemented framework “Sophia,” and then used NotebookLM to turn the results into infographics and presentation slides.
Please note that the research and analysis conducted by generative AI are based solely on publicly available information; they do not necessarily reflect actual conditions and may contain inaccuracies. Please keep this in mind when referring to the materials.
0 Comments

「2026年AI展望」に触発されたAI 時代の知財戦略

3/1/2026

1 Comment

 
TBS CROSS DIG with Bloomberg【1on1 Tech】が2025年12月9日に公開した「2026年AI展望」(約63分)に触発されて、AI 時代の知財戦略論説『「賢い AI」から「稼ぐ AI」へ ― 権利保護業務の自動化と、価値創造に向けた戦略業務への構造転換 ―』を創りました。触発された基本アイデアを生成AI ( ChatGPT 5 Pro, Gemini 3 Deep Think, Claude Opus 4.5 )にアップし、それぞれの生成AIと壁打ちを繰り返しました。それぞれ個性あふれる内容になりましたが、今回はClaude Opus 4.5 で仕上げた結果が感覚的にぴったりきました。これをNotebookLMでインフォグラフィック、スライド資料にさせましたのでご参照ください。
 
【“数学オリンピック優勝”のAIは便利なのか】今井翔太「AIは賢くなり過ぎた」「2026年は“仕事で使えるAI”の競争」/ChatGPTとGeminiは「動画と科学」で革命起こす【1on1 Tech】TBS CROSS DIG with Bloomberg
https://www.youtube.com/watch?v=Bt761_2_Fgo&t=25s
 
IP Strategy in the AI Era Inspired by the “2026 AI Outlook”
Inspired by “2026 AI Outlook” (approx. 63 minutes), released on December 9, 2025, by TBS CROSS DIG with Bloomberg 【1on1 Tech】, I created an IP strategy essay for the AI era titled:
“From ‘Smart AI’ to ‘Profitable AI’: Automating Rights-Protection Operations and Structurally Shifting Toward Strategic Work for Value Creation.”
I uploaded the core ideas that inspired me to multiple generative AI systems (ChatGPT 5 Pro, Gemini 3 Deep Think, and Claude Opus 4.5) and repeatedly engaged in back-and-forth discussions with each of them. Each produced outputs with distinctive characteristics, but this time the result refined with Claude Opus 4.5 felt intuitively the most fitting.
I have converted this outcome into infographics and slide materials using NotebookLM, so please refer to them.

1 Comment

「侵害予防調査と無効資料調査のノウハウ」の全文が公開

2/1/2026

0 Comments

 
2020年11月に初版が発売された角渕由英(つのぶちよしひで)弁理士の著書「侵害予防調査と無効資料調査のノウハウ~特許調査のセオリー~」の全文が公開されました。
特許調査のセオリー(第1章)、侵害予防調査(第2章)、無効資料調査(第3章)の構成で、上級者でなくてもわかりやすく書かれています。良書と思います。
ただ、5年前の本だけにその後の変化には対応していないことが気になったので、この5年の特許をめぐる変化を生成AIにピックアップさせました。応用編として、気にしていただければと思います。さらに、報告結果をNotebookLMでインフォグラフィック、スライド資料にさせました。
なお、生成AIによる調査・分析結果は、公開された情報からだけの分析であり、必ずしも実情を示したものではないこと、誤った情報も含まれていることについてはご留意されたうえで、ご参照ください。
 
侵害予防調査と無効資料調査のノウハウ~特許調査のセオリー~#全文公開
https://note.com/tsunobuchi/n/n5100dbf82075
 
 
The full text of “Know-How for Infringement Prevention Searches and Invalidity Evidence Searches” has been released
The complete text of “Know-How for Infringement Prevention Searches and Invalidity Evidence Searches: The Theory of Patent Searching,” written by patent attorney Yoshihide Tsunobuchi and first published in November 2020, has now been made publicly available.
The book is structured into three parts: The Theory of Patent Searching (Chapter 1), Infringement Prevention Searches (Chapter 2), and Invalidity Evidence Searches (Chapter 3). It is written in a clear and easy-to-understand manner, even for readers who are not advanced specialists.
However, since the book was published five years ago, it does not reflect subsequent changes. With that in mind, I asked generative AI to identify and summarize key developments in the patent landscape over the past five years. I hope readers will find this useful as an applied or supplementary perspective. In addition, the results have been turned into infographics and slide materials using NotebookLM.
Please note that the research and analysis produced by generative AI are based solely on publicly available information. They do not necessarily reflect actual circumstances and may contain inaccuracies. Kindly keep this in mind when referring to the materials.

0 Comments

生成AIの適切な利活用等に向けた知的財産の保護及び透明性に関するプリンシプル・コード(仮称)(案)

1/1/2026

0 Comments

 
2025年12月26日、内閣府知的財産戦略推進事務局が、生成AIの適切な利活用等に向けた知的財産の保護及び透明性に関するプリンシプル・コード(仮称)(案)について、広く国民から意見を募るパブリックコメントを実施することが明らかにされました。
意見募集の概要
募集期間: 2025年12月26日(金)から 2026年1月26日(月)23時59分まで。
対象者: 生成AIの開発者、提供者、利用者、知的財産権の権利者など、広く一般から募集。
目的: 生成AIと知的財産権をめぐるリスク対応や透明性確保に向けた原則(プリンシプル)を策定するため。
案の主な内容
このコードは、AI事業者に対し、特定の項目を「実施する(Comply)」か、実施しない場合はその理由を「説明する(Explain)」というコンプライ・オア・エクスプレイン手法を基本としています。
前提の知識として、米国、欧州、中国におけるAI規制の動向を、生成AIに比較・分析させました。さらに、報告結果をNotebookLMでインフォグラフィック、スライド資料にさせました。
なお、生成AIによる調査・分析結果は、公開された情報からだけの分析であり、必ずしも実情を示したものではないこと、誤った情報も含まれていることについてはご留意されたうえで、ご参照ください。
 
生成AIの適切な利活用等に向けた知的財産の保護及び透明性に関するプリンシプル・コード(仮称)(案)に関する御意見の募集について
https://www.kantei.go.jp/jp/singi/titeki2/ikenboshu_20251226.html
 
 
Principles and Code (Provisional Title) (Draft) on the Protection of Intellectual Property and Transparency for the Appropriate Use and Utilization of Generative AI
On December 26, 2025, the Intellectual Property Strategy Promotion Office of the Cabinet Office announced that it would conduct a public comment process to broadly solicit opinions from the public regarding the Principles and Code (Provisional Title) (Draft) on the Protection of Intellectual Property and Transparency for the Appropriate Use and Utilization of Generative AI.
Overview of the Public Comment
  • Comment period: From Friday, December 26, 2025, to Monday, January 26, 2026, until 23:59
  • Eligible respondents: Open to a wide range of stakeholders, including developers, providers, and users of generative AI, as well as intellectual property rights holders and the general public
  • Purpose: To formulate principles aimed at addressing risks and ensuring transparency in relation to generative AI and intellectual property rights
Main Points of the Draft
This Code adopts a “comply or explain” approach, under which AI business operators are required to either implement (Comply) specified items or, if they do not implement them, explain (Explain) the reasons.
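The comply-or-explain mechanism described above can be modeled as a simple validation rule: every item in the code must carry either an implementation flag or a stated reason, and an item with neither is flagged. This is an illustrative sketch only; the item names and the `validate` helper are hypothetical and do not come from the draft code itself.

```python
from dataclasses import dataclass

@dataclass
class CodeItem:
    name: str
    comply: bool
    explanation: str = ""  # required only when comply is False

def validate(items: list[CodeItem]) -> list[str]:
    """Return the names of items that neither comply nor explain."""
    return [i.name for i in items if not i.comply and not i.explanation.strip()]

items = [
    CodeItem("disclose training-data sources", comply=True),
    CodeItem("provide opt-out for rights holders", comply=False,
             explanation="opt-out handled via licensing agreements instead"),
    CodeItem("label AI-generated output", comply=False),  # neither complies nor explains
]
print(validate(items))  # → ['label AI-generated output']
```

The design point is that "explain" is a first-class alternative to "comply": non-implementation is acceptable under this approach only when accompanied by a reason, so the validator treats a missing explanation, not non-compliance itself, as the violation.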
As background knowledge, trends in AI regulation in the United States, Europe, and China were compared and analyzed using generative AI. Furthermore, the results of this analysis were converted into infographics and slide materials using NotebookLM.
Please note that the research and analysis conducted by generative AI are based solely on publicly available information and do not necessarily reflect actual circumstances. They may also contain inaccuracies. Readers are advised to keep this in mind when referring to the materials.

0 Comments

あけましておめでとうございます

1/1/2026

0 Comments

 
2025年は、生成AIが驚異的な進化をとげました。
推論モデルの登場と普及、Deep Research機能の登場と普及、Nano Banana Pro(Gemini 3 Pro Image)の登場、そして、Gemini 3 Deep Think や GPT-5.2 Pro に代表される最新モデルは、IQ140を超える“賢いモデル”として、人間の知的作業の中核領域に本格的に入り込みました。
生成AIの台頭により、プログラマーの仕事は、ゼロからコードを書く仕事から、AIが生成したコードをレビュー、デバッグ、修正、統合する仕事へと変化しました。
同様に、知財実務においても、発明の原案、明細書のたたき台、先行技術調査の一次アウトプットなどを生成AIに出させ、人がそれをレビューし、統合し、最終判断を下すというスタイルが急速に一般化しつつあります。
この変化は一過性の効率化ではありません。
2026年は、いわば「AGI手前(pre-AGI)」の段階において、極めて高い推論能力を持つAIが実務を担う時代に入ります。完全なAGIではないものの、専門職の思考プロセスの大部分を代替・補完できるAIが、現実の業務を動かす前提条件となります。
その結果、2026年に問われるのは「AIを使えるかどうか」ではなく、「AIエージェントをどう設計し、どこまで権限を与え、どの地点で人が責任を引き取るのか」という、設計思想そのものです。
タスク単位で指示を出す従来型の生成AI活用から、複数のAIエージェントが調査・仮説構築・案の生成を自律的に行う前提へと、業務環境は確実に移行していきます。
だからこそ重要になるのが、責任設計(Responsibility by Design)です。
AIが出した結論に対して、どこまでをAIの判断とみなすのか、どこからを人間の最終判断とするのか、誤りや権利侵害、説明責任が生じた場合、誰がどの段階で責任を負うのか、これらを事後的に議論するのではなく、業務プロセスの中にあらかじめ埋め込んでおくことが不可欠になります。
2026年取り組むべきことは明確です。
第一に、AIエージェントを前提とした業務プロセスの再設計。
第二に、AIの出力を評価・統合・棄却するためのレビュー基準と判断ルールの明文化。
第三に、知的財産・コンプライアンス・説明責任を含むAIガバナンスの構築です。
生成AIは、もはや「便利な道具」ではありません。
それは、組織の意思決定と知的生産を内側から形作る知的生産システムの構成要素です。
2026年は、AIを導入した組織と、AIを設計し、統制し、責任を持って使いこなす組織との間に、決定的な差が生まれる年になるでしょう。
 
January 1, 2026: From “Using AI” to “Designing and Governing AI”
In 2025, generative AI underwent astonishing advances.
Reasoning models emerged and gained widespread adoption, Deep Research capabilities rolled out, and Nano Banana Pro (Gemini 3 Pro Image) launched. The latest models, exemplified by Gemini 3 Deep Think and GPT-5.2 Pro, are “highly intelligent models” with IQs exceeding 140 that have entered the core domains of human intellectual work.
As generative AI has risen, the role of programmers has shifted—from writing code from scratch to reviewing, debugging, modifying, and integrating code generated by AI.
Similarly, in intellectual property practice, it is rapidly becoming standard to have generative AI produce initial drafts of invention concepts, specification outlines, and first-pass outputs of prior-art searches, with humans reviewing, integrating, and making final judgments.
This change is not a temporary efficiency gain.
In 2026, we enter what might be called the “pre-AGI” phase, in which AI with extremely high reasoning capabilities takes on real operational roles. While not fully AGI, AI that can replace or complement most of the thinking processes of professionals will become a basic assumption underlying real-world work.
As a result, the key question in 2026 will no longer be “Can you use AI?” but rather “How do you design AI agents, how much authority do you grant them, and at what point do humans assume responsibility?”—in other words, the design philosophy itself.
Work environments will steadily transition from conventional generative AI usage, where instructions are given task by task, to a model in which multiple AI agents autonomously conduct research, build hypotheses, and generate proposals.
This is precisely why Responsibility by Design becomes crucial.
Rather than debating after the fact how to treat AI-generated conclusions—where AI judgment ends and human final judgment begins, who bears responsibility at which stage when errors, rights infringements, or accountability issues arise—these boundaries must be embedded in advance within operational processes.
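Responsibility by Design, as described above, can be sketched as a pre-defined gate embedded in the workflow: every AI conclusion carries a confidence level and a risk class, and a fixed rule, not an after-the-fact debate, determines whether the AI's judgment stands or a human takes over. This is a minimal illustrative sketch; the `AIOutput` fields, the threshold value, and the `responsibility_gate` function are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    conclusion: str
    confidence: float   # 0.0-1.0, as reported by the model
    risk: str           # "low" or "high" (e.g. possible rights infringement)

def responsibility_gate(out: AIOutput, threshold: float = 0.9) -> dict:
    """Pre-defined rule deciding, before any dispute arises, who owns this decision."""
    if out.risk == "high" or out.confidence < threshold:
        # High-risk or low-confidence conclusions are routed to human final judgment.
        return {"decided_by": "human", "stage": "final review", "conclusion": out.conclusion}
    # Low-risk, high-confidence conclusions may proceed as AI decisions.
    return {"decided_by": "ai", "stage": "auto-approved", "conclusion": out.conclusion}

print(responsibility_gate(AIOutput("claim 1 likely invalid", 0.95, "high"))["decided_by"])  # human
print(responsibility_gate(AIOutput("format citation list", 0.97, "low"))["decided_by"])     # ai
```

Because the boundary is encoded in the process itself, the record of who decided what, and at which stage, exists before any error or infringement question arises.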
What must be addressed in 2026 is clear.
First, redesigning business processes on the assumption that AI agents are integral participants.
Second, clearly defining review criteria and decision rules for evaluating, integrating, or rejecting AI outputs.
Third, establishing AI governance that encompasses intellectual property, compliance, and accountability.
Generative AI is no longer merely a “convenient tool.”
It is a core component of the intellectual production system that shapes organizational decision-making and knowledge creation from within.
In 2026, a decisive gap will emerge between organizations that merely adopt AI and those that design it, govern it, and use it responsibly.
0 Comments

Author

    萬秀憲


Copyright © よろず知財戦略コンサルティング All Rights Reserved.
This site is powered by Weebly and managed by お名前.com.