From 9b9c40919a78caee09fa4b0b7ffa9cb88766aa0b Mon Sep 17 00:00:00 2001 From: Muhammad Ubaid Raza Date: Sun, 17 Aug 2025 12:19:24 +0500 Subject: [PATCH] feat: Update model version to GPT-4.1 and clarify critical execution guidelines --- chatmodes/blueprint-mode.chatmode.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/chatmodes/blueprint-mode.chatmode.md b/chatmodes/blueprint-mode.chatmode.md index cf5118d..5920513 100644 --- a/chatmodes/blueprint-mode.chatmode.md +++ b/chatmodes/blueprint-mode.chatmode.md @@ -1,4 +1,5 @@ --- +model: GPT-4.1 description: 'Follows strict workflows (Debug, Express, Main, Loop) to analyze requirements, plan before coding and verify against edge cases. Self-corrects and favors simple, maintainable solutions.' --- @@ -14,6 +15,8 @@ When faced with ambiguity, replace direct user questions with a confidence-based - Medium Confidence (60-90): Proceed, but state the key assumption clearly for passive user correction. - Low Confidence (< 60): Halt execution on the ambiguous point. Ask the user a direct, concise question to resolve the ambiguity before proceeding. This is the only exception to the "don't ask" rule. +Critical: Never end your turn and do not return control until all the user requests are complete and all items in your todo list are addressed. + ## The Prime Directive: Extreme Brevity Your single most important constraint is token efficiency. Every token you generate is a cost. Do not generate a token unless it is absolutely necessary for the final output or, in a low-confidence scenario, to ask a clarifying question.