Proper Prompting Frameworks: The Key to Unlocking Your LLM’s Potential
Five prompting frameworks that you can adopt to craft better prompts
The prompts you feed an LLM act as its guidelines for generating useful outputs. Without a well-structured prompt, even advanced LLMs struggle to respond appropriately. Prompting frameworks provide templates for clearly conveying the requested role, task, constraints and goals.
The frameworks detailed here include:
R-T-F: Clarifies the role, task and format for straightforward requests
T-A-G: Focuses the LLM on key task, action and end goal
R-I-S-E: Allows adopting a specific role with provided inputs, steps and examples
R-G-C: Gives a role and goal within defined constraints
C-A-R-E: Sets context with content, requested action, desired result and example
Technical users can apply these frameworks to optimize prompts for uses like writing code, developing architectural specs, creating product requirements, and more.
The frameworks help users iterate prompts for their particular applications. The more targeted the prompts, the better quality responses the LLM can provide.
Pre-reading
Before we dig into the frameworks, there are a few tricks I use that are worth calling out. From what I’ve observed, these work pretty universally across the major LLMs.
“[ ]” (square brackets): Use square brackets to your advantage. They indicate to the LLM where it should fill in information.
“*” (asterisk) or “-” (dash): indicates bullet points.
Bold and italics: formatting, especially when combined with “[ ]” square brackets, is a powerful time saver.
Markdown Formatting: Some LLMs allow you to use markdown formatting. This is an advanced technique and not one I recommend for casual users.
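The “[ ]” placeholder trick above can also be used programmatically when you build prompts in code. Here is a minimal sketch; `fill()` is a hypothetical helper I’m defining for illustration, not part of any LLM library.

```python
def fill(template: str, **values: str) -> str:
    """Replace each [name] placeholder with the matching keyword value."""
    for name, value in values.items():
        template = template.replace(f"[{name}]", value)
    return template

# A template using the square-bracket and dash conventions described above.
TEMPLATE = (
    "You are an [role].\n"
    "Summarize the notes below as bullet points:\n"
    "- [first_topic]\n"
    "- [second_topic]\n"
)

prompt = fill(
    TEMPLATE,
    role="executive assistant",
    first_topic="Q3 roadmap",
    second_topic="hiring plan",
)
print(prompt)
```

The same template can be reused across requests by swapping in different placeholder values.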
I’ve also written another blog about “Mastering Prompt Engineering.” If you haven’t read it, I highly recommend it.
R-T-F: Clarifying Your Request
The R-T-F (Role-Task-Format) framework is the one I use most, and I’ve found my custom GPTs tend to follow suit. The R-T-F structure helps clarify exactly what you’re asking the LLM to do. First, define the role the LLM should adopt. Next, specify the task or action required. Finally, indicate the format for the output.
For example:
Role: You are an executive assistant.
Task: Take the attached Zoom transcript and convert it into meeting minutes. Add as many topics and bullet points as necessary to capture the discussion.
Format: It should be formatted like this…
For straightforward tasks like converting a video transcript into meeting minutes, this is the best prompting framework.
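If you assemble R-T-F prompts in code, the structure maps directly onto a small builder function. This is a sketch; `build_rtf()` is a hypothetical helper that just joins the three sections into one prompt string.

```python
def build_rtf(role: str, task: str, fmt: str) -> str:
    """Assemble a Role-Task-Format prompt from its three sections."""
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Format: {fmt}"
    )

prompt = build_rtf(
    role="You are an executive assistant.",
    task="Take the attached Zoom transcript and convert it into meeting minutes.",
    fmt="Meeting minutes with topic headings and bullet points.",
)
print(prompt)
```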
T-A-G: Setting Goals for Success
With T-A-G (Task-Action-Goal), start by describing the task at hand. Then state the action you want the LLM to take. Finally, articulate the specific goal so the model stays focused.
For example:
Task: Summarize key insights from a product analytics report.
Action: Highlight the top 3 usage trends.
Goal: Identify potential new feature ideas that could increase customer engagement by 20%.
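The T-A-G example above can be sketched the same way; `build_tag()` is a hypothetical helper, not part of any library.

```python
def build_tag(task: str, action: str, goal: str) -> str:
    """Assemble a Task-Action-Goal prompt from its three sections."""
    return (
        f"Task: {task}\n"
        f"Action: {action}\n"
        f"Goal: {goal}"
    )

prompt = build_tag(
    task="Summarize key insights from a product analytics report.",
    action="Highlight the top 3 usage trends.",
    goal="Identify potential new feature ideas that could increase "
         "customer engagement by 20%.",
)
print(prompt)
```

Stating the goal last keeps the model focused on the outcome you actually care about, not just the intermediate action.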
R-I-S-E: Adopting a Role
R-I-S-E (Role-Input-Steps-Example) prompts allow you to specify a role for the LLM to adopt. Provide key inputs to inform its response. Ask for the exact steps required. And give a relevant example to ground the context.
For example:
Role: Act as a UX researcher at a software startup.
Input: User interviews indicate our dashboard lacks key analytics.
Steps: Suggest additional metrics and data visualizations.
Example: Google Analytics dashboard.
The power of this prompt framework is that you allow the LLM to “think” and ask additional questions to help it complete the request. I often use the phrase “ask me as many questions as necessary to give enough context for…”
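A R-I-S-E prompt has one list-valued section (the steps), so a code sketch needs to number them. `build_rise()` is a hypothetical helper; the closing “ask me questions” line is the phrase mentioned above.

```python
def build_rise(role: str, input_text: str, steps: list[str], example: str) -> str:
    """Assemble a Role-Input-Steps-Example prompt, numbering the steps."""
    lines = [f"Role: {role}", f"Input: {input_text}", "Steps:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    lines.append(f"Example: {example}")
    # Invite the model to ask clarifying questions before answering.
    lines.append("Ask me as many questions as necessary to give enough "
                 "context for the request.")
    return "\n".join(lines)

prompt = build_rise(
    role="Act as a UX researcher at a software startup.",
    input_text="User interviews indicate our dashboard lacks key analytics.",
    steps=["Suggest additional metrics.", "Suggest data visualizations."],
    example="Google Analytics dashboard.",
)
print(prompt)
```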
R-G-C: Constrained Role
R-G-C (Role-Goal-Constraints) prompts are commonly used by custom GPTs, and the structure makes a great template. First, provide the role and the desired goal or outcome. Then give the LLM constraints to work within.
For example:
Role: You specialize in creating social media posts for blog content, targeting platforms like Twitter, LinkedIn, and Reddit.
Goal: The goal is to summarize key points from the blog posts and craft posts that are engaging, professional, and tailored to each platform’s audience.
Constraints: Posts should balance professionalism with inspiration, and be formatted for each platform.
Feel free to use Social Media Post Creator to help you digest blog posts and craft social media posts for the various platforms.
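Because constraints usually come as a list, a code sketch of R-G-C renders them as dash bullets, following the bullet-point trick from the pre-reading section. `build_rgc()` is a hypothetical helper.

```python
def build_rgc(role: str, goal: str, constraints: list[str]) -> str:
    """Assemble a Role-Goal-Constraints prompt, with constraints as dashes."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Goal: {goal}\n"
        f"Constraints:\n{constraint_lines}"
    )

prompt = build_rgc(
    role="You specialize in creating social media posts for blog content.",
    goal="Summarize key points from the blog posts into engaging posts.",
    constraints=[
        "Balance professionalism with inspiration.",
        "Format each post for its target platform.",
    ],
)
print(prompt)
```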
C-A-R-E: Providing Context
With the C-A-R-E (Content-Action-Result-Example) framework, first set the context with background content. Then, state the action you want the LLM to take. Next, articulate the desired result. Finally, provide a relevant example to guide the model.
For example:
Content: We are rebuilding our company’s customer loyalty program.
Action: Recommend potential new benefits and perks.
Result: Ideas customized for tech-savvy users that incentivize engagement.
Example: Amazon Prime’s model of free shipping and media streaming.
I use C-A-R-E prompts heavily when I’m writing content for work. Often the example portion of the prompt consists of actual content I’ve written in the past, sometimes uploaded as PDFs. This gives me a lot of control over the content the LLM creates.
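Finally, C-A-R-E follows the same pattern; `build_care()` is a hypothetical helper, and in practice the example argument could be a pasted-in sample of your own past writing rather than a one-liner.

```python
def build_care(content: str, action: str, result: str, example: str) -> str:
    """Assemble a Content-Action-Result-Example prompt."""
    return (
        f"Content: {content}\n"
        f"Action: {action}\n"
        f"Result: {result}\n"
        f"Example: {example}"
    )

prompt = build_care(
    content="We are rebuilding our company's customer loyalty program.",
    action="Recommend potential new benefits and perks.",
    result="Ideas customized for tech-savvy users that incentivize engagement.",
    example="Amazon Prime's model of free shipping and media streaming.",
)
print(prompt)
```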
The Right Prompt Unlocks an LLM’s Potential
Prompting frameworks help you structure concise prompts that convey the exact role, task, constraints, and format required. Targeted prompts empower advanced LLMs to provide incredibly helpful, tailored responses.
Whether you want an LLM to write code, develop product specs, analyze data, or any number of technical applications, prompt engineering is key. Start by selecting the right framework for your prompt type and use the templates to hone your prompts over time. Treat prompt drafting as an iterative process, continually refining your prompts for your particular needs.
Use these prompting frameworks as your coach for directing LLMs effectively. The time invested in crafting optimal prompts will allow LLMs to unlock their full potential in generating custom responses for your technical use cases.