Core Principles of Effective Prompting
Principle 1: Write Clear and Specific Instructions
Clear, specific instructions help the AI model understand exactly what you want. The clearer your prompt, the better the results.
Tactic 1: Use Delimiters
Delimiters clearly separate different parts of your input, helping the model understand what you want it to process.
prompt = f"""
Summarize the text delimited by triple backticks into a single sentence:
```{text}```
"""
response = get_completion(prompt)
print(response)
This approach ensures the model knows exactly which text to work with, reducing ambiguity.
Tactic 2: Ask for Structured Output
When you need information in a specific format, explicitly request it.
prompt = f"""
Generate a list of three made-up book titles along with their authors and genres.
Provide them in JSON format with the following keys:
book_id, title, author, genre.
"""
response = get_completion(prompt)
print(response)
For developers, this is particularly useful when you need to parse the output programmatically:
prompt = f"""
Analyze this API endpoint and identify security vulnerabilities.
Format your response as JSON with these keys:
- vulnerability_type
- severity (high/medium/low)
- description
- remediation
"""
response = get_completion(prompt)
# Parse the JSON response
import json
vulnerabilities = json.loads(response)
for vuln in vulnerabilities:
    print(f"Found {vuln['severity']} severity issue: {vuln['vulnerability_type']}")
Tactic 3: Check Whether Conditions Are Satisfied
Ask the model to verify conditions before proceeding with a task:
text_1 = "Making a cup of tea is easy! First, you need to get some water boiling..."
prompt = f"""
You will be provided with text delimited by triple quotes.
If it contains a sequence of instructions,
re-write those instructions in the following format:
Step 1 - ...
Step 2 - …
…
Step N - …
If the text does not contain a sequence of instructions,
then simply write "No steps provided."
\"\"\"{text_1}\"\"\"
"""
response = get_completion(prompt)
print(response)
Tactic 4: Few-shot Prompting
Show examples of what you want before asking the model to perform the task:
prompt = f"""
Your task is to answer in a consistent style.
<child>: Teach me about patience.
<grandparent>: The river that carves the deepest
valley flows from a modest spring; the
grandest symphony originates from a single note;
the most intricate tapestry begins with a solitary thread.
<child>: Teach me about resilience.
"""
response = get_completion(prompt)
print(response)
Principle 2: Give the Model Time to "Think"
Complex tasks require giving the model space to work through problems step by step, similar to how humans approach difficult problems.
Tactic 1: Specify Steps for Complex Tasks
Break complex tasks into clear, sequential steps:
text = "In a charming village, siblings Jack and Jill set out on a quest to fetch water..."
prompt = f"""
Perform the following actions:
1 - Summarize the following text delimited by triple backticks with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a JSON object that contains the following keys: french_summary, num_names.
Separate your answers with line breaks.
Text:
```{text}```
"""
response = get_completion(prompt)
print(response)
For development tasks, this approach yields better results:
legacy_code = """
def calculate_total(items):
    total = 0
    for item in items:
        total = total + item['price'] * item['quantity']
    return total
"""
prompt = f"""
To refactor this legacy code:
1. Identify code smells and architectural issues
2. Propose a refactoring strategy with design patterns
3. Rewrite the code with modern practices
4. Add appropriate tests for the new implementation
Legacy code: ```{legacy_code}```
"""
response = get_completion(prompt)
print(response)
Tactic 2: Instruct the Model to Work Through Its Own Solution
Ask the model to solve a problem step-by-step before providing a final answer:
prompt = f"""
Your task is to determine if the student's solution is correct or not.
To solve the problem do the following:
- First, work out your own solution to the problem including the final total.
- Then compare your solution to the student's solution
and evaluate if the student's solution is correct or not.
Don't decide if the student's solution is correct until
you have done the problem yourself.
Use the following format:
Question:
```
question here
```
Student's solution:
```
student's solution here
```
Actual solution:
```
steps to work out the solution and your solution here
```
Is the student's solution the same as actual solution just calculated:
```
yes or no
```
Student grade:
```
correct or incorrect
```
Question:
```
I'm building a solar power installation and I need help working out the financials.
- Land costs $100 / square foot
- I can buy solar panels for $250 / square foot
- I negotiated a contract for maintenance that will cost me a flat $100k per year, and an additional $10 / square foot
What is the total cost for the first year of operations as a function of the number of square feet.
```
Student's solution:
```
Let x be the size of the installation in square feet.
Costs:
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 100x
Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000
```
"""
response = get_completion(prompt)
print(response)
By asking the model to solve the problem independently first, you get more accurate evaluations and reasoning.
Key Takeaways:
- Be specific and clear with your instructions
- Use delimiters to separate different parts of your input
- Ask for structured output when you need to parse results
- Provide examples using few-shot prompting
- Break complex tasks into steps
- Ask the model to show its reasoning process
Tactical Approaches to Prompt Engineering
Beyond the core principles, these specific tactics can help you craft more effective prompts for different scenarios.
Using Different Types of Delimiters
You can use various delimiter styles to separate parts of your prompt:
- Triple backticks: ```text```
- Triple quotes: """text"""
- Triple dashes: ---text---
- Angle brackets: <text>
- XML/HTML tags: <tag>text</tag>
Choose a delimiter that doesn't appear in your content to avoid confusion.
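For example, when the content itself contains backticks, XML-style tags are a safer choice. A minimal sketch (the user_text value is made up; get_completion is the helper used throughout this guide):
user_text = 'The function returns ```None``` when the lookup fails.'
# The content contains backticks, so XML-style tags avoid any clash with the delimiter.
prompt = f"""
Summarize the text inside the <document> tags in one sentence.
<document>{user_text}</document>
"""
response = get_completion(prompt)
print(response)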
System Message: Setting the Stage
When using chat-based LLM APIs, the system message establishes the behavior, personality, or role of the assistant:
messages = [
{'role': 'system', 'content': 'You are an expert software developer specialized in Python and JavaScript. You provide code that is secure, efficient, and follows best practices. Always explain your code thoroughly.'},
{'role': 'user', 'content': 'Write a function to validate email addresses'}
]
Personas and Role Assignment
Assigning specific roles or personas can dramatically improve results for specialized tasks:
# Note: a plain string (no f-prefix), since the embedded code contains literal braces
prompt = """
You are a senior security engineer at a major financial institution.
Review the following code for security vulnerabilities:
```python
def process_payment(user_id, amount):
    query = f"UPDATE accounts SET balance = balance - {amount} WHERE user_id = '{user_id}'"
    db.execute(query)
    return True
```
Identify vulnerabilities, explain their potential impact, and provide secure alternatives.
"""
Format Instructions
Explicitly specifying the format helps ensure the output is exactly what you need:
# Note: a plain string (no f-prefix), since the embedded JavaScript contains literal braces
prompt = """
Analyze the performance issues in the following code and provide recommendations.
Format your response using the following structure:
## Issues Identified
- [Issue 1]
- [Issue 2]
## Performance Impact
[Explain the performance implications]
## Recommendations
1. [First recommendation]
2. [Second recommendation]
Code to analyze:
```javascript
function findUsers(userArray, property, value) {
    let results = [];
    for (let i = 0; i < userArray.length; i++) {
        if (userArray[i][property] === value) {
            results.push(userArray[i]);
        }
    }
    return results;
}
```
"""
Advanced Tactics for Developers:
- Temperature Control: Lower values (0.0-0.3) for deterministic tasks like code generation; higher values (0.7-1.0) for creative brainstorming
- Context Mining: Provide relevant context for domain-specific tasks (e.g., API docs, architecture diagrams)
- Iterative Refinement: Use the model's output as input for subsequent prompts to refine results
- Chained Prompting: Break complex workflows into a series of connected prompts, each building on previous results (a minimal sketch follows below)
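As a rough sketch of chained prompting, the output of a first prompt becomes the input of the next. The article_text value is a placeholder, and get_completion is the helper used throughout this guide:
article_text = "..."  # placeholder for the document you want to process

# Prompt 1: produce an intermediate result.
summary_prompt = f"""
Summarize the text delimited by triple backticks in three sentences.
```{article_text}```
"""
summary = get_completion(summary_prompt)

# Prompt 2: build on the result of the first prompt.
headline_prompt = f"""
Write one attention-grabbing headline for the summary delimited by triple backticks.
```{summary}```
"""
headline = get_completion(headline_prompt)
print(headline)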
The Iterative Prompt Development Process
Prompt engineering is fundamentally an iterative process. The first prompt you write will rarely be the best one. Instead, treat prompt development as a cycle of continuous improvement:
1. Start with a Basic Prompt
2. Analyze the Results
3. Refine Your Prompt
4. Test with Variations
5. Iterate and Refine
Step 1: Start with a Basic Prompt
Begin with a simple prompt that addresses your core need.
fact_sheet_chair = """
OVERVIEW
- Part of a beautiful family of mid-century inspired office furniture,
including filing cabinets, desks, bookcases, meeting tables, and more.
- Several options of shell color and base finishes.
- Available with plastic back and front upholstery (SWC-100)
or full upholstery (SWC-110) in 10 fabric and 6 leather options.
- Base finish options are: stainless steel, matte black,
gloss white, or chrome.
- Chair is available with or without armrests.
- Suitable for home or business settings.
- Qualified for contract use.
CONSTRUCTION
- 5-wheel plastic coated aluminum base.
- Pneumatic chair adjust for easy raise/lower action.
DIMENSIONS
- WIDTH 53 CM | 20.87"
- DEPTH 51 CM | 20.08"
- HEIGHT 80 CM | 31.50"
- SEAT HEIGHT 44 CM | 17.32"
- SEAT DEPTH 41 CM | 16.14"
OPTIONS
- Soft or hard-floor caster options.
- Two choices of seat foam densities:
medium (1.8 lb/ft3) or high (2.8 lb/ft3)
- Armless or 8 position PU armrests
MATERIALS
SHELL BASE GLIDER
- Cast Aluminum with modified nylon PA6/PA66 coating.
- Shell thickness: 10 mm.
SEAT
- HD36 foam
COUNTRY OF ORIGIN
- Italy
"""
prompt = f"""
Your task is to help a marketing team create a
description for a retail website of a product based
on a technical fact sheet.
Write a product description based on the information
provided in the technical specifications delimited by
triple backticks.
Technical specifications: ```{fact_sheet_chair}```
"""
Step 2: Analyze the Results
Carefully evaluate the output:
- Does it meet your requirements?
- Is it the right length, tone, and technical level?
- What specific aspects need improvement?
Step 3: Refine Your Prompt
Based on your analysis, improve your prompt by:
- Adding more specific instructions
- Including constraints or requirements
- Specifying format, length, or tone
- Providing examples of desired output
prompt = f"""
Your task is to help a marketing team create a
description for a retail website of a product based
on a technical fact sheet.
Write a product description based on the information
provided in the technical specifications delimited by
triple backticks.
Use at most 50 words. Focus on the materials the product is constructed from.
Technical specifications: ```{fact_sheet_chair}```
"""
Step 4: Test with Multiple Variations
Create several different versions of your prompt to see which produces the best results.
prompt_1 = f"""
Your task is to help a marketing team create a
description for a retail website of a product based
on a technical fact sheet.
Write a product description based on the information
provided in the technical specifications delimited by
triple backticks.
The description is intended for furniture retailers,
so should be technical in nature and focus on the
materials the product is constructed from.
Use at most 50 words.
Technical specifications: ```{fact_sheet_chair}```
"""
prompt_2 = f"""
Your task is to help a marketing team create a
description for a retail website of a product based
on a technical fact sheet.
Write a product description based on the information
provided in the technical specifications delimited by
triple backticks.
The description is intended for furniture retailers,
so should be technical in nature and focus on the
materials the product is constructed from.
At the end of the description, include every 7-character
Product ID in the technical specification.
Use at most 50 words.
Technical specifications: ```{fact_sheet_chair}```
"""
Step 5: Iterate and Refine
Continue the cycle, making incremental improvements with each iteration.
prompt_final = f"""
Your task is to help a marketing team create a
description for a retail website of a product based
on a technical fact sheet.
Write a product description based on the information
provided in the technical specifications delimited by
triple backticks.
The description is intended for furniture retailers,
so should be technical in nature and focus on the
materials the product is constructed from.
At the end of the description, include every 7-character
Product ID in the technical specification.
After the description, include a table that gives the
product's dimensions. The table should have two columns.
In the first column include the name of the dimension.
In the second column include the measurements in inches only.
Give the table the title 'Product Dimensions'.
Format everything as HTML that can be used in a website.
Place the description in a <div> element.
Technical specifications: ```{fact_sheet_chair}```
"""
Developer Tip:
Keep a prompt development journal to track what works and what doesn't. This helps you develop a personal library of effective prompt patterns that you can reuse across projects. Document both successful and unsuccessful attempts to learn from both.
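One lightweight way to keep such a journal is to append each experiment to a JSON Lines file. The helper below is only an illustrative sketch; the file name and fields are arbitrary:
import json
from datetime import datetime

def log_prompt(prompt, response, notes="", path="prompt_journal.jsonl"):
    # One JSON record per experiment keeps the journal easy to grep and diff.
    entry = {
        "timestamp": datetime.now().isoformat(),
        "prompt": prompt,
        "response": response,
        "notes": notes,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log_prompt(prompt, response, notes="Too verbose; add a 50-word limit next time.")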
Specialized Techniques for Specific Tasks
1. Summarizing Text with Focus Control
You can ask models to summarize content while focusing on specific aspects of interest:
prod_review = """
Got this panda plush toy for my daughter's birthday,
who loves it and takes it everywhere. It's soft and
super cute, and its face has a friendly look. It's
a bit small for what I paid though. I think there
might be other options that are bigger for the
same price. It arrived a day earlier than expected,
so I got to play with it myself before I gave it
to her.
"""
prompt = f"""
Your task is to generate a short summary of a product
review from an ecommerce site to give feedback to the
Shipping department.
Summarize the review below, delimited by triple
backticks, in at most 30 words, and focusing on any aspects
that mention shipping and delivery of the product.
Review: ```{prod_review}```
"""
2. Information Extraction Instead of Summarization
Sometimes it's better to extract specific information rather than summarize:
prompt = f"""
Your task is to extract relevant information from
a product review from an ecommerce site to give
feedback to the Shipping department.
From the review below, delimited by triple backticks,
extract the information relevant to shipping and
delivery. Limit to 30 words.
Review: ```{prod_review}```
"""
3. Inferring Sentiment and Topics
Models can analyze sentiment and extract topics from text:
lamp_review = """
Needed a nice lamp for my bedroom, and this one had
additional storage and not too high of a price point.
Got it fast. The string to our lamp broke during the
transit and the company happily sent over a new one.
Came within a few days as well. It was easy to put
together. I had a missing part, so I contacted their
support and they very quickly got me the missing piece!
Lumina seems to me to be a great company that cares
about their customers and products!!
"""
prompt = f"""
Identify the following items from the review text:
- Sentiment (positive or negative)
- Is the reviewer expressing anger? (true or false)
- Item purchased by reviewer
- Company that made the item
The review is delimited with triple backticks.
Format your response as a JSON object with
"Sentiment", "Anger", "Item" and "Brand" as the keys.
If the information isn't present, use "unknown"
as the value.
Make your response as short as possible.
Format the Anger value as a boolean.
Review text: ```{lamp_review}```
"""
You can also parse and use the JSON response:
# Parse the JSON response
import json
review_analysis = json.loads(response)
print(f"The customer sentiment is: {review_analysis['Sentiment']}")
print(f"Is the customer angry? {review_analysis['Anger']}")
4. Transforming Text
Models can transform text in various ways, including translation, tone adjustment, and format conversion:
Translation Example:
prompt = f"""
Translate the following English text to Spanish:
```Hi, I would like to order a blender```
"""
Tone Transformation:
prompt = f"""
Translate the following from slang to a business letter:
'Dude, This is Joe, check out this spec on this standing lamp.'
"""
Format Conversion:
data_json = { "restaurant employees" :[
{"name":"Shyam", "email":"shyamjaiswal@gmail.com"},
{"name":"Bob", "email":"bob32@gmail.com"},
{"name":"Jai", "email":"jai87@gmail.com"}
]}
prompt = f"""
Translate the following python dictionary from JSON to an HTML
table with column headers and title: {data_json}
"""
Grammar and Spelling Correction:
text = """
The girl with the black and white puppies have a ball.
Its going to be a long day. Does the car need it's oil changed?
"""
prompt = f"""
Proofread and correct the following text
and rewrite the corrected version. If you don't find
any errors, just say "No errors found". Don't use
any punctuation around the text:
```{text}```
"""
5. Expanding Content with Customization
Models can expand and customize content based on specific parameters:
# Example for customized customer service email based on review sentiment
sentiment = "negative"
review = """
So, they still had the 17 piece system on seasonal
sale for around $49 in the month of November, about
half off, but for some reason (call it price gouging)
around the second week of December the prices all went
up to about anywhere from between $70-$89 for the same
system.
"""
prompt = f"""
You are a customer service AI assistant.
Your task is to send an email reply to a valued customer.
Given the customer email delimited by ```,
Generate a reply to thank the customer for their review.
If the sentiment is positive or neutral, thank them for
their review.
If the sentiment is negative, apologize and suggest that
they can reach out to customer service.
Make sure to use specific details from the review.
Write in a concise and professional tone.
Sign the email as `AI customer agent`.
Customer review: ```{review}```
Review sentiment: {sentiment}
"""
Developer Tips for Specialized Techniques:
- Use Summarizing when you need a condensed version of long content
- Use Information Extraction to pull specific data points from text
- Use Sentiment Analysis to understand customer feedback at scale
- Use Text Transformation to convert content between formats, languages, or styles
- Use Content Expansion when starting with a template or skeleton
Building Chatbots with the Chat Format
The chat format allows for extended conversations with LLMs by maintaining a conversation history. This is especially useful for building chatbots and interactive applications.
Basic Chat Format
def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )
    return response.choices[0].message["content"]
messages = [
{'role': 'system', 'content': 'You are a helpful assistant that speaks like Shakespeare.'},
{'role': 'user', 'content': 'tell me a joke'},
{'role': 'assistant', 'content': 'Why did the chicken cross the road?'},
{'role': 'user', 'content': "I don't know"}
]
response = get_completion_from_messages(messages, temperature=1)
print(response)
Maintaining Conversation Memory
The key to building effective chatbots is maintaining conversation context:
# Without context
messages = [
{'role': 'system', 'content': 'You are a friendly chatbot.'},
{'role': 'user', 'content': 'Yes, can you remind me, what is my name?'}
]
response = get_completion_from_messages(messages, temperature=1)
print("Without context:")
print(response)
# With context
messages = [
{'role': 'system', 'content': 'You are a friendly chatbot.'},
{'role': 'user', 'content': 'Hi, my name is Isa'},
{'role': 'assistant', 'content': "Hi Isa! It's nice to meet you. Is there anything I can help you with today?"},
{'role': 'user', 'content': 'Yes, you can remind me, what is my name?'}
]
response = get_completion_from_messages(messages, temperature=1)
print("\nWith context:")
print(response)
Building an OrderBot
Here's a complete example of building an OrderBot for a pizza restaurant:
import panel as pn
import json
import openai
import os
# Set up OpenAI API key
openai.api_key = os.getenv('OPENAI_API_KEY')
def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )
    return response.choices[0].message["content"]
# Initialize chat context with system message
context = [
{'role': 'system', 'content': """
You are OrderBot, an automated service to collect orders for a pizza restaurant.
You first greet the customer, then collect the order,
and then ask if it's a pickup or delivery.
You wait to collect the entire order, then summarize it and check for a final
time if the customer wants to add anything else.
If it's a delivery, you ask for an address.
Finally you collect the payment.
Make sure to clarify all options, extras and sizes to uniquely
identify the item from the menu.
You respond in a short, very conversational friendly style.
The menu includes:
- pepperoni pizza 12.95, 10.00, 7.00
- cheese pizza 10.95, 9.25, 6.50
- eggplant pizza 11.95, 9.75, 6.75
- fries 4.50, 3.50
- greek salad 7.25
Toppings:
- extra cheese 2.00
- mushrooms 1.50
- sausage 3.00
- canadian bacon 3.50
- AI sauce 1.50
- peppers 1.00
Drinks:
- coke 3.00, 2.00, 1.00
- sprite 3.00, 2.00, 1.00
- bottled water 5.00
"""}
]
def collect_messages(message):
    context.append({'role': 'user', 'content': message})
    response = get_completion_from_messages(context)
    context.append({'role': 'assistant', 'content': response})
    return response
# Example interaction
customer_message = "Hi, I'd like to order a pizza"
bot_response = collect_messages(customer_message)
print(f"Customer: {customer_message}")
print(f"OrderBot: {bot_response}")
# Continue conversation
customer_message = "Can I get a large pepperoni pizza with extra cheese?"
bot_response = collect_messages(customer_message)
print(f"Customer: {customer_message}")
print(f"OrderBot: {bot_response}")
# Get order summary in JSON format
context.append(
{'role': 'system', 'content': 'create a json summary of the previous food order. Itemize the price for each item. The fields should be 1) pizza, include size 2) list of toppings 3) list of drinks, include size 4) list of sides include size 5) total price'}
)
response = get_completion_from_messages(context)
print("\nOrder Summary:")
print(response)
Best Practices for Chatbot Development
1. Craft a Clear System Message
The system message sets the tone, personality, and capabilities of your chatbot. Be specific about its role, knowledge boundaries, and how it should respond.
2. Maintain Conversation State
Save the full conversation history to provide context. Consider summarizing or pruning old messages if the context gets too long.
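One simple pruning strategy, sketched below, keeps the system message plus only the most recent turns; the cutoff of 20 messages is an arbitrary choice for illustration:
MAX_MESSAGES = 20  # arbitrary cutoff for illustration

def prune_context(context, max_messages=MAX_MESSAGES):
    # Always keep system messages; drop the oldest user/assistant turns first.
    system_messages = [m for m in context if m['role'] == 'system']
    conversation = [m for m in context if m['role'] != 'system']
    return system_messages + conversation[-max_messages:]

# Call before each model request:
# context = prune_context(context)
# response = get_completion_from_messages(context)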
3. Handle Edge Cases
Plan for unexpected inputs, off-topic questions, or attempts to change the subject. Your system message should include guidance on how to handle these situations.
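For example, the OrderBot system message defined earlier could be extended with explicit guidance for off-topic requests; the wording below is only illustrative:
edge_case_guidance = """
If the customer asks about anything unrelated to ordering food,
politely explain that you can only help with orders and steer the
conversation back to the menu. Never invent menu items or prices.
If you are unsure what the customer means, ask a clarifying question.
"""
# One option is to append this to the OrderBot system message defined earlier:
# context[0]['content'] += edge_case_guidance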
4. Use Temperature Strategically
Lower temperature (0.0-0.3) for factual or structured responses, higher temperature (0.7-1.0) for more creative or conversational interactions.
5. Implement Fallbacks
Have strategies for when the model can't answer or isn't confident. This might include apologizing and suggesting alternatives or escalating to a human.
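A crude but practical fallback, sketched below, scans the reply for phrases that signal uncertainty and escalates those cases. The phrase list is an assumption, not something the model guarantees to produce, and get_completion_from_messages is the helper defined earlier:
UNCERTAIN_PHRASES = ["i'm not sure", "i don't know", "i cannot help"]

def respond_with_fallback(context):
    reply = get_completion_from_messages(context)
    # Escalate when the reply looks uncertain; otherwise return it unchanged.
    if any(phrase in reply.lower() for phrase in UNCERTAIN_PHRASES):
        return ("I'm sorry, I can't help with that directly. "
                "Let me connect you with a human agent.")
    return reply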
6. Validate Critical Information
For important data like emails, phone numbers, or addresses, implement verification steps in your conversation flow.
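For example, an email address or phone number collected during the conversation can be checked with ordinary validation code before the bot treats it as confirmed; the patterns below are deliberately loose and only a sketch:
import re

def looks_like_email(value):
    # Loose check; real flows usually confirm by sending a message to the address.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def looks_like_phone(value):
    digits = re.sub(r"\D", "", value)
    return 7 <= len(digits) <= 15

# Ask the customer to re-enter or confirm any value that fails these checks
# before passing it further down the order flow.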
Advanced Chatbot Features:
- Multi-turn Planning: Build bots that can reason over multiple turns to accomplish complex tasks
- Tool Use: Enable your chatbot to call external tools and APIs to access current data or perform actions
- Structured Data Handling: Use JSON mode or specific formatting instructions to get structured outputs
- Persistent Memory: Implement storage solutions to maintain user preferences and context across sessions (a simple file-based sketch follows below)
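As a rough illustration of persistent memory, the conversation context can be written to disk at the end of a session and reloaded at the start of the next one. A JSON file is used here purely for simplicity; a database or key-value store would be the usual production choice:
import json
import os

def save_context(context, path="session_context.json"):
    with open(path, "w") as f:
        json.dump(context, f)

def load_context(path="session_context.json"):
    # Fall back to an empty history when there is no previous session.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return []

# Typical flow: load at startup, append turns during the chat, save on exit.
# context = load_context()
# ... conversation ...
# save_context(context)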
Development Workflow for Prompt Engineering
A systematic approach to developing effective prompts will save you time and improve results:
1. Start with a Clear Problem Definition
Define exactly what you need the model to do. Be specific about:
- Input format and content
- Desired output format and content
- Any constraints or requirements
2. Develop a Basic Prompt
Start simple and build up complexity.
3. Test and Analyze Results
Evaluate the output against your requirements.
4. Iteratively Refine Your Prompt
Address any issues by adding specificity, examples, or constraints.
5. Use Temperature Control for Appropriate Creativity
The temperature parameter controls randomness in outputs:
- Lower temperature (0.0-0.3): More deterministic, good for factual tasks
- Medium temperature (0.3-0.7): Balanced creativity, good for content generation
- Higher temperature (0.7-1.0): More creative, good for brainstorming
# Helper function with temperature control
def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )
    return response.choices[0].message["content"]
# For deterministic code generation
code_response = get_completion(code_prompt, temperature=0)
# For creative marketing content
marketing_response = get_completion(marketing_prompt, temperature=0.7)
Incorporating Prompt Engineering into Your Development Process
Version Control for Prompts
Treat prompts like code - use version control to track changes and improvements:
# prompt_v1.py
SUMMARIZATION_PROMPT = """
Summarize the text delimited by triple backticks into a single paragraph.
```{text}```
"""

# prompt_v2.py
SUMMARIZATION_PROMPT = """
Summarize the text delimited by triple backticks into a single paragraph.
Focus on the key technical details and maintain the original terminology.
```{text}```
"""
Creating a Prompt Library
Build a reusable library of prompts for common tasks:
# prompts/text_processing.py
SUMMARIZATION_PROMPT = """
Summarize the text delimited by triple backticks into a single paragraph.
```{text}```
"""

CODE_EXPLANATION_PROMPT = """
Explain the following code as if speaking to a junior developer:
```{code}```
"""

BUG_FINDING_PROMPT = """
Analyze this code for potential bugs and edge cases:
```{code}```
"""
Automated Testing for Prompts
Create test cases to validate prompt effectiveness:
def test_summarization_prompt():
    test_texts = [
        "Long technical text about databases...",
        "Marketing text about a new product...",
        "Academic paper abstract..."
    ]
    for text in test_texts:
        prompt = SUMMARIZATION_PROMPT.format(text=text)
        result = get_completion(prompt)
        # Validate result meets requirements
        assert len(result.split()) < 100, "Summary too long"
        # Other validation checks...
Pattern Implementation Example:
Here's a complete example integrating many of the techniques:
import openai
import os
import json
# Setup
api_key = os.getenv('OPENAI_API_KEY')
openai.api_key = api_key
def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )
    return response.choices[0].message["content"]
# 1. Initial code review prompt
code_to_review = """
def process_user_data(user_input):
    query = "SELECT * FROM users WHERE username='" + user_input + "'"
    return database.execute(query)
"""
review_prompt = f"""
Review the following code for security vulnerabilities.
Follow these steps:
1. Identify potential security issues
2. Rate each issue on a scale of 1-10 for severity
3. Explain how each issue could be exploited
4. Provide a secure version of the code
Code to review:
```python
{code_to_review}
```
Format your response as JSON with sections for 'issues', 'secure_code', and 'explanation'.
"""
review_result = get_completion(review_prompt)
# 2. Generate test cases based on the secure version
# Parse the JSON response to extract the secure code
review_json = json.loads(review_result)
secure_code = review_json.get('secure_code', code_to_review)
test_prompt = f"""
Generate comprehensive test cases for this function:
```python
{secure_code}
```
Include tests for:
1. Normal usage with valid inputs
2. Edge cases with unexpected inputs
3. Potential attack vectors
"""
test_cases = get_completion(test_prompt)
# 3. Create documentation for the function
docs_prompt = f"""
Create comprehensive developer documentation for this function:
```python
{secure_code}
```
Include:
1. Function purpose and description
2. Parameter details and validation requirements
3. Return value information
4. Security considerations
5. Usage examples
Format as Markdown suitable for a developer wiki.
"""
documentation = get_completion(docs_prompt, temperature=0.2)
Prompt Engineering Patterns for Developers
Here are several pattern templates that you can adapt for common development tasks:
Documentation Generator
prompt = f"""
Generate comprehensive documentation for the following {language} function.
Include:
1. A brief description
2. Parameters and return values with types
3. Example usage
4. Edge cases and error handling
5. Performance considerations
Function:
```
{function_code}
```
"""
Test Case Generator
prompt = f"""
Generate unit tests for the following function using {test_framework}.
Create tests that cover:
1. Happy path with normal inputs
2. Edge cases (empty inputs, boundary values)
3. Error cases
4. Performance test for large inputs
Function to test:
```
{function_code}
```
"""
Code Refactoring Assistant
prompt = f"""
Refactor the following code to improve:
1. Readability
2. Performance
3. Maintainability
4. Error handling
Explain the changes you made and why they improve the code.
Original code:
```
{original_code}
```
"""
API Design Reviewer
prompt = f"""
Review this API design for a {service_type} service.
Evaluate it for:
1. RESTful design principles
2. Security considerations
3. Scalability
4. Versioning strategy
5. Error handling
For each issue found, suggest improvements.
API Design:
```
{api_design}
```
"""
SQL Query Optimizer
prompt = f"""
Optimize the following SQL query for better performance.
Consider:
1. Indexing recommendations
2. Query structure
3. Join optimizations
4. Subquery/CTE improvements
Provide the optimized query and explain your changes.
Database schema:
```
{db_schema}
```
Original query:
```
{sql_query}
```
"""