March 14, 2024

How to write prompts that work

Strategies and techniques to drive Generative AI toward amazing results

Generative AI is a type of artificial intelligence that has been gaining popularity recently, especially with the release of Large Language Models such as GPT-4. Generative AI can create new content, much like humans can: it can write essays, solve math problems, and even win art competitions. This makes it possible to automate tasks previously done only by humans, saving time for individuals and becoming increasingly important for companies, which can use Artificial Intelligence to streamline their operations and improve their efficiency. For instance, at indigo.ai, Generative AI is used to develop next-generation chatbots for customer service.
While this evolution of AI is powerful, it needs human direction to work correctly. It's like a new employee in a company who needs clear instructions to do their job well. Generative AI models need a prompt, usually a text instruction, to know what to do. Let's look at what a prompt is and how to write prompts that work in the most common scenarios and situations.

What is a prompt and what is prompt engineering

A prompt is a text or other input to a Generative AI. As an example, we can request GPT-4 to generate a tweet about a new, fictional AI product we are thinking of creating, FoundersFableAI, which tells stories about famous entrepreneurs.

Prompt
Write a tweet about FoundersFableAI.

Output AI
🚀 We’re excited to dive into #FoundersFableAI – a groundbreaking platform revolutionizing how we think about startup innovation and growth! 🌟 Whether you're an aspiring entrepreneur or a seasoned founder, this is the place to be for insights, inspiration, and real-world wisdom. #StartupLife #Innovation

This tweet is inaccurate since it suggests that FoundersFableAI is a "platform" focused on "revolutionizing how we think about startup innovation and growth," which may not accurately reflect its essence. However, this is not GPT-4’s fault! It did not know anything about FoundersFableAI, so it's important to include relevant information in the prompt.

Prompt
Write a tweet about FoundersFableAI. FoundersFableAI tells stories of famous entrepreneurs.

Output AI
🚀 Dive into the epic journeys of the world's most renowned entrepreneurs with #FoundersFableAI! 📚 From struggle to success, get inspired by the tales that shaped the future of business. #Entrepreneurship #StartupStories #Innovation 

The one above is much better! Let's make GPT-4 write the tweet in Steve Jobs' style.

Prompt
Write a tweet about FoundersFableAI, in the style of Steve Jobs. FoundersFableAI tells stories of famous entrepreneurs.

Output AI
Innovation thrives on the tales of those who've traversed the path before us. FoundersFableAI isn't merely a collection of stories, it's a testament to the spirit of entrepreneurship that defines our era. 🚀 
By sharing the trials and triumphs of renowned entrepreneurs, it reminds us that behind every breakthrough lies a story of relentless pursuit and unwavering belief. ✨
Here's to the dreamers, the innovators, and the pioneers—may your journey be guided by the legends who paved the way. #FoundersFableAI #Entrepreneurship #Inspiration

This alternative sounds much better! That may be the message we need to target famous entrepreneurs’ fans! By testing multiple prompts, we can see which one gives the best output.

This process of refining a prompt over time is called prompt engineering. Crafting the perfect prompt is a process that requires refinement: you can't expect to get it right on your first try. Proficiency in prompt engineering comes from practice and learning from trial and error, and it's worth exploring the various refinement strategies along the way.
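One simple way to practice is to test prompt variants side by side. The sketch below only assembles the three prompts used in the FoundersFableAI example above; in a real workflow you would send each one to a model and compare the outputs.

```python
# Build prompt variants to compare, mirroring the article's three attempts.
# In practice, each variant would be sent to the model and the outputs
# would be evaluated against each other.

base_facts = "FoundersFableAI tells stories of famous entrepreneurs."

variants = [
    "Write a tweet about FoundersFableAI.",
    f"Write a tweet about FoundersFableAI. {base_facts}",
    f"Write a tweet about FoundersFableAI, in the style of Steve Jobs. {base_facts}",
]

for i, prompt in enumerate(variants, 1):
    print(f"Variant {i}: {prompt}")
```

Keeping variants in a list like this makes it easy to log which prompt produced which output as you iterate.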

How Large Language Models work

Large Language Models (LLMs) like Claude, Gemini, GPT-4, and Llama are trained on vast datasets that contain millions of books, web pages, and other forms of language. Every word or part of a word in billions of sentences is converted into a token, and the model learns how often each token appears alongside every other token in the dataset.
When you provide an AI model with a prompt, the model analyzes your request using these rankings and produces the response it deems most fitting. For straightforward prompts, this process seems relatively simple. However, LLMs can sometimes deviate from their token rankings to create a novel response, which is why they are called Generative AI. The outcome can be surprising, and that is why writing effective prompts is such a critical step.
How this token-ranking approach gives rise to sophisticated language and conversational skills is still an active area of research, and it continues to intrigue researchers.
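To make the token-ranking idea concrete, here is a deliberately tiny sketch: it counts which word follows which in a toy corpus and always picks the most frequent continuation. Real LLMs use learned neural networks over subword tokens, not raw counts, so this is an illustration of the intuition only.

```python
from collections import Counter, defaultdict

# Toy illustration (not how real LLMs are implemented): count how often
# each word follows another in a tiny corpus, then predict the next word
# by picking the most frequent continuation seen so far.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Build the "rankings": for each word, count the words that follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation observed in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
print(predict_next("sat"))  # "on"
```

A model that only ever picked the top-ranked token would be repetitive; sampling lower-ranked tokens is what lets generative models produce novel text.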

How to write effective prompts for Generative AI

Generative AI models are trained on large conversational datasets and can be used to generate natural language text. Writing effective prompts for these models is essential to ensure the generated text is accurate and relevant. Here are some tips to help you write effective prompts for Generative AI.

Communicate naturally

Approach your interactions with the model as if speaking to another person. This human-like dialogue facilitates more natural and effective responses.

Prioritize clarity and conciseness

Craft your prompts in clear and direct language that thoroughly details your request. This prevents misunderstandings and ensures the AI accurately interprets your intent.

Avoid negative constructions

Instead of using negative constructions, phrase your instructions positively to reduce the risk of misinterpretation. For example:
Less Effective: "Don't include..."
More Effective: "Include only..."
This approach minimizes the risk of the AI focusing on the unintended part of your instruction.

Define the AI's role

To improve the relevance and accuracy of AI-generated responses, it is helpful to specify the role or expertise you expect the AI to assume. This allows the model to identify and leverage token patterns associated with that specific domain. You can reinforce this by giving the AI a step-by-step process: first, instruct it to take a breath (which may seem odd but can be effective); then tell it to respect the rules of the process, formulate an answer, and check that answer, reformulating it if it doesn't follow the rules.
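The role-plus-process pattern above can be assembled as plain text. The helper below is a hypothetical sketch (the function name, role wording, and rules are illustrative, not a fixed recipe); the resulting string is what you would send to the model.

```python
# Hypothetical sketch: combine a role, a rule-following process, and a
# task into one prompt string. All wording here is illustrative.

def build_role_prompt(role: str, rules: list[str], task: str) -> str:
    """Assemble a role-based prompt with numbered process rules."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, 1))
    return (
        f"You are {role}.\n"
        "Take a breath and follow this process:\n"
        f"{numbered}\n"
        "If your answer breaks any rule, reformulate it before responding.\n\n"
        f"Task: {task}"
    )

prompt = build_role_prompt(
    role="an experienced social media copywriter",
    rules=[
        "Keep the tweet under 280 characters.",
        "Mention that FoundersFableAI tells stories of famous entrepreneurs.",
        "Check the draft against the rules before answering.",
    ],
    task="Write a tweet about FoundersFableAI.",
)
print(prompt)
```

Keeping the role, rules, and task as separate arguments makes it easy to swap roles while reusing the same checking process.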

Demand specificity

Given that AI models process language token by token, every word matters. Avoid vague instructions to prevent ambiguous outcomes. The more precise your prompt, the better the AI can meet your expectations.

Embrace detailed requests

As you become more adept at prompt writing, don't hesitate to make more complex requests. Generative AI models can handle detailed prompts, allowing you to leverage their capability to digest and process large amounts of information effectively.

Incorporate guiding statements

Anticipate potential issues or decisions the AI might face and address them within your prompt. This proactive approach can help mitigate the occurrence of "hallucination," where the AI might generate incorrect or misleading information.

Utilize examples

Incorporate examples within your prompts, especially when aiming for specific outputs. Known as "few-shot learning," this method can significantly improve the AI's understanding and execution of your request by providing a clear template for the desired outcome.
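A few-shot prompt simply prefixes the real request with worked examples in a consistent format. The sketch below shows the pattern for a sentiment-labeling task; the example reviews are invented for illustration.

```python
# Sketch of few-shot prompting: labeled examples teach the model the
# input/output pattern before the real request. Example reviews invented.

examples = [
    ("The battery died after two days.", "negative"),
    ("Setup took thirty seconds. Love it!", "positive"),
]

def build_few_shot_prompt(examples, new_input: str) -> str:
    """Format examples and the new input in one consistent template."""
    shots = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return f"{shots}\nReview: {new_input}\nSentiment:"

print(build_few_shot_prompt(examples, "Great value for the price."))
```

Ending the prompt with the bare `Sentiment:` label nudges the model to complete it in the same format as the examples.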


In conclusion, mastering the art of writing effective prompts for Generative AI is an invaluable skill in harnessing the full potential of these advanced technologies. As we've seen, Generative AI, including Large Language Models like GPT-4, Claude, Gemini, and Llama, offers remarkable capabilities in automating tasks, generating content, and solving complex problems. However, the quality of their output heavily depends on the clarity, specificity, and intent of the prompts provided by human users.

Prompt engineering, therefore, is not just a technical skill but a creative process that requires understanding, practice, and refinement. By adopting strategies such as communicating naturally, prioritizing clarity and conciseness, and defining the AI's role, users can significantly improve the accuracy and relevance of AI-generated content.

The journey of learning to craft effective prompts is ongoing and evolving with the AI's capabilities. As Generative AI advances, the potential for creating innovative, accurate, and engaging content is boundless. Whether for automating business operations, enhancing customer service through chatbots, or generating creative content, the ability to write effective prompts is crucial in unlocking the transformative power of Generative AI. This skill enhances the efficiency and optimization of tasks and opens up new avenues for creativity and innovation in various fields.

FAQs

What are the ethical considerations in using Generative AI for content creation?

The ethical considerations in using Generative AI are multifaceted. As these technologies become more capable of producing content that closely mimics human output, concerns about copyright infringement, the propagation of misinformation, and the impact on jobs in creative fields have emerged. Ethically deploying Generative AI involves ensuring that the content produced respects copyright laws, accurately represents factual information, and is used to complement human creativity rather than replace it. It also includes developing and adhering to guidelines that prevent the misuse of AI in creating deceptive or harmful content.

How can one measure the effectiveness of a prompt?

Measuring the effectiveness of a prompt in Generative AI involves evaluating the accuracy, relevance, and coherence of the AI's response. This analysis can be subjective, depending on the specific goals of the prompt. However, effectiveness can generally be gauged by how well the response meets the intended purpose, whether generating creative content, providing accurate information, or solving a problem. This assessment often requires human judgment to determine the quality of the AI's output in the task context. Over time, feedback from these evaluations can be used to refine prompts and improve the AI's performance.

Can Generative AI models understand and respond to prompts in languages other than English? 

Regarding language capabilities, modern Generative AI models are increasingly proficient in understanding and responding to prompts in languages other than English. This results from training on diverse datasets that include multiple languages, enabling them to generate responses in the language of the prompt. However, the quality and accuracy of these responses can vary significantly between languages, especially for those less represented in the training data. Developers and users of Generative AI must be mindful of these limitations and work towards making these technologies more inclusive and effective across a broader spectrum of languages. This means expanding training datasets to include more languages and accounting for the cultural nuances that affect language use and interpretation.

Don't take our word for it
Try indigo.ai, in a few minutes, without installing anything and see if it lives up to our promises
Try a demo