Tuesday, June 18, 2024

Prompting 101: Revenge of the Humanities, Part Two

How do you get the most out of interacting with generative AI? It’s an evolving discipline, to put it mildly. Try it all. In part one of Prompting 101, we looked at the fundamentals. Once you’ve established the basics, keep doing what works for you, with lots of experimentation. Using a conversational approach, giving as much detail as possible, and sequencing your prompts so each builds on the last is the combination that works best for me.

Prompting 101 – Part II

Beyond the basics of prompt length and sequencing, there are several other approaches and techniques to optimize interactions with language models:

1. Temperature Setting: Adjusting the “temperature” of the model can influence its output. A higher temperature makes the output more random, while a lower temperature makes it more deterministic. This can help in controlling the diversity of responses. 

In most language model interfaces, such as the OpenAI API, you can control the temperature by setting a parameter in the API call. This is usually a number between 0 and 1 (some APIs accept values up to 2); a setting of 0.7, for instance, often strikes a balance between creativity and reliability.

For end users who aren’t directly interacting with the API, such as in some apps or platforms, there might not be an explicit way to adjust the temperature. Instead, the temperature setting is predetermined by the developers or administrators of the service to suit the typical use cases of their audience.
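For developers with API access, the temperature setting looks something like this. A minimal sketch in the OpenAI chat style; the model name and helper function are illustrative, so check your provider's documentation for current names and valid ranges.

```python
# Sketch of an OpenAI-style chat request with an explicit temperature.
# "gpt-4o" and build_request are illustrative, not a specific product's API.

def build_request(prompt: str, temperature: float = 0.7) -> dict:
    """Assemble a request payload; temperature near 0 is more
    deterministic, near 1 more varied."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature is kept between 0 and 1 in this sketch")
    return {
        "model": "gpt-4o",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_request("Suggest three taglines for a bakery.", temperature=0.9)
# With an official client you would then pass this payload to the
# chat-completion call, e.g. client.chat.completions.create(**payload)
```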

2. Limiting Response Length: You can set a maximum token (word piece) limit to control the length of the model’s responses. This can be useful when you need concise answers.
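In API terms, the length cap is usually a single parameter. A sketch in the same OpenAI chat style as above; the parameter name `max_tokens` is common but not universal, and the model name is illustrative.

```python
# Illustrative payload with a response-length cap. "max_tokens" bounds
# how many tokens the model may generate in its reply.

def build_capped_request(prompt: str, max_tokens: int = 60) -> dict:
    return {
        "model": "gpt-4o",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # upper bound on generated tokens
    }

payload = build_capped_request("Summarize the French Revolution in one sentence.")
```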

3. Prompt with Examples: Providing examples within your prompt can guide the model. For instance: “Translate the following English sentences to French, just like ‘Hello’ is translated to ‘Bonjour’.”
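This example-driven pattern (often called few-shot prompting) can be assembled programmatically. A small hypothetical helper, not any library's API:

```python
# Build a few-shot translation prompt from example pairs.
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    lines = ["Translate the following English sentences to French:"]
    for english, french in examples:
        lines.append(f"English: {english}\nFrench: {french}")
    # End with the new query, leaving "French:" open for the model to complete.
    lines.append(f"English: {query}\nFrench:")
    return "\n".join(lines)

print(few_shot_prompt([("Hello", "Bonjour")], "Goodbye"))
```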

4. Explicit Constraints: Clearly state any constraints or requirements in your prompt. E.g., “Write a vegetarian recipe excluding any meat products.”

5. Multiple Questions: Instead of asking one question at a time, ask multiple questions in a single prompt to get a more comprehensive response. 


If you try this and compare the results to a sequenced approach, you may see interesting differences. 

6. Neutral vs. Biased Prompts: A neutral prompt is more likely to get an unbiased response. If you give a biased or leading prompt, you may get answers that reflect that bias.

7. Iterative Refinement: Re-phrase or refine your prompt based on the output. If the first answer isn’t satisfactory, use it as feedback to improve the next prompt.

8. Role Playing: Asking the model to assume a specific role can guide its responses. For instance, “Pretend you’re Shakespeare and write a sonnet about the moon.”
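In chat-style APIs, a role like this is often placed in a system message rather than the user prompt. A sketch of that structure; the exact schema varies by provider.

```python
# Role-play expressed as a system message, in the common chat format.
# The system message sets the persona; the user message carries the task.
messages = [
    {"role": "system", "content": "You are William Shakespeare."},
    {"role": "user", "content": "Write a sonnet about the moon."},
]
```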

9. Feedback Loops: After getting an answer, you can feed it back into the model in its entirety, or edited, as a new prompt to get further elaboration or clarification.
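The feedback loop above can be sketched as a simple iteration. Here `ask` is a stand-in stub for a real model call, so the whole loop is illustrative:

```python
# Feedback loop: feed each answer back as the next prompt.
def ask(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"Elaboration on: {prompt}"

def refine(initial_prompt: str, rounds: int = 2) -> str:
    answer = ask(initial_prompt)
    for _ in range(rounds - 1):
        # The previous answer, verbatim or edited, becomes the new prompt.
        answer = ask(f"Expand on this answer:\n{answer}")
    return answer
```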

10. Comparison Queries: Instead of asking for an absolute answer, ask the model to compare options. E.g., “Compare the advantages of solar energy to wind energy.”

11. Negative Prompts: Specify what you don’t want in the answer. For example, “Explain quantum mechanics without using any mathematical formulas.”

12. Validation: For tasks where accuracy is paramount, you can use multiple models and compare their answers or even use ensembling methods to arrive at a consensus.

  • Using Multiple Models:
    • Deploy several models on the same problem.
    • Compare their outcomes to increase confidence in the result.
    • Useful in high-stakes areas like medical diagnosis or financial forecasting.
  • Ensembling Methods:
    • Bagging (Bootstrap Aggregating): Training different models on random subsets of the data (sampled with replacement), then combining their predictions.
    • Boosting: Sequentially training models, with each focusing on the previous one’s errors, then taking a weighted combination of their predictions.
    • Stacking: Training a new model to combine the predictions of multiple existing models.
  • Cross-Validation:
    • Dividing the dataset into different parts for separate training and testing. (Validation is a much bigger topic, especially given the hallucination phenomenon. Chinese researchers recently introduced Woodpecker, a hallucination-correction framework with high reported success rates, which could have a big impact.)
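At the prompting level, the simplest form of this consensus idea is a majority vote over several model answers. A toy sketch, with the answers hard-coded where real model calls would go:

```python
# Toy majority-vote "ensemble": pick the most common answer across models.
from collections import Counter

def consensus(answers: list[str]) -> str:
    """Return the most frequent answer; ties resolve arbitrarily."""
    return Counter(answers).most_common(1)[0][0]

# In practice each entry would come from a different model or run.
answers = ["Paris", "Paris", "Lyon"]
print(consensus(answers))  # → Paris
```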

13. Add multimodal elements to your prompts: Visual cues can add a great deal, and ChatGPT’s handling of images is now quite sophisticated. For example, “Compare this map to that map: roughly how large is each area relative to the other?” or “Give me ten works of literature that were popular at the time this image was taken” or “Which ten artists were most famously part of this movement?”
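For API users, a multimodal prompt pairs text with an image in one message. A sketch in the content-parts style used by several chat APIs; field names vary by provider, and the URL is a placeholder.

```python
# Illustrative multimodal message: text plus an image reference.
# The "content parts" structure follows the OpenAI chat style; other
# providers use different field names.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "How large is the shaded region on this map?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/map.png"}},
    ],
}
```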

Join us for an upcoming session on Prompting 101. Follow us on X for more information! 


Jennifer Evans
http://www.b2bnn.com
principal, @patternpulseai. author, THE CEO GUIDE TO INDUSTRY AI. former chair @technationCA, founder @b2bnewsnetwork #basicincome activist. Machine learning since 2009.