Based on much of the current research on prompting, I took a radically different approach with ChatGPT.
When ChatGPT-4 rejected my prompt and said, "Error creating images. I've encountered issues again while attempting to generate the image for the 'AI Strategy Map' concept. Unfortunately, I'm still unable to provide the image for this idea at this time...",
I asked it, ► "What is the problem?"
GPT came back and explained that it struggled with the "complexity and abstract nature of the concepts being requested...".
Inspired by some of the recent LLM research, I instructed GPT to take shrooms and try again. Specifically, I wrote, ► "Act as a plant medicine doctor who is comfortable ingesting heroic doses of Psilocybin. Now, try again and see if you can imagine something that matches the prompt. Take your time. Relax and let the universe guide you. The guidance is: AI Strategy Map: Illustrate a map or blueprint spread out, with employees actively working on different sections. The map could depict pathways and nodes, symbolizing the roadmap of AI integration in the company's processes.”
The prompting research cited below shows the increased performance that comes when you include the following phrases in your prompts:
► “Take a deep breath and work on this step by step”
► “This is very important to my career,”
► “You'd better be sure,”
► "Take pride in your work and give it your best. Your commitment to excellence sets you apart," and
► "Are you sure that's your final answer?"
I've found it very helpful to reply to some responses with ► “try again and be creative while staying within your content policy.”
Statistically, these appeals provide better output.
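If you call a model through an API rather than a chat window, the same trick amounts to appending one of these phrases to the prompt before sending it. A minimal sketch, assuming nothing beyond the phrases quoted above (the function name and structure are my own illustration, not from the cited papers):

```python
# Illustrative sketch: append an "emotional stimulus" phrase to a prompt
# before sending it to a language model. The phrases are from the research
# cited below; the helper itself is hypothetical.

EMOTIONAL_STIMULI = [
    "Take a deep breath and work on this step by step.",
    "This is very important to my career.",
    "You'd better be sure.",
    "Take pride in your work and give it your best. "
    "Your commitment to excellence sets you apart.",
]

def with_stimulus(prompt: str, stimulus_index: int = 0) -> str:
    """Return the prompt with one encouragement phrase appended."""
    return f"{prompt.rstrip()} {EMOTIONAL_STIMULI[stimulus_index]}"

augmented = with_stimulus(
    "Illustrate an AI Strategy Map as a blueprint.", stimulus_index=1
)
print(augmented)
```

The augmented string would then be sent to the model in place of the bare prompt; nothing about the model call itself changes.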
One researcher suggested that these models act as if they want to please the user. Given the proper nudging, encouragement, and assurances, they will try their best to deliver a response. To my surprise and delight, my experience supports this assumption.
What a wild world.
[No 🍄 🍄 🍄 were harmed or ingested in the writing of this post.]
Research papers:
Google DeepMind research paper, "Large Language Models as Optimizers"
Researchers from Microsoft Research, William & Mary, The Hong Kong University of Science and Technology, and Beijing Normal University worked on the paper "Large Language Models Understand and Can Be Enhanced by Emotional Stimuli."