- RuntheAI
- Posts
- Prompts are not moats
Prompts are not moats
Good Morning AI Runners
Here's what we've got for you today:
Prompt Engineering is not a moat
Google Launched ChatGPT For Healthcare
Prompts are not moats
Imagine you have a magic lamp that can grant you any wish you want. All you have to do is tell the lamp what you want, and it will make it happen. This is kind of like how a chatbot works. To make a chatbot work, you have to give it a prompt, which is like a question or a request. For example, you might give the chatbot the prompt "Write me three facts about French bulldogs." The chatbot will then generate three facts about French bulldogs for you.
But the way you phrase the prompt can make a big difference in the output of the chatbot. For example, if you give the chatbot the prompt "Write me three angry tweets about French bulldogs," the output might be very different from the prompt "Write me three facts about French bulldogs." This is because the chatbot is trying to understand what you want based on the words you use in the prompt.
Prompt engineering is the process of designing and testing different prompts to get the best results from a chatbot or other machine learning model. This can involve trying out different language or formatting to see how it affects the output of the model.
There are now even marketplaces that sell prompts for different machine learning models, like DALL·E, GPT-3, Mid journey, and Stable Diffusion. (promptbase.com)
Some people think that prompt engineering might not be a long-term thing, but it's still an interesting way for people to make money in the short term.
Reverse prompt engineering
Reverse prompt engineering is when someone tries to reverse engineer the prompts that a company used to create their AI enabled product. This means trying to figure out what prompts the company used to make their chatbot or other product work the way it does.
Reverse prompt engineering can potentially undermine the value of a company's product and might even strip them of their "moats". Think Jasper.ai and copy.ai.
In the blog linked below, swyx takes the prompts that the Notion AI uses to create its features and tries to understand how they work. This might involve trying different techniques, such as asking the AI questions in different ways, to see if we can get it to reveal the prompts that it uses. This could be a fun and interesting way to learn more about how AI works, and how we can use it to do things like create new features or solve problems:
Google Launched ChatGPT For Healthcare
Google has launched a chatbot called MultiMedQA for healthcare, which is based on OpenAI's ChatGPT and specifically designed to answer medical queries.
The chatbot combines HealthSearchQA, a database of medical questions, with six datasets covering professional medical exams, consumer queries, and research.
The chatbot also includes a methodology for evaluating the responses of human models on various dimensions, including precision, factuality, and potential harm.
Pic of the day:
That's it from RuntheAI for today.
THANK YOU FOR READING AND SEE YOU TOMORROW, SUBSCRIBE TO STAY UPDATED!