These are clear instructions you give an LLM to guide its output. The quality and usefulness of that output, however, depend on both the model itself and how the prompt is written.
Since we are not LLM developers, the factor we actually control is how effective we make the prompt, and we can do that by following some prompting techniques:
Task Specification: clearly stating exactly what you want the model to do.
Contextual Guidance: adding details so the model stays on topic.
Domain Expertise: using field-specific terms so the model produces precise information for that field or domain.
Bias Mitigation: instructing the model to avoid favoring any group, entity, gender, country, etc.
Framing: setting boundaries on how the model should respond (length, focus).
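To make this concrete, here is a minimal sketch of a single prompt that applies all five techniques at once. The scenario (a clinical trial report summarized for hospital administrators) is invented purely for illustration and is not taken from the article.

```python
# One prompt combining the five techniques above; wording and scenario are illustrative only.
prompt = (
    "Summarize the attached clinical trial report in under 150 words. "                    # Task Specification + Framing (length)
    "The summary is for hospital administrators deciding whether to fund the treatment. "  # Contextual Guidance
    "Use standard pharmacology terms such as efficacy, dosage, and adverse events. "       # Domain Expertise
    "Present the results neutrally, without favoring any manufacturer, country, or patient group."  # Bias Mitigation
)
print(prompt)
```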
User Feedback Loop Technique: before we go into this, let's first introduce something called "Zero-Shot".
Because LLMs nowadays are trained on vast datasets, they can perform tasks in a zero-shot manner, meaning the model can generate a response to a prompt without having been trained on that particular task.
In Zero-Shot Prompting, the model is given only the task description or instruction, without any examples, and relies entirely on its pretrained knowledge to respond.
Example: *Summarize this article from NASA “NASA exploration”*
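As a rough sketch of what a zero-shot call could look like in code, the snippet below sends only the instruction, with no examples. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model name and the `<article text here>` placeholder are assumptions, not part of the original example.

```python
# Zero-shot prompting sketch: only the task instruction is sent, no examples.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "Summarize this article from NASA: <article text here>"}
    ],
)
print(response.choices[0].message.content)
```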
Often, a single prompt will not produce the exact response you're looking for. This is where the User Feedback Loop Technique becomes useful: you give feedback on the model's output and adjust the prompt step by step. By repeating this process, you can gradually improve the response until it meets your expectations.
Example:
From the output we got for our zero-shot prompt, we can follow up with another prompt such as:
*Summarize the article in 2-3 sentences, highlighting NASA's long-term goals and the global cooperation involved.*
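A feedback loop like this can also be scripted by keeping the conversation history and appending each refinement as a new user turn. The sketch below reuses the same assumed SDK and placeholder model as the zero-shot example; the single feedback message mirrors the refined prompt above.

```python
# User feedback loop sketch: start with the zero-shot prompt, then feed the
# model's answer back along with a refinement, repeating until satisfied.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user",
     "content": "Summarize this article from NASA: <article text here>"}
]

feedback_rounds = [
    "Summarize the article in 2-3 sentences, highlighting NASA's long-term "
    "goals and the global cooperation involved.",
]

for feedback in feedback_rounds:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    # Keep the model's answer in the history, then add the refinement as the next turn.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": feedback})

# Final call with all accumulated feedback applied.
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```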