Prompts As Programs: A Structure-Aware Approach to Efficient Compile-Time Prompt Optimization (2024)

Tobias Schnabel  Jennifer Neville
Microsoft Research, Redmond, USA
{toschnab,jenneville}@microsoft.com

Abstract

Large language models (LLMs) can now handle longer and more complex inputs, which facilitates the use of more elaborate prompts. However, prompts often require some tuning to improve performance for deployment. Recent work has proposed automatic prompt optimization methods, but as prompt complexity and LLM strength increase, many prompt optimization techniques are no longer sufficient and a new approach is needed to optimize metaprompt programs. To address this, we introduce SAMMO, a framework for compile-time optimization of metaprompt programs, which represents prompts as structured objects and thereby allows for a rich set of transformations that can be searched over during optimization. We show that SAMMO generalizes previous methods and improves the performance of complex prompts on (1) instruction tuning, (2) RAG pipeline tuning, and (3) prompt compression, across several different LLMs. We make all code available open-source at https://github.com/microsoft/sammo.

1 Introduction

[Figure 1: A metaprompt π[X] for a review classification task, represented as a structured program. The root has three components: RenderText() with θ_text = "In the following task, you will have to classify..."; RenderSection() containing RenderText() with θ_text = "Here are some examples:" and RenderData() with θ_format = "json" and θ_fewshot = [{"Good meal!": "positive"}, ...]; and RenderSection(X) containing RenderText() with θ_text = "Classify this:" and RenderData(X) with θ_format = "json".]

With the recent development of large language models (LLMs) such as Mixtral 8x7B (Jiang et al., 2024) or GPT-4, it is now possible to provide LLMs with longer inputs, including richer context and more detailed instructions. As a result, the complexity of prompts has increased, including longer strings of instructions, demonstrations or examples, and specifications of more structured output. These lengthy instructions are often reused as dynamic templates (aka metaprompts), where static information (e.g., instructions) is combined with input-dependent information (e.g., user queries, retrieved documents in Retrieval Augmented Generation (RAG) systems) at run time to generate the desired output. There is a clear trend towards prompts becoming complex programs in and of themselves.

To achieve an acceptable level of accuracy for deployment, prompts are often manually fine-tuned. This process typically involves adding information to handle exceptions and corner cases. To improve this process, there has been a great deal of recent work on automatic optimization methods. The first wave of work in this area focused on simpler prompts (i.e., with less structure) and unstructured mutators such as text rewrite operations (Chen et al., 2023; Pryzant et al., 2023; Zhou et al., 2023). The second wave of work has focused on optimizing prompts with more complex programmatic structure, such as metaprompts. Initial work in this direction focused on applying textual mutators to different prompt components (Fernando et al., 2023; Ye et al., 2023). Subsequent work considered both textual mutation and hyperparameter selection (Sclar et al., 2023). However, with the increasing strength of LLMs and the increasing complexity of prompt structure, many prompt optimization techniques are no longer applicable, and a new approach is needed that is able to optimize metaprompt programs.

To address this, we introduce Sammo, a general-purpose framework for compile-time optimization of prompt programs. Individual operations and parts of the prompt are represented through components in a call graph, similar in spirit to DSPy (Khattab et al., 2023). However, Sammo goes beyond this previous work by (i) allowing this call graph to be changed, (ii) also representing the internal structure of prompts, and (iii) generalizing "compile-time" optimizations to all prompt components (e.g., textual content, hyperparameters). It considers metaprompts as programs, allowing for modular construction of complex prompts and facilitating compilation of more effective prompts.

Specifically, Sammo represents metaprompts as structured objects, which allows for a much richer set of transformations to be searched over during optimization. We formalize the optimization as a problem of searching over the structure of a complex metaprompt π, which may include many structured components such as task description, guidelines, examples, and input/output format. Note that a metaprompt is designed to be combined dynamically with different input data X at run time, i.e., LLM(π[X]) = Y.

Figure 1 shows a possible structured program for a review classification task. Treating prompts as programs allows us to (i) handle the increasingly complex nature of metaprompts, for which older methods are no longer applicable, and (ii) define general-purpose mutation operators that span a meaningful search space. Drawing from approaches for neural architecture search, Sammo carries out a genetic search over a set of mutation operators that can change the structure and information contained in non-trivial ways.

Sammo is a general framework for prompt optimization in a black-box setting where the practitioner can only sample from the output of LLMs, reflecting common API capabilities (Sun et al., 2022). We show that Sammo encompasses several instruction optimization and compression techniques as special cases. We demonstrate the utility of Sammo in three scenarios: (1) instruction tuning, (2) retrieval augmented generation (RAG) pipeline tuning, and (3) prompt compression for annotation tasks. Our experiments show that Sammo generalizes previous methods and provides performance gains of 10-100% in instruction tuning, 26-133% in RAG tuning, and cost reductions of over 40% in prompt compression. Moreover, we show that complex prompts need to be optimized separately for each LLM and that gains are more pronounced for weaker LLMs. Code is available at https://github.com/microsoft/sammo under an MIT license.

2 Problem Definition & Notation

Let π refer to a metaprompt function that takes input data X and maps it to a string that is fed to an LLM to generate an output, i.e., Ŷ = LLM(π[X]). Our goal is to optimize the performance of π by transforming it into a more performant metaprompt π* via modifications to its structure, parameters, and content. More precisely, our goal is to find a metaprompt π with minimal loss across the entire data distribution:

\pi^* = \underset{\pi \in \Pi}{\operatorname{arg\,min}} \; \mathbb{E}_{D_s \sim P(X,Y)}\left[ S_\phi(\pi, D_s) \right]. \quad (1)

Here, P(X,Y) is the distribution that the labeled samples D_s are drawn from, Π represents the set of all metaprompts, and S_φ ∈ ℝ is a scalarized loss function that specifies trade-offs between potentially multiple individual quality objectives S_i (e.g., S_i := L_{S_i}(Y, LLM(π[X]))).

Since the data distribution P(X,Y) is unknown and the evaluation cost is prohibitive, we resort to using the following sampled score in the spirit of empirical risk minimization (ERM):

\hat{\pi} = \underset{\pi \in \Pi}{\operatorname{arg\,min}} \; S_\phi(\pi, D_{\text{train}}). \quad (2)
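To make Eq. 2 concrete, the following minimal Python sketch computes the sampled score for a candidate metaprompt; the function names (pi, llm, loss) and the zero-one loss are illustrative assumptions, not part of Sammo's API.

import statistics
from typing import Callable

def empirical_score(pi: Callable[[object], str],
                    llm: Callable[[str], str],
                    loss: Callable[[str, str], float],
                    train: list[tuple[object, str]]) -> float:
    """Sampled score S_phi(pi, D_train): average loss over the training set."""
    return statistics.mean(loss(y, llm(pi(x))) for x, y in train)

# Example: a zero-one loss for classification tasks.
zero_one = lambda y, y_hat: float(y.strip().lower() != y_hat.strip().lower())

The candidate with the lowest empirical score is then selected, exactly as in standard empirical risk minimization.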

2.1 Compile-time Optimization

In this paper, we focus on optimization that can only be done once before deploying the metaprompt, which we refer to as compile-time optimization and which corresponds to the problem of solving Eq. 2. More formally, the optimization itself is a function returning another function (the metaprompt):

\hat{\pi} = \rho_{\text{compile}}(\Pi, D_{\text{train}}, S_\phi). \quad (3)

This is different from run-time optimization, where the optimizer ρ_run gets called with the filled-in metaprompt, i.e., ρ_run(π[X]) = X̃, and the LLM is then prompted with LLM(X̃). Such an optimization procedure would need to be executed repeatedly for each different input X. Since we aim to amortize the optimization cost over multiple uses of the metaprompt, we do not consider run-time optimization further. However, we note that run-time optimizations can complement compile-time optimizations if sufficient resources are available.

Motivated by real-world needs, we make the following assumptions:

  1. We only have small amounts of data for optimization (on the order of hundreds of examples). This represents a reasonable amount of data that can be hand-labeled; requiring more would substantially limit applicability in practice.

  2. The language model is a closed black box and does not output probabilities, reflecting recent changes in how access to LLMs is provided. However, we can sample from it: y ∼ LLM(Y | π[X]).

2.2 Metaprompts As Programs

To search efficiently over the exponentially large space of all metaprompts Π, we exploit their programmatic structure, which typically comprises parameterized components executed in sequence. More specifically, we represent the structure of π as a directed acyclic function graph G, i.e., π[X] = f_{G_π}(X). Here, G_π = (V, E) is a DAG with a single root node v_r. Directed edges e_ij ∈ E represent parent (v_i) to child (v_j) relationships among the metaprompt's structural components.

Each node v ∈ V has a function type ψ_v and static parameters θ_v. We use θ_v to refer both to static textual input in the metaprompt that is common across calls (e.g., instructions) and to any hyperparameters of the components (e.g., format specifiers). Each node also receives dynamic information from (1) X (which may be transformed by its parents) and (2) messages from its children.

The graph is then recursively evaluated from the root node v_r as follows:

f_{G_\pi}(X) = \psi_{v_r}\big( (X \cup \psi^{r}_{\text{children}}),\, \theta_{v_r} \big),

where

\psi^{r}_{\text{children}} = \big\{ \psi_{v_i}\big( \dot{X}_{\psi_{v_r}}(X),\, (\theta_{v_i} \cup \theta_{v_r}) \big) \big\}_{v_i} \quad \text{s.t. } e_{ri} \in G_\pi.

Here, X refers to dynamic input that changes with each call to the metaprompt (e.g., a user query), and we note that X may consist of a batch of more than one query. We use Ẋ_{ψ_v}(·) to refer to a possible transformation of the dynamic input that a parent v can send to its children, e.g., to perform minibatching. Note that the input can also be passed on directly without transformation, i.e., Ẋ_{ψ_v}(X) = X.
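As an illustration of this recursive evaluation, the sketch below implements the two equations above with a plain Python class; the class and field names are our own and do not correspond to Sammo's actual component classes.

from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Node:
    psi: Callable[[Any, list, dict], str]            # function type psi_v
    theta: dict = field(default_factory=dict)        # static parameters theta_v
    children: list = field(default_factory=list)     # child nodes v_i
    transform: Callable[[Any], Any] = lambda X: X    # X_dot: input transformation

    def evaluate(self, X: Any) -> str:
        # Children are evaluated on the (possibly transformed) input first;
        # the node then combines X, the children's messages, and theta_v.
        child_msgs = [c.evaluate(self.transform(X)) for c in self.children]
        return self.psi(X, child_msgs, self.theta)

# Example psi: prepend static text to the children's rendered output.
section = Node(
    psi=lambda X, msgs, theta: "\n".join([theta.get("text", "")] + msgs),
    theta={"text": "Here are some examples:"},
)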

We denote the space of metaprompts as Π, and each π ∈ Π consists of a different program structure G. With this notation, we can see that searching over Π in Eq. 2 amounts to combinatorial search over G. We initialize the search with a baseline prompt π_0 and generate successor functions through mutations to G_{π_0}. Mutators (see Sec. 3.1) operate on G_π at the node or neighborhood level. They can reorder, replace, or delete nodes. They can also modify the θ_v associated with a node (e.g., by modifying instructions or parameter values).

We note the similarity of metaprompt search to neural architecture search: node functions that ignore X can be seen as global parameters, and the ψ_{v_r} correspond to projection operators or activation layers in that they aggregate input from their predecessors.

To give some intuition for what this looks like in practice, Figure 1 shows a metaprompt for a review classification task that has already been instantiated with some input data X. It has a section with task instructions at the beginning, followed by a set of static in-context examples (although these could be selected dynamically as well) and the unlabeled input data. Note that the data format is explicitly represented: here, it serializes data to a JSON array, with each example being a dictionary.
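As a sketch of how Figure 1's metaprompt could be expressed as a program, the snippet below uses three illustrative render functions; the names mirror the figure's components but are not Sammo's actual API.

import json

def render_text(text):
    # theta_text: static instruction text.
    return lambda X: text

def render_data(data=None, fmt="json"):
    # theta_format: serialization choice; renders static few-shot data or X.
    def render(X):
        payload = data if data is not None else X
        return json.dumps(payload) if fmt == "json" else str(payload)
    return render

def render_section(*parts):
    # Concatenates the rendered child components.
    return lambda X: "\n".join(part(X) for part in parts)

metaprompt = render_section(
    render_text("In the following task, you will have to classify..."),
    render_section(
        render_text("Here are some examples:"),
        render_data(data=[{"Good meal!": "positive"}]),
    ),
    render_section(
        render_text("Classify this:"),
        render_data(),  # filled with the input X at run time
    ),
)

print(metaprompt([{"input": "Great service!"}]))  # renders pi[X]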

3 Sammo: Structure-Aware Multi-objective Metaprompt Optimization

[Figure 2: Overview of Sammo's main components for efficient prompt optimization.]

Algorithm 1: Sammo's iterative search.

1: Input: set of mutators M, training set D_train, baseline prompt π_0, objective S_φ
2: Π_C ← InitCandidates(π_0)
3: while condition do
4:   Π_active ← SampleCandidates(Π_C, D_train, S_φ)
5:   Π_nextgen ← ∅
6:   for each π ∈ Π_active do
7:     M_valid ← {m ∈ M : m can be applied to π}
8:     M_π ← SampleMutators(M_valid)
9:     for all m ∈ M_π: add Mutate(m, π) to Π_nextgen
10:  end for
11:  Π_C ← Π_C ∪ Prune(Π_nextgen, D_train, S_φ)
12: end while
13: return best candidate from Π_C

We now outline Sammo, a general framework for optimizing the performance of metaprompts. Sammo uses general-purpose search algorithms with a rich set of mutation operators to explore the space of prompts. Figure 2 presents an overview of Sammo's main components that allow for efficient prompt optimization. First, there is a base layer of programming primitives from which more complex prompt programs are built (cf. Figure 1). These comprise the nodes of the metaprompt function graph G, which we described in Section 2.2. On top of that sits a layer of prompt mutators, which are in turn used by the search algorithms in the top-most layer. We outline these two layers next.

3.1 Prompt Mutation Operators

Table 1: Examples of mutation operators, grouped by the part of the metaprompt they change.

Type                      Operator                    Description
Text attributes θ_text    Paraphrase                  Rewrite text, keeping its meaning
                          InduceInstructions          Generate instructions from examples
                          ShortenText                 Reduce length to a certain number of words
                          TextToBulletPoints          Turn text into a bullet list
                          RemoveStopwords             Filter out stopwords
Other attributes θ        ChangeSectionFormat         How sections are rendered (e.g., markdown, XML)
                          ChangeDataFormat            How data is rendered (e.g., JSON, XML)
                          DecreaseInContextExamples   Resample a smaller number of examples
Structure G_π             DropSection                 Remove a section
                          RepeatSection               Repeat a section somewhere

At the heart of Sammo's optimization are mutation operators. Formally, a mutation operator is a probabilistic function m: Π × Π_Ω → [0, 1] that specifies how to transition from a metaprompt π to an edited version π′ ∈ Π_Ω. This structure-aware component of Sammo opens up a new class of operators, for example operators that only modify specific sections or paragraphs. These can range from trivial (e.g., rephrasing a sentence) to complex (e.g., inducing a new set of task instructions). We call an operator finite if, for all inputs π, the set of possible outputs is finite, and infinite otherwise. A mutation operator's type also determines which search algorithms can be used.
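The sketch below shows what a finite, structure-aware mutation operator could look like on the illustrative Node representation from Section 2.2; DropSection appears in Table 1, but this implementation is our own simplification.

import copy
import random

def drop_section(root):
    """DropSection: yield one copy of the tree per removable child (finite)."""
    for i in range(len(root.children)):
        mutated = copy.deepcopy(root)
        del mutated.children[i]
        yield mutated

def sample_mutation(root, operators):
    """Apply a randomly chosen operator and pick one of its proposed edits."""
    op = random.choice(operators)
    candidates = list(op(root))
    return random.choice(candidates) if candidates else root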

Table 1 shows a non-comprehensive set of mutation operators, grouped by what part of a metaprompt they change. Many of these operators are task-agnostic to allow for wide applicability, but we note that practitioners can easily implement their own task-specific mutators to encode domain-specific heuristics. To the best of our knowledge, Sammo is the first optimization method that can also optimize over large structural changes and data formatting.

3.2 Search Algorithms

Sammo provides two classes of search algorithms. First, if all mutators used are finite, then Sammo can explore the space via enumerative search, either exhaustively (grid search) or by sampling search points at random (random search). As we show in Section 5, this simple approach can be very effective in practice. When some of the mutators are infinite, Sammo offers iterative search algorithms to optimize prompts. Algorithm 1 shows the skeleton of how Sammo implements iterative search. Starting with an initial baseline metaprompt π_0 and a small amount of training data as input, it iteratively modifies a current set of candidates Π_active through mutations to generate a new generation of candidates.

Specific choices for the functions InitCandidates, SampleCandidates, condition, and Prune in Algorithm 1 yield common search algorithms such as beam search, regularized evolutionary search (Real et al., 2019), or breadth-first search. For example, beam search typically starts with one candidate, then selects all parents at the current depth and keeps only the top-k performing children for the next round. The novelty of Sammo lies not in this outer search but in the fact that we represent the higher-level structure of a metaprompt explicitly, which in turn allows us to define a rich set of operators that can transform both the structure and the content.
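As an illustration, the following compact sketch instantiates Algorithm 1 as beam search; score (the sampled objective from Eq. 2, lower is better) and mutate (one randomly chosen mutation) are assumed helpers, not Sammo's actual API.

def beam_search(pi_0, mutate, score, k=4, depth=3, n_mutations=8):
    beam = [pi_0]
    for _ in range(depth):                    # `condition`: fixed search depth
        nextgen = []
        for pi in beam:                       # SampleCandidates: the whole beam
            nextgen.extend(mutate(pi) for _ in range(n_mutations))
        beam = sorted(beam + nextgen, key=score)[:k]   # Prune: keep top-k
    return min(beam, key=score)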

3.3 Specializations of Sammo

We note that Sammo is a rich framework that allows practitioners to mix and match search strategies with a large variety of mutation operators. The following methods are examples of special cases of Sammo (see the sketch after this list):

  • APE – Automatic Prompt Engineering (Zhou et al., 2023). Here, InitCandidates generates a set of initial candidates from a small set of few-shot examples. Then, a single mutation operator, Paraphrase, is used with beam search to explore alternative candidates.

  • GrIPS – Gradient-free instruction search (Prasad et al., 2023). This approach builds a syntax parse tree from the task instructions and then performs beam search with add, delete, swap, and paraphrase mutation operations on the constituents.
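For instance, APE can be recovered with the following sketch, where induce_instructions and paraphrase stand in for LLM-backed helpers that generate instruction candidates from demonstrations and rewrite them, respectively; both names are illustrative assumptions.

def ape(few_shot_pairs, induce_instructions, paraphrase, score, k=4, depth=3):
    # InitCandidates: induce instruction candidates from few-shot examples.
    beam = sorted(induce_instructions(few_shot_pairs), key=score)[:k]
    for _ in range(depth):
        # Single mutation operator (Paraphrase) explored with beam search.
        beam = sorted(beam + [paraphrase(p) for p in beam], key=score)[:k]
    return min(beam, key=score)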

4 Use Case: Instruction Tuning

[Figure 3: Instruction tuning results for Sammo and baselines across backend LLMs.]

In theory, Sammo subsumes many existing methods for instruction tuning, where the task is to generate static instructions that tell the LLM how a task should be solved. To better align with previous work, we use eight zero-shot BigBench classification tasks (Srivastava et al., 2023) that still have headroom to improve (i.e., the accuracy of the baseline prompt π_0 with GPT-3.5 is < 0.9). We subsample training and test sets of size n = 100 from the official splits.

We compare against APE (Zhou et al., 2023), Automatic Prompt Optimization (Pryzant et al., 2023), and GrIPS (Prasad et al., 2023). For the backend LLMs, we consider two open-source models, Mixtral 8x7B (Jiang et al., 2024) and Llama-2 70B (Touvron et al., 2023), as well as GPT-3.5 (Brown et al., 2020). We do not include GPT-4 since it showed negligible headroom for improving instructions in pilot experiments (cf. also Section 5).

Results. As Figure 3 shows, Sammo outperforms all other baselines, independent of whether GPT-3.5, Llama-2, or Mixtral is used as the backend model. As a side note, the baseline performance of a model seems to be correlated with how much performance is gained through prompt tuning: Llama-2 70B sees the largest relative gains (about 2x), Mixtral 8x7B moderate gains, and GPT-3.5 the smallest gains (around 10% over the baseline instructions). However, the zero-shot tasks here are relatively simple; more headroom exists for more complex tasks, as the next use case shows.

5 Use Case: Optimizing Retrieval Augmentation

[Figure 4: RAG pipeline tuning results across datasets and backend LLMs.]
[Figure 5: Correlation of candidate training scores between backend LLMs.]

Moving towards a more realistic application of prompt optimization, we consider improving retrieval-augmented semantic parsing. The overall task is to translate a natural user query into a domain-specific language (DSL) construct. Our goal is to show how applying Sammo can yield substantial gains in a realistic few-shot scenario. To stay within our overall setting of limited labeled data availability, we create a training/few-shot split (n = 500) and a test split (n = 100). To measure candidate performance, we subsample 100 examples from the training split as an evaluation set. To the best of our knowledge, there are no existing baselines for this scenario.

Following Bogin et al. (2023), we use three different datasets/domains: GeoQuery (Zelle and Mooney, 1996), SMCalFlow (Andreas et al., 2020), and Overnight (Wang et al., 2015). We use the same i.i.d. splits, DSLs, and starting prompt format as Bogin et al. (2023). We used Sammo with enumerative search to optimize exact-match accuracy over a search space of size 24, trying out different data formats, numbers of few-shot examples, and DSL specifications (see the sketch below). For details, see Appendix A.5. We consider all backend LLMs as before, plus GPT-4.
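The enumerated space can be sketched as a simple grid, matching the four dimensions listed in Appendix A.5 (3 x 2 x 2 x 2 = 24 configurations); the key names and the build_metaprompt/score helpers are illustrative assumptions.

import itertools

search_space = {
    "example_format": ["json", "plaintext", "xml"],
    "example_grouping": ["by_item", "by_input_output"],
    "n_examples": [5, 10],
    "dsl_spec": ["full", "signatures_only"],
}

configs = [dict(zip(search_space, values))
           for values in itertools.product(*search_space.values())]
assert len(configs) == 24

# Enumerative search: evaluate every configuration on the evaluation set.
# best = max(configs, key=lambda c: score(build_metaprompt(c), eval_set))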

Results. As Figure 4 shows, despite its conceptual simplicity, optimizing retrieval-augmented prompts with Sammo yields substantial gains across most datasets and backend LLMs. We note that, as before, relative gains decrease with increasing model strength: Llama-2 sees an average improvement of 133% and Mixtral of 44%. However, even GPT-4 benefits from the changes explored by Sammo, with an average relative gain of 30%.

Since we searched over the same set of mutations with Sammo across all models, we can also measure how well search trajectories align between backend LLMs. Figure 5 plots the correlation of the training scores of the 24 candidates explored between LLMs, averaged over all three datasets. As we can see, there is only weak correlation between LLMs, which indicates that prompts need to be optimized separately for each LLM.

6 Use Case: Prompt Compression

In these experiments, our goal is to minimize the weighted costs of a metaprompt subject to its accuracy not dropping below a certain threshold τ_acc and its parse error rate staying below 10%.

The weighted costs are given as

\text{cost}(\pi, D) = S_{\text{in}}(\pi, D) + 2 \cdot S_{\text{out}}(\pi, D), \quad (4)

where we weight the number of output tokens S_out double compared to the number of input tokens S_in to reflect current billing schemes for publicly available LLMs; the loss function is easily configurable upfront. Assuming a baseline prompt π_0 that has a reasonable level of accuracy, we set the threshold τ_acc to be within ε = 0.02 of the baseline prompt's performance.
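One simple way to scalarize this constrained objective is a penalty formulation, sketched below; the penalty encoding is our illustrative choice, not necessarily how Sammo implements it.

def compression_objective(s_in, s_out, accuracy, parse_error_rate,
                          tau_acc, tau_err=0.10, penalty=1e9):
    cost = s_in + 2.0 * s_out          # weighted costs, Eq. (4)
    if accuracy < tau_acc or parse_error_rate >= tau_err:
        return cost + penalty          # constraint violated: infeasible
    return cost

# With epsilon = 0.02: tau_acc = baseline_accuracy - epsilon.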

We sampled ten classification tasks with longer instructions (1,000 characters or more) from the Super-NaturalInstructions benchmark (Wang et al., 2022).

[Figure 6: Prompt compression results: weighted test-set costs and accuracy difference relative to the baseline prompt.]

Baselines. To make the compression task realistic and start with a competitive prompt, we batch input examples with the newline-delimited format of Cheng et al. (2023). Based on pilot experiments, we chose input batch sizes b such that the performance difference between no batching and input batching was within ε. This resulted in b = 10 for GPT-4, b = 5 for Mixtral and GPT-3.5, and b = 1 for Llama-2. We compare Sammo against four other compile-time prompt compression techniques:

  • Baseline Prompt: Our baseline prompt π_0 uses the official instructions provided with a task, followed by k = 5 in-context examples.

  • APE – Automatic Prompt Engineering (Zhou et al., 2023): Instead of optimizing for accuracy, we use the same objective as for Sammo (Eq. 4).

  • STDC – Syntax-guided Task Definition Compression (Yin et al., 2023a): Runs a sequential search to prune syntax tree constituents.

  • Stopwords: We search over two different stopword lists, as well as whether to apply them to the examples, to the task definition, or to both (implemented as RemoveStopwords operators from Table 1).

  • GPT-4 Rewrite: Using the templates from Li et al. (2023), we try out ten different instructions to compress the task definition and the in-context examples.

  • Sammo: We use beam search as the backbone of Algorithm 1 and initialize Π_C with four variants of θ_format of the baseline prompt π_0. We use all mutation operators listed in Table 1 as possible operations and choose mutators uniformly at random during the search.

Results. Figure 6 shows the average performance across the ten tasks for all backend LLMs. We show the final weighted costs on the test set (left y-axis), as well as the difference in performance relative to the baseline prompt, which should ideally not exceed ε = 0.02 (right y-axis).

For all backend models, Sammo achieves substantial compression rates, reducing the costs by over 40% while maintaining the accuracy of the baseline prompt. The STDC and Stopwords baselines achieve some compression, but the compression rates are only moderate, most likely because their mutation operations are limited. APE and GPT-4 Rewrite manage to compress prompts to a larger degree, but can end up with prompts that do not generalize well to the test set and experience large drops in accuracy.

[Figure 7: Fraction of times each mutation operator improved the objective when chosen, per backend LLM.]

For each mutation operation chosen by Sammo during the search process, we recorded whether it resulted in an improvement of the objective (Eq. 4). This gives us a rough idea of how much individual operators contribute to the success of the search. Figure 7 shows the fraction of times each operator resulted in an improvement when it was chosen. We can see that how successful a mutator is depends on the backend LLM, but rewriting operations and dropping in-context examples were the most useful mutators for compressing the prompt. GPT-4's performance was more robust than that of the other backend LLMs to lowering the number of in-context examples and to dropping the introduction.

7 Related Work

Related work can be categorized into two areas: prompt optimization and prompt compression.

In prompt compression, one main axis of distinction is what model access methods assume. Assuming full model access, prompt tuning learns a mapping function that translates an initial prompt into either soft or hard tokens for efficiency (e.g., Lester et al. (2021)). For example, Wingate et al. (2022) use a distillation objective to steer text generation with compressed instructions, and Mu et al. (2023) use meta-learning to compress prompts into "gist" tokens.

Token-level compression methods operate at run time and assume that output probabilities are known. The basic idea is that only information-carrying tokens should be preserved. For example, Li et al. (2023) use self-information to select and merge less probable tokens to compress in-context examples. Jiang et al. (2023) extend this approach by applying it segment-wise and adding a prefiltering step. Jung and Kim (2023) use a reinforcement learning approach to learn which tokens can be excluded from a prompt without degradation in accuracy; however, this requires extensive fine-tuning with external datasets. Because they operate at such a low level, token-level compression methods are not guaranteed to keep important structures intact, such as data formats.

In this paper, we assume that practitioners only have black-box access to LLMs through an API; they have access to neither the probability distribution of the output tokens nor gradient information. Focusing on compressing task definitions, Yin et al. (2023b) propose STDC, a method that uses breadth-first search to prune the syntax tree after parsing the instructions. Complementary to that are efforts to improve call efficiency by batching instances that share the same overall task instructions (Cheng et al., 2023; Lin et al., 2023). As shown by Cheng et al. (2023), batching only minimally impacts performance as long as batch sizes do not exceed a certain model-specific threshold. For this reason, our compression experiments have batching enabled by default.

In prompt optimization, the main focus is on improving the accuracy of prompts. Past work has typically focused on simpler (e.g., single-task, non-batched) prompts with less structure. Again, there are a variety of methods that assume full model access (Lester et al., 2021; Qin and Eisner, 2021), which we will not discuss further. Working with continuous prompt representations from a smaller surrogate model, InstructZero (Chen et al., 2023) optimizes prompts locally via Bayesian optimization and uses calls to the LLM API as feedback. The main limitation here is similar to that of token-level methods: it is unclear how to apply them to structure-rich prompts. On the discrete optimization side, Automatic Prompt Engineer (APE) (Zhou et al., 2023) generates instruction candidates from a few input-output pairs and then uses beam search over paraphrased candidates; we use a modified version with the same objective as Sammo in our experiments. Targeting mainly accuracy rather than compression, GrIPS (Prasad et al., 2023) uses beam search with edit, add, and delete operations on the syntax tree after parsing the instructions. Similarly, Automatic Prompt Optimization (APO) (Pryzant et al., 2023) rewrites instructions by generating explanations for errors, changing the prompt to reflect those explanations, and then generating more prompt candidates by paraphrasing.

Limitations

All of our experiments have been carried out with datasets in English; performance for lower-resource languages is likely to be lower. While Sammo is generally efficient, a task needs a certain level of downstream usage to compensate for the upfront cost of optimization. Finally, Sammo adopts a supervised learning scenario where labels are required; we plan to address more unsupervised tasks in the future.

8 Conclusion

In this paper, we introduced a new framework, Structure-Aware Multi-objective Metaprompt Optimization (Sammo), to enable efficient optimization of metaprompt programs at compile time.

Sammo represents metaprompts as dynamic function graphs and employs a set of mutation operators to alter the structure and content of metaprompts. This approach notably outperforms and generalizes existing methods for prompt optimization and compression, as demonstrated through several use cases in our experimental evaluation.

References

  • Andreas et al. (2020) Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, et al. 2020. Task-oriented dialogue as dataflow synthesis. TACL.
  • Bogin et al. (2023) Ben Bogin, Shivanshu Gupta, Peter Clark, and Ashish Sabharwal. 2023. Leveraging code to improve in-context learning for semantic parsing.
  • Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. NeurIPS.
  • Chen et al. (2023) Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, and Tianyi Zhou. 2023. InstructZero: Efficient instruction optimization for black-box large language models.
  • Cheng et al. (2023) Zhoujun Cheng, Jungo Kasai, and Tao Yu. 2023. Batch prompting: Efficient inference with large language model APIs.
  • Fernando et al. (2023) Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. 2023. Promptbreeder: Self-referential self-improvement via prompt evolution.
  • Jiang et al. (2024) Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts.
  • Jiang et al. (2023) Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. LLMLingua: Compressing prompts for accelerated inference of large language models. In EMNLP.
  • Jung and Kim (2023) Hoyoun Jung and Kyung-Joong Kim. 2023. Discrete prompt compression with reinforcement learning.
  • Khattab et al. (2023) Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, and Christopher Potts. 2023. DSPy: Compiling declarative language model calls into self-improving pipelines.
  • Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In EMNLP.
  • Li et al. (2023) Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023. Compressing context to enhance inference efficiency of large language models. In EMNLP.
  • Lin et al. (2023) Jianzhe Lin, Maurice Diesendruck, Liang Du, and Robin Abraham. 2023. BatchPrompt: Accomplish more with less.
  • Mu et al. (2023) Jesse Mu, Xiang Lisa Li, and Noah Goodman. 2023. Learning to compress prompts with gist tokens. In NeurIPS.
  • Prasad et al. (2023) Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. 2023. GrIPS: Gradient-free, edit-based instruction search for prompting large language models. In EACL.
  • Pryzant et al. (2023) Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. 2023. Automatic prompt optimization with "gradient descent" and beam search.
  • Qin and Eisner (2021) Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In NAACL.
  • Real et al. (2019) Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. 2019. Regularized evolution for image classifier architecture search. In AAAI.
  • Sclar et al. (2023) Melanie Sclar, Yejin Choi, Yulia Tsvetkov, and Alane Suhr. 2023. Quantifying language models' sensitivity to spurious features in prompt design or: How I learned to start worrying about prompt formatting.
  • Srivastava et al. (2023) Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. TMLR.
  • Sun et al. (2022) Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. 2022. Black-box tuning for language-model-as-a-service. In ICML.
  • Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models.
  • Wang et al. (2022) Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. 2022. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ tasks. In EMNLP.
  • Wang et al. (2015) Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In ACL.
  • Wingate et al. (2022) David Wingate, Mohammad Shoeybi, and Taylor Sorensen. 2022. Prompt compression and contrastive conditioning for controllability and toxicity reduction in language models. In Findings of EMNLP.
  • Ye et al. (2023) Qinyuan Ye, Maxamed Axmed, Reid Pryzant, and Fereshte Khani. 2023. Prompt engineering a prompt engineer.
  • Yin et al. (2023a) Fan Yin, Jesse Vig, Philippe Laban, Shafiq Joty, Caiming Xiong, and Chien-Sheng Wu. 2023a. Did you read the instructions? Rethinking the effectiveness of task definitions in instruction learning. In ACL.
  • Yin et al. (2023b) Fan Yin, Jesse Vig, Philippe Laban, Shafiq Joty, Caiming Xiong, and Chien-Sheng Jason Wu. 2023b. Did you read the instructions? Rethinking the effectiveness of task definitions in instruction learning.
  • Zelle and Mooney (1996) John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In National Conference on Artificial Intelligence.
  • Zhou et al. (2023) Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2023. Large language models are human-level prompt engineers.

Appendix A Appendix

A.1 Implementation Details

Model versions used:

  • GPT-3.5: gpt-3.5-turbo-16k-0613

  • GPT-4: gpt-4-0613

  • Llama-2: meta-llama/Llama-2-70b-chat-hf

  • Mixtral 8x7B: cognitivecomputations/dolphin-2.6-mixtral-8x7b

A.2 Instruction Tuning

See Table 2.

Table 2: Test and train accuracy for the instruction tuning tasks (Section 4), per backend model.

model          task             split  APE   APO   Baseline  GrIPS  SAMMO
GPT-3.5 Turbo  implicatures     test   0.78  0.78  0.56      0.76   0.77
               implicatures     train  0.78  0.83  0.51      0.81   0.87
               metaphor         test   0.89  0.86  0.87      0.88   0.87
               metaphor         train  0.89  0.90  0.84      0.88   0.87
               navigate         test   0.64  0.68  0.62      0.62   0.59
               navigate         train  0.65  0.75  0.72      0.72   0.77
               presuppositions  test   0.49  0.48  0.39      0.42   0.52
               presuppositions  train  0.56  0.57  0.37      0.47   0.54
               sports           test   0.77  0.89  0.75      0.74   0.87
               sports           train  0.84  0.88  0.75      0.80   0.88
               vitaminc         test   0.71  0.74  0.74      0.73   0.73
               vitaminc         train  0.75  0.69  0.67      0.68   0.73
               winowhy          test   0.53  0.45  0.48      0.50   0.61
               winowhy          train  0.60  0.53  0.49      0.60   0.61
               word             test   0.76  0.75  0.72      0.72   0.74
               word             train  0.85  0.85  0.81      0.81   0.83
Llama-2 70B    implicatures     test   0.72  0.61  0.35      0.79   0.78
               implicatures     train  0.72  0.53  0.37      0.75   0.73
               metaphor         test   0.34  0.50  0.47      0.47   0.50
               metaphor         train  0.48  0.48  0.45      0.45   0.48
               navigate         test   0.20  0.15  0.08      0.08   0.25
               navigate         train  0.14  0.17  0.02      0.02   0.19
               presuppositions  test   0.14  0.11  0.11      0.11   0.11
               presuppositions  train  0.18  0.19  0.19      0.19   0.19
               sports           test   0.61  0.52  0.13      0.54   0.53
               sports           train  0.64  0.47  0.16      0.49   0.50
               vitaminc         test   0.57  0.49  0.26      0.26   0.50
               vitaminc         train  0.54  0.48  0.26      0.26   0.47
               winowhy          test   0.16  0.35  0.09      0.11   0.39
               winowhy          train  0.19  0.35  0.08      0.13   0.44
               word             test   0.00  0.00  0.00      0.00   0.00
               word             train  0.00  0.00  0.00      0.00   0.00
Mixtral 8x7B   implicatures     test   0.80  0.68  0.64      0.84   0.84
               implicatures     train  0.79  0.69  0.65      0.82   0.82
               metaphor         test   0.85  0.87  0.86      0.86   0.85
               metaphor         train  0.85  0.88  0.86      0.86   0.87
               navigate         test   0.59  0.50  0.50      0.50   0.54
               navigate         train  0.62  0.45  0.45      0.45   0.66
               presuppositions  test   0.53  0.59  0.55      0.60   0.55
               presuppositions  train  0.68  0.69  0.64      0.70   0.65
               sports           test   0.32  0.58  0.39      0.62   0.62
               sports           train  0.40  0.51  0.26      0.63   0.63
               vitaminc         test   0.75  0.77  0.75      0.76   0.78
               vitaminc         train  0.73  0.74  0.67      0.71   0.73
               winowhy          test   0.68  0.57  0.34      0.52   0.62
               winowhy          train  0.61  0.45  0.31      0.57   0.66
               word             test   0.14  0.23  0.09      0.17   0.24
               word             train  0.28  0.31  0.10      0.22   0.27

A.3 Prompt Compression: Table form of main results

See Table 3 for numeric results.

Table 3: Prompt compression results: average accuracy and weighted costs per backend LLM and method.

LLM       method          accuracy  costs
GPT-4     Baseline        0.746     13949
          STDC            0.742     11927
          APE             0.715     10791
          Stopwords       0.744     10752
          GPT-4 Rewrite   0.733     9754
          SAMMO           0.736     8410
GPT-3.5   Baseline        0.587     21872
          STDC            0.568     18608
          APE             0.464     14702
          Stopwords       0.576     19022
          GPT-4 Rewrite   0.484     15938
          SAMMO           0.599     12691
Mixtral   Baseline        0.610     22894
          STDC            0.607     19932
          APE             0.611     18702
          Stopwords       0.629     18854
          GPT-4 Rewrite   0.485     13999
          SAMMO           0.637     15292
Llama-2   Baseline        0.380     103606
          STDC            0.426     65728
          APE             0.328     77980
          Stopwords       0.337     103573
          GPT-4 Rewrite   0.335     53192
          SAMMO           0.447     53087

A.4 Prompt Compression: Example prompts

The prompts below are for task 346, with GPT-3.5 as the backend LLM.

A.5 RAG optimization

Mutation operations searched over:

  • In-context examples format: JSON, Plaintext, XML

  • In-context examples grouping: by item, by input/output

  • No. of in-context examples: 5, 10

  • DSL specifications: full, only signatures

In-context examples were retrieved via OpenAI's text-embedding-3-small embedding model.

A.4.1 Baseline

# Task
In this task, you will be presented with a question, a word, and a POS tag. You have to determine whether the part-of-speech tag of the given word in the question is equal to the given POS tag or not. Give your answer with True or False. Here is the Alphabetical list of part-of-speech tags used in this task: CC: Coordinating conjunction, CD: Cardinal number, DT: Determiner, EX: Existential there, FW: Foreign word, IN: Preposition or subordinating conjunction, JJ: Adjective, JJR: Adjective, comparative, JJS: Adjective, superlative, LS: List item marker, MD: Modal, NN: Noun, singular or mass, NNS: Noun, plural, NNP: Proper noun, singular, NNPS: Proper noun, plural, PDT: Predeterminer, POS: Possessive ending, PRP: Personal pronoun, PRP$: Possessive pronoun, RB: Adverb, RBR: Adverb, comparative, RBS: Adverb, superlative, RP: Particle, SYM: Symbol, TO: to, UH: Interjection, VB: Verb, base form, VBD: Verb, past tense, VBG: Verb, gerund or present participle, VBN: Verb, past participle, VBP: Verb, non-3rd person singular present, VBZ: Verb, 3rd person singular present, WDT: Wh-determiner, WP: Wh-pronoun, WP$: Possessive wh-pronoun, WRB: Wh-adverb

# Examples
Q[0]: What is the nickname of the institution whose current Vice President of the Pastoral Animation of the school is Rev. Fr. John Vernil Q. Lopez, S.D.B?, Word: Rev, POS tag: NNP
Q[1]: The youngest Luge Champion listed won what medal in the one year he competed in the Olympics?, Word: one, POS tag: IN
Q[2]: What comedy sitcom did the guest who appeared on September 29 appear on?, Word: did, POS tag: NN
Q[3]: How many main ecosystems does the state in Brazil with a name meaning thick grass or dense woods contain?, Word: with, POS tag: DT
Q[4]: What result was given to the couple that danced to a song from a 2005 crime-comedy?, Word: couple, POS tag: NN
A[0]: True
A[1]: False
A[2]: False
A[3]: False
A[4]: True

# Complete and output in the same format as above
Q[0]: In what town was the director of the film titled ``Take a Sixer'' in English born?, Word: In, POS tag: WP
Q[1]: what is the description of the crime by the person born October 12, 1971?, Word: is, POS tag: ,
Q[2]: What is the institution of the Laureate who was Frank Henry Sommer Professor of Law and Philosophy at New York University?, Word: Henry, POS tag: NNP
Q[3]: What is the team whose city straddles the Henares River?, Word: the, POS tag: VBZ
Q[4]: The rider born on July 16 1973 played on which team?, Word: July, POS tag: IN

A.4.2 STDC

# Task
will be presented with a question, a word, and a POS tag Give your answer with True or False Here is: , IN: Preposition or subordinating conjunction JJ: Adjective JJR, JJS: Adverb, RBR: Adverb, comparative, RBS: Adverb, superlative, RP: Particle, SYM: Symbol, TO: to, UH: Interjection, VB: Verb, base form, VBD: Verb, past tense, VBG: Verb, gerund or present participle, VBN: Verb, past participle, VBP:

[# Examples and # Complete sections identical to the baseline prompt in A.4.1]

A.4.3 APE

# Task
Provide a true or false response for each input based on the question or statement.

[# Examples and # Complete sections identical to the baseline prompt in A.4.1]

A.4.4 Stopwords

# Task
task, presented question, word, POS tag. determine --speech tag given word question equal given POS tag. answer True False. Alphabetical list --speech tags task: CC: Coordinating conjunction, CD: Cardinal number, DT: Determiner, EX: Existential, FW: Foreign word, : Preposition subordinating conjunction, JJ: Adjective, JJR: Adjective, comparative, JJS: Adjective, superlative, LS: List item marker, MD: Modal, NN: Noun, singular mass, NNS: Noun, plural, NNP: Proper noun, singular, NNPS: Proper noun, plural, PDT: Predeterminer, POS: Possessive ending, PRP: Personal pronoun, PRP$: Possessive pronoun, RB: Adverb, RBR: Adverb, comparative, RBS: Adverb, superlative, RP: Particle, SYM: Symbol, :, UH: Interjection, VB: Verb, base form, VBD: Verb, past tense, VBG: Verb, gerund present participle, VBN: Verb, past participle, VBP: Verb, non-3rd person singular present, VBZ: Verb, 3rd person singular present, WDT: Wh-determiner, WP: Wh-pronoun, WP$: Possessive wh-pronoun, WRB: Wh-adverb

[# Examples and # Complete sections identical to the baseline prompt in A.4.1]

A.4.5 GPT-4 Rewrite

# Task
Determine if the part-of-speech (POS) tag of the given word in the question matches the provided POS tag. Answer with True or False. Here are the POS tags: CC, CD, DT, EX, FW, IN, JJ, JJR, JJS, LS, MD, NN, NNS, NNP, NNPS, PDT, POS, PRP, PRP$, RB, RBR, RBS, RP, SYM, TO, UH, VB, VBD, VBG, VBN, VBP, VBZ, WDT, WP, WP$, WRB.

[# Examples and # Complete sections identical to the baseline prompt in A.4.1]

A.4.6 SAMMO

# Task
- Check if word matches part-of-speech tag (True/False)
- Tags: conjunction, number, determiner, adjective, noun, verb

# Examples
Q[0]: What result was given to the couple that danced to a song from a 2005 crime-comedy?, Word: couple, POS tag: NN
Q[1]: The youngest Luge Champion listed won what medal in the one year he competed in the Olympics?, Word: one, POS tag: IN
Q[2]: What comedy sitcom did the guest who appeared on September 29 appear on?, Word: did, POS tag: NN
A[0]: True
A[1]: False
A[2]: False

[# Complete section identical to the baseline prompt in A.4.1]
