LLM orchestration

splitting a big task into several smaller subtasks with JSON for the LLM
2026-01-11 07:00
// updated 2026-01-11 07:46

Oftentimes with LLM-based AI, we would like to split a complex task into smaller subtasks through a process called orchestration:

  • create a prompt to split a big task into smaller subtasks
  • the LLM will split the task into smaller subtasks
  • then, for each subtask:
    • create another prompt to perform each subtask
    • output the result as an item in a list
  • take the results from that list and join them together
  • create a prompt that will include
    • a summary of what to do with those results
    • the formatting (e.g. bullet points or paragraphs)
    • the style (e.g. time-based or goal-based)
  • the LLM will then generate a big plan based on that last prompt

Simply put, we will ask an LLM to break a task into smaller parts, ask the LLM to perform each subtask, and then finally recombine the subtasks' results into one summary!
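In code, the whole pipeline boils down to three LLM steps. Here is a rough sketch of its shape (split_into_subtasks, solve, and combine are placeholder names for the steps we'll build out below):

# the overall shape of the pipeline (each step is fleshed out below)
subtasks = split_into_subtasks(big_task)              # one LLM call returns a JSON list
solutions = [solve(subtask) for subtask in subtasks]  # one LLM call per subtask
final_plan = combine(solutions)                       # one LLM call fuses everything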

Setup

For more information about the following, please refer to this post about connecting to an LLM:

# app.py

import os
import json
from openai import OpenAI

# connection to LLM
client = OpenAI(
  api_key=os.environ.get("GROQAPIKEY"),
  base_url="https://api.groq.com/openai/v1"
)

# chat completion function
def get_completion(prompt, system_prompt="You are a digital assistant", json_mode=False):
  # only send response_format when JSON mode is requested (an explicit None can be rejected)
  kwargs = {"response_format": {"type": "json_object"}} if json_mode else {}
  response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[
      {"role": "system", "content": system_prompt},
      {"role": "user", "content": prompt}
    ],
    **kwargs
  )
  return response.choices[0].message.content

Note any differences between the code used in the link and the code above!
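As a quick sanity check that the connection works (this assumes the GROQAPIKEY environment variable is set), we can try the helper in both modes. Note that OpenAI-style JSON mode expects the word "JSON" to appear somewhere in the messages:

# plain-text completion
print(get_completion("Name three uses of task orchestration."))

# JSON-mode completion (the prompt itself must mention JSON)
print(get_completion('Reply with a JSON object like {"status": "ok"}.', json_mode=True))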

Prompting for subtasks

Continuing in the same file as the last snippet, let's make a prompt to do a big task by asking the LLM to make subtasks:

prompt = """
<task>
  Your task is to [big task]. You'll do so by [desired result]. Try to [desired outcome]. Output your result as a JSON object with the key "data" containing a list of subtasks in the format specified. Each "content" field should be a description of what we need to accomplish for the subtask. Make the content field [desired length].
</task>

<specified_format>
{{"data" : [
  {
    "content": "Subtask description and instructions here",
    "budget": "Portion of the budget that can be allocated for this subtask"    
  }
]}}
</specified_format>

<budget>
$1,000
</budget>

Output only valid JSON. Do not use markdown backticks. 
"""

Of course, replace [big task], [desired result], [desired outcome], and [desired length] with suitable values!

(Also, note that in the <specified_format> tag, we have used "budget" as an example, but we can substitute any other criterion or restriction, like a time limit.)
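To make this concrete, here is one possible filled-in version of the prompt above, using a hypothetical company picnic as the big task:

# a filled-in example: "[big task]" becomes planning a company picnic
prompt = """
<task>
  Your task is to plan a company picnic. You'll do so by splitting the event into independent subtasks. Try to cover food, venue, and activities without exceeding the budget. Output your result as a JSON object with the key "data" containing a list of subtasks in the format specified. Each "content" field should be a description of what we need to accomplish for the subtask. Make the content field one or two sentences.
</task>

<specified_format>
{"data": [
  {
    "content": "Subtask description and instructions here",
    "budget": "Portion of the budget that can be allocated for this subtask"
  }
]}
</specified_format>

<budget>
$1,000
</budget>

Output only valid JSON. Do not use markdown backticks.
"""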

Generating a JSON list of subtasks

Continuing in the same file as the last snippet, let's get the LLM to create the subtasks:

# the LLM will take the prompt above and generate a JSON list of subtasks
json_string = get_completion(prompt, json_mode=True)
result = json.loads(json_string)
subtasks = result["data"]
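Even in JSON mode, the model can occasionally return something unusable (malformed JSON, a missing "data" key, or a string where we wanted a list), so it can be worth wrapping the parse in a small guard. A minimal sketch with a single retry:

# defensive version of the parse above: validate the shape, retry once on failure
def get_subtasks(prompt, retries=1):
    for _ in range(retries + 1):
        try:
            result = json.loads(get_completion(prompt, json_mode=True))
            subtasks = result["data"]
            if isinstance(subtasks, list) and subtasks:
                return subtasks
        except (json.JSONDecodeError, KeyError, TypeError):
            pass  # malformed output; fall through and retry
    raise ValueError("LLM did not return a usable subtask list")

subtasks = get_subtasks(prompt)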

Executing the subtasks

Now that we have our list of subtasks, let's execute them via a prompt chain:

# prepare a list to store the subtask responses
subtask_solutions = []

# prepare a function to perform a subtask
def execute_subtask(subtask):
    prompt = f"""
    <task>Your task is to [big task]. Try to [desired outcome]. Try not to [undesired action].  
    The task below is one aspect of our overall task that we want you to plan out for us.
    Output [subtask result format] using your knowledge of what to do.    
    Itemize what you'd spend this portion of the budget on to the best of your knowledge.
    Begin your output with a [format] summary, then [format] of the itemized plan.
    </task>

    <subtask_content>
    {subtask['content']}
    </subtask_content>

    <subtask_budget>
    {subtask['budget']}
    </subtask_budget>"""

    result = get_completion(prompt)
    subtask_solutions.append(result)

# execute the subtasks
for subtask in subtasks:
  execute_subtask(subtask)

As with the first prompt, replace anything in square brackets inside the <task> tag; alternatively, as an exercise, we can use Python's f-string {variable} syntax to make this prompt reusable across scenarios!
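Since the subtasks are independent of one another, nothing stops us from running them concurrently instead of one at a time. Here is one possible variant using Python's standard-library thread pool; solve_subtask is a returning version of execute_subtask (with a condensed prompt for brevity), so results can be collected in order without a shared list:

from concurrent.futures import ThreadPoolExecutor

# a returning variant of execute_subtask (no shared list to append to)
def solve_subtask(subtask):
    prompt = f"""
    <task>Plan out this one aspect of our overall task and itemize its budget.</task>

    <subtask_content>
    {subtask['content']}
    </subtask_content>

    <subtask_budget>
    {subtask['budget']}
    </subtask_budget>"""
    return get_completion(prompt)

# executor.map preserves input order, so solutions line up with subtasks
with ThreadPoolExecutor(max_workers=4) as executor:
    subtask_solutions = list(executor.map(solve_subtask, subtasks))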

Recombining the orchestra

Finally, let's "get the band back together" by taking the subtask_solutions list, combining its items and creating an overall plan of action for the "big request":

# put all the subtasks' results together
def fuse_solutions(subtask_solutions):    
    combined_solutions = "\n\n".join(subtask_solutions)
    prompt = f"""
    <task>
    Your task is to condense a detailed list of plans for our [big plan in 1-3 words] into a [format].
    Include only actionable information we can directly use to make our [big plan in 1-3 words] possible.
    Output the following:
    - Offer a detailed, time-based plan
    - List out all purchases
    - Specify locations, times, and contingency plans
    - Use bullet point lists
    </task>

    <detailed_plans>
    {combined_solutions}
    </detailed_plans>"""

    # prompt the LLM to make a final plan
    return get_completion(prompt)

# get the final results of the big task
final_plan = fuse_solutions(subtask_solutions)
print(final_plan)

Once again, we can change the words inside the [square brackets] (and then drop the brackets) for any use case! (And do not hesitate to reword anything inside the <task> tag; that's the whole fun of prompt engineering!)
