Prompt Template for GPT-5

OpenAI has published a Prompting Guide for GPT-5, offering a compact compilation of proven strategies to control the new GPT-5 model. It shows how to:

  • Make agentic workflows more predictable through clear role, goal, and structure specifications.
  • Approach coding tasks more efficiently—from planning to debugging and execution.
  • Optimize intelligence and instruction adherence through clear formats, few-shot examples, and consistent language.

Additionally, for developers, OpenAI introduces the Prompt Optimizer Tool, which detects common prompt errors such as ambiguities or contradictory instructions and automatically corrects them—ideal for migrating existing prompts and for iterative improvements.

Here’s a universal prompt template based on the Prompting Guide:

  1. Role / Persona
    “Adopt the role of a [ROLE], specialized in [DOMAIN]. Your goal is to [GOAL].”
  2. Task Description
    “Your task is to [TASK]. Focus on [FOCUS], avoid [EXCLUSIONS].”
  3. Response Structure
    “Start with [INTRO, e.g., checklist, overview]. Then provide [DETAILS, e.g., 3 options, a ranking]. Ensure that each output includes [REQUIRED ELEMENTS].”
  4. Quality and Validation Rules
    “All results must be:
    • [CORRECT] → if necessary, cross-checked against credible sources
    • [UNIQUE] → avoid trivial or generic answers
    • [PRACTICALLY USEFUL] → include realistic time or resource indications
    • [CLEAR] → concise, structured, and easy to apply”
  5. Output Formatting
    “Return the results as [FORMAT, e.g., Markdown table / list / JSON] with the columns/fields: [X, Y, Z].”
  6. Stop Condition
    “The task is considered complete when [SUCCESS CRITERION].”
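
As a sketch, the six template sections can be assembled into a finished prompt with a small helper. The placeholder texts in the example call below are illustrative, not taken from the guide:

```python
def build_prompt(role, task, structure, rules, output_format, stop_condition):
    """Assemble the six template sections into a single prompt string."""
    sections = [
        f"Adopt the role of {role}.",
        f"Your task is to {task}.",
        f"Structure your response as follows: {structure}",
        "All results must be: " + "; ".join(rules) + ".",
        f"Return the results as {output_format}.",
        f"The task is considered complete when {stop_condition}.",
    ]
    # Blank lines between sections keep the prompt easy to scan.
    return "\n\n".join(sections)

prompt = build_prompt(
    role="a personal productivity coach",
    task="recommend lesser-known, effective learning methods",
    structure="a short checklist, then 3 methods with advantages and effort estimates",
    rules=["correct", "unique", "practically useful", "clear"],
    output_format="a Markdown list",
    stop_condition="three distinct methods are presented",
)
print(prompt)
```

This keeps the sections in a fixed order, so every prompt built from the template has the same predictable shape.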

Example Prompt

Role / Persona

Act as a personal productivity coach focused on recommending lesser-known, effective learning methods for mastering a new skill within three months.

Task Description

Your task is to provide structured guidance on how to learn efficiently. Exclude common methods such as generic YouTube tutorials, mainstream MOOCs like Coursera/edX, or simply reading a standard textbook.

Response Structure

  1. Begin with a concise checklist (3–7 bullets) of steps you will follow, focusing on conceptual planning rather than lesson details.
  2. Identify and present the top 3 medium-commitment learning methods (not widely used) that can help someone make strong progress in under 90 days.
  3. For each method, provide:
    • A short description of the method
    • The unique advantage it offers (efficiency, engagement, adaptability)
    • A realistic estimate of time and resources required
  4. Ensure each method is practically useful and can be applied across different skills.

Quality and Validation Rules

  • Accuracy: method names must match official or widely recognized sources.
  • Uniqueness: avoid trivial or overly common recommendations.
  • Practicality: include realistic time/resource estimates.
  • Clarity: use concise, structured explanations.

Output Formatting

Return the results as a Markdown list with bolded method names, followed by a short explanation and bullet points for key aspects.

Stop Condition

The task is complete when three distinct, lesser-known methods are presented, each with clear advantages and realistic implementation advice.

Agentic AI: Can an AI Agent Clean Up the Hard Drive?

I’m probably not the only person with a folder on their hard drive that should be cleaned up at some point. Mine happens to be called “Aufräumen” (Clean Up), and whenever I actually set out to clean it up instead of just filling it further, a leaden fatigue overcomes me after two or three files.

Anthropic’s Claude offers the possibility to connect a Large Language Model with external software through the Model Context Protocol. This means the LLM is no longer “just” limited to text output, but can make decisions or even execute actions. Claude already comes with some integrations, for example the ability to access the local file system (at the very bottom of the screenshot).

Claude integrations screenshot

My prompt: “Can you please look at my ‘Aufräumen’ folder on the desktop and see how the files there could be sensibly organized?”

First, Claude got an overview of the files in the folder. With more than 600 files, Claude needed two attempts. In doing so, Claude examined only the file names, although it would have been possible to open at least some of the files; that, however, would have taken even longer. Claude then proposed dividing my files into 12 categories. These made a lot of sense, and I gave Claude only one note before the AI cheerfully started sorting my data:

Folder structure screenshot

Sorting the 600 files alone took nearly 4 hours, partly with longer breaks because my usage limit was exhausted and I had to wait until I could use Claude again. That waiting time is not included in the 4 hours. Even so, Claude needed breaks again and again: sometimes the server was unreachable, then a context window was full and I had to retype the prompt in a new chat, or a “Continue” button appeared that had to be pressed. The computer actually got hot; Claude demanded a lot of power and still needed a lot of time for each file. Sometimes it looked like this:

Claude working screenshot

For this to work at all and to avoid having to approve the action for each individual file, Claude needs approval for each action category. You should only do this if you also have a backup of your own files.

Did it work? A clear “partially”. Not all documents were correctly categorized, which is partly because files were categorized by their names alone. But it is definitely a good pre-sort, and in the same time I probably would have fallen asleep several times out of self-defense. So it was worth it, even if the result isn’t perfect yet.

In the next step, I will build such a system with a locally installed LLM that also opens and reads files to sort them correctly.
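
The non-LLM part of such a system, moving files into category folders based on keywords in their names, can be sketched in a few lines of Python. The keywords and folder names below are made-up examples, not Claude’s actual 12 categories:

```python
from pathlib import Path
import shutil

# Hypothetical mapping: keyword in the file name -> target folder.
CATEGORIES = {
    "rechnung": "Finances",
    "invoice": "Finances",
    "img": "Pictures",
    "photo": "Pictures",
    "report": "Documents",
}

def categorize(name: str) -> str:
    """Pick a target folder from keywords in the file name (case-insensitive)."""
    lowered = name.lower()
    for keyword, folder in CATEGORIES.items():
        if keyword in lowered:
            return folder
    return "Unsorted"

def presort(src: Path, dry_run: bool = True):
    """Move files into category subfolders; with dry_run, only report the plan."""
    plan = []
    for f in src.iterdir():
        if f.is_file():
            target = src / categorize(f.name)
            plan.append((f.name, target.name))
            if not dry_run:
                target.mkdir(exist_ok=True)
                shutil.move(str(f), target / f.name)
    return plan
```

The `dry_run` default mirrors the lesson above: review the plan (and keep a backup) before letting anything actually move your files. An LLM would replace the keyword lookup in `categorize` with a semantic decision based on the file’s contents.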

Artificial Intelligence (AI), Large Language Models (LLMs), Data Science, Machine Learning, Data Mining, and Statistics: What’s the difference?

The terms Artificial Intelligence (AI), Machine Learning, Data Science, Data Mining, Statistics, and Large Language Models (LLMs) are often used interchangeably or misunderstood. Clearly differentiating between these concepts helps you navigate discussions and make informed decisions in data-driven contexts.

Artificial Intelligence (AI)

AI encompasses techniques and algorithms that enable computers to perform tasks traditionally requiring human intelligence, such as reasoning, decision-making, and pattern recognition.

Machine Learning (ML)

ML is a subset of AI where systems learn from data to improve decision-making or predictions without explicit programming. Applications include recommendation engines, fraud detection, and image recognition.
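
A minimal illustration of “learning from data without explicit programming”: ordinary least squares recovers the rule behind example pairs instead of having it hard-coded:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, learned from data alone."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of x and y divided by variance of x.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# The rule y = 2x + 1 is never written down; it is recovered from examples.
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # → (2.0, 1.0)
```

Real ML models are far more complex, but the principle is the same: parameters are estimated from data rather than programmed by hand.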

Data Science

Data Science is an interdisciplinary field combining scientific methods, processes, and systems to extract actionable insights from data. It integrates domain expertise, statistical techniques, and data analysis skills to make informed business decisions.

Data Mining

Data Mining involves exploring large datasets to discover meaningful patterns, correlations, or trends. Common applications include customer segmentation, market basket analysis, and anomaly detection.

Statistics

Statistics forms the mathematical basis for Data Science and Machine Learning. It includes methods for collecting, analyzing, interpreting, and presenting data, ensuring rigorous analysis and reliable results.

Large Language Models (LLMs)

Large Language Models are a specialized, advanced type of Machine Learning model that processes and generates natural language text. They excel at tasks such as content summarization, text generation, language translation, and interactive dialogue (e.g., ChatGPT).

The Connection Between These Terms:

  • Artificial Intelligence is the overarching goal of creating systems that simulate human intelligence.
  • Machine Learning is a key approach to achieving AI through data-driven learning.
  • Data Science covers the broader methodology of turning data into actionable insights.
  • Data Mining focuses specifically on finding meaningful patterns in large datasets.
  • Statistics underpins these fields, providing the mathematical rigor needed for trustworthy analysis.
  • Large Language Models are an advanced application of Machine Learning, focusing on language understanding and generation.

Why clarity matters

While “Data Science” has dominated conversations in recent years, many discussions have now shifted towards AI and especially Large Language Models. However, even with the buzz around AI, it’s important to remember that successful projects often rely heavily on foundational Data Science and robust statistical methods. Clearly distinguishing these concepts allows you to harness the full potential of data-driven solutions and avoid common misconceptions.

LifeBEAM Vi Update: Experiences and Problems


After running more than 50 kilometers and almost 6 hours with the Vi, there is still no coaching effect. On the contrary, the “intelligence” talks to me less and less; recently it hasn’t even advised me to take smaller steps. She only tells me that last time I slowed down at the end and that this time I should please keep the pace. What she measures but apparently doesn’t evaluate: I always run up a hill at the end (well, not a real mountain, but the Elbberg has to stand in, with at least 25 meters of elevation gain), and she should account for that. Supposedly, one of the next software versions will take this into account.

After I had “fought” my way through the support forum, it was clear to me that the problem wasn’t on my end. So, an email to support, which was answered immediately despite it being Saturday evening. I should please uninstall and reinstall the app. OK. And what is that supposed to achieve? “Apologies for any confusion. I just took a closer look into your log files and it looks like you are running into a calibration issue.” So now I’m waiting for the bugfix. I hope that the data collected so far is not useless and I don’t have to start all over again.

Other points of criticism:

  • The power button is not very responsive; sometimes you have to press for a very long time before the device turns on.
  • I don’t understand the logic behind the Spotify lists; I definitely won’t get into running mode that way. But fixing the missing intelligence is more important to me first.
  • In the forums, users complain that LifeBEAM showed treadmill running in the Kickstarter video, but this functionality is not available. The video was simply removed, too. Supposedly, however, this functionality is still to come.
  • The battery lasted 3 runs for me, so probably less than 3 hours.

I’ve definitely gotten fitter; at least I handle the Elbberg better now. But I could have achieved that with Runkeeper, too.

What’s cool is that you get the raw data. Apparently every second is logged, and this is what the data looks like:

[code]
53.5439109802246
9.93943977355957
3.01765032412035
2380.0
Value>157
158
[/code]

In principle, you could also do something with this data yourself…
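
For example, assuming the per-second log were exported as CSV rows (the column names below are my guess for illustration, not the actual export format), a short Python script could compute averages and pace:

```python
import csv
from io import StringIO

# Hypothetical per-second samples; the real export's layout may differ.
SAMPLE = """latitude,longitude,speed_mps,heart_rate
53.5439,9.9394,3.02,157
53.5440,9.9395,3.05,158
53.5441,9.9396,2.98,159
"""

def summarize(log_text):
    """Average speed, average heart rate, and pace over all samples."""
    rows = list(csv.DictReader(StringIO(log_text)))
    n = len(rows)
    avg_speed = sum(float(r["speed_mps"]) for r in rows) / n
    avg_hr = sum(float(r["heart_rate"]) for r in rows) / n
    pace_min_per_km = 1000 / avg_speed / 60  # minutes per kilometer
    return avg_speed, avg_hr, pace_min_per_km

avg_speed, avg_hr, pace = summarize(SAMPLE)
```

With a full log, the same loop could flag the Elbberg section, exactly the kind of evaluation the Vi itself doesn’t do yet.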

Running with Artificial Intelligence: LifeBEAM Vi


In the middle of last year, I backed LifeBEAM’s Kickstarter campaign, and this week the LifeBEAM Vi arrived: “The first true artificial intelligence personal trainer”. LifeBEAM has so far mainly produced helmets for fighter pilots that measure vital signs with special sensors, so the company can be expected to have some experience with sensors. And measuring the pulse via the ear definitely works better than with a watch on the wrist; the Fitbit Blaze has often disappointed me there. AI is “the next big thing”, so why not have an assistant like in Her to improve my fitness?

We are still a long way from “Her”, but a “Her” for just one area (domain-specific), in this case sports or, even more narrowly, running, is realistic. It is also Her-like that LifeBEAM has kept the voice interface very minimalist, so your imagination is free to decide what Vi looks like for YOU.

Great packaging, not so great manual

The fun cost $219 including shipping, plus another 50€ in customs, which I find outrageous but apparently non-negotiable. I was also just too excited. I don’t have an unboxing video; there are enough of them on the net. The packaging is great and it all feels very high-quality; only the documentation was skimped on. There is a small manual, but it does not say, for example, which voice commands exist, and other questions can only be answered by rummaging through forum posts. The support site, on the other hand, is rather poor.

Setting up the LifeBEAM Vi

It’s great that the LifeBEAM Vi doesn’t come with an empty battery, so you can get started right away. It’s a pity, however, that you can’t see anywhere how full the battery is. The supposedly possible 45-minute charge didn’t work for me either.

The Vi cannot speak German and does not understand German, and the corresponding app only exists in English. But at least you can switch the units to the metric system, so you don’t have to convert miles while running.

The suboptimal thing about the setup is that you have the headphones in to hear “her”, but then you have to watch for a blue light, which you can only see if you don’t have the headphones in. By the way, the sound is great. At least when you’re not running.

The first run

First of all, you don’t feel the neckband at all; it’s super light anyway. Because you get different tips for different ear sizes, plus a small hook that is supposed to hold the earbud in the ear, the in-ear headphones fit me very well, which is rarely the case. You just have to be very careful that the small green light, the heart rate monitor, is not visible, because then it does not measure the pulse.

For the first run, I just wanted to run 5 kilometers, started the app, the connection was found immediately, and off I went. The LifeBEAM Vi talks quite a lot at first while a Spotify playlist is playing, though I couldn’t tell which one it was. Spotify’s running feature didn’t work. The music gets a little quieter when Vi speaks, but it was still too loud at times, so I didn’t always understand Vi.

What I think she said is that she needs about 2 hours of training with you before she has enough data to make suggestions. In the forum, some users complain that nothing happened even after 2 hours. In their case, however, this was probably because the coaching currently only works on the iPhone. By the way, the sound didn’t seem quite as great while running, but that may also be because I had swapped the ear tips again shortly before.

First Coaching

What the LifeBEAM Vi says is definitely more personal than what Runkeeper tells me. The voice is more natural, and it’s less predictable. It was also nice that she told me after 2 kilometers that my strides were too long and “put on” a cool beat with which I could try a different stride length. After two-thirds of the run she praised me for keeping my pace steady. That felt much more individual than Runkeeper.

What I didn’t understand was how to get her to tell me my heart rate. At first I tried the classic way with “Vi, what is my Heart Rate?”, but she didn’t respond. Then I pressed the right earbud, because I dimly remembered that working. In fact, all you have to do is say “Heart Rate”, and if there isn’t too much wind blowing along the Elbe, she understands it. It’s just funny that 20 seconds later she explains how I can ask her for my heart rate. She said nothing about my pulse being relatively high. She also didn’t understand the question “distance”; sometimes I would have liked to know how much I still had ahead of me.

You’ll never run alone

Shortly before the end of the run, she told me that I had almost made it. The remainder still seemed like an eternity to me, but that may also have been the Elbberg, which gets in my way every time (hence the high pulse and low speed in the screenshot).

When it was over, it was nice to have someone give you feedback, even if it was only a summary. Overall, it was a good experience to have someone talk to you, because sometimes I’m bored while running. Some people think about problems while running, but I try to clear my head. The LifeBEAM Vi helped a lot with this.

Next steps

So I still have an hour and a half to go until the LifeBEAM Vi can coach me; I will report on that then. Until then, I can also report which other voice commands work, how long the battery actually lasts, and how quickly it charges. Overall, the impression is positive for now, even if the high expectations raised by the initial videos were not completely fulfilled.

Update: The video I had embedded here was set to private by LifeBEAM. The reason is probably that the video showed a man running on a treadmill, and Vi can’t handle treadmills at all yet.