Show HN: I'm making a tutorial, "Zero to LLM Agent", and it wrote its own agent loop

My minimal LLM agent (GPT-4.1) just wrote its own agent loop function.

Here is the backstory. I'm writing a tutorial called "Zero to LLM Agent". Right now it is past 2 am here, and I just finished my sixth post. In the first five posts I wrote code that can talk to the OpenAI LLM. Then I gave it exactly one tool: an interactive Python environment (basically a Python REPL).

But I drove everything from IPython and never actually implemented an agent loop. Instead, I defined a single function, infer(context, toolkit, evaluator) -> context, where context is just a plain list of messages. infer does all the work: it sends the context to the LLM API and performs the tool call if one is requested.
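To make that concrete, here is a minimal sketch of what such an infer step could look like. This is my own illustration, not the tutorial's actual code: the message format, the `(tool_name, argument)` shape returned by the evaluator, and the injected `model` callable (standing in for the real LLM API call) are all assumptions.

```python
def infer(context, toolkit, evaluator, model):
    """One inference step: ask the model, then run a tool if requested.

    context   -- plain list of chat messages (dicts with "role"/"content")
    toolkit   -- maps tool names to callables, e.g. {"py": run_python}
    evaluator -- inspects the model reply; returns None or (tool_name, argument)
    model     -- callable that takes the context and returns the reply text
                 (in the real version, this would be the OpenAI API call)
    """
    reply = model(context)
    context.append({"role": "assistant", "content": reply})

    call = evaluator(reply)
    if call is not None:
        name, arg = call
        # Run the requested tool and feed its result back into the context.
        result = toolkit[name](arg)
        context.append({"role": "tool", "content": str(result)})

    return context
```

A single call mutates and returns the context list, so repeated calls naturally accumulate the conversation plus any tool output.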

Then I told the LLM agent: "The CWD is a Python project of an LLM agent. Ignore the .md file. Look only at the .py files in the CWD. Determine what is missing to make an LLM agent."

With a few more instructions from me, it implemented a perfectly reasonable agent loop function. The LLM agent just improved its own code.
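The loop it came up with presumably does what any agent loop does: repeat the infer step until the model stops requesting tools. Here is a hedged sketch of that shape, with the infer step inlined; again, every name here is my assumption, not the code the agent actually wrote.

```python
def agent_loop(context, toolkit, evaluator, model, max_steps=10):
    """Repeat infer steps until the model answers without a tool call."""
    for _ in range(max_steps):
        reply = model(context)
        context.append({"role": "assistant", "content": reply})

        call = evaluator(reply)          # None, or (tool_name, argument)
        if call is None:
            break                        # plain answer: the agent is done

        name, arg = call
        # Append the tool result so the next iteration's model call sees it.
        context.append({"role": "tool", "content": str(toolkit[name](arg))})

    return context
```

The `max_steps` cap is the one safety feature worth insisting on: without it, a model that keeps requesting tools loops forever.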

Are we at the point where bootstrapped agent code can and will improve itself?

I would love to hear your thoughts on the matter.


Comments URL: https://news.ycombinator.com/item?id=47106996

Points: 1

# Comments: 0