princeton-nlp-SWE-agent

02/04/2024

Tags: AI, NLP

All monitoring notes: [[+ Sommaire veille]] Collection date: [[2024-04-02-mardi]]

Title: SWE Agents

My take:

An interesting approach for enabling AI agents to tackle GitHub issues, even if it is still in its early stages.

Keywords:

Full text:

SWE-agent turns LMs (e.g. GPT-4) into software engineering agents that can fix bugs and issues in real GitHub repositories.

✨ Agent-Computer Interface (ACI)

We accomplish these results by designing simple LM-centric commands and feedback formats to make it easier for the LM to browse the repository, view, edit and execute code files. We call this an Agent-Computer Interface (ACI) and build the SWE-agent repository to make it easy to iterate on ACI design for repository-level coding agents.
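To make this description more concrete, here is a minimal sketch of what such an LM-centric command loop could look like. It is an illustration only: the command format, the `query_lm` callback, and the feedback wording are assumptions made for this example, not the actual SWE-agent interface.

```python
# Illustrative ACI loop: the LM emits one shell-style command per turn and
# receives a short textual observation back. `query_lm` is a hypothetical
# callback standing in for whatever model API is used.
import subprocess


def run_command(command: str) -> str:
    """Execute one agent command and return concise textual feedback."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=60
    )
    output = (result.stdout + result.stderr).strip()
    # Mirrors the empty-output message described further below.
    return output or "Your command ran successfully and did not produce any output."


def agent_loop(task: str, query_lm, max_turns: int = 20) -> list[str]:
    """Alternate between asking the LM for a command and feeding the result back."""
    history = [f"TASK: {task}"]
    for _ in range(max_turns):
        command = query_lm("\n".join(history))  # e.g. "grep -rl TODO src/"
        if command.strip() == "submit":
            break
        history.append(f"COMMAND: {command}")
        history.append(f"OBSERVATION: {run_command(command)}")
    return history
```

The point of such an interface is that the model only ever sees short, structured observations rather than raw terminal noise, which is what the ACI design iterates on.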

Just as typical language models require good prompt engineering, good ACI design leads to much better results when using agents. As we show in our paper, a baseline agent without a well-tuned ACI does much worse than SWE-agent.

SWE-agent contains features that we discovered to be immensely helpful during the agent-computer interface design process:

  1. We add a linter that runs when an edit command is issued, and do not let the edit command go through if the code isn't syntactically correct (see the sketch after this list).
  2. We supply the agent with a special-built file viewer, instead of having it just cat files. We found that this file viewer works best when displaying just 100 lines in each turn. The file editor that we built has commands for scrolling up and down and for performing a search within the file.
  3. We supply the agent with a special-built full-directory string searching command. We found that it was important for this tool to succinctly list the matches: we simply list each file that had at least one match. Showing the model more context about each match proved to be too confusing for the model.
  4. When commands have an empty output, we return a message saying "Your command ran successfully and did not produce any output."
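As a rough illustration of points 1 and 2, the sketch below shows a windowed file viewer and an edit command that is rejected when the edited file no longer parses. The function names, the 100-line window constant, and the use of `ast.parse` as a stand-in for the linter are assumptions for this example, not the actual SWE-agent implementation.

```python
# Sketch of two ACI features described above: a windowed file viewer (item 2)
# and a linter-gated edit command (item 1). Names and details are illustrative.
import ast
from pathlib import Path

WINDOW = 100  # number of lines shown to the model per turn


def open_window(path: str, first_line: int = 1) -> str:
    """Return a 100-line slice of the file, with line numbers for context."""
    lines = Path(path).read_text().splitlines()
    window = lines[first_line - 1 : first_line - 1 + WINDOW]
    header = f"[File: {path} ({len(lines)} lines total), showing {first_line}-{first_line + len(window) - 1}]"
    body = "\n".join(f"{first_line + i}: {line}" for i, line in enumerate(window))
    return f"{header}\n{body}"


def edit_file(path: str, start: int, end: int, replacement: str) -> str:
    """Replace lines start..end (1-indexed, inclusive), but only if the result still parses."""
    lines = Path(path).read_text().splitlines()
    candidate = lines[: start - 1] + replacement.splitlines() + lines[end:]
    try:
        ast.parse("\n".join(candidate))  # stand-in for the linter check, Python files only
    except SyntaxError as err:
        return f"Edit rejected, the file would no longer be syntactically valid: {err}"
    Path(path).write_text("\n".join(candidate) + "\n")
    return f"Edit applied to {path}."
```

Rejecting bad edits before they land keeps the agent from compounding syntax errors across turns, which is the behaviour item 1 describes.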