Researchers have introduced a novel approach called natural language embedded programs (NLEPs) to improve the numerical and symbolic reasoning capabilities of large language models (LLMs). The technique prompts an LLM to generate and execute a Python program that solves the user's query, then report the solution in natural language. NLEPs follow a four-step problem-solving template: calling the necessary packages, importing natural language representations of the required knowledge, implementing a function that calculates the solution, and outputting the result as natural language, optionally with data visualization.

This approach offers several advantages, including improved accuracy, transparency, and efficiency. NLEPs have enabled GPT-4 to achieve over 90% accuracy on various symbolic reasoning tasks, outperforming task-specific prompting methods by 30%. The research is supported in part by the Center for Perceptual and Interactive Intelligence of Hong Kong and will be presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics.
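As a hypothetical illustration of the four-step template described above (the task, data, and function name here are invented for this sketch, not taken from the paper), a generated NLEP might look like:

```python
# Step 1: call the necessary packages
import calendar

# Step 2: import natural language representations of required knowledge
# (hypothetical knowledge, encoded as a Python structure)
birth_years = {
    "Marie Curie": 1867,
    "Alan Turing": 1912,
    "Katherine Johnson": 1918,
}

# Step 3: implement a function that calculates the solution
def solve():
    return [name for name, year in birth_years.items()
            if calendar.isleap(year)]

# Step 4: output the result as natural language
answer = solve()
print(f"The scientists born in a leap year are: {', '.join(answer)}.")
```

Because the reasoning lives in executable code rather than in the model's free-form text, a reader can inspect the program to verify each step, and the same program can be rerun on corrected data without re-prompting the model.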
Keywords: Natural Language Embedded Programs, Large Language Models, Symbolic Reasoning, Python Programs, Data Privacy, North American Chapter of the Association for Computational Linguistics