Introducing Natural Language Embedded Programs to Enhance Large Language Model Reasoning Capabilities


Researchers have introduced a novel approach called natural language embedded programs (NLEPs) to improve the numerical and symbolic reasoning capabilities of large language models (LLMs). The technique prompts an LLM to generate and execute a Python program that solves the user's query, then report the solution in natural language. NLEPs follow a four-step problem-solving template:

1. Call the necessary packages.
2. Import natural language representations of the required knowledge.
3. Implement a function that calculates the solution.
4. Output the results as natural language, with optional data visualization.

This approach offers several advantages, including improved accuracy, transparency, and efficiency. NLEPs have enabled GPT-4 to achieve over 90% accuracy on a variety of symbolic reasoning tasks, outperforming task-specific prompting methods by 30%. The research is supported in part by the Center for Perceptual and Interactive Intelligence of Hong Kong and will be presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics.
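To make the four-step template concrete, the following is a minimal sketch of what a generated NLEP might look like for a simple symbolic query; the query, the `knowledge` dictionary, and the `solve` function are illustrative assumptions, not code from the paper.

```python
# Hypothetical NLEP for the query:
# "Of the numbers 3, 7, 10, 15, and 21, which are prime?"

# Step 1: call the necessary packages.
import math

# Step 2: natural language representations of the required knowledge.
knowledge = {
    "definition": ("A prime number is an integer greater than 1 "
                   "whose only divisors are 1 and itself."),
    "candidates": [3, 7, 10, 15, 21],
}

# Step 3: implement a function that calculates the solution.
def solve():
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, math.isqrt(n) + 1))
    return [n for n in knowledge["candidates"] if is_prime(n)]

# Step 4: output the result as natural language.
primes = solve()
print(f"The prime numbers among the candidates are {primes}.")
```

Because the answer is produced by running the program rather than by the model's free-form text generation, a reader can inspect each step to verify the reasoning, which is the transparency benefit the researchers describe.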

Keywords: Natural Language Embedded Programs, Large Language Models, Symbolic Reasoning, Python Programs, Data Privacy, North American Chapter of the Association for Computational Linguistics
