Code Hallucination

arXiv:2407.04831v1 Announce Type: new
Abstract: Generative models such as large language models are widely used as code copilots and for whole-program generation. However, the programs they generate often have questionable correctness, authenticity, and reliability when integrated, as they may not follow the user's requirements, may produce incorrect or nonsensical output, or may even contain semantic or syntactic errors. These failures are collectively known as LLM hallucination. In this work, we present several types of code hallucination, which we generated manually using large language models. We also present HallTrigger, a technique that demonstrates efficient ways of generating arbitrary code hallucinations. Our method leverages three dynamic attributes of LLMs to craft prompts that can reliably trigger hallucinations without access to the model's architecture or parameters. Results on popular black-box models suggest that HallTrigger is effective and that pervasive LLM hallucination has a significant impact on software development.
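The abstract does not spell out HallTrigger's three dynamic attributes or its prompt templates, so the Python sketch below only illustrates the general black-box setup it describes: craft a prompt, send it to a model through a plain text-in/text-out interface, and flag generated code that fails cheap correctness checks. The `query_model` stub, the prompt template, and the nonexistent `csvmagic` module are hypothetical placeholders for illustration, not the paper's actual method.

```python
import ast
import importlib.util


def query_model(prompt: str) -> str:
    """Stand-in for a black-box LLM call (e.g. a hosted chat completion API).
    This style of probing only needs a text-in/text-out interface, not model
    weights or architecture.  A canned reply keeps the sketch runnable."""
    return "import csvmagic\nprint(csvmagic.sum_column('data.csv', 2))"


def craft_prompt(task: str, constraint: str) -> str:
    # Hypothetical template: pair an ordinary coding task with a constraint
    # chosen to stress the model.  The paper's three "dynamic attributes"
    # are not described in the abstract, so nothing here mirrors them.
    return f"Write a complete Python program that {task}. {constraint}"


def syntactic_hallucination(code: str) -> bool:
    """Generated code that does not even parse is a syntactic hallucination."""
    try:
        ast.parse(code)
        return False
    except SyntaxError:
        return True


def phantom_imports(code: str) -> list[str]:
    """Top-level imports that resolve to no installed module: one common
    authenticity failure in generated code."""
    missing = []
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return missing
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if importlib.util.find_spec(alias.name.split(".")[0]) is None:
                    missing.append(alias.name)
    return missing


if __name__ == "__main__":
    prompt = craft_prompt(
        task="parses a CSV file and prints the sum of its second column",
        constraint="Use only the standard-library module 'csvmagic'.",  # no such module exists
    )
    generated = query_model(prompt)
    print("syntactic hallucination:", syntactic_hallucination(generated))
    print("phantom imports:", phantom_imports(generated))
```

In practice, `query_model` would wrap whichever hosted model is under test, and the two cheap checks here would be supplemented with task-specific semantic tests, since hallucinated code can parse and import cleanly while still violating the stated requirements.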
