Looking into Black Box Code Language Models

arXiv:2407.04868v1 Announce Type: new
Abstract: Language Models (LMs) have proven useful for code-related tasks, and several code LMs have been proposed recently. Most studies in this direction focus only on improving LM performance on various benchmarks, treating the LMs as black boxes. Beyond this, a handful of works attempt to understand the role of attention layers in code LMs. Nonetheless, feed-forward layers, which account for roughly two-thirds of a typical transformer model's parameters, remain under-explored.
In this work, we attempt to gain insight into the inner workings of code language models by examining their feed-forward layers. We conduct our investigation on two state-of-the-art code LMs, CodeGen-Mono and PolyCoder, and three widely used programming languages: Java, Go, and Python. We examine the organization of stored concepts, the editability of these concepts, and the roles that different layers and input context sizes play in output generation. Our empirical findings show that lower layers capture syntactic patterns while higher layers encode abstract concepts and semantics. We show that concepts of interest can be edited within the feed-forward layers without compromising code LM performance. Additionally, we observe that the initial layers serve as "thinking" layers, while later layers are crucial for predicting subsequent code tokens. Furthermore, we find that earlier layers alone can accurately predict outputs for smaller contexts, but larger contexts require contributions from critical later layers. We anticipate these findings will facilitate better understanding, debugging, and testing of code LMs.
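The concept-editing result described above builds on the common view of a transformer feed-forward layer as a key-value memory: each row of the input projection acts as a pattern detector ("key"), and the corresponding column of the output projection is the vector ("value") written to the residual stream when that key fires. A minimal NumPy sketch of editing one such value vector, with toy dimensions and an illustrative edit rule that are assumptions for exposition rather than the paper's actual procedure:

```python
import numpy as np

# Toy feed-forward layer viewed as a key-value memory:
#   ffn(x) = W_out @ relu(W_in @ x)
# Each row of W_in is a "key" detecting a pattern in x; the matching
# column of W_out is the "value" added to the residual stream.
rng = np.random.default_rng(0)
d_model, d_ff = 8, 16               # toy sizes, not real model dims
W_in = rng.normal(size=(d_ff, d_model))
W_out = rng.normal(size=(d_model, d_ff))

def ffn(x):
    h = np.maximum(W_in @ x, 0.0)   # key activations (ReLU)
    return W_out @ h                # weighted sum of value vectors

x = rng.normal(size=d_model)

# Locate the key most strongly activated by this input...
h = np.maximum(W_in @ x, 0.0)
k = int(np.argmax(h))

# ...and "edit the concept" by overwriting only its value vector
# with a chosen target direction, leaving all other keys untouched.
target = np.ones(d_model)           # hypothetical target direction
W_out[:, k] = target

# The edited layer now pushes its output toward `target` whenever
# key k fires, while inputs that do not activate key k are unaffected.
out = ffn(x)
```

Because the edit touches a single column of `W_out`, the rest of the layer's behavior is preserved, which is the intuition behind editing concepts "without compromising code LM performance."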
