Large Language Models for Code (code LLMs) are rapidly gaining popularity and capability, enabling a wide array of application modernization use cases such as code explanation, test generation, code repair, refactoring, translation, code generation, and code completion. To leverage code LLMs to their full potential, developers must provide code-specific contextual information to the models. We demonstrate generic pipelines we built that incorporate static analysis to guide LLMs in generating code explanations at various levels (application, method, class) and in automated test generation to produce compilable, high-coverage, and natural-looking test cases. We also demonstrate how these pipelines can be built using “codellm-devkit”, an open-source library that significantly simplifies program analysis at various levels of granularity, making it easier to integrate detailed, code-specific insights that enhance the effectiveness of LLMs in coding tasks, and how these use cases extend to different programming languages, specifically Java and Python.
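To make the pipeline idea concrete, the following is a minimal, self-contained sketch of the general pattern: extract static-analysis facts from source code and fold them into an LLM prompt for code explanation. It uses only Python's standard `ast` module as a stand-in analyzer; the function names and the shape of the context dictionary are hypothetical illustrations, not the codellm-devkit API.

```python
import ast


def extract_method_context(source: str) -> list:
    """Collect per-function facts (name, arguments, callees) via static analysis.

    A simplified stand-in for the richer, multi-granularity analysis the
    abstract attributes to codellm-devkit; the dictionary schema here is
    an assumption made for illustration.
    """
    tree = ast.parse(source)
    methods = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Names of functions invoked inside this function body.
            calls = sorted({
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            })
            methods.append({
                "name": node.name,
                "args": [a.arg for a in node.args.args],
                "calls": calls,
            })
    return methods


def build_explanation_prompt(source: str) -> str:
    """Assemble a code-explanation prompt enriched with analysis facts."""
    facts = "\n".join(
        f"- {m['name']}({', '.join(m['args'])}) calls: "
        f"{', '.join(m['calls']) or 'none'}"
        for m in extract_method_context(source)
    )
    return (
        "Explain the following code at the method level.\n"
        "Static-analysis facts:\n"
        f"{facts}\n\n"
        "Code:\n"
        f"{source}"
    )


sample = """
def total(prices):
    return sum(prices)

def report(prices):
    print(total(prices))
"""

prompt = build_explanation_prompt(sample)
print(prompt)
```

In a real pipeline the resulting prompt would be sent to a code LLM; here the analysis step alone shows how grounding the prompt in call-graph and signature facts gives the model code-specific context it cannot infer reliably on its own.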