Lab Instructions for Local LLM
Prerequisites
Check out the lab repo from the GitHub CS595 Lab Repo
Pull the latest code to pick up recent updates to the repo
Python version 3.10 or later
A Python virtual environment created and activated for the project (example commands below)
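For example, on macOS or Linux (the repo URL is a placeholder for the CS595 Lab Repo link, and .venv is just one possible environment name):

git clone <cs595-lab-repo-url>
cd <Project Root>
git pull
python3 -m venv .venv
source .venv/bin/activate    # on Windows: .venv\Scripts\activate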
Download Models
Meta-Llama-3-8B-Instruct-Q4_K_S.gguf
and Phi-3-mini-4k-instruct.Q6_K.gguf
and place them in the <Project Root>/labs/local_llm folder
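One way to fetch the model files is the Hugging Face CLI; the repo ids below are placeholders, since the lab does not name a specific download source:

huggingface-cli download <llama3-gguf-repo-id> Meta-Llama-3-8B-Instruct-Q4_K_S.gguf --local-dir labs/local_llm
huggingface-cli download <phi3-gguf-repo-id> Phi-3-mini-4k-instruct.Q6_K.gguf --local-dir labs/local_llm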
Instructions
Open <Project Root>
Activate the Python virtual environment
Go to the labs/local_llm folder
Install Requirements
pip install -r requirements.txt
Implement the following functions in local_llm.ipynb (an illustrative sketch follows this list):
def chunk_text(text, chunk_size=500, chunk_overlap=50)
def llama_embed_text(text) -> np.ndarray
Logic to process embeddings for all text chunks:
for i, chunk in enumerate(text_chunks)
def search_similar_chunks(query, k=3)
def run_llm(prompt: str)
def run_llm_with_pdf_knowledge(user_query, k=3)
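The sketch below is one possible shape for these functions, not the required solution. It assumes llama-cpp-python for both embeddings and generation (the Llama 3 model for embeddings, the Phi-3 model for chat), pypdf for PDF extraction, and a plain NumPy cosine-similarity search; the PDF filename, prompt wording, and parameter values are illustrative assumptions.

import numpy as np
from llama_cpp import Llama
from pypdf import PdfReader

# Assumed filenames from the Download Models step; paths are relative to labs/local_llm.
embedder = Llama(model_path="Meta-Llama-3-8B-Instruct-Q4_K_S.gguf",
                 embedding=True, n_ctx=2048, verbose=False)
chat_llm = Llama(model_path="Phi-3-mini-4k-instruct.Q6_K.gguf",
                 n_ctx=4096, verbose=False)

def chunk_text(text, chunk_size=500, chunk_overlap=50):
    """Split text into overlapping character windows."""
    chunks, step = [], chunk_size - chunk_overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

def llama_embed_text(text) -> np.ndarray:
    """Embed one string via llama.cpp's embedding endpoint."""
    result = embedder.create_embedding(text)
    arr = np.array(result["data"][0]["embedding"], dtype=np.float32)
    # Some llama-cpp-python versions return per-token vectors; mean-pool to one vector.
    return arr.mean(axis=0) if arr.ndim == 2 else arr

# Load the PDF ("patient_record.pdf" is a placeholder filename) and
# process embeddings for all text chunks.
reader = PdfReader("patient_record.pdf")
pdf_text = "\n".join(page.extract_text() or "" for page in reader.pages)
text_chunks = chunk_text(pdf_text)
embeddings = []
for i, chunk in enumerate(text_chunks):
    embeddings.append(llama_embed_text(chunk))
    print(f"embedded chunk {i + 1}/{len(text_chunks)}")
chunk_embeddings = np.vstack(embeddings)

def search_similar_chunks(query, k=3):
    """Return the k chunks whose embeddings are most cosine-similar to the query."""
    q = llama_embed_text(query)
    sims = chunk_embeddings @ q / (
        np.linalg.norm(chunk_embeddings, axis=1) * np.linalg.norm(q) + 1e-10)
    return [text_chunks[i] for i in np.argsort(sims)[::-1][:k]]

def run_llm(prompt: str):
    """Generate an answer from the local model with no extra context."""
    out = chat_llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}], max_tokens=256)
    return out["choices"][0]["message"]["content"]

def run_llm_with_pdf_knowledge(user_query, k=3):
    """Simple RAG: retrieve the top-k chunks and prepend them to the prompt."""
    context = "\n\n".join(search_similar_chunks(user_query, k))
    prompt = (f"Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {user_query}")
    return run_llm(prompt)

Note that loading two GGUF models at once needs several GB of RAM; substituting a smaller quantization, or a dedicated embedding model, is a reasonable variation.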
Run local_llm.ipynb using Cursor, VS Code, or any other Jupyter-capable editor
Submission
Short Report (1–2 pages PDF)
Summarise your local LLM setup steps.
Document difficulties or errors you encountered, and how you resolved them.
Highlight your key observations, especially when comparing queries with and without PDF context:
For instance, ask: “What is the patient's ID?” or “What is the patient's last recorded blood pressure?”
Observe whether your LLM can correctly retrieve and interpret those fields.
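Using the sketch functions above, the comparison could look like this (the queries are the examples given above):

print(run_llm("What is the patient's ID?"))                     # no PDF context
print(run_llm_with_pdf_knowledge("What is the patient's ID?"))  # with retrieved context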
Completed Notebook/Script
Include code showing how you loaded the PDF, chunked the text, generated embeddings, and queried the model.
Evidence of Successful Queries
Provide screenshots or copied console outputs that demonstrate your queries and the model’s responses.
Clearly identify which responses used contextual knowledge and which did not.
Links
Reference Tutorials