{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Welcome to the Second Lab - Week 1, Day 3\n", "\n", "Today we will work with lots of models! This is a way to get comfortable with APIs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Important point - please read

\n", " The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, after watching the lecture. Add print statements to understand what's going on, and then come up with your own variations.

If you have time, I'd love it if you submit a PR for changes in the community_contributions folder - instructions in the resources. Also, if you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n", "
\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Start with imports - ask ChatGPT to explain any package that you don't know\n", "\n", "import os\n", "import json\n", "from dotenv import load_dotenv\n", "from openai import OpenAI\n", "from anthropic import Anthropic\n", "from IPython.display import Markdown, display" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Always remember to do this!\n", "load_dotenv(override=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Print the key prefixes to help with any debugging\n", "\n", "openai_api_key = os.getenv('OPENAI_API_KEY')\n", "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", "google_api_key = os.getenv('GOOGLE_API_KEY')\n", "deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n", "groq_api_key = os.getenv('GROQ_API_KEY')\n", "\n", "if openai_api_key:\n", " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", "else:\n", " print(\"OpenAI API Key not set\")\n", " \n", "if anthropic_api_key:\n", " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", "else:\n", " print(\"Anthropic API Key not set (and this is optional)\")\n", "\n", "if google_api_key:\n", " print(f\"Google API Key exists and begins {google_api_key[:2]}\")\n", "else:\n", " print(\"Google API Key not set (and this is optional)\")\n", "\n", "if deepseek_api_key:\n", " print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n", "else:\n", " print(\"DeepSeek API Key not set (and this is optional)\")\n", "\n", "if groq_api_key:\n", " print(f\"Groq API Key exists and begins {groq_api_key[:4]}\")\n", "else:\n", " print(\"Groq API Key not set (and this is optional)\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "request = \"Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence. 
\"\n", "request += \"Answer only with the question, no explanation.\"\n", "messages = [{\"role\": \"user\", \"content\": request}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "messages" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "openai = OpenAI()\n", "response = openai.chat.completions.create(\n", " model=\"gpt-4o-mini\",\n", " messages=messages,\n", ")\n", "question = response.choices[0].message.content\n", "print(question)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "competitors = []\n", "answers = []\n", "messages = [{\"role\": \"user\", \"content\": question}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The API we know well\n", "\n", "model_name = \"gpt-4o-mini\"\n", "\n", "response = openai.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Anthropic has a slightly different API, and Max Tokens is required\n", "\n", "model_name = \"claude-3-7-sonnet-latest\"\n", "\n", "claude = Anthropic()\n", "response = claude.messages.create(model=model_name, messages=messages, max_tokens=1000)\n", "answer = response.content[0].text\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gemini = OpenAI(api_key=google_api_key, base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\")\n", "model_name = \"gemini-2.0-flash\"\n", "\n", "response = gemini.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "deepseek = OpenAI(api_key=deepseek_api_key, base_url=\"https://api.deepseek.com/v1\")\n", "model_name = \"deepseek-chat\"\n", "\n", "response = deepseek.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "groq = OpenAI(api_key=groq_api_key, base_url=\"https://api.groq.com/openai/v1\")\n", "model_name = \"llama-3.3-70b-versatile\"\n", "\n", "response = groq.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## For the next cell, we will use Ollama\n", "\n", "Ollama runs a local web service that gives an OpenAI compatible endpoint, \n", "and runs models locally using high performance C++ code.\n", "\n", "If you don't have Ollama, install it here by visiting https://ollama.com then pressing Download and following the instructions.\n", "\n", "After it's installed, you should be able to visit here: http://localhost:11434 and see the message \"Ollama is 
running\"\n", "\n", "You might need to restart Cursor (and maybe reboot). Then open a Terminal (control+\\`) and run `ollama serve`\n", "\n", "Useful Ollama commands (run these in the terminal, or with an exclamation mark in this notebook):\n", "\n", "`ollama pull ` downloads a model locally \n", "`ollama ls` lists all the models you've downloaded \n", "`ollama rm ` deletes the specified model from your downloads" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Super important - ignore me at your peril!

\n", " The model called llama3.3 is FAR too large for home computers - it's not intended for personal computing and will consume all your resources! Stick with the nicely sized llama3.2 or llama3.2:1b and if you want larger, try llama3.1 or smaller variants of Qwen, Gemma, Phi or DeepSeek. See the the Ollama models page for a full list of models and sizes.\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!ollama pull llama3.2" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ollama = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", "model_name = \"llama3.2\"\n", "\n", "response = ollama.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# So where are we?\n", "\n", "print(competitors)\n", "print(answers)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# It's nice to know how to use \"zip\"\n", "for competitor, answer in zip(competitors, answers):\n", " print(f\"Competitor: {competitor}\\n\\n{answer}\")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's bring this together - note the use of \"enumerate\"\n", "\n", "together = \"\"\n", "for index, answer in enumerate(answers):\n", " together += f\"# Response from competitor {index+1}\\n\\n\"\n", " together += answer + \"\\n\\n\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# print(together)\n", "display(Markdown(together))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "judge = f\"\"\"You are judging a competition between {len(competitors)} competitors.\n", "Each model has been given this question:\n", "\n", "{question}\n", "\n", "Your job is to evaluate each response for clarity and strength of argument, and rank them in order of best to worst.\n", "Respond with JSON, and only JSON, with the following format:\n", "{{\"results\": [\"best competitor number\", \"second best competitor number\", \"third best competitor number\", ...]}}\n", "\n", "Here are the responses from each competitor:\n", "\n", "{together}\n", "\n", "Now respond with the JSON with the ranked order of the competitors, nothing else. Do not include markdown formatting or code blocks.\"\"\"\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(judge)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "judge_messages = [{\"role\": \"user\", \"content\": judge}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Judgement time!\n", "\n", "openai = OpenAI()\n", "response = openai.chat.completions.create(\n", " model=\"o3-mini\",\n", " messages=judge_messages,\n", ")\n", "results = response.choices[0].message.content\n", "print(results)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# OK let's turn this into results!\n", "\n", "results_dict = json.loads(results)\n", "ranks = results_dict[\"results\"]\n", "for index, result in enumerate(ranks):\n", " competitor = competitors[int(result)-1]\n", " print(f\"Rank {index+1}: {competitor}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Exercise

\n", " Which pattern(s) did this use? Try updating this to add another Agentic design pattern.\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Implement Evaluator-Optimizer workflow design pattern - An Optimizer LLM analyzes the response of the top-ranked competitor\n", "# and creates a system prompt designed to improve the response. The system prompot is then\n", "# sent back to the top-ranked competitor to deliver a new response. \n", "# The optimizer LLM then compares the new response to the old response and surmises\n", "# what aspects of the system prompt may be responsible for the differences in the responses.\n", "\n", "\n", "\n", "# Get the top competitor (model name) and their response\n", "top_rank_index = int(ranks[0]) - 1\n", "top_competitor_name = competitors[top_rank_index]\n", "top_competitor_response = answers[top_rank_index]\n", "top_competitor_prompt = question\n", "\n", "# Compose a system prompt for GPT-5 to act as an expert evaluator of question quality and answer depth\n", "system_prompt = (\n", " \"You are an expert evaluator of LLM prompt quality and answer depth. \"\n", " \"Your task is to analyze the comprehensiveness and depth of thought in the following answer, \"\n", " \"which was generated by a language model in response to a challenging question. \"\n", " \"Consider aspects such as completeness, insight, reasoning, and nuance. \"\n", " \"Provide a detailed analysis of the answer's strengths and weaknesses and store in the 'markdown_analysis' property.\"\n", " \"Generate a suggested system prompt that will improve the answer and store in the 'system_prompt' property.\"\n", ")\n", "\n", "# Compose the user prompt for GPT-5\n", "user_prompt = (\n", " f\"Prompt:\\n{top_competitor_prompt}\\n\\n\"\n", " f\"Answer:\\n{top_competitor_response}\\n\\n\"\n", " \"Please analyze the comprehensiveness and depth of thought of the above answer. 
\"\n", " \"Discuss its strengths and weaknesses in detail.\"\n", ")\n", "\n", "# Call GPT-5 to perform the evaluation\n", "gpt5 = OpenAI()\n", "\n", "# Define the tool schema\n", "tools = [\n", " {\n", " \"type\": \"function\",\n", " \"function\": {\n", " \"name\": \"markdown_and_structured_data\",\n", " \"description\": \"Provide both markdown analysis and structured data\",\n", " \"parameters\": {\n", " \"type\": \"object\",\n", " \"properties\": {\n", " \"markdown_analysis\": {\n", " \"type\": \"string\",\n", " \"description\": \"Detailed markdown analysis\"\n", " },\n", " \"system_prompt\": {\n", " \"type\": \"string\"\n", " }\n", " },\n", " \"required\": [\"markdown_analysis\", \"sentiment\", \"confidence\", \"key_phrases\"]\n", " }\n", " }\n", " }\n", "]\n", "\n", "gpt5_response = gpt5.chat.completions.create(\n", " model=\"gpt-5\",\n", " messages=[\n", " {\"role\": \"system\", \"content\": system_prompt},\n", " {\"role\": \"user\", \"content\": user_prompt}\n", " ],\n", " tools=tools,\n", " tool_choice={\"type\": \"function\", \"function\": {\"name\": \"markdown_and_structured_data\"}}\n", ")\n", "\n", "tool_call = gpt5_response.choices[0].message.tool_calls[0]\n", "arguments = json.loads(tool_call.function.arguments)\n", "\n", "markdown_analysis = arguments[\"markdown_analysis\"]\n", "system_prompt = arguments[\"system_prompt\"]\n", "\n", "\n", "\n", "\n", "# Display the evaluation\n", "from IPython.display import Markdown, display\n", "display(Markdown(\"### GPT-5 Evaluation of Top Competitor's Answer\"))\n", "display(Markdown(f\"Top Competitor: {top_competitor_name}\"))\n", "display(Markdown(markdown_analysis))\n", "display(Markdown(\"### Suggested System Prompt\"))\n", "display(Markdown(system_prompt))\n", "\n", "\n", "# The top competitor was gemini-2.0-flash, so send the original question and suggested system prompt to generate a new response\n", "# Send the system_prompt and original question to gemini-2.0-flash to generate a new answer\n", "\n", "gemini_response = gemini.chat.completions.create(\n", " model=\"gemini-2.0-flash\",\n", " messages=[\n", " {\"role\": \"system\", \"content\": system_prompt},\n", " {\"role\": \"user\", \"content\": question}\n", " ]\n", ")\n", "\n", "new_answer = gemini_response.choices[0].message.content\n", "\n", "display(Markdown(\"### Gemini-2.0-Flash New Answer with Suggested System Prompt\"))\n", "display(Markdown(new_answer))\n", "\n", "comparison_prompt = f\"\"\"You are an expert LLM evaluator. Compare the following two answers to the same question, where the only difference is that the second answer was generated using a system prompt suggested by you (GPT-5) after evaluating the first answer.\n", "\n", "Original Answer (from {top_competitor_name}):\n", "{top_competitor_response}\n", "\n", "New Answer (from {top_competitor_name} with your system prompt):\n", "{new_answer}\n", "\n", "System Prompt Used for New Answer:\n", "{system_prompt}\n", "\n", "Please analyze:\n", "- What are the key differences between the two answers?\n", "- What aspects of the system prompt likely contributed to these differences?\n", "- Did the system prompt improve the quality, accuracy, or style of the answer? 
"- Any remaining limitations or further suggestions.\n", "\n", "Provide a detailed, structured analysis.\n", "\"\"\"\n", "\n", "gpt5_comparison_response = gpt5.chat.completions.create(\n", "    model=\"gpt-5\",\n", "    messages=[\n", "        {\"role\": \"system\", \"content\": \"You are an expert LLM evaluator.\"},\n", "        {\"role\": \"user\", \"content\": comparison_prompt}\n", "    ]\n", ")\n", "\n", "comparison_analysis = gpt5_comparison_response.choices[0].message.content\n", "\n", "display(Markdown(\"### GPT-5 Analysis: Impact of System Prompt on Gemini-2.0-Flash's Answer\"))\n", "display(Markdown(comparison_analysis))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " \n", " \n", "

Commercial implications

\n", " These kinds of patterns - to send a task to multiple models, and evaluate results,\n", " are common where you need to improve the quality of your LLM response. This approach can be universally applied\n", " to business projects where accuracy is critical.\n", " \n", "
" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.7" } }, "nbformat": 4, "nbformat_minor": 2 }