{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Welcome to the Second Lab - Week 1, Day 3\n", "\n", "Today we will work with lots of models! This is a way to get comfortable with APIs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Important point - please read

\n", " The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, after watching the lecture. Add print statements to understand what's going on, and then come up with your own variations.

If you have time, I'd love it if you submit a PR for changes in the community_contributions folder - instructions in the resources. Also, if you have a GitHub account, use it to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n", "
\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Start with imports - ask ChatGPT to explain any package that you don't know\n", "\n", "import os\n", "import json\n", "from dotenv import load_dotenv\n", "from openai import OpenAI\n", "from anthropic import Anthropic\n", "from IPython.display import Markdown, display" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Always remember to do this!\n", "load_dotenv(override=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Print the key prefixes to help with any debugging\n", "\n", "openai_api_key = os.getenv('OPENAI_API_KEY')\n", "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", "google_api_key = os.getenv('GOOGLE_API_KEY')\n", "deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n", "groq_api_key = os.getenv('GROQ_API_KEY')\n", "\n", "if openai_api_key:\n", " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", "else:\n", " print(\"OpenAI API Key not set\")\n", " \n", "if anthropic_api_key:\n", " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", "else:\n", " print(\"Anthropic API Key not set (and this is optional)\")\n", "\n", "if google_api_key:\n", " print(f\"Google API Key exists and begins {google_api_key[:2]}\")\n", "else:\n", " print(\"Google API Key not set (and this is optional)\")\n", "\n", "if deepseek_api_key:\n", " print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n", "else:\n", " print(\"DeepSeek API Key not set (and this is optional)\")\n", "\n", "if groq_api_key:\n", " print(f\"Groq API Key exists and begins {groq_api_key[:4]}\")\n", "else:\n", " print(\"Groq API Key not set (and this is optional)\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "request = \"Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence. 
\"\n", "request += \"Answer only with the question, no explanation.\"\n", "messages = [{\"role\": \"user\", \"content\": request}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "messages" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "openai = OpenAI()\n", "response = openai.chat.completions.create(\n", " model=\"gpt-4o-mini\",\n", " messages=messages,\n", ")\n", "question = response.choices[0].message.content\n", "print(question)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "competitors = []\n", "answers = []\n", "messages = [{\"role\": \"user\", \"content\": question}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The API we know well\n", "\n", "model_name = \"gpt-4o-mini\"\n", "\n", "response = openai.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Anthropic has a slightly different API, and Max Tokens is required\n", "\n", "model_name = \"claude-3-7-sonnet-latest\"\n", "\n", "claude = Anthropic()\n", "response = claude.messages.create(model=model_name, messages=messages, max_tokens=1000)\n", "answer = response.content[0].text\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gemini = OpenAI(api_key=google_api_key, base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\")\n", "model_name = \"gemini-2.0-flash\"\n", "\n", "response = gemini.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "deepseek = OpenAI(api_key=deepseek_api_key, base_url=\"https://api.deepseek.com/v1\")\n", "model_name = \"deepseek-chat\"\n", "\n", "response = deepseek.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "groq = OpenAI(api_key=groq_api_key, base_url=\"https://api.groq.com/openai/v1\")\n", "model_name = \"llama-3.3-70b-versatile\"\n", "\n", "response = groq.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## For the next cell, we will use Ollama\n", "\n", "Ollama runs a local web service that gives an OpenAI compatible endpoint, \n", "and runs models locally using high performance C++ code.\n", "\n", "If you don't have Ollama, install it here by visiting https://ollama.com then pressing Download and following the instructions.\n", "\n", "After it's installed, you should be able to visit here: http://localhost:11434 and see the message \"Ollama is 
running\"\n", "\n", "You might need to restart Cursor (and maybe reboot). Then open a Terminal (control+\\`) and run `ollama serve`\n", "\n", "Useful Ollama commands (run these in the terminal, or with an exclamation mark in this notebook):\n", "\n", "`ollama pull ` downloads a model locally \n", "`ollama ls` lists all the models you've downloaded \n", "`ollama rm ` deletes the specified model from your downloads" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Super important - ignore me at your peril!

\n", " The model called llama3.3 is FAR too large for home computers - it's not intended for personal computing and will consume all your resources! Stick with the nicely sized llama3.2 or llama3.2:1b and if you want larger, try llama3.1 or smaller variants of Qwen, Gemma, Phi or DeepSeek. See the the Ollama models page for a full list of models and sizes.\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!ollama pull llama3.2" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ollama = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", "model_name = \"llama3.2\"\n", "\n", "response = ollama.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# So where are we?\n", "\n", "print(competitors)\n", "print(answers)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# It's nice to know how to use \"zip\"\n", "for competitor, answer in zip(competitors, answers):\n", " print(f\"Competitor: {competitor}\\n\\n{answer}\")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's bring this together - note the use of \"enumerate\"\n", "\n", "together = \"\"\n", "for index, answer in enumerate(answers):\n", " together += f\"# Response from competitor {index+1}\\n\\n\"\n", " together += answer + \"\\n\\n\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(together)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "judge = f\"\"\"You are judging a competition between {len(competitors)} competitors.\n", "Each model has been given this question:\n", "\n", "{question}\n", "\n", "Your job is to evaluate each response for clarity and strength of argument, and rank them in order of best to worst.\n", "Respond with JSON, and only JSON, with the following format:\n", "{{\"results\": [\"best competitor number\", \"second best competitor number\", \"third best competitor number\", ...]}}\n", "\n", "Here are the responses from each competitor:\n", "\n", "{together}\n", "\n", "Now respond with the JSON with the ranked order of the competitors, nothing else. Do not include markdown formatting or code blocks.\"\"\"\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(judge)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "judge_messages = [{\"role\": \"user\", \"content\": judge}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Judgement time!\n", "\n", "openai = OpenAI()\n", "response = openai.chat.completions.create(\n", " model=\"o3-mini\",\n", " messages=judge_messages,\n", ")\n", "results = response.choices[0].message.content\n", "print(results)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# OK let's turn this into results!\n", "\n", "results_dict = json.loads(results)\n", "ranks = results_dict[\"results\"]\n", "for index, result in enumerate(ranks):\n", " competitor = competitors[int(result)-1]\n", " print(f\"Rank {index+1}: {competitor}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Exercise

\n", " Which pattern(s) did this use? Try updating this to add another Agentic design pattern.\n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Commercial implications

\n", " These kinds of patterns - to send a task to multiple models, and evaluate results,\n", " are common where you need to improve the quality of your LLM response. This approach can be universally applied\n", " to business projects where accuracy is critical.\n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# First use cursor to create a basic_lab_setup.py module for easy lab\n", "## prompt (add @2_lab2.ipynb to cursor context)\n", " there is setup logic involved in this notebook, please create basic_lab_setup.py. this will check what keys are available, and create a set of importable OpenAI objects with the correct base_url and api_key and default model, use load_dotenv(override=True) for safe handling of API keys. llama should use my localhost, use the most up to date (as of 08/1/2025) api endpoints, We are working with third party libraries avoid making API calls" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ OpenAI client initialized (key starts with sk-proj-...)\n", "✓ Anthropic client initialized (key starts with sk-ant-...)\n", "✓ Ollama client initialized (localhost)\n", "✓ Google client initialized (key starts with AI...)\n", "⚠ DeepSeek API Key not set (optional)\n", "⚠ Groq API Key not set (optional)\n", "\n", "Setup complete! Available clients:\n", " OpenAI, Anthropic, Ollama, Google\n", "Provider 'open' not available\n" ] }, { "ename": "TypeError", "evalue": "can only concatenate str (not \"NoneType\") to str", "output_type": "error", "traceback": [ "\u001b[31m---------------------------------------------------------------------------\u001b[39m", "\u001b[31mTypeError\u001b[39m Traceback (most recent call last)", "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[11]\u001b[39m\u001b[32m, line 36\u001b[39m\n\u001b[32m 27\u001b[39m messages = [\n\u001b[32m 28\u001b[39m {\u001b[33m\"\u001b[39m\u001b[33mrole\u001b[39m\u001b[33m\"\u001b[39m: \u001b[33m\"\u001b[39m\u001b[33muser\u001b[39m\u001b[33m\"\u001b[39m, \u001b[33m\"\u001b[39m\u001b[33mcontent\u001b[39m\u001b[33m\"\u001b[39m: \u001b[33m\"\u001b[39m\u001b[33mupdate this to use another agentic design pattern\u001b[39m\u001b[33m\"\u001b[39m},\n\u001b[32m 29\u001b[39m {\u001b[33m\"\u001b[39m\u001b[33mrole\u001b[39m\u001b[33m\"\u001b[39m: \u001b[33m\"\u001b[39m\u001b[33muser\u001b[39m\u001b[33m\"\u001b[39m, \u001b[33m\"\u001b[39m\u001b[33mcontent\u001b[39m\u001b[33m\"\u001b[39m: \u001b[33m\"\u001b[39m\u001b[33magentic_design_patterns: \u001b[39m\u001b[33m\"\u001b[39m + agentic_design_pattern},\n\u001b[32m 30\u001b[39m {\u001b[33m\"\u001b[39m\u001b[33mrole\u001b[39m\u001b[33m\"\u001b[39m: \u001b[33m\"\u001b[39m\u001b[33muser\u001b[39m\u001b[33m\"\u001b[39m, \u001b[33m\"\u001b[39m\u001b[33mcontent\u001b[39m\u001b[33m\"\u001b[39m: \u001b[33m\"\u001b[39m\u001b[33mthis_file: \u001b[39m\u001b[33m\"\u001b[39m + this_file},\n\u001b[32m 31\u001b[39m {\u001b[33m\"\u001b[39m\u001b[33mrole\u001b[39m\u001b[33m\"\u001b[39m: \u001b[33m\"\u001b[39m\u001b[33muser\u001b[39m\u001b[33m\"\u001b[39m, \u001b[33m\"\u001b[39m\u001b[33mcontent\u001b[39m\u001b[33m\"\u001b[39m: \u001b[33m\"\u001b[39m\u001b[33mthis_design: \u001b[39m\u001b[33m\"\u001b[39m + this_design}\n\u001b[32m 32\u001b[39m ]\n\u001b[32m 34\u001b[39m response = create_completion(\u001b[33m'\u001b[39m\u001b[33mopen\u001b[39m\u001b[33m'\u001b[39m, messages)\n\u001b[32m---> \u001b[39m\u001b[32m36\u001b[39m display(Markdown(\u001b[43mthis_design\u001b[49m\u001b[43m \u001b[49m\u001b[43m+\u001b[49m\u001b[43m \u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[38;5;130;43;01m\\n\u001b[39;49;00m\u001b[38;5;130;43;01m\\n\u001b[39;49;00m\u001b[33;43m\"\u001b[39;49m\u001b[43m \u001b[49m\u001b[43m+\u001b[49m\u001b[43m 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Answer the initial exercise question\n", "\n", "# Setup\n", "from IPython.display import Markdown, display\n", "from basic_lab_setup import setup, create_completion\n", "\n", "setup()\n", "\n", "# To get the transcript summary, use Cursor to summarize the pertinent part of the Day 2 Part 5 transcript\n", "# (find the transcript icon in the video controls, next to the volume control).\n", "# Prompt: being as succinct as possible, summarize the agentic pattern, architecture, and workflow design pattern information in the transcript\n", "\n", "# Load the transcript summary and this notebook's own source\n", "with open('day2_5_transcript_summary.md', 'r') as file:\n", "    agentic_design_pattern = file.read()\n", "\n", "with open('../../2_lab2.ipynb', 'r') as file:\n", "    this_file = file.read()\n", "\n", "# First ask: which agentic design pattern(s) does this notebook already use?\n", "messages = [\n", "    {\"role\": \"user\", \"content\": \"Which pattern(s) did this_file use? Don't explain, just define the pattern(s) in this_file.\"},\n", "    {\"role\": \"user\", \"content\": \"agentic_design_pattern: \" + agentic_design_pattern},\n", "    {\"role\": \"user\", \"content\": \"this_file: \" + this_file}\n", "]\n", "\n", "this_design = create_completion('openai', messages)\n", "\n", "# Then ask for an update that applies another agentic design pattern\n", "messages = [\n", "    {\"role\": \"user\", \"content\": \"Update this to use another agentic design pattern.\"},\n", "    {\"role\": \"user\", \"content\": \"agentic_design_patterns: \" + agentic_design_pattern},\n", "    {\"role\": \"user\", \"content\": \"this_file: \" + this_file},\n", "    {\"role\": \"user\", \"content\": \"this_design: \" + this_design}\n", "]\n", "\n", "response = create_completion('openai', messages)\n", "\n", "# Guard against an unavailable provider (create_completion returns None in that case)\n", "if this_design is None or response is None:\n", "    raise RuntimeError(\"create_completion returned None - is the 'openai' provider configured?\")\n", "\n", "display(Markdown(this_design + \"\\n\\n\" + response))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.10" } }, "nbformat": 4, "nbformat_minor": 2 }