{
"cells": [
{
"cell_type": "markdown",
"id": "9641c10f",
"metadata": {},
"source": [
"# Week 1 - Lab 1: Generate a business idea with Amazon Nova\n",
"\n",
"Small project to showcase using Amazon Nova text generation models.\n",
"\n",
"### Credentials\n",
"You will need to set up your AWS credentials in your $HOME/.aws folder or in the .env file. Amazon Bedrock can work with either the standard AWS credentials, or with a Bedrock API key, stored in an environment variable ```AWS_BEARER_TOKEN_BEDROCK```. The API key can be generated from inside Amazon Bedrock console, but it only provides access to Amazon Bedrock. So if you want to use additional AWS Services, you will need to set up your full AWS credentials for CLI and API access in your .env file:\n",
"```bash\n",
"AWS_ACCESS_KEY_ID=your_access_key\n",
"AWS_SECRET_ACCESS_KEY=your_secret_key\n",
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ef3b004",
"metadata": {},
"outputs": [],
"source": [
"# Install necessary packages\n",
"# This will also update your pyproject.toml and uv.lock files.\n",
"!uv add boto3"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "67b57a2b",
"metadata": {},
"outputs": [],
"source": [
"import boto3\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from time import sleep\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "505a930a",
"metadata": {},
"outputs": [],
"source": [
"# Load api key from .env or environment variable. This notebook is using the simpler API key method, which gives access only to Amazon Bedrock services, instead of standard AWS credentials\n",
"\n",
"load_dotenv(override=True)\n",
"\n",
"os.environ['AWS_BEARER_TOKEN_BEDROCK'] = os.getenv('AWS_BEARER_TOKEN_BEDROCK', 'your-key-if-not-using-env')\n",
"\n",
"region = 'us-east-1' # change to your preferred region - be aware that not all regions have access to all models. If in doubt, use us-east-1.\n",
"\n",
"bedrock = boto3.client(service_name=\"bedrock\", region_name=region) # use this for information and management calls (such as model listings)\n",
"bedrock_runtime = boto3.client(service_name=\"bedrock-runtime\", region_name=region) # this is for inference.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2617043b",
"metadata": {},
"outputs": [],
"source": [
"# Let's do a quick test to see if works.\n",
"# We will list the available models.\n",
"\n",
"response = bedrock.list_foundation_models()\n",
"models = response['modelSummaries']\n",
"print(f'AWS Region: {region} - Models:')\n",
"for model in models:\n",
" print(f\"Model ID: {model['modelId']}, Name: {model['modelName']}\")"
]
},
{
"cell_type": "markdown",
"id": "56b30ff6",
"metadata": {},
"source": [
"### Amazon Bedrock Cross-Region Inference\n",
"We will use Amazon Nova models for this example. \n",
" \n",
"For inference, we will be using the cross-region inference feature of Amazon Bedrock, which routes the inference call to the region which can best serve it at a given time. \n",
"Cross-region inference [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html) \n",
"For the latest model names using cross-region inference, refer to [Supported Regions and models](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html) \n",
"\n",
"**Important: Before using a model you need to be granted access to it from the AWS Management Console.**"
]
},
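{
"cell_type": "code",
"execution_count": null,
"id": "3fa1c2de",
"metadata": {},
"outputs": [],
"source": [
"# Optional: list the cross-region inference profiles available in this region.\n",
"# This is a minimal sketch using the Bedrock ListInferenceProfiles API; the response field names\n",
"# below are an assumption based on current boto3 documentation, so adjust the keys if your version differs.\n",
"response = bedrock.list_inference_profiles()\n",
"for profile in response['inferenceProfileSummaries']:\n",
"    print(f\"Profile ID: {profile['inferenceProfileId']}, Name: {profile['inferenceProfileName']}\")"
]
},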
{
"cell_type": "code",
"execution_count": null,
"id": "8be42713",
"metadata": {},
"outputs": [],
"source": [
"# Define the model and message\n",
"# Amazon Nova Pro is a multimodal input model - it can be prompted with images and text. We'll only be using text here.\n",
"\n",
"QUESTION = [\"I want you to help me pick a business area or industry that might be worth exploring for an Agentic AI opportunity.\",\n",
" \"Expand on a pain point in that industry that is challenging and ready for an agentic AI solution.\",\n",
" \"Based on that idea, describe a possible solution\"]\n",
"\n",
"BEDROCK_MODEL_ID = 'us.amazon.nova-pro-v1:0' # try \"us.amazon.nova-lite-v1:0\" for faster responses.\n",
"messages=[]\n",
"\n",
"system_prompt = \"You are a helpful business consultant bot. Your responses are succint and professional. You respond in maximum of 4 sentences\"\n",
"\n",
"# Function to run a multi-turn conversation. User prompts are stored in the list and we iterate over them, keeping the conversation history to maintain context.\n",
"\n",
"def run_conversation(questions, model_id, system_prompt, sleep_time=5):\n",
" \"\"\"\n",
" Run a multi-turn conversation with Bedrock model\n",
" Args:\n",
" questions (list): List of questions to ask\n",
" model_id (str): Bedrock model ID to use\n",
" system_prompt (str): System prompt to set context\n",
" sleep_time (int): Time to sleep between requests\n",
" Returns:\n",
" The conversation as a list of dictionaries\n",
" \"\"\"\n",
" messages = []\n",
" system = [{\"text\": system_prompt}]\n",
"\n",
" try:\n",
" for i in range(len(questions)):\n",
" try:\n",
" messages.append({\"role\": \"user\", \"content\": [{\"text\": questions[i]}]})\n",
"\n",
" # Make the API call\n",
" response = bedrock_runtime.converse(\n",
" modelId=model_id,\n",
" messages=messages, \n",
" system=system\n",
" )\n",
"\n",
" # Store the response\n",
" answer = response['output']['message']['content'][0]['text']\n",
"\n",
" # Store it into message history\n",
" assistant_message = {\"role\": \"assistant\", \"content\":[{\"text\":answer}]}\n",
" messages.append(assistant_message)\n",
" print(f\"{i}-Question: \"+questions[i]+\"\\nAnswer: \" + answer)\n",
" sleep(sleep_time)\n",
"\n",
" except Exception as e:\n",
" print(f\"Error processing question {i}: {str(e)}\")\n",
" continue\n",
"\n",
" return messages\n",
"\n",
" except Exception as e:\n",
" print(f\"Fatal error in conversation: {str(e)}\")\n",
" return None\n"
]
},
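{
"cell_type": "code",
"execution_count": null,
"id": "8c4d5e6f",
"metadata": {},
"outputs": [],
"source": [
"# Optional: a quick single-turn sketch of the Converse API before running the full multi-turn conversation.\n",
"# inferenceConfig is a standard Converse parameter; the maxTokens and temperature values here are\n",
"# illustrative assumptions, not tuned settings.\n",
"single_response = bedrock_runtime.converse(\n",
"    modelId=BEDROCK_MODEL_ID,\n",
"    messages=[{\"role\": \"user\", \"content\": [{\"text\": QUESTION[0]}]}],\n",
"    system=[{\"text\": system_prompt}],\n",
"    inferenceConfig={\"maxTokens\": 300, \"temperature\": 0.7}\n",
")\n",
"print(single_response['output']['message']['content'][0]['text'])\n",
"print(single_response['usage'])  # token usage reported by the Converse API\n"
]
},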
{
"cell_type": "code",
"execution_count": 27,
"id": "c36c0e4a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0-Question: I want you to help me pick a business area or industry that might be worth exploring for an Agentic AI opportunity.\n",
"Answer: Consider the healthcare industry for Agentic AI opportunities, focusing on patient care optimization and administrative automation.\n",
"1-Question: Expand on a pain point in that industry that is challenging and ready for an agentic AI solution.\n",
"Answer: Addressing the challenge of efficient patient scheduling and resource allocation through Agentic AI solutions.\n",
"2-Question: Based on that idea, describe a possible solution\n",
"Answer: Develop an Agentic AI system to dynamically schedule appointments, optimize staff allocation, and predict patient inflows for healthcare facilities.\n"
]
},
{
"data": {
"text/plain": [
"[{'role': 'user',\n",
" 'content': [{'text': 'I want you to help me pick a business area or industry that might be worth exploring for an Agentic AI opportunity.'}]},\n",
" {'role': 'assistant',\n",
" 'content': [{'text': 'Consider the healthcare industry for Agentic AI opportunities, focusing on patient care optimization and administrative automation.'}]},\n",
" {'role': 'user',\n",
" 'content': [{'text': 'Expand on a pain point in that industry that is challenging and ready for an agentic AI solution.'}]},\n",
" {'role': 'assistant',\n",
" 'content': [{'text': 'Addressing the challenge of efficient patient scheduling and resource allocation through Agentic AI solutions.'}]},\n",
" {'role': 'user',\n",
" 'content': [{'text': 'Based on that idea, describe a possible solution'}]},\n",
" {'role': 'assistant',\n",
" 'content': [{'text': 'Develop an Agentic AI system to dynamically schedule appointments, optimize staff allocation, and predict patient inflows for healthcare facilities.'}]}]"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"run_conversation(QUESTION,BEDROCK_MODEL_ID,system_prompt=system_prompt)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agents",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}