| title (string, 12-112 chars) | published (string, 19-23 chars) | url (string, 28 chars) | video_id (string, 11 chars) | channel_id (string, 5 classes) | id (string, 16-31 chars) | text (string, 0-596 chars) | start (float64, 0-37.8k) | end (float64, 2.18-37.8k) |
|---|---|---|---|---|---|---|---|---|
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t263.44 | So QA index already exists. | 263.44 | 268.56 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t267.2 | So it's not going to create a new index. | 267.2 | 271.68 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t269.12 | And instead, it's just going to connect that index here. | 269.12 | 272.24 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t271.68 | Right. | 271.68 | 278.08 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t272.24 | So we just connected or we created our index, our vector database index. | 272.24 | 284.88 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t279.36 | And now what I want to do is I'm going to switch back to our data. | 279.36 | 286.88 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t285.6 | And I'm going to run through that. | 285.6 | 293.76 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t286.88 | So I'm going to load the data set and the squad data set from Hugging Face. | 286.88 | 299.84 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t294.96000000000004 | Now I'm going to use a validation split because the model has been trained on the training data. | 294.96 | 304.16 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t299.84 | But squad, I want to make it a little bit hard. | 299.84 | 307.92 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t304.15999999999997 | So we're going to use a validation split that hasn't seen before. | 304.16 | 313.76 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t307.91999999999996 | I'm removing any unique or duplicate context in there. | 307.92 | 317.04 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t313.76 | So zoom out a little bit here. | 313.76 | 320.08 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t318.4 | Squad depth, we're using this filter. | 318.4 | 323.92 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t320.08 | So this is all Hugging Face data sets syntax here. | 320.08 | 326.88 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t325.52 | And then we're encoding it. | 325.52 | 330.24 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t326.88 | So this model.encode, so this is our sentence transformer. | 326.88 | 335.68 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t330.24 | We're encoding it to create a load of sentence vectors for our context. | 330.24 | 339.36 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t335.68 | And we're converting these to lists because we are going to be pushing these | 335.68 | 342.48 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t339.36 | through an API request to Pinecone. | 339.36 | 346.64 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t342.48 | Again, we need a list, not a NumPy array or otherwise you're going to get an error. | 342.48 | 348.42 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t347.92 | OK. | 347.92 | 352.24 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t349.12 | Then back to the Pinecone side of things. | 349.12 | 358.32 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t352.24 | We want to create a list of, it's basically a list of tuples. | 352.24 | 364.16 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t358.32 | And those tuples include the ID of each context. | 358.32 | 366 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t364.16 | So there's a unique ID for each context. | 364.16 | 370 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t366.48 | We want the vector or the encoding, the context vector. | 366.48 | 373.04 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t371.04 | And then we also have this dictionary here. | 371.04 | 374.48 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t373.04 | Now this is metadata. | 373.04 | 380.08 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t374.48 | So metadata in Pinecone is like any other information about your vectors that you want to include. | 374.48 | 386.88 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t380.08 | And this is really good if you want to use metadata filtering, which is super powerful in Pinecone. | 380.08 | 392.08 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t387.91999999999996 | And I definitely want to leave the option open later on. | 387.92 | 393.68 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t392.08 | I'm not sure if we'll use it or not. | 392.08 | 397.44 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t394.4 | We'll probably put something in there just so we can play around with it. | 394.4 | 405.2 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t398.96 | Now that creates the format that we need to upset everything, which means just like push or upload | 398.96 | 406.4 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t405.2 | everything to Pinecone. | 405.2 | 409.76 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t406.4 | So then I do that in chunks of 50 at a time. | 406.4 | 416 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t409.76 | Just makes things a little bit easier on the API aggressor rather than sending everything at once. | 409.76 | 416.5 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t416.0 | OK. | 416 | 420.16 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t417.44 | So that's like how we create the index. | 417.44 | 428.96 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t421.2 | So now what we're going to do is actually integrate that a little bit in our app. | 421.2 | 432.32 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t429.76 | So let's switch back to our app here. | 429.76 | 434.96 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t432.32 | Let's move this over here. | 432.32 | 436.64 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t434.96 | Let's view it. | 434.96 | 439.36 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t437.84 | So first, let's just remove this. | 437.84 | 440.16 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t439.35999999999996 | We don't need that. | 439.36 | 443.84 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t440.71999999999997 | OK, save will automatically reload. | 440.72 | 451.92 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t445.52 | So first thing we want to do here is let's initialize the Pinecone connection. | 445.52 | 456 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t451.91999999999996 | So I'm going to just take. | 451.92 | 459.6 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t457.35999999999996 | Let's just take this part of the code. | 457.36 | 463.52 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t460.71999999999997 | Just copy it and then we'll remove what we don't need in a minute. | 460.72 | 469.52 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t463.52 | And so we don't need we do need sentence transformers in a minute. | 463.52 | 470.64 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t469.52 | We don't need data sets. | 469.52 | 472.64 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t471.76 | We do need Pinecone. | 471.76 | 477.76 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t473.44 | So actually here we're initializing our retriever model. | 473.44 | 479.44 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t477.76 | It's the same as what we did before. | 477.76 | 480.96 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t479.44 | So we do want to keep that in there. | 479.44 | 482.4 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t480.96 | Make that bigger. | 480.96 | 486.48 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t483.91999999999996 | API key again, just store this somewhere else. | 483.92 | 494 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t486.48 | Or if you are using Streamlit Cloud, they have like a secrets management system. | 486.48 | 496.56 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t494.0 | And it's something we'll look at in the future for sure. | 494 | 498.48 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t496.56 | But for now, I'm just putting it in here. | 496.56 | 505.04 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t500.24 | So we have our API key environment and we're just doing the same thing we did before. | 500.24 | 507.28 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t505.04 | But actually, we don't want to create an index. | 505.04 | 510.4 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t507.28000000000003 | We're assuming we've already created an index if we're in our app. | 507.28 | 511.68 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t510.40000000000003 | So we're just going to connect to it. | 510.4 | 516.4 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t512.48 | OK, so with that, we've got our API key. | 512.48 | 521.92 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t516.4 | We've kind of set up the like the back end part of our app. | 516.4 | 525.04 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t521.92 | Like the smart part that's going to handle the open the main Q&A. | 521.92 | 527.6 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t526.0 | But it's going to be a little bit slow. | 526 | 532.88 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t527.6 | And we will have a look at how to solve that pretty soon. | 527.6 | 536.32 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t532.88 | But for now, what we're going to do is actually just implement this. | 532.88 | 541.68 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t536.88 | And we're going to actually query and see what we return. | 536.88 | 544 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t542.56 | So I'm going to save this. | 542.56 | 548.4 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t544.0 | We won't see anything change in our app now other than the fact that it takes longer to load | 544 | 553.76 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t548.4 | because it's downloading the driver model. | 548.4 | 558.16 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t553.76 | That's the main part of the slowness here and then obviously connecting to Pinecone. | 553.76 | 560.72 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t558.16 | Also takes a second as well. | 558.16 | 565.6 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t561.68 | So for now, we're going to deal with how slow it is. | 561.68 | 569.84 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t567.36 | But we will fix that pretty soon. | 567.36 | 577.28 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t569.84 | And now I actually want to do is I would say, OK, if the query is not empty, | 569.84 | 580.32 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t577.2800000000001 | because by default it is empty, that's why we add that in there. | 577.28 | 581.92 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t580.32 | So I'm going to actually remove this. | 580.32 | 583.52 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t582.8000000000001 | Enter. | 582.8 | 589.76 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t583.52 | If it is not empty, so if query is not equal to nothing, | 583.52 | 596.88 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t590.72 | we're going to query Pinecone for whatever is in that query. | 590.72 | 599.6 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t596.88 | So the first thing we need to do is create our context vector. | 596.88 | 603.52 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t599.6 | So I'm going to write XQ, just shorthand for context vector. | 599.6 | 609.76 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t605.6 | It's pretty standard, especially if you use FICE before they tend to use this. | 605.6 | 613.52 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t610.32 | And I say I said context vector, I meant query vector. | 610.32 | 617.68 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t615.12 | So we're going to do model and code. | 615.12 | 621.84 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t618.72 | And we need to put this in square brackets and we have query. | 618.72 | 624.4 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t622.4 | OK, and I'm going to convert that to a list. | 622.4 | 628.64 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t624.4 | OK, so this is going to create our query vector. | 624.4 | 629.44 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t628.64 | Let's write it down. | 628.64 | 632.4 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t630.24 | Create query vector. | 630.24 | 639.28 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t634.3199999999999 | And then the next thing we want to do is query Pinecone with this vector. | 634.32 | 646.4 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t640.16 | So to do that, we want to write first list. | 640.16 | 649.76 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t646.4 | Get relevant context. | 646.4 | 652.88 |
| Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:30:36 UTC | https://youtu.be/U0EoaFFGyTg | U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | U0EoaFFGyTg-t650.96 | And we're going to store these in XC. | 650.96 | 659.36 |
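
The transcript segments above describe a concrete data-preparation step: load the SQuAD validation split from Hugging Face, keep only unique contexts, encode them with a sentence transformer, and convert the vectors to plain lists for the Pinecone API. As a rough illustration only (not the video's code), here is a minimal sketch; the retriever name `multi-qa-MiniLM-L6-cos-v1` is a stand-in, since the transcript does not name the model.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Use the validation split: the retriever was trained on the training data,
# so querying passages it has not seen makes the demo a fairer test.
squad_dev = load_dataset("squad", split="validation")

# The dev set repeats each context for several questions, so keep one row per
# unique context. This trick relies on `filter` running in order, single-process.
seen = set()
squad_dev = squad_dev.filter(
    lambda row: row["context"] not in seen and not seen.add(row["context"])
)

# Placeholder retriever; the transcript only says "our sentence transformer".
retriever = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

# Encode every context into a dense vector, then convert the NumPy array to
# plain Python lists, because the vectors travel to Pinecone in an API request
# and a NumPy array would raise an error when serialized.
embeddings = retriever.encode(squad_dev["context"]).tolist()
```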
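Continuing that sketch, the transcript then builds a list of `(id, vector, metadata)` tuples and upserts them to Pinecone 50 at a time, creating the index only if it does not already exist. The snippet below assumes the classic `pinecone-client` interface (`pinecone.init` / `pinecone.Index`) that was current around the video's publication date; the index name, API key, and environment values are placeholders.

```python
import pinecone

# Classic pinecone-client connection; API key and environment are placeholders.
pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")

index_name = "qa-index"  # placeholder name
if index_name not in pinecone.list_indexes():
    # Only runs the first time; afterwards the index "already exists" and we
    # simply connect to it, as the transcript describes.
    pinecone.create_index(index_name, dimension=len(embeddings[0]), metric="cosine")
index = pinecone.Index(index_name)

# One tuple per context: a unique ID, the context vector, and a metadata dict.
# The metadata is optional but keeps metadata filtering open as an option later.
upserts = [
    (str(i), emb, {"text": ctx})
    for i, (emb, ctx) in enumerate(zip(embeddings, squad_dev["context"]))
]

# Upsert in chunks of 50 vectors, which is gentler on the API than one huge request.
for i in range(0, len(upserts), 50):
    index.upsert(vectors=upserts[i : i + 50])
```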
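On the app side, the transcript re-initializes the retriever model and the Pinecone connection inside the Streamlit script, this time only connecting to the existing index rather than creating one, and notes that the retriever download is the main source of slow start-up. A hedged sketch of that setup, with the same placeholder names, might look like this:

```python
import streamlit as st
import pinecone
from sentence_transformers import SentenceTransformer

# Downloading the retriever is the slow part of app start-up; caching it is
# deferred for later, as in the video. Model name is a placeholder.
retriever = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

# Better kept out of the source (e.g. in Streamlit Cloud's secrets manager),
# but hard-coded for now, as in the transcript.
PINECONE_API_KEY = "YOUR_API_KEY"
pinecone.init(api_key=PINECONE_API_KEY, environment="us-west1-gcp")

# In the app the index is assumed to already exist, so we only connect to it.
index = pinecone.Index("qa-index")

# Empty by default; the query logic below only runs once the user types something.
query = st.text_input("Ask a question", "")
```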
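Finally, the query step at the end of the transcript: when the text input is non-empty, the query is encoded into `xq` (wrapped in square brackets and converted to a plain list, not a NumPy array) and sent to the index, with the relevant contexts stored in `xc`. The exact `query` signature and response layout differ between `pinecone-client` versions, so treat the following as an approximation; `top_k=5` is an arbitrary choice here.

```python
# Runs only when the text box is non-empty (it is empty by default).
if query != "":
    # xq: the query vector ("XQ" in the video; he says context vector but means
    # query vector). encode() takes a list of texts, and the result is converted
    # to a plain list because that is what the API request expects.
    xq = retriever.encode([query]).tolist()

    # xc: the relevant contexts returned by Pinecone, with metadata included.
    xc = index.query(xq, top_k=5, include_metadata=True)
    st.write(xc)
```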