Dataset columns:
- title: string (lengths 12–112)
- published: string (lengths 19–23)
- url: string (length 28)
- video_id: string (length 11)
- channel_id: string (5 classes)
- id: string (lengths 16–31)
- text: string (lengths 0–596)
- start: float64 (0–37.8k)
- end: float64 (2.18–37.8k)
All rows in this excerpt share the same video metadata:

- title: GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)
- published: 2021-02-27 15:47:03
- url: https://youtu.be/cllFzkvrYmE
- video_id: cllFzkvrYmE
- channel_id: UCZHmQk67mSJgfCCTn7xBfew
- id: cllFzkvrYmE-t{start} (one per segment, video_id plus the segment's start time)

Transcript segments, grouped into passages with their [start–end] times in seconds:
[1702.82–1804.48] …the whole parse tree here, and the islands on the top layer suggest that these two things should be parsed independently from each other and therefore also processed independently from each other. So here is my suggestion to extend this, and maybe Hinton has already thought of this, but I would suggest that this attention mechanism here is modulated by how close two things are in the parse tree. Okay, so what would that be? For a given vector, how much do you attend to this vector right here? Well, a lot, because it agrees with you, right? The softmax of the inner product would be high; it agrees with you, and it is in the same branch of the parse tree, so that's perfect. This one right here doesn't agree with you, but it is in the same branch, so it could potentially later agree with you through a consensus algorithm. However, this one over here you probably shouldn't attend to too much, even though it points in the same direction, because it's in a different branch of the parse tree. You shouldn't attend to it with weight zero either, because these branches on top could change, and by sending information there, this one could change the top structure here so that it agrees more with your branch of the parse tree, and so on.
[1795.02–1895.4] So my suggestion would be that we don't only take the softmax over the current layer's things; instead we do x times, and here we're going to have a sum. This is going to be k, and let's say we're at layer l. And this is layer one, this is layer two, this is layer three; I'm going to number them from the top, actually from the bottom: layer m, layer m minus one, and this is layer l. I suck at this. So from the current layer, I want to go up the hierarchy until layer one, and I'm going to take the softmax of the representation at layer k, where I'm at, x_k transposed, like this. What we aggregate is still the values on the current layer, but how much we attend to them should depend on the parse tree, and we do that like this: maybe we have a kind of lambda to the k, or l minus k, I hope you get what I mean. So how much you aggregate... this sum here, the sum here is weird; this should probably go...
[1884.88–1970.92] Hi, it's future Yannick, and I just wanted to write that down again, because I've made some mistakes. Obviously, the sum here should be within the softmax, because you want to aggregate the distributions in log space, and the softmax should still be a valid distribution. And then the lambda is exponentiated by k, and k now properly runs from zero all the way up the stack. So big L would be the total number of layers, and little l would be the layer where you're currently at. And you can clearly see the contribution of these attention matrices: lambda would be something smaller than one, and therefore the contribution of the current layer is the strongest, but the next one up is a bit weaker, one more up is even a bit weaker, and so on. So you'd still have essentially the same mechanism as Hinton is suggesting, controlling for the fact that things are in different branches of the parse tree.
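One way to write down this corrected suggestion as a formula (a sketch reconstructed from the description above; the notation is assumed: $x_i^{(k)}$ is the embedding at location $i$ and level $k$, $\lambda < 1$ the decay factor, $L$ the total number of levels, and $l$ the current level):

$$
w_{ij}^{(l)} = \operatorname{softmax}_j\!\left(\sum_{k=0}^{L-l} \lambda^{k}\, \big\langle x_i^{(l+k)},\, x_j^{(l+k)} \big\rangle\right),
\qquad
\text{lateral update at level } l: \quad \sum_j w_{ij}^{(l)}\, x_j^{(l)} .
$$

So the attention weights mix agreement at the current level with geometrically down-weighted agreement at the levels above, while the values that get aggregated are still the current level's embeddings.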
[1962.92–2064.44] All right, back to classic Yannick, who is thoroughly confused by these things. Yeah, I'm not good at coming up with math on the spot, but I hope you can see what it's doing. If you simply take the first k, you would simply stay at that layer, and it would be what Hinton said. But what I'm saying is that you should also consider how much your higher layer, one layer up from you, agrees with one layer up from the thing you want to attend to. So you also compute that inner product between the embeddings, and you add that to the softmax distribution. Initially, the softmax distribution would be something like: you should attend to this thing, and this thing, and this thing a lot. But then the next layer up the hierarchy would maybe say, well, we agree, because these are in the same thing, but this one maybe not so much, and you would add those together, maybe with a lambda factor in here. Then you go one layer up, and it would say, okay, everything over here basically agrees, but everything over there basically doesn't agree, so you would add that, maybe with a lambda squared. As you go up the layers it becomes less and less important, but still, you'd consider it. All right. Now, if this is going to work out, cite the channel.
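A minimal NumPy sketch of that weighting scheme, following the formula above (the shapes, the value of lambda, and the function names are illustrative assumptions, not anything from the paper):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def tree_modulated_attention(levels, l, lam=0.5):
    """levels: list of (n_locations, d) arrays, one per level, bottom to top.
    Attention logits are the lambda-weighted sum of inner products from level l
    up to the top; the aggregated values are the current level's embeddings."""
    n = levels[l].shape[0]
    logits = np.zeros((n, n))
    for k, x in enumerate(levels[l:]):       # k = 0 at the current level
        logits += (lam ** k) * (x @ x.T)     # agreement at level l + k
    weights = softmax(logits, axis=1)        # one distribution per location
    return weights @ levels[l]               # values: current-level embeddings

# toy usage: 4 locations, 3 levels, 8-dimensional embeddings
rng = np.random.default_rng(0)
levels = [rng.normal(size=(4, 8)) for _ in range(3)]
print(tree_modulated_attention(levels, l=0).shape)  # (4, 8)
```

Setting lam to zero recovers plain same-level attention, which is what the text above calls "what Hinton said".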
[2056.56–2135.52] Now back to what Hinton says: this is actually the system, this is the system in a nutshell. You're going to input the image at the bottom, and Hinton says you could use something like a conv net at the very bottom to get it into the columns. But then, at every time step, you're going to pass information up the columns, down the columns, and between the same layer of the different columns. And at some point this is going to stabilize (I don't know if it has cycles; it probably does not have cycles), so at some point this comes to an end. And when it comes to an end, it should be that the object-level embeddings agree on an object, the part-level embeddings agree on what parts there are, the sub-parts agree, and so on. They form these islands, the islands give rise to a parse tree, and the parse tree can tell you what object is there, what it is made of, and where these parts are in the image, and so on. So exactly, that is it.
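A rough sketch of what one such time step could look like (the simple averaging of contributions, the identity placeholder networks, and all shapes here are assumptions for illustration; the text above only fixes that information flows up, down, and laterally at every step):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def glom_step(levels, bottom_up, top_down):
    """levels: list of (n_locations, d) arrays, one per level (all columns share levels).
    Each location's new embedding combines its previous state, a bottom-up message
    from the level below, a top-down message from the level above, and the
    attention-weighted average of the same level across the other columns."""
    new_levels = []
    for l, x in enumerate(levels):
        parts = [x]                                    # previous time step
        if l > 0:
            parts.append(bottom_up(levels[l - 1]))     # up the column
        if l < len(levels) - 1:
            parts.append(top_down(levels[l + 1]))      # down the column
        attn = softmax(x @ x.T, axis=1)                # lateral, same level
        parts.append(attn @ x)
        new_levels.append(np.mean(parts, axis=0))
    return new_levels

# toy usage: iterate a few settling steps with identity placeholder networks
rng = np.random.default_rng(0)
state = [rng.normal(size=(6, 8)) for _ in range(5)]
for _ in range(3):
    state = glom_step(state, bottom_up=lambda h: h, top_down=lambda h: h)
```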
[2128.56–2162.76] And now we're going to look at what Hinton calls some design decisions. How many levels are there? About five; okay, we can skip that. How fine-grained are the locations? Hinton says they could be as fine-grained as pixels, or they could correspond to larger image patches, and he says you could use a convolutional neural network to get it in there.
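Collected into one place, those choices might look something like the following configuration (only the roughly five levels, the pixel-or-patch granularity, and the convolutional front end come from the text; every field name and every other value is a made-up placeholder):

```python
from dataclasses import dataclass

@dataclass
class GlomConfig:
    n_levels: int = 5               # "about five" part-whole levels
    patch_size: int = 4             # 1 for per-pixel locations, larger for patches (placeholder)
    d_embedding: int = 256          # embedding width at each level (placeholder)
    n_timesteps: int = 10           # settling iterations per image (placeholder)
    conv_frontend: bool = True      # conv net maps the image into the columns
```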
[2155.96–2255.62] Does the bottom-up net look at nearby locations? He says yes, the bottom-up net (so this is not the attention network, that's the bottom-up network) could look at nearby locations. But Hinton imagines that if you have bottom-up, top-down, and attention drawing in information, and if you maybe limit that attention to a neighborhood, then the attention will do the job, because instead of looking at neighboring locations in the bottom-up network, you can simply aggregate that information in two time steps. You can do bottom-up here and bottom-up here, and then, using the attention, the lateral mechanism, you can pass that information around this way. It also doesn't bias the network as much toward the immediate neighborhood, so the attention mechanism can sort of look farther, which conflicts with what he's saying on top, that the attention mechanism might only be looking at the neighbors. I think there are different possibilities here, and only looking at neighbors is actually one of the solutions to the problem of having kind of similar vectors at very distant locations down at the lower levels. But I think it's not as good a solution to simply look at how close things are in pixel space, because even though things are close in pixel space, they might be far away in parse-tree space.
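A small sketch of that propagation argument: if the lateral attention is restricted to a one-step neighborhood (the 1-D layout and the masking radius are assumptions for illustration), information still reaches two locations away after two time steps:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def local_lateral_step(x, radius=1):
    """One lateral update on a 1-D row of locations, with attention masked
    to a window of +/- radius around each location."""
    idx = np.arange(x.shape[0])
    mask = np.abs(idx[:, None] - idx[None, :]) <= radius
    logits = np.where(mask, x @ x.T, -np.inf)
    return softmax(logits, axis=1) @ x

# location 0 starts with a distinctive embedding; the others start at zero
x = np.zeros((5, 4))
x[0] = 1.0
x1 = local_lateral_step(x)           # after one step, only locations 0 and 1 are nonzero
x2 = local_lateral_step(x1)          # after two steps, location 2 is reached as well
print(np.abs(x1).sum(axis=1) > 0)    # [ True  True False False False]
print(np.abs(x2).sum(axis=1) > 0)    # [ True  True  True False False]
```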
[2248.2–2281.58] How does the attention work? We've already looked at this: the way one location attends to another location is going to be the softmax of the inner product between the embeddings here, and the values are also going to be just the embeddings at that layer.
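So, unlike a standard transformer layer, there are no learned query, key, or value projections here; the level embedding plays all three roles. A tiny worked example with made-up vectors:

```python
import numpy as np

x = np.array([[1.0, 0.0],   # location A
              [0.9, 0.1],   # location B, almost agrees with A
              [0.0, 1.0]])  # location C, points somewhere else

logits = x @ x.T                                             # pairwise inner products
weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(weights[0].round(2))                                   # [0.44 0.4  0.16]: A mostly attends to itself and B
new_x = weights @ x                                          # the values are just the embeddings themselves
```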
[2271.6–2380.72] The visual input: he says a convolutional net could be used. Color and texture: he gives this example that if an object is entirely pale, or entirely green, or entirely... I don't even know how to pronounce this... the color of a part is straightforward, but what color is the whole object? So, this entire notion of capsules, by the way: Hinton imagines these embeddings as representing properties of the object, so the cat-ear embedding represents not only the fact that it is a cat ear, but also different properties of the cat ear, and even its location in the image is in the embedding. And we know that transformers must be doing something like this, because we feed in positional embeddings at the very bottom, for example, and they can still compute things in terms of positions. So there's an intrinsic connection between capsules and the transformer architecture. He says one of the motivations of GLOM was the idea that the whole object has a compound color, which might be called pale-green or mauve, and at the object level, every location belonging to the object has exactly the same compound color; the object is whatever this is, all over. When deciding which other locations at the object level to attend to, preference would be given to locations with a similar compound