transcription (audio clips, 3.99-4 s; strings 1-1.4k chars) | |
|---|---|
Hi, I'm Stephen Jones and I'm one of the | |
And one of the ways that that comes out is that there are key performance elements. | |
be this one and a half terabytes per second of data, right? | |
do the division, that's 194 billion double precision values per second. | |
giving me a peak performance based on memory of just 194 | |
gigaflops. Right now I'm only just beating the 1996 | |
computer in the world. | |
So let's have a look at this memory thing. Let's have a closer look at. | |
how it works, because it's so important to the performance of the | |
machine. A single bit of memory is a capacitor. | |
And how the cell holds a charge: it's full for a one bit on the left, or it's empty for a zero. | |
The memory is read by switching on the transistor, which connects | |
you need to take into account. Even while the bulk of your program can be pretty naive C++, | |
it to a wire, the bit line. The wire then carries a | |
based on the charge in that capacitor, so it's the wire that records either | |
an on or an off, a one or a zero. | |
DRAM chip consists of millions of these cells all connected together. | |
a big 2D matrix. This matrix layout lets me access any row, any column. | |
And this is why it's called random access memory. That's the random access. | |
as against, say, a magnetic tape, which has linear access. | |
Data is addressed by a row and a column index. | |
which are taken from the requested address. | |
First, the row is accessed. All the cells in the row... | |
Plus, I'm not going to teach you CUDA today; there's not enough time for that. | |
activated, and their state is copied up to these things called the sense amplifiers. | |
and the sense amplifiers read the tiny charge | |
on each of the capacitors in the cells and turn them into well-defined voltages | |
that can much more easily be read in the next step. The problem is... | |
the charge in the capacitor is drained as this happens, right? I'm connecting a wire | |
to the capacitor and draining all the electrons out. And so the data in the row is destroyed. | |
I'll come to that in a moment. Next. | |
the column access takes place. Instead of reading from memory cells, | |
row is already in the amplifiers, so it reads the data held in the amplifiers. | |
much quicker, much easier to read than a row because the amplifiers will produce a strong clear | |
But I'll teach you a few things that I think are vital to know. | |
signal and so I can read more quickly. | |
You can read repeatedly from the amplifiers because they hold that voltage. You can read as many | |
times as you like from this row. So if you can open a row and use it repeatedly, | |
then you will not have to deal with the capacitors. | |
Because it's so common in fact to read adjacent memory locations in a | |
there's this thing called burst mode, where a single request returns multiple words of | |
data. This is a huge deal because it means I don't have to pay for each individual request over and | |
over again and pretty much every processor in the world uses this because the | |
system of the processor is always going to go and read multiple bytes at a time. | |
And in the GPU, the cache system is 128 bytes at a time. I'll talk about the cache system | |
more when you're programming the GPU. I think the most | |
of that. The problem is the way | |
I need to read another row. I first have to write back the | |
data which was held in the amplifiers. If you remember, the row was drained when it was... | |
it was copied into the amplifiers because the capacitor is discharged. So we now have to... | |
rewrite it to avoid memory corruption. So this makes a page | |
switch expensive; it could involve both the write back and then a new row load, | |
loading up the new row into the amplifiers. | |
Hardware folks call these things rows or pages pretty much interchangeably, so if you hear the term | |
page, then this is what they mean: a row of your memory. Switching page | |
is about three times as expensive as switching column within a page because of this load | |
important thing when doing any engineering is to have an accurate mental | |
store operation, sorry, store and load operation. So, put in your head a couple of | |
model of the system they are using. | |
So today, a mental model. I really think the best way to understand the how of something is to know | |
why it's that way. So this talk is really about why CUDA | |
is the way that it is, not just how. | |
architects of CUDA. I've been working on the CUDA programming model and GPU | |
That's a good question. Why is CUDA the way it is? | |
Right. It's the way it is because. | |
of the laws of physics, quite literally. So, what | |
I mean by that. Well, if you're using a GPU | |
because you want performance of some kind. CUDA is designed in part to allow | |
you to get maximum performance on the GPU. Right, it's obviously, as I said, also designed to make it | |
programmable. Performance is limited by the laws of | |
physics, and I'll get to that in a moment. And so CUDA is designed | |
to do its best to help you work with the hardware | |
and the laws of physics to get good performance. | |
computing since, gosh, 2008, and why | |
So this is actually a really interesting point to make. You see what's special about | |
is that we make both the programming language for the hardware and the hardware | |
for the programming language. This means not only do we get to adjust the programming | |
language to match what the hardware can do, but we also get to adjust the hardware | |
so it's more programmable. The hardware designers come up with really | |
clever stuff to overcome limitations like the speed of electricity in silicon, | |
and CUDA has evolved to allow this clever stuff to be programmable. Literally speaking, | |
CUDA is shaped by the laws of physics. | |
So let me make another possibly | |
contentious statement that I want to look at more closely for a moment. | |
One of the best things about this job is that CUDA really is a co-design between hardware | |
whole talk basically about this at GTC last year, and I put the link below. | |
below as a shameless plug for my talk, but also because if you're interested, | |
it gives you a lot more detail than I'm going to get into right here about the hardware | |
and overcoming physical constraints. Anyway, I won't repeat the whole thing. | |
but I will bring up the main points. Let's start with | |
system, though, because presumably you're paying money and | |
investing time in GPU computing because you want performance. | |
from it. So let's look at what that means. | |
make what I hope is an uncontroversial statement: that getting the best performance | |
is about using all the GPU resources that you can. | |
software. Since CUDA is the way you program the GPU directly, | |
In other words, the more threads I'm running, the more memory I'm moving, the more | |
calculations I'm making, the better I'm probably doing. | |
So these are the feeds and speeds of the Ampere GPU, | |
and the obvious performance metric to look at is flops. | |
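The bandwidth arithmetic quoted in the rows above (one and a half terabytes per second of memory bandwidth, 194 billion double precision values per second, a memory-bound ceiling of roughly 194 gigaflops) can be sanity-checked with a short sketch. The exact 1.55 TB/s figure is an assumption matching the A100-class numbers the talk cites.

```python
# Back-of-envelope check of the memory-bandwidth figures quoted above.
# 1.55 TB/s is an assumed A100-class HBM bandwidth (transcript says ~1.5 TB/s).
bandwidth_bytes_per_s = 1.55e12
bytes_per_double = 8

doubles_per_s = bandwidth_bytes_per_s / bytes_per_double
print(f"{doubles_per_s / 1e9:.0f} billion doubles per second")  # 194

# If each loaded double feeds exactly one floating-point operation,
# memory alone caps throughput near 194 GFLOP/s.
gflops_memory_bound = doubles_per_s / 1e9
print(f"~{gflops_memory_bound:.2f} GFLOP/s memory-bound ceiling")  # ~193.75
```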
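The row-buffer behaviour the transcript describes (cheap column accesses within an open row; a roughly three-times-more-expensive page switch that pays for a write-back plus a new row load) can be illustrated with a toy cost model. All sizes and cost units here are invented for illustration, not real DRAM timings.

```python
# Toy model of the DRAM row buffer described in the transcript: column
# accesses within the open row are cheap, while switching rows ("pages")
# pays for the write-back plus a new row load. Costs are illustrative.
ROW_SIZE = 1024        # bytes per row (assumed)
COLUMN_COST = 1        # one column access within an open row
ROW_SWITCH_COST = 3    # write back the old row, then load the new one

def access_cost(addresses):
    cost, open_row = 0, None
    for addr in addresses:
        row = addr // ROW_SIZE
        if row != open_row:        # page switch needed
            cost += ROW_SWITCH_COST
            open_row = row
        cost += COLUMN_COST        # column access from the amplifiers
    return cost

sequential = list(range(0, 4096, 8))            # 512 doubles, in order
strided = [i * ROW_SIZE for i in range(512)]    # 512 doubles, one per row

print(access_cost(sequential))  # 524: mostly cheap column accesses
print(access_cost(strided))     # 2048: every access reopens a row
```

Walking the doubles in order touches only four rows, so almost every access is a column read from the amplifiers; the strided pattern reopens a row on every single access, which is why the talk stresses opening a row and using it repeatedly.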
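The 128-byte granularity mentioned for the GPU's cache system invites a similar sketch: how many 128-byte lines a warp's worth of loads touches depends entirely on the address pattern. The warp size and line size are standard CUDA figures; the helper function is purely illustrative.

```python
# How many 128-byte lines does a warp's worth of loads touch?
# LINE matches the 128-byte cache granularity mentioned in the talk.
LINE = 128

def lines_touched(addresses):
    # Each distinct line index means another 128-byte transaction.
    return len({addr // LINE for addr in addresses})

warp = range(32)                       # 32 threads per warp
coalesced = [4 * t for t in warp]      # consecutive 4-byte loads
scattered = [256 * t for t in warp]    # 256-byte stride per thread

print(lines_touched(coalesced))  # 1: 32 * 4 bytes fills one line exactly
print(lines_touched(scattered))  # 32: every thread pulls its own line
```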