WEBVTT

00:00.000 --> 00:09.840
This morning, I mentioned WebAssembly and I said, I actually don't know anything about WebAssembly.

00:09.840 --> 00:16.400
So I invited an expert to actually fix this problem and take us to WebAssembly and

00:16.400 --> 00:18.400
back. A round of applause, please.

00:24.400 --> 00:38.400
Thank you. All right. So let's get started then. So who the hell am I? I mean, I've worked for a bunch of different companies, but the last few are the most relevant to this.

00:38.960 --> 00:44.960
At Tetrate I worked on wazero full-time, and then I joined another company that's called Dylibso,

00:45.760 --> 00:51.440
where I'm still working on wazero, and on another WebAssembly runtime that's called Chicory. So hopefully I'm

00:52.800 --> 01:00.080
knowledgeable enough to tell you about this. So what is wazero? wazero is a Go runtime for WebAssembly.

01:01.200 --> 01:07.360
It comes with an interpreter that's entirely written in Go. Well, both the compiler and the interpreter are

01:07.520 --> 01:14.560
written in Go. The thing about the interpreter is that it runs everywhere Go runs. So

01:15.120 --> 01:22.320
it's fully portable: you bring it in as a library and it will just work. But we also implement an ahead-of-time,

01:22.320 --> 01:31.600
load-time, optimizing compiler that targets amd64 and arm64, so you can get great performance with

01:32.560 --> 01:39.040
hopefully not too many compromises. Oh, I'm also giving another talk: if you want to learn more about WebAssembly,

01:39.040 --> 01:44.560
tomorrow there's a WebAssembly devroom, and I'm giving a talk there that has a bit of overlap with this one.

01:46.000 --> 01:53.600
So if you really, really like this talk, you can come and see it again, well, in part, or if you

01:53.600 --> 02:00.480
really hated it, you can bring tomatoes. And I'm going to talk a little bit about Chicory as well,

02:00.560 --> 02:07.520
which is a WebAssembly runtime for Java, which means that since we mentioned Android earlier,

02:07.520 --> 02:13.040
it also runs on Android. So that's interesting. And I'm going to talk about that

02:13.040 --> 02:22.000
too tomorrow. All right. So how many people here are familiar with WebAssembly? Nice. How many people

02:22.000 --> 02:28.800
have used WebAssembly or are using WebAssembly? Cool. Okay, I can go super fast in this first part then.

02:31.440 --> 02:37.760
So WebAssembly is essentially a VM that runs in the browser, alongside the JavaScript VM. Actually,

02:37.760 --> 02:44.240
it's part of the VM that runs JavaScript. And it was born out of a need for better performance

02:45.120 --> 02:50.480
in the browser. So yeah, JavaScript has great performance in the browser, because there's a just-

02:50.480 --> 02:57.520
in-time compiler there that produces native code on the fly. But if you want to write and run code

02:57.600 --> 03:02.240
written in any language other than JavaScript, you have to compile it to JavaScript. And that's not

03:02.240 --> 03:08.320
optimal. I mean, it kind of works, and it's been demonstrated that it can actually produce good code.

03:08.320 --> 03:16.400
But why not come up with a compilation target that was designed to be targeted by compilers?

03:16.400 --> 03:20.560
All right. And that's what WebAssembly is. So it's a safe, portable, low-level

03:20.560 --> 03:27.040
code format, designed for efficient execution and compact representation. But importantly, it's not designed

03:29.120 --> 03:34.960
with only the browser in mind. So it can run outside the browser. It's just designed to be a good

03:34.960 --> 03:41.120
VM. All right. So yeah, it does run in the browser, but it does not make any web-specific assumptions

03:41.120 --> 03:46.480
and does not provide web-specific features. So it can be employed in other environments as well.

03:46.480 --> 03:53.600
And that's what this talk will focus on. So inside the browser,

03:53.600 --> 04:03.280
whether you realize it or not, you may already be using WebAssembly. If you play games,

04:03.280 --> 04:11.040
it's very, very possible that they're compiled, at least in part, to WebAssembly for performance.

04:11.120 --> 04:16.160
If you use Google Earth, part of it is compiled to WebAssembly. If you have used

04:16.160 --> 04:24.240
Figma for graphics, that uses WebAssembly. All right. And there's Google Sheets. There are a lot

04:24.240 --> 04:30.880
of pieces of WebAssembly on the web in production today. All right. But again, I want to focus on

04:30.880 --> 04:37.200
use cases outside the browser. There are use cases where people compare it to container

04:37.280 --> 04:43.760
technology, but what I'm mostly interested in is software

04:43.760 --> 04:50.240
extensions. All right. So when it comes to extending software, especially when it's tricky to

04:50.240 --> 04:58.640
rebuild, as with large code bases, like for instance C++ code bases, where you want to plug in

04:58.640 --> 05:05.680
user-defined behavior, usually you're making a choice between two languages

05:05.680 --> 05:12.800
today, right? In the past there were other languages, but mostly today it's about Lua and JavaScript.

05:12.800 --> 05:19.040
And of course they work, and it's great, and they're usually pretty efficient too, because there

05:19.040 --> 05:23.680
are language VMs that work pretty well for these two languages. But you're making a choice for

05:23.680 --> 05:30.560
your end users. They will be forced to use either Lua or JavaScript. They will have to learn

05:30.560 --> 05:37.920
those languages, and that's a constraint. What about WebAssembly? It's a minimal VM. It's

05:37.920 --> 05:45.280
fast to boot and it's easy to embed. So as I mentioned earlier, most JavaScript VMs nowadays

05:46.000 --> 05:51.200
ship a WebAssembly runtime. So you could embed just JavaScript and be done with it, and some people

05:51.200 --> 05:57.360
do that. But Wasm is a much smaller specification compared to JavaScript. Well,

05:57.360 --> 06:04.880
JavaScript essentially now incorporates, embeds, includes the Wasm specification, so the JavaScript specification

06:04.880 --> 06:14.560
is by definition larger. So Wasm libraries are in general smaller than JavaScript runtimes.

06:15.840 --> 06:21.760
There are companies that are already using WebAssembly for the purpose of letting users write

06:21.760 --> 06:26.400
their own code. For instance, Redpanda is an alternative Kafka implementation that ships

06:27.520 --> 06:33.440
data transformations that use WebAssembly, so that users can write them using their favorite language,

06:33.440 --> 06:42.240
including, obviously, Go. Envoy is a network proxy. You can write filters using a number of

06:44.560 --> 06:51.120
strategies. There are predefined filters, there are native filters. You can write them

06:51.120 --> 06:58.160
using a native language, or Lua of course, or WebAssembly. So,

06:59.680 --> 07:07.680
or OPA, if you're familiar with Open Policy Agent: they compile policies to WebAssembly,

07:07.680 --> 07:15.440
so you can then run them everywhere. What about wazero? wazero is already used

07:15.440 --> 07:19.120
in a bunch of different projects. This is a slide with just a few of the projects that are

07:19.200 --> 07:26.480
already using wazero in production. There are more on the wazero repo, but there's

07:26.480 --> 07:31.920
MOSN, which is a network proxy similar to Envoy. There's Kubernetes: they just shipped

07:32.640 --> 07:40.560
custom scheduling using wazero. Aqua Trivy is a security scanner. Also a shoutout to Mechanoid,

07:41.360 --> 07:50.320
which is a framework to run WebAssembly on tiny devices; it's wazero compiled using

07:50.320 --> 07:59.680
TinyGo. That's cool. All right. So wazero lets your users write software extensions using their

07:59.680 --> 08:07.040
favorite language and run them in a sandboxed environment. So, a shoutout to my company too:

08:07.920 --> 08:16.160
that's an old slide; the number of languages is growing, both in the embedding part, the first

08:16.160 --> 08:23.040
point, and the third point. All right. So now it's like 17 languages and counting, nine languages

08:23.040 --> 08:28.720
and counting for Wasm-targeting languages. But regardless, Extism is an open source project

08:29.520 --> 08:35.920
that abstracts away some of the nitty-gritty about WebAssembly and using WebAssembly in your

08:36.800 --> 08:40.480
applications. And it provides you with a unified interface regardless of the language your

08:40.480 --> 08:46.320
application is written in. So we provide a framework so you can plug a WebAssembly runtime in there,

08:46.320 --> 08:52.560
and on the other side we provide a framework so you can compile your plugins using your favorite

08:52.560 --> 09:00.720
language and use them as plugins for your host application. And that's why there are several

09:00.800 --> 09:09.120
different WebAssembly runtimes we contribute to, because we use all of them. We use V8. We target

09:09.120 --> 09:16.080
browsers, so there's also SpiderMonkey and WebKit there. And then there's Chicory, the Java

09:16.080 --> 09:22.560
one. Well, Chicory for Java there, and wazero, among others. It provides this unified interface,

09:23.440 --> 09:28.000
which simplifies a lot. But we're not going to go into that; I just wanted to give another

09:28.000 --> 09:36.640
quick shoutout to mcp.run, which is a project that we just launched that allows you to

09:36.640 --> 09:43.040
write your own agent tools using WebAssembly, and it uses all of these open source projects

09:43.040 --> 09:49.120
I just mentioned. So, mcp.run, if you want to play with it. It uses the Model Context Protocol;

09:49.120 --> 09:54.480
if you're familiar with it, it's something people have been talking about for the last

09:54.480 --> 10:01.120
few weeks, and yeah, it's a fun project. All right. Now let's talk Wasm. What does Wasm look like?

10:01.840 --> 10:09.040
So let's say you have this function, written using, say, TinyGo. This will, more or less, it's

10:09.040 --> 10:13.920
not really true, but for the sake of this conversation it's all right, this will

10:13.920 --> 10:19.600
compile to pretty much something like this. So it's a stack-based VM: we have opcodes that push

10:19.600 --> 10:27.440
values onto the stack and opcodes that pop values off the stack. So local.get puts

10:28.400 --> 10:34.160
values on the stack; subtraction is a binary operation, so it pops two values,

10:34.160 --> 10:38.320
then it pushes the result onto the stack, and that's how it works. That's pretty much how all

10:38.320 --> 10:45.440
operations work, and that's how WebAssembly looks. WebAssembly runs in the browser and outside the browser,

10:45.440 --> 10:52.240
so on the top you can see how you use it from JavaScript. We exported this subtraction function

10:52.240 --> 11:00.320
that we saw earlier, this one; it's exported, and so we can call this sub function

11:00.320 --> 11:06.880
from JavaScript at the top. And at the bottom you can see Go using wazero, and as you can

11:06.880 --> 11:12.640
see, the number of lines is comparable, and they obviously produce the same result.

11:15.760 --> 11:21.680
Wasm bytecode looks a little bit like JVM bytecode, which I mention because I'm familiar with the

11:21.680 --> 11:28.400
JVM, to the point that you can really just scan through these lines of code and find a very

11:28.400 --> 11:34.320
strict correspondence. But it differs in a significant way when it comes to control flow. On the JVM,

11:34.320 --> 11:39.680
control flow is unstructured: you have conditional and unconditional jumps, and you can jump in this

11:39.680 --> 11:46.080
case forwards but also backwards, and that makes it harder to validate and make sure that your code

11:46.080 --> 11:54.880
is not doing, you know, weird stuff. But WebAssembly does not allow you to do unstructured jumps. It comes

11:54.880 --> 12:03.920
with structured control flow: blocks, loops, if-then-elses. And that allows the

12:03.920 --> 12:10.240
validator to do stricter checks on what's happening in your code. Another interesting

12:10.240 --> 12:17.280
thing about WebAssembly: it does not provide a standard library in general. So

12:17.280 --> 12:22.800
essentially, WebAssembly at this point is a glorified calculator. It only does computation; it's pure

12:22.800 --> 12:29.520
compute, as they say. So the only way to do side effects and interact with the world is to provide

12:30.080 --> 12:36.800
functions that you can import. Right? So: you can export functions so you can call them,

12:36.800 --> 12:43.040
and then you can import functions that are potentially defined externally. And these functions, which you can

12:43.040 --> 12:48.880
define using your favorite language, in our case Go, or JavaScript in the case of the browser, can provide

12:48.880 --> 12:54.960
capabilities that allow you to interact with the external world, and so not just produce heat, but also

12:55.840 --> 13:04.560
write to disk, make data flow, that kind of thing. Yeah, there's a set of, let's say, standardized,

13:04.560 --> 13:11.840
or somewhat standardized, agreed-upon interfaces called WASI that are

13:11.840 --> 13:19.840
POSIX-like and give you primitives to write files, access the network, and stuff like that. And that's

13:19.840 --> 13:26.080
what you should think about when you read the word WASI, more or less. And that's how they work, essentially:

13:26.080 --> 13:32.880
they provide, for instance, this fd_read primitive, and this fd_read reads from a file descriptor

13:32.880 --> 13:38.800
that's virtualized, so it's not a real file descriptor generally, but it's, you know, an index

13:38.800 --> 13:48.320
into something. All right. Okay, so as I mentioned, modules are all about functions, and they can import

13:48.320 --> 13:55.200
and export functions; that's how they work. They can also export memories, tables, globals, but we won't

13:55.200 --> 14:04.080
go into that today. So, for instance, you could have a function called use_add, defined here,

14:04.880 --> 14:12.080
that is exported here, and that is calling a function add that is imported. And as you can see,

14:12.080 --> 14:18.080
the imported function does not have a body, because it's provided externally; it's defined somewhere

14:18.080 --> 14:25.360
else. And that's how it gets translated into WebAssembly, at the bottom. So how do you wire things

14:25.360 --> 14:33.360
together? How do you plug these function signatures into an implementation? These are what are usually

14:33.360 --> 14:39.520
called host functions, because they are provided by the host, that is, the environment that embeds

14:39.520 --> 14:44.240
your WebAssembly runtime. In the case of the browser, the host is usually

14:44.240 --> 14:50.000
the browser, which means a JavaScript environment. So here we're defining the add function;

14:50.000 --> 14:55.600
remember, the add function had only a signature and not a body, so we define the add function there.

14:56.240 --> 15:03.040
This way we can call the use_add function; remember, the use_add function is calling add,

15:04.160 --> 15:09.520
and it won't break, because there's an actual implementation for add. This is in JavaScript; at the

15:09.520 --> 15:14.400
bottom you can see how we do it with wazero. And regardless of the syntax, regardless of the

15:14.400 --> 15:19.040
methods that we're calling, it's pretty much the same thing: we define an add function here,

15:19.440 --> 15:24.400
and we define an add function here. Here we give it a name, add, and here we define it inline

15:24.400 --> 15:30.960
using an anonymous function, and we call the exported function. It's pretty much the same, just

15:30.960 --> 15:38.000
a slightly different API, but more or less the same. So, there are multiple languages that you can use to write

15:39.760 --> 15:45.040
WebAssembly, provided that there's a compiler that is able

15:45.040 --> 15:50.320
to compile to WebAssembly, or that the language's interpreter has been compiled to WebAssembly.

15:50.320 --> 15:57.360
For instance, Python runs on WebAssembly by means of having the interpreter, well, the runtime,

15:57.360 --> 16:08.960
compiled to WebAssembly. So it's polyglot. All right, what about Go? Obviously Go

16:08.960 --> 16:14.080
works with WebAssembly: it compiles to WebAssembly, and TinyGo too. These are old slides with old versions

16:14.080 --> 16:19.920
of the compilers, but TinyGo was the first to provide support for Wasm, as far as I know,

16:21.520 --> 16:27.200
or at the very least it was the first to provide very tiny output, and the first to provide

16:27.200 --> 16:34.960
import and export that worked seamlessly, and it's still the best toolchain to this day if you want

16:35.120 --> 16:41.600
to compile Go to WebAssembly. But Big Go is catching up:

16:42.800 --> 16:48.320
since Go 1.21 you have support for the WASI set of libraries, and you can

16:48.320 --> 16:55.600
already use it, but you cannot import, you cannot export

16:57.040 --> 17:03.200
functions in this case. But the next version of Go, 1.24, will close the gap, so

17:03.280 --> 17:11.120
you will be able to use it as a drop-in replacement for TinyGo. But TinyGo will still provide

17:11.120 --> 17:22.320
the best results when it comes to the size of executables. So wazero is so portable that you can

17:22.320 --> 17:31.600
compile wazero using the Go compiler, and also the TinyGo compiler, and create a Wasm binary

17:31.680 --> 17:43.360
that can run in wazero. So you can Wasm while you Wasm. And this, I don't know

17:43.360 --> 17:51.600
if you can see it from there, but this is a screenshot of Doom in ASCII art running nested:

17:51.600 --> 18:00.320
wazero inside another wazero, in a terminal, just because we can. So how does it work?

18:00.960 --> 18:08.640
Whoa, 12 minutes, I've got to go fast here. All right. There are many runtimes, there are many runtimes,

18:09.280 --> 18:14.960
but they all have one thing in common: most of these runtimes compile

18:14.960 --> 18:21.200
to native code, and they are native libraries. So that means you have to use cgo if you want to

18:21.200 --> 18:27.200
link them against a Go executable. And that's where wazero comes to the rescue.

18:27.200 --> 18:34.400
wazero was started in 2020 by Takeshi at Tetrate. At the time, all Wasm runtimes were native libraries

18:34.400 --> 18:42.240
and required you to use cgo. Now, I would argue that some of the

18:42.240 --> 18:48.000
defining features of the Go runtime and the Go ecosystem are static linking, so you have self-contained

18:48.000 --> 18:55.520
executables; cross-compilation with no pain; and goroutines for convenient concurrency. Right? But

18:55.600 --> 19:04.240
unfortunately, if you compile statically, you cannot really load dynamic binaries,

19:04.240 --> 19:13.520
you cannot dynamically load libraries. I mean, you can do it, but it's kind of painful. Cross-compilation

19:13.520 --> 19:20.480
becomes a pain, because if you link against native libraries you have to rely on a C compiler

19:20.480 --> 19:26.000
for that host platform, or you have to set up the C toolchain. I mean, you can do it, but

19:26.000 --> 19:32.800
still. And goroutines are an abstraction over OS threads, so whatever native code you

19:32.800 --> 19:40.080
bring in must be aware that there's a different runtime there. So in general, foreign function interfaces are

19:40.080 --> 19:45.600
a pain wherever the language has its own runtime; this is true for Python, this is true for Java, and it is just as

19:45.680 --> 19:53.360
true for Go. Moreover, if you're linking against native libraries, there are no boundaries between

19:53.360 --> 19:59.200
the Go runtime and what these native libraries do. So this is how the memory looks: they're all

19:59.200 --> 20:04.640
mixed up. But in WebAssembly there's isolation: memory is isolated and virtualized,

20:04.640 --> 20:10.960
so modules cannot cross their boundaries; they cannot overwrite and corrupt the memory of the host.

20:11.360 --> 20:17.520
All right, so wazero is pure Go, entirely, and the main goal of this talk, in the few

20:17.520 --> 20:25.680
minutes left, is to show you how that works. Let's see if I can manage. Mostly, I can manage; the problem

20:25.680 --> 20:34.480
is whether you will follow, that's the issue here. All right, so how does wazero work? In

20:34.480 --> 20:43.680
general, it works at runtime: you load a WebAssembly binary and tell it: please, Go, please,

20:43.680 --> 20:49.280
wazero, load this WebAssembly binary and compile it. Now, regardless of whether you're running in interpreter

20:49.280 --> 20:57.360
mode or in compiler mode, wazero will compile it to an internal representation, and then at runtime

20:57.360 --> 21:04.080
you will instantiate this compiled representation in order to execute it. Obviously we

21:04.240 --> 21:11.520
compile it beforehand so we don't have to do it at runtime over and over again. So how does, and this is

21:11.520 --> 21:20.480
where the title, from Wasm to asm, comes in: function compilation, how does that work? All right, well,

21:20.480 --> 21:26.320
obviously first we need to decode the module and validate it, and that's just standard across all of the

21:26.320 --> 21:32.320
WebAssembly runtimes. Then we compile the module into this executable representation. So how does

21:32.320 --> 21:38.720
the interpreter work in that regard? We do some sort of translation into the

21:38.720 --> 21:47.360
internal representation. It's another form of bytecode, essentially. The Wasm bytecode

21:47.360 --> 21:53.360
can be transformed into a more compact representation for efficiency, because a lot of opcodes

21:53.360 --> 21:59.440
repeat: in WebAssembly there are multiple opcodes depending on the type of the operands, 32-bit integers,

21:59.440 --> 22:06.560
64-bit integers, 32-bit floating point, 64-bit floating point, and instead we just use one

22:06.560 --> 22:12.880
opcode that switches over the type, and so we essentially compact the representation internally.

22:13.440 --> 22:19.600
Then, after we do this very lightweight translation, we just switch over the opcodes

22:19.600 --> 22:26.080
inside a loop, and loop over those, pretty much, and that's how it works. So that's the interpreter.

22:27.040 --> 22:33.520
What about the compiler? So originally we had a very straightforward compiler from WebAssembly

22:33.520 --> 22:38.960
to native code, but then we built the optimizing compiler; most of that work Takeshi actually did.

22:40.400 --> 22:47.760
The new compiler architecture is a proper multi-stage architecture where we have several passes,

22:48.720 --> 22:53.520
and it's inspired by state-of-the-art compilers and just-in-time compilers such as V8. So

22:53.520 --> 23:00.960
that's the new compiler. I'm going to go through this, if we have enough time, very fast,

23:00.960 --> 23:07.520
but this is not a compiler class, so there's very limited time for that. All right, so let's say we have this

23:07.520 --> 23:15.840
WebAssembly binary on the left, a WebAssembly function: it does a sum, and there's a subtraction

23:15.840 --> 23:22.720
with the result, right: it sums two values, then the result is subtracted from the third value that you see on the

23:23.520 --> 23:30.960
right. This is translated into an internal representation, and essentially it removes the stack

23:30.960 --> 23:36.720
representation and instead uses a value-based representation. This is a normal, a traditional,

23:36.720 --> 23:44.400
I don't know what to call it, a common representation, and it uses basic blocks as

23:44.400 --> 23:51.760
the way to represent control flow. It essentially transforms a listing of code into

23:53.200 --> 23:59.920
a graph. Inside every node of the graph there are instructions,

24:01.520 --> 24:09.440
but these instructions do not include jumps. Instead of jumps we use arrows, we use edges; the

24:09.600 --> 24:15.200
jumps, the branches, are themselves represented by edges in the graph. And essentially,

24:15.200 --> 24:20.800
by doing this transformation, and another transformation that gives it the name SSA, single

24:20.800 --> 24:26.000
static assignment, which renames all the variables so each one essentially occurs only once on the left-hand

24:26.000 --> 24:35.520
side, these are all transformations that simplify further analysis later.

24:36.480 --> 24:42.640
After doing this first transform, we are free to do some further analysis and simplifications. So

24:44.880 --> 24:50.480
for instance, we can do optimizations. Suppose you have this listing of C code, and you have a

24:50.480 --> 24:57.120
DEBUG constant, and at build time you can set it to true or to false. If it's false, it will

24:57.120 --> 25:03.840
evaluate to zero at build time, which means that your code here will look like this: if zero. Now, if there's

25:03.840 --> 25:09.920
an if zero, your compiler knows statically, at build time, that the branch never actually gets executed,

25:09.920 --> 25:14.240
and that means that an entire block of code can be eliminated. That's called dead code elimination.

25:15.520 --> 25:22.240
Now, in this particular listing we also know that we have a value a, a value b, and a

25:22.240 --> 25:30.640
value c, and here, at this line of code, a is exactly equal to five, b is exactly equal to six,

25:30.640 --> 25:36.000
and they don't change between those statements. So that means we can

25:36.000 --> 25:41.440
substitute those values in the expression and compute that expression at build time.

25:41.440 --> 25:47.120
This is called constant propagation and constant folding, and it means that instead of all the

25:47.120 --> 25:53.440
listing that we had before, we just return 15 as a constant.

25:53.440 --> 25:59.520
These are the optimization passes that we saw earlier. All right, once you have done all of these optimization

25:59.520 --> 26:04.560
passes, and there are more complicated ones, they don't just delete code, they can also add code

26:04.560 --> 26:11.520
if it's more efficient, we translate that into native code, or at least we start to select

26:11.520 --> 26:17.920
instructions. This is the instruction selection process. In this procedure we replace the internal

26:17.920 --> 26:21.840
representation that we have, whoops, that we have on the left-hand side, with actual

26:22.000 --> 26:29.280
Arm instructions, in this case. And as you can see here, there are

26:29.280 --> 26:35.920
values with a question mark: these are virtual registers. Now, on a machine you have a

26:35.920 --> 26:41.360
finite number of registers, you don't have infinite registers. But before we actually generate

26:41.360 --> 26:46.560
the finalized version of the code, we fake it and just say, okay, I don't care how many registers I have,

26:46.800 --> 26:57.760
infinite. And only in a later stage do we allocate and fit the unbounded number of virtual registers onto the

26:57.760 --> 27:06.240
registers we actually have. So on Arm we have around 30 general-purpose registers for integers

27:07.360 --> 27:13.680
and 32 floating-point/SIMD registers, which might not seem like a lot, but it's actually

27:13.680 --> 27:22.080
plenty compared to amd64. So in the register allocation phase we replace all of the

27:22.080 --> 27:28.640
uses of virtual registers with uses of real registers, spilling values and loading them

27:28.640 --> 27:38.400
back as required. And then finally we generate the code. How does it work? We generate

27:38.400 --> 27:46.480
the binary as byte slices; we actually encode each single instruction as byte slices, and then

27:46.480 --> 27:52.720
we prepare to jump into those byte slices and let the CPU execute them, instruction by instruction.

27:52.720 --> 28:00.320
All of that is done without using cgo, by some fancy trickery using assembly code, and that's

28:00.320 --> 28:06.880
super cool, because the Go compiler allows you to do that. Finally, a few words about function invocation:

28:06.880 --> 28:12.400
well, essentially we use this asm code to set up the registers in such a way that we don't

28:13.200 --> 28:20.400
break the Go runtime, and then, once the registers are stashed away, we jump into the execution.

28:20.400 --> 28:25.200
We jump into the code that we generated; this is called a trampoline.

28:26.720 --> 28:32.560
And what about user code, what about host functions, what about errors? We return an error code,

28:33.200 --> 28:39.520
and then this, uh, error code is returned to the

28:39.520 --> 28:47.040
Go runtime, so it can be safely handled by user code. So we replace the

28:47.040 --> 28:53.920
registers, we safely restore the registers, and then give control back to the runtime.

28:53.920 --> 29:00.400
What about host functions? They work exactly the same as an error, except we return an ID representing

29:00.480 --> 29:06.480
the function that we want to invoke, and then, once the function has been invoked in Go space, we return

29:06.480 --> 29:14.720
control to the executable code in Wasm space, by again doing all the fancy dance with the registers.

29:15.760 --> 29:25.920
And that's pretty much how it works, nothing fancy. And that's all I had. So, my name is Edoardo Vacchi;

29:26.000 --> 29:33.760
you can find me on Twitter, or on Mastodon, or Bluesky, and GitHub, whatever, with this nickname, and

29:33.760 --> 29:38.000
try it out, and stay tuned for wazero. Thank you.

