WEBVTT

00:00.000 --> 00:14.000
Hi everyone, I guess I've got the superb lightning slot during lunchtime.

00:14.000 --> 00:20.760
So I'm Maciek, I'm based in England, and I've spent the last 25 years in networking,

00:21.760 --> 00:29.760
and I've spent the last 10 years working with FD.io and VPP.

00:29.760 --> 00:36.760
So I'm going to talk about testing and benchmarking, specifically in the context of FD.io and VPP.

00:36.760 --> 00:43.760
So for those of you who are in the room, how many of you know about VPP and FD.io?

00:44.760 --> 00:49.760
Okay, over 50% of the room. How many of you are using it?

00:49.760 --> 00:55.760
Over 50% of the room. How many of you hack on it?

00:55.760 --> 00:58.760
About 50%, actually.

00:58.760 --> 01:12.760
So I was supposed to go through all of this, but I'll quickly and briefly talk about VPP and CSIT, and then go into the benchmarking, some results, and some new stuff that we have.

01:13.760 --> 01:15.760
That would be good.

01:15.760 --> 01:25.760
So, there was a previous talk by the Marvell team about VPP.

01:25.760 --> 01:28.760
They showed some details about what VPP is.

01:28.760 --> 01:30.760
But basically it's about vector processing.

01:30.760 --> 01:32.760
It's doing it fast.

01:32.760 --> 01:36.760
It scales linearly, and we have this theme called "Terabit".

01:36.760 --> 01:46.760
So if you Google "Terabit VPP", you will hit a number of different demos, videos, and technical papers showing VPP scalability.

01:46.760 --> 01:56.760
We are currently at over two terabits per two-socket server, and going higher with the next gen AMDs, Arms, and Xeons.

01:56.760 --> 02:05.760
Those come into play later, but it all comes down to being aware of how the CPUs and NICs today actually process this work.

02:05.760 --> 02:13.760
And being able to process the packets in vectors, keeping both the data and instruction caches hot.
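To make the vector-processing point concrete, here is a toy sketch (hypothetical names, not actual VPP code, which is written in C): each graph node runs over the whole vector of packets before the next node starts, so that node's instructions stay hot in the instruction cache.

```python
# Toy sketch of vector (batch) packet processing, in the spirit of VPP.
# All names are hypothetical illustrations.

def node_decrement_ttl(packets):
    # One node runs over the WHOLE vector before the next node starts,
    # so its instructions stay in the i-cache for every packet.
    for p in packets:
        p["ttl"] -= 1
    return packets

def node_lookup(packets, fib):
    # Same idea for the lookup node: one pass over the full vector.
    for p in packets:
        p["next_hop"] = fib.get(p["dst"], "drop")
    return packets

def run_graph(packets, fib):
    # Scalar processing would interleave nodes per packet;
    # vector processing applies each node to the full batch.
    for node in (node_decrement_ttl, lambda v: node_lookup(v, fib)):
        packets = node(packets)
    return packets

vector = [{"dst": "10.0.0.1", "ttl": 64}, {"dst": "10.0.0.9", "ttl": 64}]
out = run_graph(vector, {"10.0.0.1": "eth1"})
```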

02:13.760 --> 02:16.760
It's a pluggable architecture.

02:16.760 --> 02:20.760
So just write a plugin and plug into it.

02:20.760 --> 02:23.760
And there are a number of plugins at the link.

02:23.760 --> 02:30.760
Since CSIT is a partner core project in FD.io, it's almost like a marriage.

02:30.760 --> 02:39.760
And what we're doing is we're basically benchmarking the data plane based on VPP, and also benchmarking DPDK.

02:39.760 --> 02:46.760
With that, we have a quality gate for these technologies, but we also had to optimize the testing methodology.

02:46.760 --> 02:52.760
So I'm pleased to say that we are just in the publication queue for the MLRsearch RFC

02:52.760 --> 03:02.760
in the IETF, and it will be augmenting RFC 2544, specifically focusing on software networking and benchmarking pipelines.

03:02.760 --> 03:06.760
We publish a lot of data and a lot of analytics.

03:06.760 --> 03:14.760
And the good news is that, although it's not always visible, CSIT is actually running in multiple labs, mainly at hardware vendors.

03:14.760 --> 03:21.760
A number of those vendors actually use it to develop their products; talk to them, they've been speaking today.

03:21.760 --> 03:31.760
Why do we need to replace the RFC 2544 binary search? It is just too slow for the volumes of tests that we do.

03:31.760 --> 03:39.760
So that's why we implemented quite an evolved algorithm, MLRsearch, to speed things up.
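As a rough illustration of why a search with a loss-tolerance goal converges on a rate like PDR, here is a drastically simplified sketch (hypothetical names; the real MLRsearch algorithm specified in the IETF draft additionally uses multiple loss-ratio goals and short early trials to converge faster):

```python
# Drastically simplified throughput search with a loss-ratio goal.
# Hypothetical names; not the actual MLRsearch implementation.

def search_rate(measure, lo, hi, loss_goal=0.005, precision=1.0):
    """Binary-search the highest offered load whose loss ratio
    stays within loss_goal. measure(rate) -> observed loss ratio."""
    best = lo
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if measure(mid) <= loss_goal:
            best = lo = mid   # goal met: try a higher load
        else:
            hi = mid          # too lossy: back off
    return best

# Hypothetical device under test that starts dropping above load 700:
dut = lambda rate: 0.0 if rate <= 700 else (rate - 700) / rate
rate = search_rate(dut, 0, 1000)
```

With a zero loss goal this finds an NDR-style rate; with a small tolerance, as here, it finds a PDR-style rate slightly above the zero-loss point.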

03:39.760 --> 03:43.760
For what we benchmark, we rely on donations.

03:43.760 --> 03:49.760
So we have a bunch of Xeons, we have some Arms; the Marvell team sent in an Octeon 10, and we test them.

03:49.760 --> 03:55.760
You can see all the results in the open, if you know how to find them; I'll show you.

03:55.760 --> 04:03.760
We've also had a new NVIDIA Grace server for a year now, so you can see how it performs with VPP.

04:03.760 --> 04:13.760
So there are quite a lot of goodies, the latest NICs, and we are currently asking folks to get us the latest DPUs, like NVIDIA BlueFields.

04:13.760 --> 04:16.760
And it does lots and lots of things.

04:16.760 --> 04:22.760
So this shows a bit of the challenge.

04:22.760 --> 04:26.760
You can probably guess we need help with the UI.

04:26.760 --> 04:29.760
So any of you UI designers, please come and help us.

04:29.760 --> 04:37.760
The blue boxes on the left are basically the test selection and the legend configuration.

04:37.760 --> 04:50.760
The top right is our trending screen that shows daily trending for the PDR and NDR rates; PDR basically allows some tolerance for packet drop.

04:50.760 --> 04:56.760
The bottom one is our release testing, which tests many variations.

04:56.760 --> 05:04.760
A much higher number of combinations of packet sizes, core counts, and so on.

05:04.760 --> 05:06.760
There's quite a lot of analytics data.

05:06.760 --> 05:14.760
The good news is that this is for you to browse, but all of this data is also downloadable.

05:14.760 --> 05:17.760
And you can see here, out of one core, for large packets,

05:17.760 --> 05:20.760
we are at 200 gig; specifically Sapphire Rapids here.

05:20.760 --> 05:24.760
With small packets it's about 64 gig.

05:24.760 --> 05:30.760
This is a similar view, but for the Grace server.

05:30.760 --> 05:39.760
So again, anybody with UI experience, please, please help us.

05:39.760 --> 05:44.760
It's a lightning talk, so I'm rushing a bit.

05:44.760 --> 05:48.760
I'm around later for you to ask any questions.

05:48.760 --> 06:01.760
Now, what I wanted to talk about, and slow down for, is the new things that the VPP folks did; many of them are actually in the room and available for questions and follow-ups.

06:01.760 --> 06:11.760
So here we are introducing a new type of flow and packet processing in VPP, and that's the stateful data plane.

06:11.760 --> 06:18.760
So suddenly we're not dealing with packet-by-packet lookups; we are actually organizing the packets.

06:18.760 --> 06:26.760
We're classifying packets into sessions and dealing with those sessions, and the sessions are stateful.

06:26.760 --> 06:29.760
So what are the capabilities?

06:29.760 --> 06:32.760
It's an alternative way to take on sessions.

06:32.760 --> 06:38.760
And the idea is to have the basic session handling within the VPP pipeline,

06:38.760 --> 06:53.760
but make it pluggable, so that any more sophisticated stateful engines, like firewalls, like Snort or Suricata, IPS systems, can actually plug into it and offload their verdicts for fast-pathing.

06:54.760 --> 07:02.760
And it's basically removing a lot of the redundancy in dealing with the session lifecycle.

07:02.760 --> 07:06.760
So we have sessions coming in from the left in the picture.

07:06.760 --> 07:14.760
They get classified with a 5-tuple lookup, and then they go through some basic nodes that are integrated with the L4 lifecycle.

07:14.760 --> 07:18.760
And then basically they are managed.

07:18.760 --> 07:21.760
The packets are managed as part of this session.

07:22.760 --> 07:24.760
The sessions are bidirectional.

07:24.760 --> 07:27.760
There's a bit of hierarchical service chaining for forward and reverse.

07:27.760 --> 07:36.760
And most importantly, the service chain can be updated over the lifecycle of the session, depending on what's happening with the packets and the verdicts.

07:36.760 --> 07:39.760
So we call it dynamic service chaining.
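The flow described above can be sketched roughly like this (hypothetical names, not the actual VPP implementation, which is C graph nodes): packets are keyed by 5-tuple into a session table, and a plugged-in engine's verdict rewrites that session's service chain so that later packets take the fast path.

```python
# Toy sketch of stateful 5-tuple classification with a verdict that
# rewrites the per-session service chain. All names are hypothetical.

from collections import namedtuple

FiveTuple = namedtuple("FiveTuple", "src dst sport dport proto")

sessions = {}  # 5-tuple -> per-session state, including its service chain

def handle(pkt, inspect):
    key = FiveTuple(pkt["src"], pkt["dst"],
                    pkt["sport"], pkt["dport"], pkt["proto"])
    state = sessions.setdefault(key, {"chain": ["inspect"], "pkts": 0})
    state["pkts"] += 1
    action = state["chain"][0]
    if action == "inspect":
        # Slow path: ask the plugged-in engine (e.g. an IPS) for a verdict.
        verdict = inspect(pkt)
        if verdict == "offload":
            # Verdict received: future packets skip inspection entirely.
            state["chain"] = ["fast-path"]
        elif verdict == "drop":
            state["chain"] = ["drop"]
    return action

# A hypothetical engine that offloads everything on port 443:
ips = lambda pkt: "offload" if pkt["dport"] == 443 else "inspect"
p = {"src": "10.0.0.2", "dst": "10.0.0.1",
     "sport": 1234, "dport": 443, "proto": 6}
first, second = handle(p, ips), handle(p, ips)
```

The first packet of the session takes the inspect path; once the verdict arrives, the chain is rewritten and the second packet is fast-pathed.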

07:39.760 --> 07:46.760
There are a number of existing stateful data plane services already in the repo.

07:46.760 --> 07:54.760
You can check the source, there are docs, and Muhammad here and a few folks wrote a blog.

07:54.760 --> 08:02.760
So feel free to get familiar with that.

08:02.760 --> 08:05.760
So the stateful data plane is kind of a big deal for VPP.

08:05.760 --> 08:08.760
I do believe it's a brand new thing.

08:09.760 --> 08:12.760
We are testing it in CSIT labs with TRex.

08:12.760 --> 08:15.760
So we're limited to multiples of 10 gig.

08:15.760 --> 08:18.760
And we're sort of stretching TRex a bit for stateful.

08:18.760 --> 08:22.760
We have our partners in Intel Labs doing this with Ixia,

08:22.760 --> 08:24.760
at hundreds of gigs.

08:24.760 --> 08:27.760
We have the first benchmarks on Emerald Rapids.

08:27.760 --> 08:37.760
So we observe three million connections per second, UDP and TCP, on a single Xeon core, which is encouraging.

08:37.760 --> 08:42.760
And there are the links to the latest blogs.

08:42.760 --> 08:45.760
You can check it yourself.

08:45.760 --> 08:47.760
And please watch the space.

08:47.760 --> 08:50.760
And now going forward.

08:50.760 --> 08:53.760
We're going to continue developing our stateless packet tests.

08:53.760 --> 08:54.760
We are ready.

08:54.760 --> 08:57.760
We're starting to put all the energy into the stateful part.

08:57.760 --> 09:00.760
So, the baseline of the stateful data plane.

09:00.760 --> 09:02.760
Then the services over the stateful data plane.

09:02.760 --> 09:05.760
And then we want to go after a bit more complex stuff.

09:05.760 --> 09:07.760
Like integration with Snort.

09:07.760 --> 09:10.760
I'm talking to the Suricata colleagues

09:10.760 --> 09:17.760
and friends, to see what they think about fast-pathing the Suricata verdicts.

09:17.760 --> 09:19.760
And of course, we're adding the new hardware,

09:19.760 --> 09:21.760
DPUs and such, as I have mentioned.

09:21.760 --> 09:24.760
Now what we're looking for is.

09:24.760 --> 09:27.760
We need to connect with the traffic generator folks.

09:27.760 --> 09:30.760
And if you're here in the room, or you're talking to them,

09:30.760 --> 09:32.760
please implement the latest standards from the IETF,

09:33.760 --> 09:37.760
like the MLRsearch draft, to make it CSIT-compatible.

09:37.760 --> 09:40.760
And help us with stateful tests.

09:40.760 --> 09:44.760
And of course, for the VPP community here,

09:44.760 --> 09:46.760
please keep reviewing patches,

09:46.760 --> 09:48.760
contributing services,

09:48.760 --> 09:50.760
and now adding the stateful stuff.

09:50.760 --> 09:53.760
And we're very excited about that.

09:53.760 --> 09:56.760
And help us ship it.

09:58.760 --> 10:00.760
And I'm at 10 seconds.

10:00.760 --> 10:01.760
Cool.

10:05.760 --> 10:06.760
Okay, thank you very much, Maciek.

10:06.760 --> 10:08.760
Do we have some questions?

10:10.760 --> 10:11.760
Yes.

10:15.760 --> 10:17.760
Yeah, thank you, first of all,

10:17.760 --> 10:19.760
for the amazing work on CSIT,

10:19.760 --> 10:24.760
because I was checking benchmarking results after a new release.

10:24.760 --> 10:26.760
It's kind of a big deal.

10:26.760 --> 10:32.760
And my question is, is there a way to run those benchmarks with

10:32.760 --> 10:35.760
TRex on hardware?

10:35.760 --> 10:37.760
On your own hardware?

10:37.760 --> 10:38.760
Yeah, of course.

10:38.760 --> 10:43.760
And my last question is: how?

10:43.760 --> 10:44.760
Yeah.

10:44.760 --> 10:45.760
Well, let's chat.

10:45.760 --> 10:47.760
There are actually folks who presented before.

10:47.760 --> 10:49.760
And that was the previous talk.

10:49.760 --> 10:51.760
who have been running it for a while.

10:51.760 --> 10:55.760
And they've actually been using it as part of their network.

10:56.760 --> 10:59.760
But we have a number of people who replicated CSIT.

10:59.760 --> 11:01.760
So we're very happy to help you.

11:01.760 --> 11:05.760
And actually, there are people in the room here who also run CSIT in their labs,

11:05.760 --> 11:07.760
And on their own machines.

11:07.760 --> 11:08.760
So happy to assist.

11:08.760 --> 11:09.760
Thank you.

11:12.760 --> 11:13.760
Okay.

11:13.760 --> 11:14.760
Great.

11:14.760 --> 11:16.760
Well, thank you very much, Maciek.

