WEBVTT

00:00.000 --> 00:11.040
Well, thank you everyone for sticking around to the last talk.

00:11.040 --> 00:16.120
I'll try to keep things relatively short and we can all get on our way to whatever is

00:16.120 --> 00:17.600
coming after this.

00:17.600 --> 00:23.120
So, yeah, I'm going to talk about the Unitary Compiler Collection, which is technically a

00:23.120 --> 00:27.640
yet-to-be-released project, so this is a bit of a sneak preview.

00:27.640 --> 00:32.280
The GitHub is public, but we haven't officially put it out there, so this

00:32.280 --> 00:38.240
is really the first preview.

00:38.240 --> 00:41.400
So this is the team that's been working on it.

00:41.400 --> 00:42.560
This is our fun logo.

00:42.560 --> 00:47.400
There's a sticker over here for it if you want to take one.

00:47.400 --> 00:54.760
Jordan has been leading this project; unfortunately they can't be here, but yeah, this

00:54.760 --> 00:56.800
is props to the team.

00:56.800 --> 01:00.480
So let's go over what's in the name, so Unitary Compiler Collection.

01:00.480 --> 01:06.360
So Unitary, because unitary matrices and unitary operators are the things that, under

01:06.360 --> 01:15.360
quantum mechanics, we operate on in quantum computation; and then compilers

01:15.360 --> 01:20.680
are things that translate code from usually a higher level of abstraction to a lower level

01:20.720 --> 01:27.680
of abstraction. Just a note here on terminology: in quantum, the word transpiler is often

01:27.680 --> 01:34.600
used, or maybe a gate optimizer or something, and usually these things are used when the

01:34.600 --> 01:39.920
level of abstraction is roughly equivalent, so if you're taking a circuit and optimizing

01:39.920 --> 01:45.880
it, and we'll go over an example of that, and then a collection, a group of things.

01:45.880 --> 01:52.760
The idea here is that we don't want to necessarily be the only ones developing things like

01:52.760 --> 01:57.520
transpiler passes because there's only so much that a limited group of people can do,

01:57.520 --> 02:03.840
and also every application of quantum computation may need a different particular compiler

02:03.840 --> 02:09.640
pass to make the problem more efficient.

02:09.680 --> 02:16.000
Obviously, what's in the name is reminiscent of GCC, the GNU Compiler Collection, and

02:16.000 --> 02:20.800
taking a lot of inspiration from the classical compiler infrastructure that's been built

02:20.800 --> 02:30.280
over the past 50 years, and there's already a lot of things happening in that area, but

02:30.280 --> 02:35.880
yeah, we obviously want to take away a lot of lessons from that development.

02:35.880 --> 02:41.200
So first let's just talk about what is a quantum compiler, or maybe in this instance

02:41.200 --> 02:45.240
we're going to do a transpilation, and then a compilation

02:45.240 --> 02:46.600
as well.

02:46.600 --> 02:53.400
So we've seen a bunch of quantum circuits already in today's talks, and so we're

02:53.400 --> 02:57.480
going to think about what are some operations that we can do on a circuit that looks

02:57.480 --> 03:04.200
like this to reduce the gate count, reduce the depth of the circuit, and ultimately,

03:04.200 --> 03:05.520
run on the quantum computer.

03:05.520 --> 03:09.280
We want our circuits to be as short as possible, because they'll run faster, there'll

03:09.280 --> 03:14.480
be fewer opportunities for errors to occur, and so we get more accurate results.

03:14.480 --> 03:17.960
And errors are generally something that happen a lot more on quantum computers than

03:17.960 --> 03:19.760
they do on classical computers.

03:19.760 --> 03:24.440
When we do classical compilers, we just want to make things fast; error correction is

03:24.440 --> 03:33.680
a thing, but the error rate is a lot lower on a classical computer than on a quantum computer.

03:33.680 --> 03:35.360
So let's zoom in to these two gates.

03:35.360 --> 03:40.960
Let's just say, for the sake of example, that these are two RZ gates, which are a very similar

03:40.960 --> 03:45.280
type of gate, but they have a different angle that's parameterizing them.

03:45.280 --> 03:52.240
We can take that, we can compile that into a single gate where the argument or the angle

03:52.240 --> 03:57.400
that you're rotating your qubit by is the sum of the two angles that you were applying

03:57.400 --> 03:58.400
previously.

03:58.400 --> 04:03.160
Okay, that's a very simple one, like a peephole optimization, but these are things

04:03.160 --> 04:10.000
that you want to take into account, and you'd rather do that rotation in one operation rather

04:10.000 --> 04:14.480
than do a rotation, and then take the next step, and then do another rotation.
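
The merge the speaker describes is easy to check numerically. Here is a small sketch (using plain NumPy, my choice of tool rather than anything shown in the talk) verifying that two consecutive RZ rotations equal one RZ by the summed angle:

```python
import numpy as np

def rz(theta):
    """Single-qubit Z rotation: diag(e^{-i*theta/2}, e^{+i*theta/2})."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

a, b = 0.3, 1.1
# Circuit order "rz(a) then rz(b)" is the matrix product rz(b) @ rz(a).
two_gates = rz(b) @ rz(a)
one_gate = rz(a + b)
print(np.allclose(two_gates, one_gate))  # True: one gate replaces two
```

Because RZ matrices are diagonal, the angles simply add in the exponents, which is exactly why this peephole rule is safe.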

04:14.480 --> 04:20.040
Okay, so now we've compiled, we've made our circuit just a little bit shorter.

04:20.040 --> 04:22.800
Now let's look over here at these three operations.

04:22.800 --> 04:28.400
Let's say in this example, that we have a CX, and then we have two Z gates.

04:28.400 --> 04:34.520
So the CX operation is the controlled-NOT, and a little bit of mathematics, you know, you

04:34.520 --> 04:39.760
just massage the matrices a little bit, will show you that you can actually just do one

04:39.760 --> 04:45.760
Z on the target qubit before the CX, and that's equivalent to doing two Zs; basically

04:45.760 --> 04:52.360
the Z propagates through the CX in such a way that instead of doing two Zs, you can

04:52.360 --> 04:53.840
do one.

04:53.840 --> 04:57.440
One gate reduction, wonderful.
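
A quick way to convince yourself of that propagation identity, under one reading of the example (a Z on each wire after the CX collapses to a single Z on the target before it). This NumPy sketch is my addition; it uses a big-endian basis with qubit 0 as control and qubit 1 as target:

```python
import numpy as np

Z = np.diag([1, -1])
I2 = np.eye(2)
# CX with qubit 0 as control, qubit 1 as target (big-endian basis |q0 q1>).
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

# Circuit A: CX first, then a Z on each qubit (matrices multiply right to left).
circuit_a = np.kron(Z, Z) @ CX
# Circuit B: a single Z on the target qubit, then the CX.
circuit_b = CX @ np.kron(I2, Z)

print(np.allclose(circuit_a, circuit_b))  # True: two Zs collapse into one
```

The underlying commutation rules are that a Z on the control passes straight through a CX, while a Z on the target comes out as a Z on both wires.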

04:57.520 --> 05:01.320
Okay, so we've done that.

05:01.320 --> 05:05.520
Those are just, again, some examples of maybe what people would call peephole optimizations.

05:05.520 --> 05:10.120
They're just a nice little trick, basically.

05:10.120 --> 05:15.320
And now let's say you're ready to take this circuit, and you want to go run it on some

05:15.320 --> 05:17.520
QPU on some quantum hardware.

05:17.520 --> 05:23.760
Most quantum hardware does not have a three-qubit gate, and so whatever classical software

05:23.800 --> 05:27.280
that you use to generate the circuit, you know, maybe doesn't know that, and maybe you

05:27.280 --> 05:29.560
probably don't want it to know what the hardware actually looks like.

05:29.560 --> 05:33.920
You want it to generate something, and then you compile.

05:33.920 --> 05:38.560
So then let's say, for example, a common three-qubit gate is the controlled-controlled-NOT

05:38.560 --> 05:43.200
gate, and you want to say, okay, the computer can't do that, so I need to decompose it

05:43.200 --> 05:46.080
into operations that the computer can do.

05:46.080 --> 05:52.380
Okay, and there is a nice decomposition of the controlled-controlled-NOT, or the Toffoli,

05:52.420 --> 05:58.660
gate, into single-qubit gates and two-qubit gates, but it's really, really big.

05:58.660 --> 06:05.300
So it introduces a whole lot of new gates and new operations into your circuit.
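
To make "really big" concrete, here is a NumPy check (my own sketch, not from the talk) of the textbook Toffoli decomposition, the same one used in OpenQASM's standard library `ccx` definition: 6 CNOTs, 7 T/T-dagger gates, and 2 Hadamards replace a single three-qubit gate:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
Tdg = T.conj().T

def on(gate, qubit, n=3):
    """Embed a single-qubit gate on `qubit` (0 = most significant bit)."""
    out = np.eye(1)
    for k in range(n):
        out = np.kron(out, gate if k == qubit else I2)
    return out

def cx(control, target, n=3):
    """CNOT as a permutation of the computational basis states."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        j = i ^ (1 << (n - 1 - target)) if (i >> (n - 1 - control)) & 1 else i
        U[j, i] = 1
    return U

# Standard decomposition with controls 0, 1 and target 2. The circuit reads
# left to right, so the overall unitary multiplies the steps in reverse order.
steps = [
    on(H, 2),
    cx(1, 2), on(Tdg, 2),
    cx(0, 2), on(T, 2),
    cx(1, 2), on(Tdg, 2),
    cx(0, 2), on(T, 1), on(T, 2), on(H, 2),
    cx(0, 1), on(T, 0), on(Tdg, 1),
    cx(0, 1),
]
U = np.eye(8)
for step in steps:
    U = step @ U

# Reference Toffoli: flip qubit 2 exactly when qubits 0 and 1 are both 1.
toffoli = np.eye(8)
toffoli[[6, 7]] = toffoli[[7, 6]]
print(np.allclose(U, toffoli))  # True
```

Fifteen gates where there was one, which is why re-running the earlier optimizations after decomposition pays off.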

06:05.300 --> 06:10.060
Okay, so now we're getting this idea that, like, we did this decomposition,

06:10.060 --> 06:13.420
and now you might want to redo some of the optimizations that you've had previously

06:13.420 --> 06:20.460
because maybe a neighboring gate can actually join in this operation, or maybe there

06:20.460 --> 06:25.740
is a gate here that it cancels with, and you want to do some analysis after you do decomposition.

06:25.740 --> 06:33.660
But let's say you go about doing that, you do another compilation pass, and there's one other

06:33.660 --> 06:37.700
thing, which is let's say you're running on a machine with limited connectivity in the

06:37.700 --> 06:38.700
qubits.

06:38.700 --> 06:43.700
So right now we're introducing an operation that touches qubit one and qubit three, but what

06:43.700 --> 06:47.500
if those two qubits on the device are not

06:47.500 --> 06:51.940
actually connected in such a way that you can do a two-qubit operation between them.

06:51.940 --> 06:57.540
Then you need to introduce swaps, or you need to rearrange the qubits so that the ones that

06:57.540 --> 07:03.740
you're acting on actually have some physical coupling on the quantum computer.

07:03.740 --> 07:08.020
So now there's no operations between qubit one and qubit three, everything is just between

07:08.020 --> 07:10.460
one and two and two and three.

07:10.460 --> 07:15.540
So we've satisfied what's called the coupling-map constraint, which is sometimes

07:15.540 --> 07:19.780
referred to as qubit routing, qubit mapping, these kinds of things.
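
The routing step can also be checked with matrices. In this sketch (my addition, reusing a basis-permutation CNOT), a CX between the uncoupled ends of a line topology 0-1-2 is replaced by swaps plus a CX on a coupled pair:

```python
import numpy as np

def cx(control, target, n=3):
    """CNOT on n qubits as a basis-state permutation (qubit 0 = top wire)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        j = i ^ (1 << (n - 1 - target)) if (i >> (n - 1 - control)) & 1 else i
        U[j, i] = 1
    return U

def swap(a, b, n=3):
    # SWAP built from three CNOTs, itself a standard decomposition.
    return cx(a, b, n) @ cx(b, a, n) @ cx(a, b, n)

# Line topology 0-1-2: qubits 0 and 2 are not physically coupled, so a direct
# CX(0, 2) is illegal. Route it: bring qubit 2's state onto wire 1, apply
# CX(0, 1), then swap back. Matrices multiply right to left in circuit order.
routed = swap(1, 2) @ cx(0, 1) @ swap(1, 2)
print(np.allclose(routed, cx(0, 2)))  # True
```

The price is visible here too: one two-qubit gate became seven, which is why routing and optimization passes have to be interleaved.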

07:19.780 --> 07:25.780
So you can see all the problems that we're dealing with are very intertwined in that you

07:25.780 --> 07:30.620
may want to do one and then go back and do an optimization pass and then do another and

07:30.620 --> 07:32.340
then another optimization pass.

07:32.340 --> 07:37.020
So it's not very trivial.

07:37.020 --> 07:42.340
Okay, so that's our quick example of doing quantum circuit compilation.

07:42.340 --> 07:50.500
That's like the overall goal that UCC is working towards, and so why are we building

07:50.500 --> 07:51.500
a new tool?

07:51.500 --> 07:54.860
We've heard a lot about a lot of the existing tools that are out there.

07:54.860 --> 08:00.020
So let's take a look at what the ecosystem looks like (a highly incomplete picture).

08:00.020 --> 08:07.420
There are like 20 more blocks here of companies, mostly with their own specific tooling, and

08:07.420 --> 08:14.500
Google has Cirq and they transpile down to their QPUs, and IBM has Qiskit and they have

08:14.500 --> 08:19.620
a transpiler or a compiler down to their hardware and basically every company is doing this

08:19.620 --> 08:27.380
because the tools that are extremely useful just don't exist for everyone.

08:27.380 --> 08:32.060
And this creates a problem. Oh, and there is QASM, and we've heard a little bit about that.

08:32.060 --> 08:38.100
There is QASM, which kind of goes to everyone, but not everyone has full buy-in on using

08:38.100 --> 08:42.220
QASM as an intermediate representation.

08:42.220 --> 08:45.420
And so it's important to look here like what's so good about classical compilers, what

08:45.420 --> 08:48.980
have they enabled and what should we be drawing from them?

08:48.980 --> 08:56.220
So there was a question previously about, like, oh, you know, the diagram that Harshit showed

08:56.460 --> 09:03.940
previously, which has this network, and in order to compile from one, you take this path on a graph, and

09:03.940 --> 09:09.460
someone asked about a hub-and-spoke model, and that's intimately related to what happens

09:09.460 --> 09:16.460
in classical compilers: when running classical code, maybe you have

09:16.460 --> 09:21.060
whatever language, and you want to run on different back ends, and you can support a direct pipe to each

09:21.060 --> 09:29.020
one of those instruction set architectures, but this is very intensive, because then you need

09:29.020 --> 09:36.220
a connection between every language and every back end, which is, one, a lot of code and, two,

09:36.220 --> 09:41.780
it's a lot of maintenance that needs to be kept up, and as something changes in the instruction

09:41.780 --> 09:48.460
set architecture, you need to maintain every one of these n times m connections.

09:48.460 --> 09:53.940
So, you know, obviously one thing that's been learned in classical compilers is that if you

09:53.940 --> 09:59.540
have a standardized intermediate representation that the compiler can operate on and do most of the

09:59.540 --> 10:08.180
optimizations on, then you only need to maintain, you know, the Rust-to-LLVM-IR pipe, and then

10:08.180 --> 10:15.140
all the optimizations can just happen on that, which is conceptually much simpler and, you know,

10:15.140 --> 10:20.100
reduces the amount of work and maintenance that needs to go into this whole infrastructure that we need to

10:20.100 --> 10:21.100
build.

10:21.100 --> 10:28.180
Okay, so that's the motivation behind UCC. How do I install it? It's a Python package; you

10:28.180 --> 10:35.460
pip install it. And then, using it: right now we support these four, however you define your circuit,

10:35.460 --> 10:38.980
and it's just the call to ucc.compile.

10:38.980 --> 10:44.540
We've made it to be very extensible so if you want to do a custom optimization pass or

10:44.580 --> 10:51.380
something, you can. These are the default passes that happen when you run UCC, just the names.

10:51.380 --> 10:57.580
If you want to define a custom pass, let's say you want to do some fancy decomposition

10:57.580 --> 11:03.260
of, you know, your unitaries, you basically just make a new class and inherit from something

11:03.260 --> 11:09.700
like a TransformationPass. A lot of what we've built so far is basically extensions of

11:09.740 --> 11:16.500
Qiskit, because the Qiskit compiler is quite good and quite performant, and we want to plug

11:16.500 --> 11:22.660
in other compilers to it to see, or, you know, as we've been doing, to see if we

11:22.660 --> 11:27.300
draw from multiple different places, whether we can get a performance gain.

11:27.300 --> 11:31.540
So here's just some plots of how it performs generally. What you want to be looking at here is:

11:31.540 --> 11:37.860
this is compiled gate count, so you want lower. As you can see, the compiler that we've

11:37.940 --> 11:44.180
introduced, UCC, is performant with everyone else, and everyone basically does most of the same things,

11:44.180 --> 11:50.660
so, you know, we still need a lot of compiler research to continue pushing down gate counts,

11:51.780 --> 11:57.300
but yeah, we're competitive with everyone else. But this is a very new project; we need lots of

11:57.300 --> 12:02.660
new ideas. The compiler infrastructure for quantum is relatively lacking, as everyone has heard

12:02.660 --> 12:08.980
today: everyone has their own tooling, and there's not a ton of collaboration

12:08.980 --> 12:15.220
between the companies, between the silos of information. So yeah, here's our repo; we have

12:15.220 --> 12:22.260
some documentation, but we need, yeah, we need people using it, we need people testing it out, and so,

12:22.820 --> 12:29.940
yeah, if you have a quantum workload, please just check it out. And then, just the last thing, a

12:29.940 --> 12:35.860
little bit of advertisement: Alessandra and I both work for the Unitary Foundation. If anything

12:35.860 --> 12:40.740
that you saw today inspired you to want to work on quantum software, the Unitary Foundation gives

12:40.740 --> 12:46.820
micro-grants to people who are doing this, no strings attached. It's $4,000, and it's generally

12:46.820 --> 12:53.860
for things that are open-source community projects, you know, whether you're developing a... we

12:53.940 --> 13:02.500
had given a micro-grant to the talk that we saw previously about the quantum type language. It's a

13:02.500 --> 13:08.340
short application, and yeah, so you can go here, unitary.foundation/grants, and get some more

13:08.340 --> 13:16.260
information. And yeah, please check it out. With that, thank you all again for coming today.

13:17.140 --> 13:25.140
Yeah, I forget what the QR code is. I'm pretty sure it's to our repository, but

13:26.740 --> 13:35.700
you scan it and find out. Sweet. Well, that's, uh, that's all for the quantum computing devroom

13:35.700 --> 13:41.140
for 2025. Yeah, thanks everyone for your participation.

13:46.260 --> 13:48.100
yeah

