Quantum Thermodynamics

  • Welcome back from lunch, and let's dive into an exciting topic: quantum thermodynamics!
  • Today, we’ll take a broad look at what quantum thermodynamics is, focusing on diverse perspectives and applications.
  • We'll explore the link between information and thermodynamics, classical examples, and move into the quantum realm.
  • Understanding these concepts can bolster computational efficiency and energy management, especially in the context of quantum systems.

All right, well, I'm really happy to see so many of you made it back from lunch. It's great to have you here, and it's wonderful to have Lídia del Rio here talking about quantum thermodynamics, an area I'm really passionate about.
And I'm sure we'll get to hear a lot more from her. Thank you.

Okay, thank you. Thank you very much for coming here, all of you. So good to see so many familiar faces. And thank you to the organizers for inviting me to give this tutorial.

Since there are already a few talks about thermodynamics at this conference, which I will announce at the end, this will be a very broad and, in a way, a bit superficial tutorial, just to give you kind of an overview of the field.

This is for those of you who are computer scientists or mathematicians, or the kind of physicist that I was before starting to work on thermodynamics. It's just to give you more of an idea of what quantum thermodynamics is and how we're approaching it.

Okay, so let's just start with a small introduction.

Why should we study quantum thermodynamics? Why is this interesting?

There are three different perspectives, in my opinion. On one hand, you have the philosopher. So these are people like me who wonder, like, why is thermodynamics such an effective theory? Is it luck? Is it just an emergent theory? Or can we have some axiomatic formulation, some, like, information-driven approach to thermodynamics? Well, what is really the essence of this theory?

Then, we can have someone who is an explorer who has studied thermodynamics of large systems and now is interested in seeing what happens when they go and study quantum systems or very small systems. You ask, well, do the same laws apply to quantum systems, or do we need to make some corrections? Are these corrections because of the quantum part or just because we have very small systems and there are some finite size effects? Are there some new quantum effects that we can explore and phrase in terms of thermodynamics? So can we expand this framework in this sense?

But all of these people still start from this idea of starting from thermodynamics and then wanting to explore it in their regimes. On the other hand, you can have the engineer, someone who is interested in actually developing a machine and making it efficient according to some parameters relevant in this regime.

By the way, these are the kind of people who developed the first thermodynamic theory. So here you might be interested in very concrete questions like, what's the heat dissipation in quantum computers? How can we keep this low? Or how small can we make heat engines?

In this talk, I hope that I will go over a bit of the three perspectives. We’ll have three parts. In the first part, we'll talk mostly about the relation between information and thermodynamics: work, cost of erasure, the work value of information, etc.

So we’ll look first at the classical case to give you some intuition, and then we look into the quantum case. This will give us a few small results and interesting phenomena but not an overarching framework to treat them.

This is what we’ll see in the second part of the talk. There, we will look at axiomatic approaches to quantum thermodynamics, in particular resource theory approaches.

We’ll see what they can tell us and see why these particular resource theories are justified, and then we’ll talk about directions and open questions.

Ok, so you might think that a priori thermodynamics didn't have anything to do with information, right? You're trying to move trains. What does it matter what information you have about the system?

Maybe the first example where this started to play a role was Maxwell's demon. Most of you are already familiar with this paradox. So imagine that you have a box filled with some gas, okay? And here’s some observer, Alice, and all she can see of the gas is that it has some pressure and volume, and it’s all at the same temperature.

Okay? So she cannot do anything with this box. But then we have someone else who has access to more information. This is a demon, and he has some microscope and he can see all the particles in there. A gas has a certain temperature because it’s formed by particles that are traveling faster and slower with some distribution.

He’s sitting here at this little door, and he can see which particles go fast and which go slow. So he can control this door. This doesn’t cost anything; it just opens a flap. He lets all the slow ones come to the left and all the fast ones stay on the right.

By doing this, he creates a gradient in temperature, with all this cold gas on the left and the hot gas on the right. He can let the hot gas expand and extract some work from this.

For many years, this was a puzzle because thermodynamics tells us that you cannot extract any work from a single heat bath. So how come he can do this? The answer has to do with information.

Let’s see how to treat this piece by piece.

Okay, no. Let's not see how to treat this piece by piece. Let's first think about the general question: this is a nice paradox, but what about concrete applications for us?

The idea is that thermodynamics studies a kind of glorified accounting of heat flows and energy flows, which is relevant because you may want to perform a computation. But you’re also interested not just in the logical circuit that you have, but also in how much power you have to supply to this computer or how much heat will be dissipated.

Also because the heat that is dissipated might cause you decoherence problems. These are the kind of questions that, funnily enough, also have to do with information and thermodynamics and will also be addressed.

Now, Szilard boxes are a very simple toy model invented by Szilard. Here's the idea: you have one bit of information encoded in a box with a partition in the middle. You have a particle on one of the two sides. Around this box, you have a heat bath.

So some environment at some temperature keeps the particle moving with some momentum. If you don't like the idea of a single-particle gas, you could think of many gas particles that are either all on the right or all on the left. In either case, you have one bit of information.

The idea would be: how can you extract work from this one bit of information? If I know where the particle is, I can attach a little bucket with some mass, and then the gas expands, lifting the mass as the volume increases.

If I integrate how much it expands, it gives me actually kT log 2 of work. This is the difference in potential energy of this thing: I now have a mass that is high up, and I can use this as stored work.
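The integral behind this number, for a single particle at temperature T expanding isothermally from half the box (volume V/2) to the full box (volume V), using the single-particle ideal-gas pressure p = kT/V':

```latex
W = \int_{V/2}^{V} p \,\mathrm{d}V'
  = \int_{V/2}^{V} \frac{kT}{V'} \,\mathrm{d}V'
  = kT \ln\frac{V}{V/2}
  = kT \ln 2 .
```

The same integral run in reverse is what makes the compression in the erasure protocol below cost exactly kT log 2.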

This is like the classical idea of what work is. We can also go the other way around. Imagine I have this box and I don’t know if the particle is on the left or on the right, but I want to reset this bit.

I start with no information. What I can do is instead of putting this divider here in the middle, I can put it on one of the sides and then just push it. Pushing this costs me again kT log 2 (k is the Boltzmann constant).

This is just a constant to convert between temperature and energy.

So this is the heart of Landauer's principle, which tells you that you can trade information and heat from a heat bath for work. Heat just means energy that we don’t have very good control of, like a heat bath.

Work is the energy that we know where it is. You have this weight that is lifted here. This is really no surprise. What is interesting is this rate, this conversion rate of kT log 2 per bit.

This, by the way, is the solution to Maxwell’s demon problem. So here’s Maxwell’s demon. He already created this difference in temperature between the two sides of the box. You might think, oh, he’s done; he extracted work from nothing.

But if you look at the bigger system, where you also think about the memory of the demon, then this demon had to store this information somewhere about each particle. Either he will just never erase this memory, and then you have this side product which is a new system filled with trash that is now useless.

So it's not a cyclic process. Or, at some point, he'll have to erase all of this data, and the erasure of this data, resetting this blackboard, costs him exactly the same amount of work that he gained in this whole process.

Now, back to our other question about the cost of computations. The solution also has to do with this trade-off between information and work.

The idea, due to Bennett, is that every computation can be split into two parts: a reversible part followed by erasure. It's the same way that when you have a quantum map, you can always dilate it: think of it as a unitary operation on the system and an ancilla, followed by tracing out this ancilla.

Okay, forgetting about this ancilla. The idea is: let's pretend for now that reversible computations are free in principle; then the work cost becomes just the cost of erasure of this extra ancilla.

Otherwise, you end up with all these memories in your computer that are full of useless stuff. So what do I mean by erasure more explicitly?

It would be like formatting a hard drive. This means I had all these bits that I stored here. Some were 0, some were 1, some I don't remember anymore what they are, and I want to set them all to zero.

Or in the case of a quantum system, I have some system that could be in some state, and I want to reset it to a standard state.

To give you some idea of the numbers this limit gives you: these constants are very, very small, such that the erasure of a 16 terabyte hard drive at room temperature costs 0.4 microjoules. To have some idea of what this means, lifting a tomato by one meter on Earth costs one joule.

These fundamental limits are still very far from what the industry can do, but they will become relevant at some point, and they are interesting on their own.
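A quick back-of-the-envelope check of these numbers (a sketch, assuming 1 terabyte = 10^12 bytes and T = 300 K):

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # room temperature, K

# Landauer bound: k*T*ln(2) joules per erased bit
cost_per_bit = k * T * math.log(2)

bits = 16e12 * 8          # a 16 terabyte hard drive, in bits
total = bits * cost_per_bit

print(f"{total * 1e6:.2f} microjoules")  # about 0.37 microjoules
```

This reproduces the "0.4 microjoules" figure from the talk, roughly 3 million times less than the energy of lifting that tomato.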

So you could think, ok, this costs me kT log 2 per bit, but what if I have more information about the thing that I want to erase? For example, you could have a fractal. A fractal looks like a very complex object. If you didn’t know that this was generated by a small program, you might think, oh, wow, I need to erase all these bits.

But since there is a very small program that generates it, what you can do is compress it, in the same way that in coding theory you can code something and then decode it.

Here, you compress it, and this is a reversible process because it can also go back. Now you have a much shorter program, n bits, say, which you can then erase.

In the end, this just costs you n kT log 2. The idea is that in general, we can take any message, compress it (in the ideal limit, down to the entropy of the original message), and then erase it.
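To make the compression argument concrete, here is a sketch with an assumed toy message: N i.i.d. bits with bias p. Compressing first reduces the erasure cost from N kT ln 2 to roughly N H(p) kT ln 2, where H is the binary Shannon entropy in bits:

```python
import math

k, T = 1.380649e-23, 300.0
kTln2 = k * T * math.log(2)   # Landauer cost per bit, in joules

def binary_entropy(p):
    """Shannon entropy H(p), in bits, of a biased bit."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

N = 1_000_000     # message length in bits
p = 0.1           # bias of each bit

naive_cost = N * kTln2                            # erase every bit directly
compressed_cost = N * binary_entropy(p) * kTln2   # compress, then erase

print(compressed_cost / naive_cost)  # about 0.47: less than half the cost
```

The compression step itself is reversible, so in this accounting it is free; only the final erasure of the shorter string is charged.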

This is where information comes into play. Another aspect is that information is also subjective.

In the same way that someone who did not have information about this fractal would have a harder time erasing it, you could also think: perhaps I have a system, and I prepared this bit in state one, and then I tell Alice, "Alice, it's in state one."

To Bob, I tell nothing. Then I tell them both: please erase this bit. Alice, who knows that it's in state one, can just apply a reversible operation like a NOT gate and erase it for free, while Bob has to apply the erasure procedure, which costs him kT log 2.

This is to say that the work cost of erasure depends on the entropy, and for the entropy you have to take into account the memory of your agent. This is a general result in the classical case.

Now, you may think: well, ok, how does this generalize to the quantum case? What needs to change? First of all, I showed you an almost microscopic Szilard box. Is there an equivalent for quantum systems, and what does work even mean?

This idea of using side information is very nice, but how do you use a quantum memory? You cannot just read it because this would disturb the contents of the memory.

Finally, if you have quantum correlations, then this conditional entropy can be negative. Does this mean anything or is it just a quantity that is useful for information processing tasks but doesn’t have a thermodynamic value or meaning?

This is a slow introduction to what we’ll talk about later.

I'll first give you two models for this quantum Szilard box. The first one is a semi-classical model, where the assumption is that we have a work storage system somewhere, but for now this is just implicit.

We have some systems, and we can change the Hamiltonian of the system. To account for energy, we say that if you lift a level by delta E, this costs delta E, but only if the level is occupied. It's in the same sense that if I lift a box, it only costs me energy if the box is full.

Reversibly, I can lower a level, and it gives me delta E if the level is occupied. So here's how you extract work from one bit of information.

Take a qubit. Here are the two states, 0 and 1 originally degenerate. What you're going to do is first you lift this level very high, and it costs you nothing according to this model. We’ll discuss the assumptions behind the model later on.

So you lift one very high; the state is not occupied. You know this, so it doesn’t cost you anything. Now, you connect it to a heat bath. The assumption is that when you connect it to a heat bath and wait long enough, it goes to a Gibbs state, a thermal state.

You start lowering this very, very slowly. What happens in the beginning? Not much happens because this state is not occupied. But as you lower it, the probability of being occupied gets higher.

So with some probability you're gaining delta E every time you do this, and if you then integrate how much you gained by the time you get to the end, you gain exactly kT log 2 on average and end up with a maximally mixed qubit.
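A numerical sketch of this semi-classical protocol, under the stated assumptions: the level starts very high (here 20 kT), and at each small lowering step the gain is dE times the thermal occupation probability p(E) = e^(-E/kT) / (1 + e^(-E/kT)). The integrated average gain approaches kT ln 2:

```python
import math

k, T = 1.0, 1.0          # work in units where kT = 1
beta = 1.0 / (k * T)

E_max = 20.0             # starting height of the lifted level, in units of kT
steps = 200_000
dE = E_max / steps

work = 0.0
E = E_max
for _ in range(steps):
    # probability the upper level is occupied after thermalizing at energy E
    p_occupied = math.exp(-beta * E) / (1 + math.exp(-beta * E))
    work += p_occupied * dE   # lowering by dE yields dE only if occupied
    E -= dE

print(work, k * T * math.log(2))  # both close to 0.693
```

Making the steps larger (lowering faster) makes the sum fall short of kT ln 2, which is the quasi-static requirement the speaker mentions.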

This would be one example of how to have a quantum Szilard box, if you believe that these assumptions are justified. Here's another one that tries to be a bit more explicit, and we'll come back to this framework later.

Here you model your battery system explicitly. Just as before you thought of this weight that you lift, now you can think that you have a harmonic oscillator. Lifting the state of this harmonic oscillator corresponds to storing work, because now we have the oscillator in some high energy state.

This high energy can be stored there like you store this weight. Then later on, you can use this. That’s the idea.

For the heat bath, instead of always thermalizing my system, I just give myself an explicit heat bath, which can be as big as I like and is already in this Gibbs state. It means that I can draw qubits from there.

If the gap is very big, this exponential means that there is almost no probability of being in the excited state. As the gap becomes smaller and smaller, the probabilities become closer.

Now, let’s give myself these two things. The rules of the game are that I can apply unitaries, but only unitaries that conserve energy. I know that I’m not injecting any energy into the system, and I'm not injecting or taking away any entropy. Yes, are these rules clear? If you have questions, I may not see your hands, so just say something.

Using this idea and starting again from a qubit that is in state zero, there is a protocol which takes you to a maximally mixed qubit here and at the same time raises your work storage system, your battery by kT log 2 on average, with fluctuations that decay with 1/sqrt(N), where N is the number of steps that you take.

The idea here is, just like before, in this other semi-classical protocol, you have to do every step very slowly.

Let things thermalize and move just a little, going in this quasi-static regime, where the idea is that you're not throwing away too much energy at any given step of the path. You take many, many of these qubits, such that the probability distribution here matches almost exactly the probability distribution here.

You apply essentially a kind of controlled swap: this acts as your control qubit, and you swap these two. What happens then is that on the first step, with very low probability, the battery goes up, and with most probability, nothing happens.

In the second step, the probability of going up is a bit higher, a bit higher, a bit higher, etc., just like before. When you start here and there’s a big gap, very likely nothing happens. As you go down, the probability that you gain energy increases.

The protocols are very similar, but the models are different. And vice versa: you can also start from a fully mixed state in your system and erase it to state zero. This costs you on average kT log 2.

So this is what we're going to take as a building block, like we took the Szilard boxes as a building block. If you have this, then we can talk about erasure with quantum information.

As I said before, we cannot just go look at the memory, see what it says, and then do a control operation because we don’t want to disturb the memory too much.

For example, imagine you have all these qubits in your lab, and you want to erase the first qubit but preserve the state of the others. Your first qubit is the system that you want to erase, and it’s maximally entangled with the first qubit of your memory, while the other two are in some state.

What you want to do is to erase this first qubit, take it to state zero, and keep the rest as it was. In this case, the system goes to state zero, ρ₂₃ doesn't change, and this doesn't change.

Now it's a maximally mixed state, and more generally, we can have this memory preservation condition, which means: I want to erase S, I have access to some memory M, and my condition is that I don't want to destroy M, nor do I want to destroy any correlations that M might have with another system, any other reference system.

I want to go to this state 0 and S and preserve the rest. Good. I can act on M; I just need to return it to the same state.

The good result is that we can still use this memory optimally, like we could in the classical case. Again, the work cost of erasure becomes the conditional entropy, which now can be negative.

What does this mean? Let’s look at this simple example and see how one could do it.

Okay, we start with this state; the entropy of S given M is minus one, because you have here a pure state (joint entropy zero) minus the entropy of half of it, which is a maximally mixed qubit (entropy one). This gives you minus one. We want to take it to this final state.
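This sign can be checked directly: for a maximally entangled pair, the joint state SM is pure (entropy 0 bits) while the memory marginal is maximally mixed (entropy 1 bit), so H(S|M) = H(SM) - H(M) = -1. A small sketch with numpy:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# |Phi+> = (|00> + |11>)/sqrt(2), system S and memory M
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_SM = np.outer(phi, phi)

# marginal of M: trace out S (contract the two S indices)
rho_M = np.einsum('ijil->jl', rho_SM.reshape(2, 2, 2, 2))

H_SM = von_neumann_entropy(rho_SM)   # 0: the joint state is pure
H_M = von_neumann_entropy(rho_M)     # 1: maximally mixed qubit
print(H_SM - H_M)                    # -1.0: the conditional entropy
```

The protocol below cashes out exactly this minus sign: it extracts 2 kT log 2 and spends kT log 2, netting one kT log 2 of gain.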

Here's a protocol to do it. On the first step, we see a nice entangled state. I'm just going to use my building block, this Szilard-engine kind of protocol, to extract work from it.

So I can unitarily take it to 00. I take kT log 2 from each of them. This gives me 2 kT log 2. You can ask, is this OK? Yes, it’s OK. Because now here you have a maximally mixed state which you already had before, so you did not disturb the memory and did not touch the rest of the memory either.

On the second step, I just say, well, now I have a maximally mixed qubit here on the first one. So I use again the building block. This costs me kT log 2 to raise it.

In total, the work cost was -kT log 2, precisely the conditional entropy. So again, I did this trade-off between information and work. I had these quantum correlations before that I don't have anymore, but in exchange, I gained some energy.

More generally, we can use decoupling results and single-shot protocols and get something that looks like this. The idea is always the same: look for some part of your memory that looks maximally mixed, or compress it to look maximally mixed, then erase that together with the system. You extract work from these quantum correlations, because that part is maximally entangled with the system, and then just erase the system.

This was all about erasing a particular state. I can ask: well, if I want to build a quantum computer, maybe what I want to know is not so much how much it costs to erase a particular state, but how much it costs to implement a certain gate, an operation.

That is very similar again. The result will be that if you have a map that goes from X to X prime, you dilate it, and at the end, the work cost is kind of the work of erasing this ancilla that you want to discard at the end.

It’s like your map goes unitarily to your final output system and an ancilla; this doesn’t cost any work. But you want to erase the ancilla. What else do you have? All I have is this output state.

I use the conditional entropy of the final state. This has been generalized from degenerate Hamiltonians to arbitrary Hamiltonians in Philippe Faist's thesis. In numbers, again, it's a bit pathetic: running a 20 petaflop computation costs you one watt.

We’re very, very far from any relevant limit, which is good. It means that engineers still have a long way to go before we need to do more theory.

Actually, I don’t know if that’s good. Can you cut this part from the video?

Again, these are all results in very specific settings. Later on, I want to generalize this whole approach. But before this, let's just go back almost a century.

Something that I didn't know until a couple of years ago is that the von Neumann entropy is called entropy for a reason, a thermodynamic reason. The way he came up with it was thinking: well, I have a particle gas, all my particles are in this quantum state, I have N of them, and I want to erase this.

Erasing means I want to take them all to the same state. How can I come up with a protocol to do this? What he did was think: well, let's apply some semi-permeable membranes that only let some particles pass.

I use this to separate them, but there’s no change in volume for each of these particles, for each of these boxes. This doesn’t cost me any work. Now I have these three boxes, and I know which particle is in each of them.

Then, for each of them, I can... oh no, not yet. Later on, I'll apply a unitary. But before that, I want to go back to my original volume, to having just a small box.

The number of particles in each box will be proportional to p_k, and so will the volume of the final box. Then I see how much it costs me to compress each of these boxes: the number of particles times kT times the logarithm of the ratio between the volumes, which is precisely -N p_k kT log p_k.

Doing this for all of them, summing, and dividing by the number of particles gives me the work cost of this operation per particle, which is precisely kT times the von Neumann entropy. Then I can apply a unitary operation to each of these particles to take it to the state that I wanted.
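Putting the speaker's steps into formulas: box k holds N p_k particles and is compressed isothermally from volume V down to p_k V, so

```latex
W_k = N p_k \, kT \ln\frac{V}{p_k V} = -N p_k \, kT \ln p_k ,
\qquad
\frac{1}{N}\sum_k W_k = -kT \sum_k p_k \ln p_k = kT \, S(\rho),
```

where S(ρ) is the von Neumann entropy of the state whose eigenvalues are the p_k.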

Or you could have done it the other way around. You could have applied this operation. Now, I want to compress this.

How much time do I have now?

Good. Now, let’s try to get a bigger perspective. Think about why thermodynamics is so effective in the first place so that later we can find the right way to look at it. Thermodynamics is great because it doesn’t care about the microscopic details of some theory.

It doesn’t care if your gas is made of quantum particles or classical ones. That’s why it could survive the advent of quantum mechanics.

What it does is identify what are the easy operations for an agent to do, what are the hard operations, and what resources come for free, like room temperature. Based on this, it tells you how you can exploit these conditions, these constraints you have to build efficient things like steam engines or fridges.

It also tries to find the minimal cost of transforming one state into the other. In that sense, it’s a very operational approach, and it’s what we call a resource theory.

We could say this is the first resource theory in physics. If you haven’t heard this term before, then you’ve probably heard of LOCC. This is an example of a resource theory.

Let me give you an overview or introduction to the idea. A resource theory is to treat some physical situation, or it doesn’t need to be a physical situation, as a game.

The idea is you imagine that there's an agent in this game, and you're going to play from the point of view of the agent. You say, here’s my space of resources; it could be anything. Here’s a set of allowed operations that I can do. This imposes some kind of order in the resources.

For example, I don't know, this could be a weight standing here, another a bit lower down but to the left, and another at the very bottom, for example. Yes. There are only some directions I can go.

From this, I get a preorder on my resources, giving the space some structure. Then I can ask lots of questions.

For example, maybe this is a very complicated structure, and I can ask, is there a simple way to characterize it? Can I find a monotone here, something that always goes down? For example, here it could be the height of the object. In thermodynamics, you’ll see that it gives us free energies.

You can also ask what necessary and sufficient conditions go from one resource to the other, and what very useful resources are in this theory. What are useless resources?

For example, here the resource D is almost free because you can always get to it. Let's say in LOCC, your set of allowed operations is something that is deemed comparatively easy: in this case, local operations and classical communication.

That's because you imagine agents who are far apart; they can do everything in their own labs, but they cannot apply global quantum operations. It's always the set of operations that gives the meaning to your resource theory, that justifies it.

From there, the monotones you get are things like squashed entanglement, entanglement of formation, entanglement of distillation. You learn from here that separable states are always free because you can prepare them, regardless of what the initial state is.

You can even think of things like a currency, which are states that are very useful and scalable, like pairs of Bell states.

How would you apply this kind of thinking to thermodynamics? Here we go. First of all, let’s think about what our limitations are and think about the classical case. One could be lack of knowledge about the exact state of my system.

When people describe systems in terms of volume or temperature, they think that this is the fundamental theory, but it’s just the accessible information to them. This means you can represent your states like this.

Then you are constrained by conservation laws like energy conservation, momentum conservation, etc. Also, limited control of operations: you might not have arbitrary time control, but can just let some gas expand, these kinds of things.

If you think of this, then what are your resources? These are microscopic descriptions. Your operations could be, depending on the setting, adiabatic operations or isothermal operations, meaning that you have access to some environment temperature, for example.

From this framework, you can derive all the laws of thermodynamics; you derive the free energy as a monotone, and you derive Carnot efficiency as a limit on the efficiency of engines, etc. There is great work on this by Carathéodory and Giles, and then later by Lieb and Yngvason.

They just say: let's have a theory in which equilibrium states are scalable and have this nice order. Just like in LOCC we have Bell pairs, and you can always go from four Bell pairs to three, so they are ordered.

Using this a little more, you can derive the free energy, or entropy; that's the only monotone that scales nicely.

So how would you translate this to the quantum case? Well, let's think of resources now as quantum descriptions of quantum systems. This means I can describe a system by some state and the Hamiltonian of the system, because these are the things that we care about at the moment.

What are the allowed operations? It's like in the model we saw before. We want to account for entropy, so we only allow unitaries; we don't let anything give or take entropy.

We want to account for all energy flows, so we say that the free unitaries are the ones that commute with the Hamiltonian, the ones that don't bring in any energy.

You can start from this basic framework and then add more things or play with it. One common thing is to add a free environment.

Let’s add some states that are free and that you can always get. We can imagine you're doing this experiment in some room at some temperature. You can let things thermalize at a certain temperature.

Yes, and you can always trace out systems; you can always forget. Of course, this is a toy model, right? If there's ever been a spherical cow, thermal operations are it.

Later on, we’ll talk about how to try to make this more realistic. Before I go into what results you can draw from here, let’s question a little bit more why we should allow for this environment for free.

Why do you model a heat bath as a thermal state? We do this not only when we treat thermodynamics like this, but also when modeling some quantum memory: when you want a noise model, it's often useful to model it as thermal noise, which is described by a Gibbs state.

So why is this justified? Let’s go part by part. I’ll give you three justifications.

The first one is that it's a reduced description of a subsystem if you take the state of maximum entropy in the bigger system. Suppose you have a large system composed of many independent parts, and then you know something about the system.

For example, you know the total energy, and you know that under thermal operations, you’re never going to change this energy. You’re always inside this shell. You know nothing else about the system.

You model this, because you like probabilities, as a maximally mixed state on its energy shell. If you look at a very large system, the reduced state will look like this exponential; it looks like a Gibbs state. This is kind of Jaynes' principle.

If you take a very large system and have maximum ignorance, and you model it via all states are equally likely, you get a reduced description of this type.

The second justification is that it's a very typical state. This is not surprising if you're familiar with decoupling results, for example. The idea is: if you have a big system and your subsystem is much smaller, then for most global states and most choices of subsystem, the subsystem will appear in a Gibbs state.

The thing is, you don’t believe the big system is actually a maximum mixed state. It could be in any state.

But if you apply some random evolution, or choose your subsystem randomly, or choose it according to some constraints, it will very likely look like a Gibbs state. A crucial point is that this is also true if your subsystem is not a local thing but corresponds to just some degrees of freedom that you have access to.
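A quick illustration of this typicality, in the simplest case of a trivial Hamiltonian: draw a random pure state of 12 qubits (Gaussian amplitudes, which samples Haar-uniformly after normalization) and look at one qubit. Its reduced state is very close to maximally mixed, which is the Gibbs state at infinite temperature; with a nontrivial Hamiltonian and an energy constraint, the analogous statement gives a Gibbs state at finite temperature.

```python
import numpy as np

rng = np.random.default_rng(0)

dim_env = 2 ** 11   # 'environment': the other 11 qubits

# random global pure state on one qubit + environment
psi = rng.normal(size=(2, dim_env)) + 1j * rng.normal(size=(2, dim_env))
psi /= np.linalg.norm(psi)

# reduced state of the single qubit: trace out the environment
rho = psi @ psi.conj().T

deviation = np.linalg.norm(rho - np.eye(2) / 2)
print(deviation)   # small, of order 1/sqrt(dim_env)
```

Shrinking the environment (say, to 3 qubits) makes the deviation visibly larger, which is the "subsystem much smaller than the whole" condition in action.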

The last justification is this idea of passivity of the thermal state. Here the idea is, well, let’s say we don’t give the thermal state for free; we’ll just give these unitary operations.

What is the only state you could give that does not trivialize the resource theory? That is: if I only allow unitaries and give you many copies of a state, what is the only state from which you could not, by applying a unitary, reduce the energy and raise some weight?

The idea is that if you have only one copy of the state, then any state whose populations decrease with energy will do. If you have many copies of the state, then only this exponential distribution will do.
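The single-copy statement can be checked by brute force for a small example. For a state diagonal in the energy basis, the relevant unitaries just reshuffle the populations over the levels, and no reshuffling beats ordering the populations to decrease with energy (the rearrangement inequality). A sketch, assuming a hypothetical nondegenerate three-level system:

```python
from itertools import permutations

energies = [0.0, 1.0, 2.5]        # nondegenerate spectrum
populations = [0.6, 0.3, 0.1]     # decreasing with energy: a passive state

def avg_energy(pops):
    return sum(p * e for p, e in zip(pops, energies))

# try every reshuffling of the populations over the levels
best = min(avg_energy(perm) for perm in permutations(populations))

print(avg_energy(populations), best)  # equal: no reshuffling extracts energy
```

Many copies break this: tensor powers of a merely decreasing distribution are no longer passive, and only the exponential (Gibbs) profile survives that test, which is the "completely passive" characterization the speaker alludes to.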

Despite all these nice arguments, we know that the Gibbs state is still a spherical cow. Knowing that the state of the system is approximately the Gibbs state doesn't mean that it is the Gibbs state.

But we model it as the Gibbs state because it doesn't hurt: it doesn't give you anything for free. What would be more interesting is a resource theory where you can model the heat bath as something you have less information about than a Gibbs state.

We’ll go into this a bit later on. Let’s stop the first part here.

Any questions at this stage? Great. If there are any questions, please. Hi, I have a question about the claim you made: that if we choose a sort of random evolution on a large system, then we expect small subsystems of it to look like thermal states.

Am I stating that correctly? Yes, that preserves energy. I see. I’m just a little bit confused because I’ve also learned to think of the fact that if I pick a sort of random evolution with some disorder in it, then I should expect to get like Anderson localization which prevents thermalization.

How should I make sense of those two facts? Well, those results look at very concrete models. So when you say your evolution is random, how random is it really? Fair enough. I suppose for Anderson localization it's randomly drawn with diagonal disorder distributed in a certain way.

Perhaps that’s not random enough because it’s already quite a restrictive class of evolution. This would be my guess, but I’m not 100% sure that this is the reason, but it’s my guess.

It's easy to have results on thermalization when you just say: take a random thing in Hilbert space. When you try to go to more concrete measures, for example, if you replace the Haar measure with something more physical, then it becomes more complicated.

For example, there are results showing that if you have random but local interactions, then you still get thermalization; but as you impose more constraints, it becomes increasingly complicated to get this. You need to study concrete systems and see what happens.

Any other questions? You said we are using unitary operators commuting with the global Hamiltonian to execute operations that don't inject energy into the system.

But of course, if you want to implement one of these operations in a physical system, you cannot do this immediately, but you’ll have to change temporarily the Hamiltonian and wait until the unitary has been applied.

Then we put back the Hamiltonian to what it was at the beginning, right? Yes, exactly. When you change the Hamiltonian, you inject energy into the system. Yes, you will need something else to store this energy when it comes back at the end.

Yes, that is the point I was going to make in the second part. But okay, the question was: can you have a framework where you can account for all this other machinery needed to apply this unitary operator commuting with energy? We'll look into this in the second part.

This is the idea that thermal operations are a spherical cow. The framework already imposes restrictions, but later on, we want to impose more restrictions to make it more physical.

So for example, do not allow for this arbitrary control because, as you say, you need to control when you turn a Hamiltonian on and off. And of course, that costs something. We'll go into this later on.

Any further questions? I guess the audience has voted themselves a longer coffee break.