냥냥 kipid · 2019-03-16 10:14
# Truncated many-body dynamics of interacting bosons: A variational principle with error monitoring
by KangSoo Lee and Uwe R. Fischer
I apologize for not finishing this paper; I am simply making it public on the internet. Note that this document is flawed, as it was originally written in LaTeX and has not yet been perfectly translated into the docuK format.
##[.nosecN] Abstract
We introduce a scheme to describe the evolution of an interacting system of bosons, for which the field operator expansion is truncated after a finite number of modes, in a rigorously controlled manner. Using McLachlan's principle of least error , we find a set of equations for the many-body state. As a particular benefit, and in distinction to previously proposed approaches , our approach allows for the dynamical increase of the number of orbitals during the temporal evolution, because we can rigorously monitor the error made by increasing the truncation dimension. The additional orbitals, determined by the condition of least error of the truncated evolution relative to the exact one, are obtained from an initial trial state by a method we call steepest constrained descent.
## PH
20141217: To the docuK.
20140623: First posting?
## TOC
## Introduction
When we treat an atom such as $^{87}$Rb, $^{23}$Na, or $^{7}$Li, which consists of an even [$A$ (number of neutrons and protons) $+Z$ (number of electrons)] number of fermions, as a whole, the Fock space creation and annihilation operators of these composites satisfy bosonic commutation relations. Since the experimental realization of Bose-Einstein condensates , a large variety of experiments with these bosonic isotopes have opened up a fascinating mesoscopic and macroscopic quantum world . After an initial period concentrating on the effective single-particle physics of these ultracold dilute gases , more recently their many-body physics came into focus, revealing rich and hitherto unexpected possibilities to test fundamental correlation properties at the microscopic level .
The extension to a true many-body physics, incorporating quantum correlations beyond mean-field, requires, however, vast computational resources when both the number of particles and the interaction between those particles increase. Therefore, a simplification of the problem by truncating the field operator expansion to a finite number of modes (or, equivalently, single-particle orbitals) has been commonly utilized to obtain results relevant to the prediction of experiments in trapped bosonic quantum gases.
The most extreme truncation, the semiclassical form of mean-field theory, retaining just one orbital, gives the well-known Gross-Pitaevskii equation. Without the aid of contemporary computers, it seemed inevitable until most recently, particularly in out-of-equilibrium situations far away from the ground state, to reduce the complexity of the problem at hand as much as possible, and hence to use the Gross-Pitaevskii equation approach. With the increased interest in many-body physics, however, there arose the necessity to go beyond the all-too-simplified mean-field approach of the Gross-Pitaevskii equation. The accuracy of predictions on many-body correlations and the corresponding response functions will obviously increase with a less severe degree of truncation, though the solutions will still not be exact.
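For reference, the single-orbital (Gross-Pitaevskii) dynamics can be integrated numerically with a standard split-step Fourier scheme. The following is a minimal 1D sketch with $\hbar = 1$; the grid, trap frequency, and coupling constant $g$ are illustrative assumptions, not values taken from this paper:

```python
import numpy as np

def gpe_split_step(psi, x, dt, steps, g=1.0, m=1.0, omega=1.0):
    """Evolve i dpsi/dt = [-(1/2m) d^2/dx^2 + V_trap + g|psi|^2] psi
    (hbar = 1) by the split-step Fourier method."""
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)   # angular wavenumbers
    v_trap = 0.5 * m * omega**2 * x**2             # harmonic trap (assumed)
    kinetic = np.exp(-1j * dt * k**2 / (2 * m))
    for _ in range(steps):
        # half step in position space (trap + mean-field interaction)
        psi *= np.exp(-0.5j * dt * (v_trap + g * np.abs(psi)**2))
        # full kinetic step in momentum space
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        # second half step
        psi *= np.exp(-0.5j * dt * (v_trap + g * np.abs(psi)**2))
        # keep unit norm (guards against accumulation of round-off)
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    return psi

x = np.linspace(-10, 10, 256)
psi0 = np.exp(-x**2 / 2).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * (x[1] - x[0]))
psi = gpe_split_step(psi0.copy(), x, dt=1e-3, steps=100)
```

The split-step method is one common choice among several; it treats the nonlinearity in position space and the kinetic term in momentum space, alternating between the two.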
#####/ Necessity of truncation of field operator expansion to render problems computable
To derive the equations of many-body evolution, various variational approaches can be employed. Historically the first was the variational ansatz of Dirac and Frenkel , followed by McLachlan's variational principle and the time-dependent variational principle (TDVP), which is a principle of stationary action . Therefore, there are various, not necessarily equivalent, choices of variational principle for finding the equation of motion of the truncated many-body evolution. The Dirac-Frenkel principle imposes $\langle\delta\Phi|\hat{H}-i\partial_t|\Phi\rangle = 0$ ($\hbar \equiv 1$), where $\langle\delta \Phi|$ denotes any possible variation of the many-body state $\langle\Phi|$ with respect to a given set of variational parameters, whereas McLachlan's principle requires that the error of the many-body evolution be minimized.
#####/ %\col{Sometimes you use $\Psi$, sometimes $\Phi$ for the manybody state; is there supposed to be a difference, for example one is truncated the other exact? If not, we should use one symbol, e.g. $\Phi$ and its variation.}
On the other hand, the TDVP, as stated, requires stationarity of a given action. The three principles thus support quite different doctrines.
Applying either the TDVP or the Dirac-Frenkel principle, the authors in have proposed a method they called MCTDHB (Multi-Configurational Time-Dependent Hartree method for Bosons).
This approach has, for example, provided tools for the description of the fragmentation of bosonic many-body states . We will describe below in detail that, besides its many beneficial properties, the MCTDHB method is incomplete in certain situations. Specifically, when the single-particle density matrix (SPDM) becomes singular, i.e. non-invertible, the method fails. As a consequence, MCTDHB does not provide a way to propagate, for example, a \emph{pure} single condensate into a fragmented-condensate many-body state. Although MCTDHB provides an important tool to describe the many-body physics of interacting bosons, the method therefore lacks the possibility to directly connect the phenomena of condensation and fragmentation.
#####/ %will show: ambiguity of [previously employed DiracFrenkel variational principle for manybody evolution
%MacLachlan principle offers possibility to constrain error accumulated during evolution
%demonstrate that during time evolution SPDM (Single Particle Density Matrix) potentially becomes singular (noninvertible),
Here, critically examining the Dirac-Frenkel principle and the TDVP, and adopting instead McLachlan's principle for the truncated many-body evolution, we improve on the previous multiconfigurational Hartree methods and solve the singularity problem of a non-invertible SPDM. In the process, we will also validate the resulting equations of MCTDHB in a different manner, while additionally offering a straightforward handling of the exceptional evolution points related to the singularity of the SPDM.
## Variational Principles
Let us now discuss the possible variational principles in more detail. We are aiming at finding an approximate solution of the many-body Schr\"odinger equation when the state $|\Phi\rangle$ is restricted (or truncated). McLachlan's principle , which was presented in 1963 as a new version of Frenkel's principle, requires the minimization of the error, or remainder, of this approximate solution relative to the exact evolution.
The time evolution of any state is dictated by Schr\"odinger's equation, $i\partial_t |\Phi\rangle = \hat{H} |\Phi\rangle$. In other words, the evolution of the state is determined by the Hamiltonian at any moment. But to make the state $|\Phi\rangle$ manipulable, we are generally forced to restrict or confine the state $|\Phi\rangle$ to some simple and computationally feasible form. With the state $|\Phi\rangle$ in restricted form, $[i\partial_t - \hat{H}] |\Phi\rangle$ cannot be exactly zero in general. Therefore, McLachlan's principle aims at finding the approximate solution which minimizes the positive semidefinite error measure $\langle\Phi| [i\partial_t - \hat{H}]^{\dagger}[i\partial_t - \hat{H}] |\Phi\rangle$. The details of the corresponding procedure will be rephrased in section , after introducing a concrete way to restrict the many-body state in a computationally feasible form.
Hence it is guaranteed that the equation of motion obtained from McLachlan's principle follows the exact evolution as closely as possible under the given constraints. The most appealing feature of McLachlan's principle is thus that it is quite intuitive.
Since it offers the possibility to evaluate the error directly, we can, while monitoring the error, increase the number of orbitals in the truncated field operator expansion in mid-evolution, i.e. truncate the state less, to assure the accuracy of the result. Alternatively, to save computational cost and time, the number of orbitals can also be decreased along the way, i.e. by truncating the state more, in particular in cases where decreasing the number of orbitals does not significantly affect the accuracy of the result.
Our scheme, described in detail below, in which McLachlan's principle is applied, therefore offers the opportunity to dynamically adjust the truncation of field operators or the state itself properly during computational time evolution, since we can monitor the error. As a particular benefit, a convergence test, mandatory for MCTDHB, is unnecessary, as we can directly obtain an error which indicates automatically how well our approach describes the interacting system of bosons.
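The error-monitoring idea can be illustrated in a stripped-down setting: a state confined to a fixed subspace of a small Hilbert space, evolved with the projected Hamiltonian while the McLachlan error (the norm of the part of $\hat{H}|\Phi\rangle$ that leaves the subspace) is recorded at each step. This sketch uses a fixed projector rather than the time-adapted orbitals of the paper, and a random Hermitian matrix as a stand-in Hamiltonian; both are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, m = 8, 3                       # full dimension, truncated dimension
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = (A + A.conj().T) / 2          # random Hermitian "Hamiltonian"
P = np.zeros((D, D)); P[:m, :m] = np.eye(m)   # projector onto the subspace

phi = np.zeros(D, complex); phi[0] = 1.0      # truncated initial state
dt, steps = 1e-3, 200
errors = []
for _ in range(steps):
    Hphi = H @ phi
    # McLachlan error: norm of the part of H|phi> leaving the subspace
    errors.append(np.linalg.norm(Hphi - P @ Hphi))
    # best evolution within the fixed subspace: i d|phi>/dt = P H |phi>
    phi = phi - 1j * dt * (P @ Hphi)
    phi /= np.linalg.norm(phi)
```

In the full scheme the subspace itself adapts in time; here the recorded `errors` play the role of the monitored error that would trigger an increase of the number of orbitals.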
On the other hand, the TDVP, which was employed in MCTDHB, requires stationarity of the action, $\delta S=0$. This does not necessarily mean an extremum (minimum or maximum) of the action. Though stationary points include local extrema, the principle imposes only stationarity: the equation of motion follows from a stationary point of the action, which need not even be a local minimum or maximum.
#####/ %$\delta S=0$ does not necessarily give extremum
In many-body quantum mechanics, the action is written in terms of an expectation value of an operator-valued functional:
\begin{split}
S =
&\int \mathrm{d} t \int \mathrm{d} \vec{x} ~
\big\langle \Phi \big|
\bigg[
\frac{1}{2m}
\sum_{i=1}^{3}
\big( \partial_i \hat{\Psi}^{\dagger} (\vec{x}) \big)
\big( \partial_i \hat{\Psi} (\vec{x}) \big)
+
V_{\rm trap} (\vec{x},t)
\hat{\Psi}^{\dagger} (\vec{x})
\hat{\Psi} (\vec{x})
\bigg]
\big| \Phi \big\rangle \\
&+
\frac{1}{2}
\int \mathrm{d} t
\iint \mathrm{d} \vec{x}_{\alpha}
\mathrm{d} \vec{x}_{\beta} ~
\big\langle \Phi \big|
V (\vec{x}_{\alpha}, \vec{x}_{\beta})
\hat{\Psi}^{\dagger} (\vec{x}_{\alpha})
\hat{\Psi}^{\dagger} (\vec{x}_{\beta})
\hat{\Psi} (\vec{x}_{\beta})
\hat{\Psi} (\vec{x}_{\alpha})
\big| \Phi \big\rangle \\
&-
\int \mathrm{d} t ~
\big\langle \Phi \big|
\bigg[
\frac{\big[ i \partial_t \big]
+ \big[ i \partial_t \big]^{\dagger}}
{2}
\bigg]
\big| \Phi \big\rangle
\end{split}
which is in quantum mechanical correspondence to the classical Lagrangian action. Here, $V_{\rm trap}(\vec{x},t)$ is the (in general time-dependent) scalar trap potential confining the atoms, $V (\vec{x}_{\alpha}, \vec{x}_{\beta})$ is the two-body interaction potential, and $m$ the mass of the bosons.
This is a real-valued functional of the many-body state $| \Phi \rangle$ . Sometimes the action is simply expressed as
$S = \int \mathrm{d} t ~
\langle \Phi |
\hat{H} - i \partial_t
| \Phi \rangle$. But this is not a proper expression. As the Lagrangian $L(q,\dot{q};t)$ in classical mechanics is sensitive to the choice of variables, the proper form of the action in many-body quantum mechanics is an important problem. Variationally changing the state $| \Phi \rangle$ and the temporal change of the state $\partial_t | \Phi \rangle$, we find the evolution of the state around the stationary-action point.
But here a problem occurs. When the form of the state is restricted or truncated for computational reasons, in the sense that the state resides in a sub-Hilbert space, it is questionable whether the equation of motion obtained in the sub-Hilbert space leads to an evolution closest to the exact one obtained in the unrestricted full Hilbert space. Even though the equation of motion obtained variationally with a non-truncated state gives the correct many-body Schr\"odinger equation, we cannot rely on the correctness of the equation when the state is truncated. For example, the path of stationary action with limitations imposed on the path can in principle deviate far from the one obtained without any constraints on the path.
As a simple specific example, when we restrict the state to have only an overall phase change, i.e. $| \Phi (t) \rangle = e^{-i \Omega t} | \Phi \rangle$, the action becomes $S = \int \mathrm{d} t \big( \langle \hat{H} \rangle - \Omega \big)$. Depending on the value of $\Omega$, the action can be positive or negative; in fact, this action has neither an upper nor a lower bound. Even worse, for some types of constrained states as above, there may be no stationary point of the action at all. So when applying the TDVP, at least the convergence upon increasing the number of orbitals, i.e. loosening the constraints, must be tested for every specific problem to ensure the validity of the results. This is because, in contrast with McLachlan's principle, the TDVP offers no direct error indicator controlling the accuracy of the approximation.
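The contrast in this example can be checked numerically. For a state with only an overall phase, the TDVP Lagrangian $\langle \hat{H} \rangle - \Omega$ is linear in $\Omega$ and has no stationary point, while the McLachlan error density $\langle (\hat{H}-\Omega)^2 \rangle$ is quadratic and is minimized at $\Omega = \langle \hat{H} \rangle$ (the minimum being the energy variance). A sketch with an arbitrary $2\times 2$ toy Hamiltonian (the matrix and state are illustrative assumptions):

```python
import numpy as np

H = np.array([[1.0, 0.3], [0.3, 2.0]])        # toy Hermitian Hamiltonian
phi = np.array([1.0, 1.0]) / np.sqrt(2)       # fixed normalized state

mean_H = phi @ H @ phi
omegas = np.linspace(-2, 5, 701)

# TDVP Lagrangian density: linear in Omega -> no stationary point exists
action_density = mean_H - omegas

# McLachlan error density <(H - Omega)^2>: quadratic, minimized at <H>
err = np.array([phi @ (H - w * np.eye(2)) @ (H - w * np.eye(2)) @ phi
                for w in omegas])
best = omegas[np.argmin(err)]                 # -> <H>, the physical choice
```

The minimizing $\Omega$ reproduces the expected phase rotation rate $\langle \hat{H} \rangle$, which is exactly what one wants from a trapped stationary state.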
The earliest variational principle for the approximate solution of many-body dynamics is the Dirac-Frenkel principle , which requires
\langle\delta\Phi|
\hat{H}-i\partial_t
|\Phi\rangle = 0 ,
where $\delta \Phi$ denotes possible variations of the many-body state $\Phi$ with respect to the variational parameters. The equation is quite similar to the TDVP when the action is given by
$S
= \int \mathrm{d}t
\langle\Phi |
\hat{H} - i \partial_t
|\Phi\rangle$.
The difference and (possible) equivalences between DiracFrenkel's, McLachlan's and other variational principles have been extensively discussed in the past .
In , it is concluded that if the relevant manifold, i.e. the sub-Hilbert space, can be parametrized by pairs of complementary parameters, the above mentioned principles are equivalent. In , it is asserted that both the Dirac-Frenkel and McLachlan variational principles are equivalent if both $\delta \Phi$ and $\delta \Phi^*$ are possible independent variations. But this ``equivalence'' merely seems to be conditional, and therefore implies the same resulting equation under some given conditions, not the equivalence of the principles themselves.
In addition, note that the simple-minded point of view that $\delta \Phi$ and $\delta \Phi^*$ are possible independent variations can easily lead to incorrect conclusions, since $\delta \Phi^*$ is simply the complex conjugate of $\delta \Phi$ , and therefore depends on it.
Furthermore, as explained in detail later, principles which result in the problem of a non-invertible SPDM, mentioned above, lack some information in comparison to McLachlan's principle, which resolves this problem.
In these senses, the three principles appear to be genuinely different.
#####/ % Broe, CPL 149, 547 (1988): We show that if the manifold can be parameterized by pairs of “complementary” parameters, the abovementioned principles are equivalent. The condition of complementarity, which is a sufficient one, is demonstrated to be satisfied in a number of important applications. However, a case of nonequivalence in the recent literature warns against liberal assumptions about the equivalence of the timedependent variational principles.
#####/ % Without any confinement in a state, both give the correct timedependent Schr\"odinger equation of manybody state. But we should check carefully whether, whatever restrictions and conditions come to a state, both give the same equations and equivalent results. Here we are going to claim that they are not equivalent when the reduced SPDM $\langle \hat{a}_{ij}^{\dagger} \rangle$ becomes noninvertible. \\
In summary, comparing the three variational principles, McLachlan's principle appears to be the most suitable for finding a truncated many-body dynamics which approximates the exact dynamics of interacting bosons, and is thus automatically rigorously controlled.
Adopting McLachlan's principle, we argue, in the following sections, that the variationally optimal truncation of the many-body dynamics of interacting bosons can be adaptively controlled with a monitored error.
#####/ %DiracFrenkel's principle seems to have little connection to other physical principles and phenomena such as Hamilton's principle in classical mechanics or single particle quantum mechanics.
%But it seems to lack a confident reasoning for the above equation. For what end should the DiracFrenkel principle be valid?
### Truncating a manybody state
The limited or restricted forms of the state $|\Phi\rangle$ for the truncated many-body dynamics can in principle take any form. In the simplest case, assuming that the occupation of the bosons concentrates in one orbital for the whole time of evolution, we can treat the many-body state with a single single-particle orbital. More generally, the state will reside in a sub-Hilbert space of a specific form.
An easily extendable and flexible form of the limitation on the size of the Hilbert space is the multiconfigurational time-dependent Hartree wavefunction ansatz, in which the many-body state of bosons is described as a linear combination of permanents $|\vec{n}\rangle$, with a finite number $M$ of orthonormalized time-dependent single-particle orbitals. Increasing the number $M$ of orbitals, we can easily extend the time-dependent sub-Hilbert space $\mathcal{M}(t)$. The basic steps of the procedure are as follows.
#####/ % As a result of introduction of timevarying orbitals or truncation, inverse of truncated SPDM comes in the evolution of orbitals.
% As mentioned before, this hampers dynamical switching between various $M$ (the number of orbitals) values in DiracFrenkel's variational approach.
Firstly, the many-body Hamiltonian, which specifies the energy and time evolution of the many-body state, is given by
\begin{split}
&\hat{H} =
\int \mathrm{d} \vec{x} ~~
\hat{\Psi}^{\dagger} (\vec{x})
\bigg[
-
\frac{\nabla^2}
{2m}
+
V_{\rm trap}(\vec{x})
\bigg]
\hat{\Psi} (\vec{x}) \\
&+
\frac{1}{2}
\iint \mathrm{d} \vec{x}_{\alpha} \mathrm{d} \vec{x}_{\beta}
\hat{\Psi}^{\dagger} (\vec{x}_{\alpha})
\hat{\Psi}^{\dagger} (\vec{x}_{\beta})
V(\vec{x}_{\alpha}, \vec{x}_{\beta})
\hat{\Psi} (\vec{x}_{\beta})
\hat{\Psi} (\vec{x}_{\alpha}).
\end{split}
With a complete set of basis orbitals, the field operators of creation and annihilation of particles are expressed by the expansions
\begin{split}
\hat{\Psi}^{\dagger}(\vec{x})
= \sum_{i=1}^{\infty}\hat{a}_i^{\dagger}
\phi_i^* (\vec{x})
\quad\textrm{and}\quad
\hat{\Psi}(\vec{x})
= \sum_{i=1}^{\infty}\hat{a}_i
\phi_i (\vec{x}) .
\end{split}
Using these, the Hamiltonian can be written as
\hat{H}
= \sum_{i,j=1}^{\infty}{ \epsilon_{ij} \hat{a}_{i}^{\dagger} \hat{a}_{j} }
+ \frac{1}{2} \sum_{i,j,k,l=1}^{\infty}{ V_{ijkl} \hat{a}_{i}^{\dagger} \hat{a}_{j}^{\dagger} \hat{a}_{k} \hat{a}_{l} }
where the single-particle matrix elements are
\epsilon_{ij}
= \int \mathrm{d} \vec{x} ~~
\phi_i^* (\vec{x})
\bigg[ -\frac{\nabla^2}{2m} + V_{\rm trap} (\vec{x}) \bigg]
\phi_j (\vec{x}) ,
while the two-body interaction elements are represented by
V_{ijkl}
= \iint \mathrm{d} \vec{x}_{\alpha} \mathrm{d} \vec{x}_{\beta} ~
\phi_i^* (\vec{x}_\alpha)
\phi_j^* (\vec{x}_\beta)
V(\vec{x}_\alpha, \vec{x}_\beta)
\phi_k (\vec{x}_\beta)
\phi_l (\vec{x}_\alpha) .
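The matrix elements $\epsilon_{ij}$ and $V_{ijkl}$ can be evaluated numerically on a grid. A 1D sketch using harmonic-oscillator orbitals and, as an extra assumption not made in the paper, a contact interaction $V(\vec{x}_\alpha, \vec{x}_\beta) = g\,\delta(\vec{x}_\alpha - \vec{x}_\beta)$, which collapses the double integral to a single one:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]
m, g, M = 1.0, 0.5, 3          # mass, contact coupling (assumed), # orbitals

def orbital(n):
    """Normalized harmonic-oscillator orbital phi_n(x) (omega = hbar = 1)."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2)

phi = np.array([orbital(n) for n in range(M)])
v_trap = 0.5 * x**2

# eps_ij = \int phi_i^* [ -(1/2m) d^2/dx^2 + V_trap ] phi_j
d2phi = np.gradient(np.gradient(phi, dx, axis=1), dx, axis=1)
eps = np.einsum('ix,jx->ij', phi.conj(), -d2phi / (2 * m) + v_trap * phi) * dx

# For the contact interaction the two-body element reduces to
# V_ijkl = g \int dx phi_i^* phi_j^* phi_k phi_l
V = g * np.einsum('ix,jx,kx,lx->ijkl', phi.conj(), phi.conj(), phi, phi) * dx
```

For this choice the diagonal elements $\epsilon_{nn}$ reproduce the familiar oscillator energies $n + 1/2$, which serves as a consistency check of the discretization.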
For the sake of convenience, we sometimes abbreviate
\begin{split}
&\hat{a}_{ij}^{\dagger}
\equiv
\hat{a}_{i}^{\dagger}
\hat{a}_{j},
\quad %%
\hat{a}_{ijkl}^{\dagger\dagger}
\equiv
\hat{a}_{i}^{\dagger}
\hat{a}_{j}^{\dagger}
\hat{a}_{k}
\hat{a}_{l}, \\
&\hat{a}_{ijk}^{\dagger\dagger}
\equiv
\hat{a}_{i}^{\dagger}
\hat{a}_{j}^{\dagger}
\hat{a}_{k},
\quad %%
\hat{a}_{ijk}^{\dagger}
\equiv
\hat{a}_{i}^{\dagger}
\hat{a}_{j}
\hat{a}_{k}
\end{split}
and so on. Then, the Hamiltonian can be written in shorthand form as
$\hat{H}
= \hat{h} + \frac{1}{2}\hat{V}
= \sum_{i,j=1}^{\infty}
\epsilon_{ij}
\hat{a}_{ij}^{\dagger}
+ \frac{1}{2}
\sum_{i,j,k,l=1}^{\infty}
V_{ijkl}
\hat{a}_{ijkl}^{\dagger\dagger}$.
To find the many-body ground state $|G\rangle$, we have to minimize the energy expectation value $E_G = \langle G|\hat{H}|G\rangle$. Without any restrictions on the state $|G\rangle$, the actual exact ground state will be found. Realistically, however, we cannot describe a state exactly, as this will in general require infinitely many orbitals (or variables). So we confine the state to a space of finite dimension, i.e. use only a finite number $M$ of orbitals, which is computationally feasible,
| G^{\mathcal{M}} \rangle =
\sum_{\vec{n} \in \mathcal{M}} C_{\vec{n}} | \vec{n} \rangle.
The above is regarded as the truncation of the many-body bosonic state. Here $|\vec{n}\rangle$ with $\vec{n} \in \mathcal{M}$ indicates a normalized Fock state or permanent
|\vec{n} \rangle
\equiv
\frac{
\big(\hat{a}_1^{\dagger}\big)^{n_1}
\big(\hat{a}_2^{\dagger}\big)^{n_2}
\cdots
\big(\hat{a}_M^{\dagger}\big)^{n_M}}
{\sqrt{n_1! n_2! \cdots n_M!}}
|\textrm{vac} \rangle
\quad\text{with}\quad
\sum_{i=1}^{M} n_i = N,
which is an $N$-particle state with $n_1$ particles in the orbital $\phi_1 (\vec{x})$, $n_2$ particles in $\phi_2 (\vec{x})$, $\cdots$, and $n_M$ particles in $\phi_M (\vec{x})$. These $M$ orbitals must be orthonormal to each other:
\int \mathrm{d} \vec{x} ~
\phi_i^* (\vec{x})
\phi_j (\vec{x})
=
\delta_{ij}.
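For a finite truncation, the permanents can be enumerated explicitly. A small sketch using the stars-and-bars construction (the helper name `fock_states` is ours, chosen for illustration):

```python
from itertools import combinations
from math import comb

def fock_states(M, N):
    """Enumerate all occupation vectors (n_1, ..., n_M) with sum N
    via stars and bars: choose M-1 bar positions among N+M-1 slots."""
    states = []
    for bars in combinations(range(N + M - 1), M - 1):
        occ, prev = [], -1
        for b in bars:
            occ.append(b - prev - 1)   # stars between consecutive bars
            prev = b
        occ.append(N + M - 2 - prev)   # stars after the last bar
        states.append(tuple(occ))
    return states

basis = fock_states(3, 4)
# the dimension matches (N+M-1)! / (N! (M-1)!)
assert len(basis) == comb(4 + 3 - 1, 4)
```

This combinatorial growth of the basis with $M$ and $N$ is precisely why the truncation to small $M$ is computationally attractive.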
These $M$ orthonormal orbitals span the sub-Hilbert manifold, which will be denoted by $\mathcal{M}$ in our context. The creation and annihilation operators in these orbitals are related to the field operators in position space by the inversion of Eq. ,
\hat{a}_i^{\dagger}
=
\int \mathrm{d} \vec{x} ~
\hat{\Psi}^{\dagger} (\vec{x}) \phi_i (\vec{x})
\quad\textrm{and}\quad %%
\hat{a}_i
=
\int \mathrm{d} \vec{x} ~
\hat{\Psi} (\vec{x})\phi_i^* (\vec{x}) .
These elementary steps are well explained in ; see there for a detailed account.
Here, the sub-Hilbert space spanned by $\sum_{i=1}^{M} c_{i} \phi_{i} (\vec{x})$ is to be chosen so as to describe the ground state optimally, and the coefficients $C_{\vec{n}}$ which minimize the energy are to be determined as well.
Then, $|G^{\mathcal{M}}\rangle$ can be considered the optimal truncation of the actual ground state $|G\rangle$. The details of how to proceed concretely will follow below in section .
In the time-evolving case, not only the coefficients $C_{\vec{n}}$ change in time, but the sub-Hilbert space $\mathcal{M}$ changes with time as well. With a time-varying truncation of the many-body state, we can express the state as
| \Phi (t) \rangle =
\sum_{\vec{n} \in \mathcal{M}(t)} C_{\vec{n}} (t) | \vec{n} ; t \rangle ,
with time-varying orbitals and their conjugate creation operators
\hat{a}_i^{\dagger} (t)
=
\int \mathrm{d} \vec{x}~ \hat{\Psi}^{\dagger} (\vec{x})
\phi_i (\vec{x}, t) .
This approach was also incorporated in MCTDHB . The time derivative of the state $| \Phi (t) \rangle$ then becomes
\begin{split}
i \partial_t | \Phi (t) \rangle
&=
\sum_{\vec{n} \in \mathcal{M}(t)}
\Big[
\big( i \partial_t C_{\vec{n}} (t) \big)
| \vec{n} ; t \rangle \\
&+
C_{\vec{n}} (t)
\sum_{i=1}^{M}
\int \mathrm{d} \vec{x} ~
\big( i \partial_t \phi_i (\vec{x}, t) \big)
\hat{\Psi}^{\dagger} (\vec{x})
\hat{a}_{i}
| \vec{n} ; t \rangle
\Big].
\end{split}
Expanding $i \partial_t \phi_i (\vec{x}, t)$ with a complete basis yields
i \partial_t \phi_i (\vec{x}, t)
=
\sum_{k=1}^{\infty}
\phi_{k} (\vec{x}, t) ~
t_{ki} (t) ,
where the matrix $t_{ki}$ for $1 \leq k \leq M$ indicates an {\em inner rotation} inside the sub-Hilbert space, whereas $t_{ki}$ for $k > M$ changes the sub-Hilbert space itself. Integrating both sides after multiplying with $\phi_{k}^{*} (\vec{x},t)$, we obtain
t_{ki} (t)
=
\int \mathrm{d}\vec{x} ~
\phi_{k}^{*} (\vec{x},t)
\big( i \partial_t \phi_i (\vec{x},t) \big).
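On a spatial grid, the matrix $t_{ki}$ can be approximated directly from this projection formula. A sketch with two toy time-dependent orbitals (the drifting-Gaussian orbitals and the finite-difference time derivative are illustrative assumptions):

```python
import numpy as np

x = np.linspace(-10, 10, 512)
dx = x[1] - x[0]
dt = 1e-4

def orbitals_at(t):
    """Toy orthonormal orbitals: a Gaussian and its odd partner, both
    drifting with an assumed center x0(t) = 0.1 * t."""
    x0 = 0.1 * t
    phi0 = np.exp(-(x - x0)**2 / 2)
    phi1 = (x - x0) * np.exp(-(x - x0)**2 / 2)
    phi0 /= np.sqrt(np.sum(np.abs(phi0)**2) * dx)
    phi1 /= np.sqrt(np.sum(np.abs(phi1)**2) * dx)
    return np.array([phi0, phi1])

phi_now, phi_next = orbitals_at(0.0), orbitals_at(dt)
dphi_dt = (phi_next - phi_now) / dt           # finite-difference d(phi)/dt

# t_ki = \int dx phi_k^*(x,t) [ i d(phi_i)/dt ]
t_mat = np.einsum('kx,ix->ki', phi_now.conj(), 1j * dphi_dt) * dx
```

Since the orbitals remain orthonormal in time, the resulting matrix restricted to $k, i \leq M$ is Hermitian, a property one can use as a numerical sanity check.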
Then Eq. can be expressed in the alternative form
\begin{split}
i \partial_t | \Phi (t) \rangle
=
\sum_{\vec{n} \in \mathcal{M}(t)}
\Big[
&\big( i \partial_t C_{\vec{n}} (t) \big)
| \vec{n} ; t \rangle \\
&+
C_{\vec{n}} (t)
\sum_{k=1}^{\infty}
\sum_{i=1}^{M}
t_{ki}
\hat{a}_{ki}^{\dagger}
| \vec{n} ; t \rangle
\Big].
\end{split}
These preliminaries will be used in the following sections, providing the tools for describing the truncated state $|\Phi\rangle$ optimally.
#####/ %% Since we do not count orbitals outside $M$, $\phi_{k}$'s for $k > M$ are just pictorial. The actual calculation takes upto $M$.
%Thus show nonequivalence of DiracFrenkel and MacLachlan
%Uniqueness of the solution
###[#subsecAdaptingthenumberoforbitals] Adapting the number of orbitals
The governing equation of MCTDHB, which follows from either the Dirac-Frenkel principle or the TDVP, implies that the SPDM must always be invertible, not only initially but at any instant of time afterwards. We quote here equation (26) in , which is
\begin{split}
&i \partial_t \phi_{k} (\vec{x},t) \\
&=
\sum_{l=M+1}^{\infty}
\phi_{l} (\vec{x},t)
\bigg[
\epsilon_{lk}
+
\sum_{i,n,p,q=1}^{M}
V_{lnpq}
\langle \rho \rangle^{-1}_{ki}
\langle \hat{a}_{inpq}^{\dagger\dagger} \rangle
\bigg]
\end{split}
in our notation, where $\langle \rho \rangle^{-1}_{ki}$ represents the inverse of the SPDM $\langle \rho_{ij} \rangle \equiv \langle \hat{a}_{ij}^{\dagger} \rangle$. In , it seems to be taken for granted that the SPDM is always invertible. When the SPDM becomes non-invertible, however, the MCTDHB method in fact suddenly fails.
#####/ %When the reduced SPDM $\langle \hat{a}_{ij}^{\dagger} \rangle$ becomes noninvertible, MCTDHB which comes from %DiracFrenkel's principle fails.
For example, when all bosons of interest initially reside in a single orbital, MCTDHB cannot propagate this pure condensate into any fragmented state. The method does not provide a way to find the form of the second orbital in this case.
Decreasing the sub-Hilbert space, i.e. the number of orbitals, is not an issue: we can simply eliminate those orbitals with a negligible occupation number, i.e. not of order $N$. However, increasing the sub-Hilbert space dimension, that is the number of orbitals $M$, is difficult, since the enlarged sub-Hilbert space and the additional orbital must be set up optimally.
Non-invertibility of the SPDM can obviously also occur dynamically. Unoccupied or scarcely occupied orbitals thus cause a problem for the dynamics of MCTDHB. These problems cannot be resolved by the Dirac-Frenkel principle or the TDVP. As McLachlan's principle is based on requiring the least error accumulated during time evolution, we may resolve this problem by finding an additional orbital which minimizes the error, as will be expounded in detail below.
#####/ %Applying McLachlan's alternatively, we have tried to solve these problems and to see things in a different point of view.
%noninvertibility of singleparticle density matrix
%during time evolution hampers dynamical switching between various $M$ values for DiracFrenkel
%Explain why not a problem for MacLachlan
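The singularity discussed above can be made concrete in a minimal two-mode sketch: for a pure condensate the SPDM has rank one and is not invertible, whereas a two-fold fragmented state yields a regular SPDM. The helper `spdm_two_mode` and the states chosen are illustrative:

```python
import numpy as np

def spdm_two_mode(C, N):
    """SPDM <a_i^dag a_j> for a two-mode state sum_n C[n] |n, N-n>."""
    n = np.arange(N + 1)
    rho = np.zeros((2, 2), complex)
    rho[0, 0] = np.sum(np.abs(C)**2 * n)
    rho[1, 1] = np.sum(np.abs(C)**2 * (N - n))
    # a_0^dag a_1 |n, N-n> = sqrt((n+1)(N-n)) |n+1, N-n-1>
    amp = np.sqrt((n[:-1] + 1) * (N - n[:-1]))
    rho[0, 1] = np.sum(C[1:].conj() * amp * C[:-1])
    rho[1, 0] = rho[0, 1].conjugate()
    return rho

N = 10
pure = np.zeros(N + 1); pure[N] = 1.0        # all N bosons in orbital 0
rho_pure = spdm_two_mode(pure, N)            # rank 1: det = 0, singular

frag = np.zeros(N + 1); frag[N // 2] = 1.0   # |5,5>: two-fold fragmented
rho_frag = spdm_two_mode(frag, N)            # invertible, eigenvalues 5, 5
```

Monitoring the smallest eigenvalue of the SPDM in this way is a cheap diagnostic for approaching the exceptional points where the MCTDHB equations break down.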
McLachlan's principle does not use the action, but the concept of an error or remainder. We know the exact form of the many-body Schr\"odinger equation:
$i \partial_t | \Phi \rangle
= \hat{H} | \Phi \rangle$, with the full Hamiltonian Eq. in second-quantized form. Since the exact calculation is too cumbersome, we represent the state in the relatively simple multiconfigurational time-dependent Hartree form. The remainder from the exact evolution then becomes
\big[
i \partial_t
- \hat{H}
\big]
\big| \Phi \big\rangle .
In this expression, the left part $i \partial_t | \Phi \rangle$ is the evolution of the state $|\Phi\rangle$ in its limited, truncated form, and the right part $\hat{H} | \Phi \rangle$ represents the exact evolution. Here, the initial state is specified in its truncated form, while the Hamiltonian $\hat{H}$ is not truncated.
A quantitative measure of the instantaneous error is then
\big\langle \Phi \big|
\big[
i \partial_t
- \hat{H}
\big]^{\dagger}
\big[
i \partial_t
- \hat{H}
\big]
\big| \Phi \big\rangle
\geq 0 ,
which is (by definition) positive semidefinite. We have to minimize this error by varying the truncated state in Eq. .
##[#secMinimizingtheenergy] Minimizing the energy of the truncated manybody state
### Variational method with Lagrange multipliers
Before investigating the dynamics, let us demonstrate the time-independent scheme first. To find the many-body ground state, we have to minimize the energy expectation value
E_{G^{\mathcal{M}}}
= \langle G^{\mathcal{M}} |\hat{H}| G^{\mathcal{M}}\rangle
= \Big[ \sum_{\vec{m} \in \mathcal{M}} \langle \vec{m} | C_{\vec{m}}^* \Big]
\hat{H}
\Big[ \sum_{\vec{n} \in \mathcal{M}} C_{\vec{n}} | \vec{n} \rangle \Big]
by varying the coefficients $C_{\vec{n}}$'s and the set of $M$ orbitals \{$\phi_i$\}, subject to the $(1+M^2)$ constraints
\sum_{\vec{n}} C_{\vec{n}}^* C_{\vec{n}} = 1
\qquad \text{(1 constraint)}
and
\begin{split}
\int \mathrm{d} \vec{x} ~ \phi_i^* (\vec{x}) \phi_j (\vec{x}) = \delta_{ij}
\quad
\textrm{for }i, j \leq M
\\ \text{($M^2$ constraints)}.
\end{split}
The variation with respect to the expansion coefficients $C_{\vec{n}}$ gives
\frac{\partial E_{G^{\mathcal{M}}}}
{\partial C_{\vec{n}}^*}
=
\langle \vec{n} 
\hat{H}
\sum_{\vec{m} \in \mathcal{M}}
C_{\vec{m}}  \vec{m} \rangle
=
\lambda C_{\vec{n}}
=
E_{G^{\mathcal{M}}} C_{\vec{n}},
where $\vec{n} \in \mathcal{M}$ . We have a real functional $E_{G^{\mathcal{M}}}$, Eq. , to be minimized, and one real equation of constraint, Eq. , with $\frac{(N+M-1)!}{N!(M-1)!}$ complex variables $C_{\vec{n}}$ when we fix the total number of bosons at $N$. The undetermined Lagrange multiplier $\lambda$, which has to be real, is determined to be $E_{G^{\mathcal{M}}}$ with the help of the constraint Eq. .
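This stationarity condition is just the eigenvalue problem of the Hamiltonian matrix in the permanent basis, and for small $M$ and $N$ it can be solved by exact diagonalization. A sketch for a two-mode model whose parameters (hopping $-J$ as $\epsilon_{01}$, on-site interaction $U$ as $V_{llll}$) are assumed for illustration:

```python
import numpy as np
from math import sqrt

# Two-mode toy model: eps_01 = eps_10 = -J, contact V_llll = U per mode.
N, J, U = 6, 1.0, 0.2
dim = N + 1                                   # basis |n, N-n>, n = 0..N

H = np.zeros((dim, dim))
n = np.arange(dim)
# diagonal part: sum_l [ (U/2)(n_l - 1) ] n_l with eps_ll = 0
H[n, n] = 0.5 * U * (n * (n - 1) + (N - n) * (N - n - 1))
# hopping: -J a_0^dag a_1 + h.c., <n+1| a_0^dag a_1 |n> = sqrt((n+1)(N-n))
for k in range(dim - 1):
    H[k + 1, k] = H[k, k + 1] = -J * sqrt((k + 1) * (N - k))

E, C = np.linalg.eigh(H)
ground = C[:, 0]                              # coefficients C_n of |G^M>
# stationarity condition: <n|H|G> = lambda C_n with lambda = E_G
residual = np.linalg.norm(H @ ground - E[0] * ground)
```

The eigenvector of lowest eigenvalue satisfies the stationarity condition with $\lambda = E_{G^{\mathcal{M}}}$, exactly as derived above; what the full scheme adds is the simultaneous optimization of the orbitals entering $\epsilon_{ij}$ and $V_{ijkl}$.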
Using the properties $\epsilon_{ij}^* = \epsilon_{ji}$, $V_{ijkl} = V_{jilk}$ and $V_{ijkl}^* = V_{lkji} = V_{klij}$, we can express the above equation explicitly as
\begin{split}
&\langle \vec{n} 
\hat{H}
\sum_{\vec{m} \in \mathcal{M}}
C_{\vec{m}}
 \vec{m} \rangle
=
E_{G^{\mathcal{M}}} C_{\vec{n}} \\
&= \sum_l
\bigg[
\epsilon_{ll}
+
\frac{1}{2}
V_{llll}
(n_l-1)
\\
&~~~~~~~~~~~~~~~~~~~
+ \frac{1}{2}
\sum_{k}{}^{'}
(V_{lklk} + V_{lkkl})
n_k
\bigg]
n_l
C_{\vec{n}} \\
&~+
\sum_{j,l}{}^{'}
\bigg[
\epsilon_{lj}
+ V_{lllj}
(n_l-1)
+ V_{ljjj}
n_j \\
&~~~~~~~~~~~~~~~~~~~
+ \sum_{k}{}^{'}
(V_{lkkj} + V_{lkjk})
n_k
\bigg]
\sqrt{(n_j+1) n_l}
C_{\vec{n}^{j}_{l}} \\
&~~+ \frac{1}{2}
\sum_{j,l}{}^{'}
V_{lljj}
\sqrt{(n_j+2)(n_j+1)(n_l-1)n_l}
C_{\vec{n}^{jj}_{ll}} \\
&~~+ \frac{1}{2}
\sum_{i,j,l}{}^{'}
V_{llji}
\sqrt{(n_i+1)(n_j+1)(n_l-1)n_l}
C_{\vec{n}^{ij}_{ll}} \\
&~~+ \frac{1}{2}
\sum_{j,k,l}{}^{'}
V_{lkjj}
\sqrt{(n_j+2)(n_j+1)n_k n_l}
C_{\vec{n}^{jj}_{kl}}
\\&~~
+ \frac{1}{2}
\sum_{i,j,k,l}{}^{'}
V_{lkji}
\sqrt{(n_i+1)(n_j+1)n_k n_l}
C_{\vec{n}^{ij}_{kl}},
\end{split}
where the primed summation $\sum'$ is performed such that
different indices can have only different values. For example, summations over two and three indices are $\sum_{j,l}{}^{'} \equiv \sum_{j} \sum_{l \neq j}$, $\sum_{i,j,l}{}^{'} \equiv \sum_{i} \sum_{j \neq i} \sum_{l \neq (i,j)}$, and $\sum_{k}{}^{'}$ inside a bracket means $\sum_{k \neq \text{(the other indices)}}$.
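When implementing the explicit equation above, the primed summation convention is easy to get wrong; a small helper that enumerates pairwise-distinct index tuples (the function name `primed_indices` is ours) makes the convention executable:

```python
from itertools import product

def primed_indices(num, M, fixed=()):
    """All tuples of `num` indices in range(M) that are pairwise distinct
    and also distinct from the indices in `fixed` (the sum' convention)."""
    return [t for t in product(range(M), repeat=num)
            if len(set(t) | set(fixed)) == num + len(fixed)]

# sum'_{j,l}: j != l, e.g. 6 pairs for M = 3
pairs = primed_indices(2, 3)
# sum'_{k} inside a bracket: k distinct from the outer indices, here k != 0
inner = primed_indices(1, 3, fixed=(0,))
```

Passing the outer loop's indices through `fixed` reproduces the rule that $\sum_{k}{}^{'}$ inside a bracket excludes all other indices already in play.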
Though Eq. is just an eigenvalue equation with fixed matrix components, the terms $\epsilon_{ij}$ and $V_{ijkl}$ are matrix elements depending on the orbitals, which are also to be tuned as follows.
For the variation with respect to the orbitals $\{\phi_k\}$, functional differentiation is used. As the permanent $| \vec{n} \rangle$ is constructed by repeatedly applying creation operators of particles in the $M$ orbitals, it can be regarded as given by multiple integrations over the orbitals $\phi_k (\vec{x})$,
|\vec{n}\rangle
=
\int \mathrm{d}\vec{x}_{\alpha}
\phi_{i} (\vec{x}_{\alpha})
\hat{\Psi}^{\dagger} (\vec{x}_{\alpha})
\int \mathrm{d}\vec{x}_{\beta}
\phi_{j} (\vec{x}_{\beta})
\hat{\Psi}^{\dagger} (\vec{x}_{\beta})
\cdots
|\textrm{vac}\rangle .
Each functional differentiation with respect to $\phi_k^* (\vec{x})$ then contributes a creation operator $\hat{a}^{\dagger}_{k}$, which results in
$\frac{\partial \langle \Phi |}
{\partial \phi_k^* (\vec{x})}
= \langle \Phi |
\hat{a}_{k}^{\dagger}
\hat{\Psi} (\vec{x})$. But as the full Hamiltonian $\hat{H}$, which is given in field form by Eq. , is independent of the $M$ orbitals chosen, the functional differentiation of $\hat{H}$ with respect to the orbitals $\{\phi_k\}$ gives zero,
$\frac{\partial \hat{H}}
{\partial \phi_k^* (\vec{x})}
= 0$. Then functional variation of $E_{G^{\mathcal{M}}}$ Eq. with respect to the orbitals $\{\phi_k\}$, combined with the functional constraints Eq. , leads to
\begin{split}
&\frac{\partial E_{G^{\mathcal{M}}}}{\partial \phi_k^* (\vec{x})}
= \\
&\Big[
\sum_{\vec{m} \in \mathcal{M}}
\langle \vec{m} | C_{\vec{m}}^*
\Big]
\hat{a}_k^{\dagger}
\hat{\Psi} (\vec{x})
\hat{H}
\Big[
\sum_{\vec{n} \in \mathcal{M}}
C_{\vec{n}} | \vec{n} \rangle
\Big]
= \sum_{j=1}^{M} \lambda_{kj} \phi_j (\vec{x}) ,
\end{split}
where $\lambda_{jk} = \lambda_{kj}^*$, i.e. the $\lambda_{kj}$ form a Hermitian matrix. Here the method of Lagrange multipliers with complex functional variables is used . Integrating each side over space after multiplication with $\phi_{l}^* (\vec{x})$ yields
\Big[
\sum_{\vec{m} \in \mathcal{M}}
\langle \vec{m} |
C_{\vec{m}}^*
\Big]
\hat{a}_k^{\dagger}
\hat{a}_l
\hat{H}
\Big[
\sum_{\vec{n} \in \mathcal{M}}
C_{\vec{n}}
| \vec{n} \rangle
\Big]
=
\begin{cases}
\lambda_{kl} & \textrm{for } l \leq M \\
0 & \textrm{for } l > M .
\end{cases}
Here, using Eq. and the property that $\vec{m}_{k}^{l} \in \mathcal{M}$ when $k,l \leq M$ and $\vec{m} \in \mathcal{M}$ ,
the undetermined set of Lagrange multipliers $\lambda_{kl}$ becomes related to $E_{G^{\mathcal{M}}}$ by $\lambda_{kl} = E_{G^{\mathcal{M}}} \langle \hat{a}_{kl}^{\dagger} \rangle$ for $k, l \leq M$.
Using Eq. and the bosonic commutation relations between field operators, i.e.
$[\hat{\Psi}(\vec{x}),
\hat{\Psi}^{\dagger}(\vec{x}')]
= \delta \big( \vec{x} - \vec{x}' \big)$,
$[\hat{\Psi}(\vec{x}),
\hat{a}_{k}^{\dagger}]
= \phi_k (\vec{x})$, and so on, Eq.
$\langle \hat{a}_k^{\dagger} \hat{\Psi} (\vec{x}) \hat{H} \rangle
= \sum_{j=1}^{M} \lambda_{kj} \phi_j (\vec{x})$
is explicitly expressed as
\begin{split}
&\sum_{j=1}^{\infty}
\langle
\hat{a}_{kj}^{\dagger}
\rangle
\bigg[
-
\frac{\nabla^2}
{2m}
+
V_{\rm trap}(\vec{x})
\bigg]
\phi_j (\vec{x}) \\
&+
\sum_{p,q,j=1}^{\infty}
\langle
\hat{a}_{kpqj}^{\dagger\dagger}
\rangle
\int \mathrm{d} \vec{x}' ~
\phi_p^* (\vec{x}')
V(\vec{x}, \vec{x}')
\phi_q (\vec{x}')
\phi_j (\vec{x}) \\
&+
\langle
\hat{a}_k^{\dagger}
\hat{H}
\hat{\Psi} (\vec{x})
\rangle
=
\sum_{j=1}^{M} \lambda_{kj} \phi_j (\vec{x}) .
\end{split}
Since annihilation operators of modes above $M$ acting on the truncated state vanish, we can eliminate them and obtain
\begin{split}
&\sum_{j=1}^{M}
\langle
\hat{a}_{kj}^{\dagger}
\rangle
\hat{h}
\phi_j (\vec{x})
+ \sum_{p,q,j=1}^{M}
\langle
\hat{a}_{kpqj}^{\dagger\dagger}
\rangle
\hat{V}_{pq}
\phi_j (\vec{x}) \\
&=
\sum_{j=1}^{M}
\Big[
\lambda_{kj}
-
\langle
\hat{a}_k^{\dagger}
\hat{H}
\hat{a}_j
\rangle
\Big]
\phi_j (\vec{x}) \\
&=
\sum_{j=1}^{M}
\Big[
E_{G^M}
\langle
\hat{a}_k^{\dagger}
\hat{a}_j
\rangle
-
\langle
\hat{a}_k^{\dagger}
\hat{H}
\hat{a}_j
\rangle
\Big]
\phi_j (\vec{x})
\equiv
\sum_{j=1}^{M}
\tilde{\lambda}_{kj}
\phi_j (\vec{x}),
\end{split}
where the new undetermined Lagrange multipliers
$\tilde{\lambda}_{kj}
\equiv
\lambda_{kj}
-
\langle
\hat{a}_k^{\dagger}
\hat{H}
\hat{a}_j
\rangle$, which satisfy
$\tilde{\lambda}_{kj}^*
=
\tilde{\lambda}_{jk}$, are introduced. For the sake of convenience, we abbreviated the single-particle
and interaction operators defined by
\hat{h}
\phi_j (\vec{x})
\equiv
\bigg[
-
\frac{\nabla^2}
{2m}
+
V_{\rm trap}(\vec{x})
\bigg]
\phi_j (\vec{x})
and
\hat{V}_{pq}
\phi_j (\vec{x})
\equiv
\int \mathrm{d} \vec{x}' ~
\phi_p^* (\vec{x}')
V (\vec{x}, \vec{x}')
\phi_q (\vec{x}')
\phi_j (\vec{x}) .
### Method of steepest constrained descent
To find the tentative ground state $| G^{M} \rangle$, we have to find the coefficients $C_{\vec{n}}$ and the complex orbital functions $\phi_k (\vec{x})$ satisfying Eq. and simultaneously. Additionally, the solutions must satisfy all $(1 + M^2)$ constraints Eq. , . The number of real values which we should find is $2\frac{(N+M-1)!}{N!(M-1)!}$ for the set of the $C_{\vec{n}}$, $2M$ real functions for the $\phi_k (\vec{x})$, and $(1 + M^2)$ real values for the undetermined Lagrange multipliers $E_{G^M}$ and the $\lambda_{kj}$. The number of given real equations therefore is $2\frac{(N+M-1)!}{N!(M-1)!}$ for Eq. , $2M$ real functional equations for Eq. , and $(1 + M^2)$ for Eq. , .
Apart from the large number of variables, what makes matters even more complicated is that the equations , , , are coupled together. Finding a self-consistent solution is therefore obviously a very difficult problem.
#####/ %is much more difficult problem than finding equations that the solution must satisfy.
%So we have to know, in addition, the way to find $G^M \rangle$, actually $C_{\vec{n}}$'s and $\phi_k (\vec{x})$'s from given equations.
In ref. , the authors started from an initial guess and then iteratively, with a convergence check, obtained a solution. A few years later, in ref. , applying the Wick rotation $it \rightarrow \tau$ to the equations of motion, they introduced the so-called imaginary time propagation. They claimed that, after a sufficiently long propagation time, this reduces an arbitrary initial many-body state to the ground state.
The imaginary time evolution
$i \frac{\partial}{\partial t} | \Phi \rangle
= \hat{H} | \Phi \rangle
\Rightarrow
- \frac{\partial}{\partial \tau} | \Phi \rangle
= \hat{H} | \Phi \rangle$
implies that
$e^{-i \hat{H} t} | \Phi \rangle
\Rightarrow
e^{-\hat{H} \tau}
| \Phi \rangle$.
As $\tau$ goes to infinity, this indicates that only the ground state survives and the excited states no longer contribute: the contribution of each excited state decays exponentially in $\tau$, with a rate proportional to its energy difference from the ground state.
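As a toy illustration of this filtering (with a small Hermitian matrix of our own choosing standing in for $\hat{H}$, not taken from the paper), a first-order discretization of the imaginary-time step with renormalization converges to the ground state:

```python
import numpy as np

# Toy 2x2 "Hamiltonian" (illustrative only).
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])
ground = np.linalg.eigh(H)[1][:, 0]        # exact ground state, for comparison

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)  # arbitrary normalized initial state
dtau = 0.01
for _ in range(20000):
    psi = psi - dtau * (H @ psi)           # first-order step of exp(-H dtau)
    psi /= np.linalg.norm(psi)             # restore normalization after the step

overlap = abs(ground @ psi)                # approaches 1 as tau grows
```

The excited-state admixture shrinks each step by roughly $1 - \delta\tau\,(E_1 - E_0)$, the discrete analogue of the exponential decay stated above.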
#####/ %(The reason why we introduce another method.)
%No flexibility.
%Orbitals must be orthonormalized. But I think imageinary time propagation does not guarantee it. Although normalization of C_n can change, keeping orbitals' orthonormality might be safe.
Here, we present the method of steepest {\it constrained} descent, the understanding of which is fundamental to the theory and design of higher-order algorithms of optimization {\it with constraints}. Since our problem requires optimization with {\it constraints on the variables} Eq. , , a simple-minded gradient (or, as it is more commonly termed, steepest) descent method does not work in our case. There are already numerous higher-order optimization algorithms {\it without constraints}, such as Newton's method, the conjugate gradient method, the BFGS (Broyden-Fletcher-Goldfarb-Shanno) method, and the Barzilai-Borwein method . But these methods are not applicable to our problem, since the function to be minimized {\it with constraints on the variables} is generally {\it unbounded without constraints}. For example, the energy $\langle \hat{H} \rangle$ with an unnormalized state $| \Phi \rangle$ can be zero, or $\pm\infty$. So when the actual ground-state energy is positive, the global minimum is where all $C_{\vec{n}} = 0$. In this case, e.g. Newton's method will simply lead to an incorrect solution. In other words, since our final point is not a global minimum without constraints, the function cannot be approximated as a quadratic form $\big[ f(\bf{x}) = \frac{1}{2} \bf{x}^{T} A \bf{x} + \bf{b}^{T} \bf{x} \big]$ around the minimum point (with constraints included). So all of the above-mentioned methods,
which are based on expanding the function around the minimum to second order, will fail.
#####/ %Anyway the understanding of steepest {\it constrained} descent is fundamental to the theory and design of higherorder algorithms of optimization {\it with constraints}.
On the other hand, the method of steepest {\it constrained} descent guarantees that any given state is propagated to the neighboring lowest-value point along the steepest {\it constrained} path for given variational parameters. Though it propagates a state only to a local neighboring minimum, we can in many cases find the global minimum, i.e. the ground state, from a well-chosen initial state together with well-chosen variational parameters. Although this does not move the state along the shortest path, the very large number of degrees of freedom in the choice of variational parameters, or the sequential processing (separation) of variations, can compensate for this in many cases. Furthermore, the step size can be determined by a one-point or two-point method. The flexibility of the method of steepest constrained descent will therefore be beneficial for finding the ground state.
Let us see the process in more detail. The first step is to find (by educated guess) an appropriate initial state, specifying the coefficients $C_{\vec{n}}$ and the $M$ orbitals which we believe are appropriate to approximately describe the ground state of a given system. %It doesn't have to be accurate, though good guess would be definitely better to save time and calculation costs.
From this initial guess, the state is propagated as follows. For the expansion coefficients $C_{\vec{n}}$, using the steepest constrained descent ,
\frac{d C_{\vec{n}}}
{d \tau}
=
-
\Delta_{C}(\tau)
\bigg[
\langle \vec{n} |
\hat{H}
\sum_{\vec{m}}
C_{\vec{m}} | \vec{m} \rangle
-
\lambda
C_{\vec{n}}
\bigg],
where $\Delta_{C}(\tau)$ is an arbitrary positive function of $\tau$ which is introduced to satisfy
$\sum_{\vec{n}}
\frac{d C_{\vec{n}}^*}{d\tau}
\frac{d C_{\vec{n}}}{d\tau}=$ const
at a certain given instant $\tau$, and can therefore be chosen in a convenient way to save time and calculation costs. Since the state must satisfy $\sum_{\vec{n}} C_{\vec{n}}^* C_{\vec{n}} = 1$, i.e.
$\sum_{\vec{n}}
\big[
C_{\vec{n}}^*
\frac{d C_{\vec{n}}}
{d \tau}
+
\frac{d C_{\vec{n}}^*}
{d \tau}
C_{\vec{n}}
\big]
=
0$, $\lambda$ turns out to be $\langle \hat{H} \rangle$.
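A minimal numerical sketch of this constrained descent for the coefficients (the tridiagonal matrix standing in for $\langle \vec{n} | \hat{H} | \vec{m} \rangle$ is our own toy example; $\lambda = \langle \hat{H} \rangle$ is re-evaluated at each step, and the finite step is followed by a projection back onto the normalization constraint):

```python
import numpy as np

# Toy Hermitian matrix standing in for <n|H|m> (our own example).
H = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])

C = np.ones(3, dtype=complex) / np.sqrt(3.0)  # normalized initial guess
step = 0.05                                   # plays the role of Delta_C * dtau

for _ in range(4000):
    HC = H @ C
    lam = (np.conj(C) @ HC).real     # lambda = <H>, fixed by the norm constraint
    C = C - step * (HC - lam * C)    # steepest constrained descent step
    C /= np.linalg.norm(C)           # project back onto sum |C_n|^2 = 1

energy = (np.conj(C) @ H @ C).real   # converges to the lowest eigenvalue
```

The subtraction of $\lambda C_{\vec{n}}$ makes the descent direction tangent to the constraint sphere, so only the small residual drift of a finite step has to be projected away.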
As another option, we can use a polar representation for $C_{\vec{n}}$. Representing the complex variable $C_{\vec{n}}$ by an Euler representation with a radius $\xi_{\vec{n}}$ and an angle $\theta_{\vec{n}}$ gives
$C_{\vec{n}}
= \xi_{\vec{n}}
e^{i \theta_{\vec{n}}}$.
Then the constraint Eq. becomes
$\sum_{\vec{n}} \xi_{\vec{n}}^2 = 1$, restricting only the radial component of the complex variables $C_{\vec{n}}$. Since we do not have to confine the variable change into the specific form
$\sum_{\vec{n}}
\big[
\big(
\frac{d \xi_{\vec{n}}}{d\tau}
\big)^2
+
\xi_{\vec{n}}^2
\big(
\frac{d \theta_{\vec{n}}}{d\tau}
\big)^2
\big] =$ const,
separating the two variable sets can be much more efficient in this case. That is, we set
$\frac{d \xi_{\vec{n}}}{d\tau} = 0$ and
$\sum_{\vec{n}}
\big(
\frac{d \theta_{\vec{n}}}{d\tau}
\big)^2 =$ const
for some given $\tau$ time spans, which would be chosen in a computationally favorable way, and
$\frac{d \theta_{\vec{n}}}{d\tau} = 0$
and
$\sum_{\vec{n}}
\big(
\frac{d \xi_{\vec{n}}}{d\tau}
\big)^2 =$ const for the other $\tau$ spans.
Then the method of steepest constrained descent gives
\frac{d \xi_{\vec{n}}}
{d \tau} = 0 ,
\qquad %%
\frac{d \theta_{\vec{n}}}
{d \tau}
=
-
\Delta_{\theta}(\tau)
\Im
\big(
\xi_{\vec{n}}
e^{-i \theta_{\vec{n}}}
\langle \vec{n} |
\hat{H}
| \Phi \rangle
\big),
for some given $\tau$ spans, and
\frac{d \theta_{\vec{n}}}
{d \tau}
= 0 ,
\qquad %%
\frac{d \xi_{\vec{n}}}
{d \tau}
=
-
\Delta_{\xi}(\tau)
\Big[
\Re
\big(
e^{-i \theta_{\vec{n}}}
\langle \vec{n} |
\hat{H}
| \Phi \rangle
\big)
-
\lambda
\xi_{\vec{n}}
\Big]
for the other following $\tau$ intervals.
Here $\Re$ and $\Im$ denote the real and imaginary part of a complex number, respectively. Using $\sum_{\vec{n}} \xi_{\vec{n}}^2 = 1$, and therefore
$\sum_{\vec{n}}
2
\xi_{\vec{n}}
\frac{d \xi_{\vec{n}}}
{d \tau}
=
0$, then
$\lambda$ becomes
$\lambda
= \langle
\hat{H}
\rangle$ too.
Dealing with $\xi_{\vec{n}}$ and $\theta_{\vec{n}}$ separately, we propagate the two variable sets successively and iteratively until convergence is achieved. With any sequence, they will finally approach the minimum. %With good choice of variables and topology, it will approach the minimum more quickly. \\
For the orbitals $\phi_k (\vec{x})$, the method of steepest constrained descent gives
\begin{split}
&\frac{d \phi_{k} (\vec{x})}
{d \tau}
=
-
\Delta_{\phi_k} (\tau)
\Big[
\langle
\hat{a}_{k}^{\dagger}
\hat{\Psi} (\vec{x})
\hat{H}
\rangle
-
\sum_{j=1}^{M}
\lambda_{kj}
\phi_{j} (\vec{x})
\Big] \\
&=
-
\Delta_{\phi_k} (\tau)
\bigg[
\sum_{j=1}^{M}
\langle
\hat{a}_{kj}^{\dagger}
\rangle
\hat{h}
\phi_j (\vec{x})
+ \sum_{p,q,j=1}^{M}
\langle
\hat{a}_{kpqj}^{\dagger\dagger}
\rangle
\hat{V}_{pq}
\phi_j (\vec{x}) \\
&~~~~~~~~~~~~~~~~~~
-
\sum_{j=1}^{M}
\tilde{\lambda}_{kj}
\phi_j (\vec{x})
\bigg].
\end{split}
If we propagate the orbitals separately one after another,
i.e. only the $k$th orbital changes within a certain period, the constraints become
$\int \mathrm{d}\vec{x}~
\phi_l^* (\vec{x})
\frac{d \phi_k (\vec{x})}{d\tau}
= 0$ for $l \neq k$ and
$\int \mathrm{d}\vec{x}~
\big(
\phi_k^* (\vec{x})
\frac{d \phi_k (\vec{x})}{d\tau}
+
\frac{d \phi_k^* (\vec{x})}{d\tau}
\phi_k (\vec{x})
\big)
= 0$.
Then the undetermined Lagrange multipliers become
\tilde{\lambda}_{kl}
=
\sum_{j=1}^{M}
\langle
\hat{a}_{kj}^{\dagger}
\rangle
\epsilon_{lj}
+ \sum_{p,q,j=1}^{M}
\langle
\hat{a}_{kpqj}^{\dagger\dagger}
\rangle
V_{lpqj}
and
\tilde{\lambda}_{kk}
=
\Re \Big(
\sum_{j=1}^{M}
\langle
\hat{a}_{kj}^{\dagger}
\rangle
\epsilon_{kj}
+ \sum_{p,q,j=1}^{M}
\langle
\hat{a}_{kpqj}^{\dagger\dagger}
\rangle
V_{kpqj}
\Big) .
Separating the propagation of the orbitals, i.e. propagating the orbitals one after another independently, makes the process much easier and more comprehensible. %And the computational error which comes from finite step size can be handled by cutting the nonorthogonal part of the only propagating orbital.
In a numerical implementation, the method of steepest {\it constrained} descent is performed in discrete steps. Then a projection onto the set of constraints must be employed at every discrete step. Using just a one-point step-size method, we can use the Lagrange multipliers $\lambda_{kj}$ as a degree of freedom to save computational cost, since we do not need to consider the propagation outside of the constraints. As an example, we can set
$\lambda_{kj} = \langle \hat{a}_k^{\dagger} \hat{H} \hat{a}_j \rangle$ in Eq. , i.e. $\tilde{\lambda}_{kj} = 0$, so that we do not need to calculate the quantities
$\langle \hat{a}_k^{\dagger} \hat{H} \hat{a}_j \rangle$. However, to ensure fast convergence, the optimal step size is determined with a two-point or even four-point method. The role of the Lagrange multipliers is crucial then, because we mix two gradients at different points. Because of the finite step size, the propagation outside of the constraints at one point can be the direction of the steepest {\it constrained} descent at another point.
The task in an implementation therefore is to combine the proper directions with appropriately chosen Lagrange multipliers at different points, to optimize the speed of convergence.
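In discrete steps, the projection back onto the orthonormality constraints can be realized, for instance, by a QR (Gram-Schmidt) re-orthonormalization of the grid-discretized orbitals; the following sketch uses random toy data of our own, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Columns: M = 3 orthonormal "orbitals" discretized on a 50-point grid (toy data).
phi, _ = np.linalg.qr(rng.standard_normal((50, 3)))

# A finite descent step generically leaves the constraint manifold...
phi_stepped = phi + 0.05 * rng.standard_normal(phi.shape)

# ...so we project back by re-orthonormalizing the columns (Gram-Schmidt via QR).
phi_proj, _ = np.linalg.qr(phi_stepped)

deviation = np.max(np.abs(phi_proj.T @ phi_proj - np.eye(3)))
```

For small steps the QR projection perturbs each orbital only at second order in the step size, which is why it can be applied after every discrete update without spoiling the descent direction.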
## Control of truncated many-body evolution
### Evaluating the error of truncated many-body evolution
An instantaneous error is expressed by Eq. . Minimizing this error with a state change under the truncation Eq. gives us the appropriate truncated many-body evolution. This offers, as a major benefit of the present approach, a quantitative value, the error, which indicates how accurately the truncated evolution describes the exact one.
Explicitly expressing the error, we have
\begin{split}
&\langle \Phi |
\big[
\hat{H} - i \partial_t
\big]^{\dagger}
\big[
\hat{H} - i \partial_t
\big]
| \Phi \rangle \\
&=
\sum_{\vec{n} \in \mathcal{M}(t)}
\bigg[
\langle \vec{n} |
C_{\vec{n}}^*
\hat{H}
+
\langle \vec{n} |
\big(i
\partial_t
C_{\vec{n}}^*
\big) \\
&~~~~~~~~~~~~~~~~
+
\sum_{i=1}^{M}
\langle \vec{n} |
C_{\vec{n}}^*
\hat{a}_{i}^{\dagger}
\int \mathrm{d} \vec{x}
\big(i
\partial_t
\phi_i^* (\vec{x},t)
\big)
\hat{\Psi} (\vec{x})
\bigg] \\
&~
\times
\sum_{\vec{m} \in \mathcal{M}(t)}
\bigg[
\hat{H}
C_{\vec{m}}
| \vec{m} \rangle
-
\big(i
\partial_t
C_{\vec{m}}
\big)
| \vec{m} \rangle \\
&~~~~~~~~~~~~~~~~
-
\sum_{j=1}^{M}
\int \mathrm{d} \vec{x}'
\big(i
\partial_t
\phi_j (\vec{x}',t)
\big)
\hat{\Psi}^{\dagger} (\vec{x}')
\hat{a}_{j}
C_{\vec{m}}
| \vec{m} \rangle
\bigg] .
\end{split}
We minimize this instantaneous error, varying the complex variables
$\big( \partial_t C_{\vec{n}} \big)$ and
$\big( \partial_t \phi_i (\vec{x},t) \big)$ subject to the $(1 + M^2)$ constraints
\begin{split}
&\partial_t
\bigg[
\sum_{\vec{n}} C_{\vec{n}}^* (t) C_{\vec{n}} (t) = 1
\bigg] \\
&\Rightarrow \quad
\sum_{\vec{n}}
\bigg[
\big(
\partial_t C_{\vec{n}}^* (t)
\big)
C_{\vec{n}} (t)
+
C_{\vec{n}}^* (t)
\big(
\partial_t C_{\vec{n}} (t)
\big)
\bigg]
= 0
\end{split}
and from the orthonormality condition
\begin{split}
&\partial_t
\bigg[
\int \mathrm{d} \vec{x}
\phi_i^* (\vec{x},t)
\phi_j (\vec{x},t)
= \delta_{ij}
\bigg]
\Rightarrow \quad
\int \mathrm{d} \vec{x}
\bigg[
\big(
\partial_t \phi_i^* (\vec{x},t)
\big) \\
&\times
\phi_j (\vec{x},t)
+
\phi_i^* (\vec{x},t)
\big(
\partial_t \phi_j (\vec{x},t)
\big)
\bigg]
= 0 .
\end{split}
Eq. requires probability conservation of the state itself, and Eq. requires conservation of the orthonormality of the orbitals. Using the expression Eq. , Eq. can be expressed as the hermiticity condition $t_{ji}^* = t_{ij}$.
Variation with respect to $\partial_t C_{\vec{n}}^*$ leads to
\langle \vec{n} | i
\big[
\hat{H} - i \partial_t
\big]
| \Phi \rangle
=
\lambda (t) C_{\vec{n}}
resulting in
i \partial_t C_{\vec{n}}
=
\langle \vec{n} |
\big[
\hat{H} -
\sum_{i=1}^{\infty}
\sum_{j=1}^{M}
t_{ij} \hat{a}_{ij}^{\dagger}
\big]
| \Phi \rangle
+
i \lambda (t) C_{\vec{n}}.
As the constraint Eq. enforces $\lambda (t) = 0$,
% for the time evolution
\langle \vec{n} |
\big[
\hat{H}
-
i \partial_t
\big]
| \Phi \rangle
= 0,
and therefore the time evolution of the expansion coefficients takes the form
\begin{split}
i \partial_t
C_{\vec{n}}
=
\langle \vec{n} |
\big[
\hat{H} - \hat{t}~
\big]
| \Phi \rangle
\end{split}
where $\vec{n} \in \mathcal{M}(t)$ and $\hat{t} = \sum_{i,j} t_{ij} \hat{a}_{ij}^{\dagger}$.
Variation with respect to $\partial_t \phi_k^* (\vec{x},t)$ gives
\langle \Phi | i
\hat{a}_{k}^{\dagger}
\hat{\Psi} (\vec{x})
\big[
\hat{H} - i \partial_t
\big]
| \Phi \rangle
=
\sum_{l=1}^{M}
\lambda_{kl} \phi_{l} (\vec{x},t) .
After multiplying both sides with $\phi_l^* (\vec{x},t)$ and integrating, one finds
\langle \Phi | i
\hat{a}_{k}^{\dagger}
\hat{a}_{l}
\big[
\hat{H} - i \partial_t
\big]
| \Phi \rangle
=
\begin{cases}
\lambda_{kl} (t) &\text{ if } l \leq M \\
0 &\text{ if } l>M .
\end{cases}
Since $\langle\vec{n}| \hat{a}_{kl}^{\dagger}$ belongs to ${\mathcal M}(t)$ whenever $\vec{n} \in \mathcal{M}(t)$ and $k,l \leq M$, all $\lambda_{kl} (t)$ here become zero too with the help of Eq. . So for any $k$ and $l$,
% for the time evolution
\langle \Phi |
\hat{a}_{kl}^{\dagger}
\big[ \hat{H} - i \partial_t \big]
| \Phi \rangle
= 0.
Expanding the above equation for $l>M$,
\langle
\hat{a}_{kl}^{\dagger} \hat{H}
\rangle
=
\sum_{j=1}^{M}
\langle
\hat{a}_{kj}^{\dagger}
\rangle
\int \mathrm{d}\vec{x}
\phi_l^* (\vec{x},t)
\big(
i \partial_t \phi_j (\vec{x},t)
\big) .
Since the SPDM can be noninvertible, we reduce the density matrix by eliminating the unoccupied orbitals, for which the eigenvalues of the SPDM become zero (relative to ${\cal O}(N)$ in the limit $N\rightarrow\infty$),
after diagonalizing the SPDM. In the process of diagonalizing the SPDM, the orbitals are unitarily transformed, so that the essentially unoccupied orbitals can be found and eliminated. Introducing the inverse of this reduced density matrix
$\langle \rho_{ki} \rangle
\equiv
\langle \hat{a}_{ki}^{\dagger} \rangle
\equiv
\langle \hat{a}_{k}^{\dagger} \hat{a}_{i} \rangle$
when the ${\cal O}(N)$
occupied orbitals exist up to the $M_1$th orbital ($\sum_{i=1}^{M_1} \langle \rho \rangle_{ki}^{-1} \langle \rho_{ij} \rangle = \delta_{kj}$) and using the completeness relation $\sum_{l=1}^{\infty} \phi_{l}^* (\vec{x}',t) \phi_{l} (\vec{x},t) = \delta (\vec{x}' - \vec{x})$, the evolution equation of the orbitals acquires the form
\begin{split}
&i \partial_t
\phi_k (\vec{x},t)
=
\sum_{l=1}^{M}
t_{lk}
\phi_{l} (\vec{x},t)
+
\sum_{l=M+1}^{\infty}
\sum_{i=1}^{M_1}
\langle
\rho
\rangle_{ki}^{-1}
\langle
\hat{a}_{i}^{\dagger}
\hat{a}_{l}
\hat{H}
\rangle
\phi_{l} (\vec{x},t) \\
&=
\sum_{l=1}^{M}
t_{lk}
\phi_{l} (\vec{x},t)
+
\sum_{l=M+1}^{\infty}
\sum_{i=1}^{M_1}
\langle
\rho
\rangle_{ki}^{-1}
\langle
\bigg[
\sum_{n=1}^{M_1}
\epsilon_{ln}
\hat{a}_{in}^{\dagger}
+
\sum_{n,p,q=1}^{M_1}
V_{lnpq}
\hat{a}_{inpq}^{\dagger\dagger}
\bigg]
\rangle
\phi_{l} (\vec{x},t) \\
&=
\sum_{l=1}^{M}
t_{lk}
\phi_{l} (\vec{x},t)
+
\sum_{l=M+1}^{\infty}
\bigg[
\epsilon_{lk}
+
\sum_{i,n,p,q=1}^{M_1}
V_{lnpq}
\langle
\rho
\rangle_{ki}^{-1}
\langle
\hat{a}_{inpq}^{\dagger\dagger}
\rangle
\bigg]
\phi_{l} (\vec{x},t)
\end{split}
for $k \leq M_1$.
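The reduction of the SPDM before inversion amounts to a truncated eigendecomposition; the following sketch (function name, threshold, and toy matrix are ours) shows one way this could be coded:

```python
import numpy as np

def reduced_inverse(rho, tol=1e-8):
    """Invert a Hermitian SPDM on the subspace of its occupied natural
    orbitals, discarding eigenvalues below tol (essentially unoccupied)."""
    evals, U = np.linalg.eigh(rho)
    keep = evals > tol
    rho_inv = (U[:, keep] / evals[keep]) @ U[:, keep].conj().T
    return rho_inv, int(np.sum(keep))  # pseudo-inverse and the count M_1

# Toy rank-deficient SPDM: 2 occupied natural orbitals out of 3 (our example).
v1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
v2 = np.array([0.0, 0.0, 1.0])
rho = 9.0 * np.outer(v1, v1) + 1.0 * np.outer(v2, v2)

rho_inv, M1 = reduced_inverse(rho)  # M1 = 2 orbitals survive the reduction
```

The returned matrix satisfies the pseudo-inverse relations on the kept subspace, which is exactly the property $\sum_{i=1}^{M_1} \langle \rho \rangle_{ki}^{-1} \langle \rho_{ij} \rangle = \delta_{kj}$ used in the derivation above.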
Here, we divide the evolution of the orbitals into two parts. We call the $l\leq M$ part of Eq. the inner rotation, and the $l>M$ part of Eq. the rotation toward the outside of the sub-Hilbert space. Since the sub-Hilbert space spanned by the $M$ orbitals, $\sum_{i=1}^{M} c_{i} \phi_{i} (\vec{x},t)$, does not change under the inner rotation, only a rotation toward the outside deforms the sub-Hilbert space. For the evolution inside the sub-Hilbert space, i.e. the inner rotation, we simply use $t_{ij}$ defined in ,
which can be any Hermitian matrix. Using Eq. , , , the
expression for the error Eq. can be strongly simplified:
\begin{split}
&\langle \Phi |
\big[ \hat{H} - i \partial_t \big]^{\dagger}
\big[ \hat{H} - i \partial_t \big]
| \Phi \rangle
= \langle \Phi |
\big[ \frac{1}{2} \sum_{i,j,k,l=1}^{\infty} V_{ijkl} \hat{a}_{ijkl}^{\dagger\dagger} \big]
\big[ \hat{H} - i \partial_t \big]
| \Phi \rangle
= \frac{1}{2} \sum_{i,j=1}^{M_1} \sum_{k,l=1}^{\infty} V_{ijkl}
\langle \Phi | \hat{a}_{ijkl}^{\dagger\dagger}
\big[ \hat{H} - i \partial_t \big]
| \Phi \rangle \\
&= \!\sum_{i,j,k,n,r,s,p,q=1}^{M_1} \sum_{l=M+1}^{\infty}\!
V_{ijkl} V_{lspq}
\langle \hat{a}_{ijkn}^{\dagger\dagger} \rangle
\langle \rho \rangle_{nr}^{-1}
\langle \hat{a}_{rspq}^{\dagger\dagger} \rangle
+\!\sum_{i,j,k,s,p,q=1}^{M_1} \sum_{l=M+1}^{\infty}\!
V_{ijkl} V_{lspq}
\langle \hat{a}_{ijskpq}^{\dagger\dagger\dagger} \rangle \\
&~~~
+\frac{1}{2}\!\sum_{i,j,p,q=1}^{M_1} \sum_{k=1}^{M} \sum_{l=M+1}^{\infty}\!
V_{ijkl} V_{lkpq}
\langle \hat{a}_{ijpq}^{\dagger\dagger} \rangle
%\\&~~~
+\frac{1}{2}
\sum_{i,j,p,q=1}^{M_1} \sum_{k=1}^{\infty} \sum_{l=M+1}^{\infty}\!
V_{ijkl} V_{lkpq}
\langle \hat{a}_{ijpq}^{\dagger\dagger} \rangle .
\end{split}
The above equation represents our main result, rendering the error of the many-body quantum evolution upon truncation systematically computable. As clearly seen, the error stems entirely from the interactions. In other words, the error depends neither on the choice of $t_{ij}$ nor on the single-particle energy matrix $\epsilon_{ij}$ for any given truncated initial state, provided the evolution of the state, $\partial_t | \Phi \rangle$, is optimally taken as in Eq. , , .
### Determining the number of orbitals dynamically
When the error becomes large or, alternatively, when we aim at describing the system more precisely, we have to increase the number of orbitals from $M_1$ to $M$. The $(M - M_1)$ additional orbitals
then can be determined by variation of the error with respect to $\phi_{u} (\vec{x},t)$, where $M_1 < u \leq M$, subject to the orthonormalization condition
$\int \mathrm{d}\vec{x} ~
\phi_{u}^{*} (\vec{x},t)
\phi_{v} (\vec{x},t)
= \delta_{uv}$. The method of Lagrange multipliers for functional variables gives the stationarity condition
\begin{split}
&\sum_{i,j,k,n,r,s,p,q=1}^{M_1}
V_{ijku} \big( \hat{V}_{sp} \phi_{q} (\vec{x},t) \big)
\langle \hat{a}_{ijkn}^{\dagger\dagger} \rangle
\langle \rho \rangle_{nr}^{-1}
\langle \hat{a}_{rspq}^{\dagger\dagger} \rangle \\
&-
\sum_{i,j,p,q=1}^{M_1} \sum_{k,s=1}^{M}
V_{ijku} \big( \hat{V}_{sp} \phi_{q} (\vec{x},t) \big)
\langle \hat{a}_{ijk}^{\dagger\dagger} \hat{a}_{spq}^{\dagger} \rangle \\
&=
\sum_{v=1}^{M}
\mu_{uv} \phi_{v} (\vec{x},t) .
\end{split}
While this is an equation for the additional orbitals, they are difficult to obtain from it directly; the best we can do is to guess some orbitals that approximately satisfy it.
Therefore, we use the method of steepest constrained descent again. From an initial trial orbital, we propagate the orbital according to
\begin{split}
&\frac{d \phi_u (\vec{x})}{d \tau}
=
-
\Delta_{\phi_u} (\tau)
\bigg[
\sum_{i,j,k,n,r,s,p,q=1}^{M_1}
V_{ijku} \big( \hat{V}_{sp} \phi_{q} (\vec{x},t) \big) \\
&\times
\langle \hat{a}_{ijkn}^{\dagger\dagger} \rangle
\langle \rho \rangle_{nr}^{-1}
\langle \hat{a}_{rspq}^{\dagger\dagger} \rangle
-
\sum_{i,j,p,q=1}^{M_1} \sum_{k,s=1}^{M}
V_{ijku} \big( \hat{V}_{sp} \phi_{q} (\vec{x},t) \big) \\
&\times
\langle \hat{a}_{ijk}^{\dagger\dagger} \hat{a}_{spq}^{\dagger} \rangle
-
\sum_{v=1}^{M}
\mu_{uv} \phi_{v} (\vec{x},t)
\bigg]
\end{split}
so that the error becomes minimized with this additional orbital. If we propagate the orbitals separately and iteratively one after another, i.e. only the $u$th orbital changes along $\tau$, the constraints become
$\int \mathrm{d}\vec{x}~
\phi_v^* (\vec{x})
\frac{d \phi_u (\vec{x})}{d\tau}
= 0$ for $v \neq u$ and
$\int \mathrm{d}\vec{x}~
\big(
\phi_u^* (\vec{x})
\frac{d \phi_u (\vec{x})}{d\tau}
+
\frac{d \phi_u^* (\vec{x})}{d\tau}
\phi_u (\vec{x})
\big)
= 0$.
Then the undetermined Lagrange multipliers become
\begin{split}
\mu_{uv}
=
&\sum_{i,j,k,n,r,s,p,q=1}^{M_1}
V_{ijku} V_{vspq}
\langle \hat{a}_{ijkn}^{\dagger\dagger} \rangle
\langle \rho \rangle_{nr}^{-1}
\langle \hat{a}_{rspq}^{\dagger\dagger} \rangle
\\&
-
\sum_{i,j,p,q=1}^{M_1} \sum_{k,s=1}^{M}
V_{ijku} V_{vspq}
\langle \hat{a}_{ijk}^{\dagger\dagger} \hat{a}_{spq}^{\dagger} \rangle ,
\end{split}
where the orbital index ranges are constrained to be $M_1 < u \leq M$ and $1 \leq v \leq M$.
%Explicit expression for the error in terms of correlation functions/orbitals/expansion coefficients?
Since the inner rotation can be chosen arbitrarily, we can take the single-particle energy matrix,
$t_{ij}=\epsilon_{ij}$, which renders the result in a simple form.
The evolution Eq. for the expansion coefficients becomes
i \partial_t C_{\vec{n}}
=
\left\langle \vec{n} \right|
\frac{1}{2}
\sum_{i,j=1}^{M}
\sum_{k,l=1}^{M_1}
V_{ijkl}
\hat{a}_{ijkl}^{\dagger\dagger}
\left| \Phi \right\rangle .
This implies the desired property that, when the interaction is turned off, the coefficients $C_{\vec{n}}$
do not change at all. The Schr\"odinger equation for the orbitals, Eq. , becomes
\begin{split}
&i \partial_t \phi_k (\vec{x},t)
=
\hat{h} \phi_k (\vec{x},t) \\
&+
\sum_{l=M+1}^{\infty}
\sum_{i,n,p,q=1}^{M_1}
V_{lnpq}
\langle \rho \rangle_{ki}^{-1}
\langle \hat{a}_{inpq}^{\dagger\dagger} \rangle
\phi_l (\vec{x},t)
\end{split}
for $k \leq M_1$. The projection toward the outside of the sub-Hilbert space $\mathcal{M}$ acts only on the interaction term. Thus the option of taking $t_{ij}=\epsilon_{ij}$ for the inner rotation exhibits the effect of the interaction term in this explicit manner.
For $M_1 < k \leq M$, the evolution of the additional orbitals can be chosen in any convenient way, since the time derivatives of the additional orbitals do not occur in the error measure.
The only constraint is the inner rotation. As we have chosen
$t_{lk} \big( \equiv \int \mathrm{d}\vec{x} \phi_l^* (\vec{x},t) i \partial_t \phi_k (\vec{x},t) \big)$
for $M_1 < l \leq M$ and $k \leq M_1$
to be $\epsilon_{lk}$, $t_{kl}$ should be $t_{lk}^* = \epsilon_{lk}^* = \epsilon_{kl}$ to satisfy the orthonormality constraint Eq. .
Taking the rotation toward the outside of the sub-Hilbert space to be also equal to the single-particle energy matrix, $\epsilon_{kl}$ for $k>M$, the evolution of the additional orbital $\phi_l (\vec{x},t)$ for $M_1<l\leq M$
is determined by the simple equation
i \partial_t \phi_l (\vec{x},t)
= \hat{h} \phi_l (\vec{x},t),
while the additional orbitals are found from Eq. .
We finally emphasize again that the key difference from the MCTDHB approach is that, in our approach, additional macroscopically occupied orbitals can be found during the dynamical evolution.
By this means, we can handle the exceptional case when the SPDM is not invertible, and increase the number of orbitals under any given circumstances and boundary conditions.
## Numerical Implementation
Here we present simple numerical implementations of our method. For simplicity of the implementation, we investigate a system in an infinite-well potential. For the description of the orbitals, we can then use a DVR (discrete variable representation) with a finite number $\bar{M}$ of sinusoidal basis functions. The $i$th orbital $\phi_i$ is described by the finite set of complex variables $U_{\bar{k}i}$ through
\phi_i (\vec{x},t)
=
\sum_{\bar{k}=1}^{\bar{M}}
\phi_{\bar{k}} (\vec{x})
U_{\bar{k}i} (t)
\quad\text{with}\quad
\phi_{\bar{k}} (\vec{x})
=
\sqrt{\frac{2}{L}}
\sin \Big( \frac{\bar{k} \pi}{L} x \Big)
where $L$ is the width of the infinite well potential. Then the differentiations and integrations over the orbitals will be represented by matrices. The singleparticle matrix elements are
\begin{split}
\epsilon_{ij}
&= \int \mathrm{d} \vec{x} ~~
\phi_i^* (\vec{x})
\bigg[
-
\frac{\nabla^2}{2m}
+
V_{\rm trap} (\vec{x})
\bigg]
\phi_j (\vec{x}) \\
&=
\sum_{\bar{k}, \bar{l}=1}^{\bar{M}}
U_{\bar{k}i}^{*}
\epsilon_{\bar{k}\bar{l}}
U_{\bar{l}j}
\end{split}
where
\epsilon_{\bar{k}\bar{l}}
=
\frac{\pi^2 \bar{k}^2}{2m L^2}
\delta_{\bar{k}\bar{l}}
+
\int \mathrm{d} \vec{x} ~
\phi_{\bar{k}}^* (\vec{x})
V_{\rm trap} (\vec{x})
\phi_{\bar{l}} (\vec{x}) .
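A sketch of building $\epsilon_{\bar{k}\bar{l}}$ in the sine basis with $\hbar = m = 1$ (the basis size, grid resolution, and the flat-box test potential are our own toy choices; the potential integral is done by grid quadrature, exploiting that the basis vanishes at the walls):

```python
import numpy as np

L, Mbar = 1.0, 8                     # well width and sine-basis size (toy values)
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
kbar = np.arange(1, Mbar + 1)
basis = np.sqrt(2.0 / L) * np.sin(np.outer(kbar, np.pi * x / L))  # rows: phi_kbar

def eps_matrix(V_trap, m=1.0):
    """eps_{kl} = (pi^2 k^2 / 2 m L^2) delta_{kl} + <k|V_trap|l>, with hbar = 1.
    The potential matrix element is a grid sum, exact to quadrature accuracy."""
    kin = np.diag(np.pi**2 * kbar**2 / (2.0 * m * L**2))
    pot = (basis * V_trap(x)) @ basis.T * dx
    return kin + pot

eps_flat = eps_matrix(lambda x: np.zeros_like(x))  # flat box: kinetic part only
```

For the flat box the matrix is purely the diagonal kinetic term, and the discretized basis reproduces the orthonormality of the sine functions on the grid.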
When the trap potential varies in time only by an overall factor, i.e. $V_{\rm trap} (\vec{x},t) = h(t) V_{\rm trap} (\vec{x})$, the integrations, which dominate the calculation cost, need to be performed only once at the beginning, and the stored integration data can be reused afterwards, saving considerable time. The two-body interaction elements are likewise calculated by
\begin{split}
V_{ijkl}
&= \iint \mathrm{d} \vec{x}_{\alpha} \mathrm{d} \vec{x}_{\beta} ~
\phi_i^* (\vec{x}_\alpha)
\phi_j^* (\vec{x}_\beta)
V(\vec{x}_\alpha, \vec{x}_\beta)
\phi_k (\vec{x}_\beta)
\phi_l (\vec{x}_\alpha) \\
&=
\sum_{\bar{p}, \bar{q}, \bar{r}, \bar{s}=1}^{\bar{M}}
U_{\bar{p}i}^{*}
U_{\bar{q}j}^{*}
V_{\bar{p}\bar{q}\bar{r}\bar{s}}
U_{\bar{r}k}
U_{\bar{s}l}
\end{split}
where, when the interaction is given by $V(\vec{x}_\alpha, \vec{x}_\beta) = g \delta(\vec{x}_\alpha - \vec{x}_\beta)$ ,
\begin{split}
V_{\bar{p}\bar{q}\bar{r}\bar{s}}
&=
g
\Big( \frac{2}{L} \Big)^2
\int_{0}^{L} \mathrm{d} x ~
\sin \Big( \frac{\bar{p} \pi}{L} x \Big)
\sin \Big( \frac{\bar{q} \pi}{L} x \Big) \\
&~~~~~~~~~~~~~~~~~~~~~~~~
\times
\sin \Big( \frac{\bar{r} \pi}{L} x \Big)
\sin \Big( \frac{\bar{s} \pi}{L} x \Big) \\
&\!\!\!\!\!\!\!\!\!\!\!\!\!=
\frac{g}{2L}
\big(
- \delta_{\bar{p},\bar{q}-\bar{r}-\bar{s}}
- \delta_{\bar{p},-\bar{q}-\bar{r}-\bar{s}}
- \delta_{\bar{p},-\bar{q}+\bar{r}-\bar{s}}
- \delta_{\bar{p},-\bar{q}-\bar{r}+\bar{s}} \\
&~~~
+ \delta_{\bar{p},-\bar{q}+\bar{r}+\bar{s}}
+ \delta_{\bar{p},\bar{q}-\bar{r}+\bar{s}}
+ \delta_{\bar{p},\bar{q}+\bar{r}-\bar{s}}
- \delta_{\bar{p},\bar{q}+\bar{r}+\bar{s}}
\big) .
\end{split}
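The signs in the Kronecker-delta expression follow from expanding the product of four sines into cosines; as a sanity check of our own (not part of the paper), the closed form can be compared against direct numerical quadrature:

```python
import numpy as np

L, g = 1.0, 1.0
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]

def V_quad(p, q, r, s):
    """V_pqrs for the contact interaction by direct quadrature.
    The integrand vanishes at the walls, so a plain grid sum
    coincides with the trapezoidal rule."""
    f = np.full_like(x, g * (2.0 / L) ** 2)
    for k in (p, q, r, s):
        f = f * np.sin(k * np.pi * x / L)
    return np.sum(f) * dx

def V_delta(p, q, r, s):
    """Closed-form Kronecker-delta expression, signs as derived above."""
    d = lambda a, b: 1.0 if a == b else 0.0
    return (g / (2.0 * L)) * (-d(p, q - r - s) - d(p, -q - r - s)
                              - d(p, -q + r - s) - d(p, -q - r + s)
                              + d(p, -q + r + s) + d(p, q - r + s)
                              + d(p, q + r - s) - d(p, q + r + s))

# e.g. V_1111 = 3g/(2L), from three coinciding delta terms
```

On a uniform grid, the trapezoidal rule is exact for the cosine components appearing here, so the two evaluations agree to machine precision.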
To simplify the calculations, we use units with $\hbar = m = 1$ and measure length in units of $\mu$m. Time is then expressed in units of $\frac{m L_{\mu m}^2}{\hbar}$, and energy in units of $\frac{\hbar^2}{m L_{\mu m}^2}$. For example, $\frac{m L_{\mu m}^2}{\hbar} = 1.37$ ms and $\frac{\hbar^2}{m L_{\mu m}^2} = 7.70 \times 10^{-32} \text{ J} = 2\pi \times 116$ Hz when $m$ is put to be the mass of a $^{87}$Rb atom .
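The quoted conversions can be cross-checked by direct arithmetic (constant values are standard CODATA figures; the mass of $^{87}$Rb is taken as $86.909$ atomic mass units):

```python
import math

hbar = 1.054571817e-34      # J s
amu = 1.66053906660e-27     # kg, atomic mass unit
m = 86.909 * amu            # mass of 87Rb in kg
L = 1.0e-6                  # m, i.e. L = 1 micron

t_unit = m * L**2 / hbar                  # time unit, about 1.37 ms
E_unit = hbar**2 / (m * L**2)             # energy unit, about 7.70e-32 J
f_unit = E_unit / (2.0 * math.pi * hbar)  # same energy as a frequency, ~116 Hz
```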
### Ground state
Using these numerical techniques and the method described in section , we first found the many-body ground state under different conditions. Since we are interested in the instantaneous error measure indicating the validity of the simulation, we tried to find two distinct ground states which can be connected through the time evolution. We therefore examined the ground state without any bump and with a bump at the center of the potential, expecting condensation and the simplest two-fold fragmentation of a double-well potential, respectively.
Fig. shows the schematic potential we implemented. Raising the barrier height $h$, we examined how the ground state and the degree of fragmentation change. Fig. shows how the fragmentation changes with respect to the height $h$ of the bump, and figs. , , , show several ground states obtained numerically with $M=2$ orbitals.
Without any bump, i.e. when $h=0$, the ground state is almost a single condensate, as shown in (a) of fig. : 99\% of the particles are condensed in the first, symmetric, sinusoidal-like orbital, with only a small excitation of the second orbital. But this state can also be represented as a coherent state under a unitary transformation of the basis orbitals; see (b) in fig. . With left and right orbitals, we can see the state from a different point of view. This becomes quite useful in the following arguments analyzing the fragmentation.
Looking at the change of the degree of fragmentation with respect to the height $h$ in fig. , we noticed that the result is somewhat counterintuitive. Up to $h=1000$, the condensation drops gradually. But between $h=1000$ and $h=6000$, there is a plateau. And from $h=6000$ to $h=8000$, the particles move from the first symmetric orbital to the second antisymmetric orbital; see the (a) panels of figs. , , , . Although we expected the condensate to become two-fold fragmented as $h$ goes up, the result was different.
To resolve or explain this irregularity, we checked what the states look like and how they change. Looking only at the natural orbitals, which are presented in the (a) panels of figs. , , , , and with which the single-particle density matrix $\langle \hat{a}^{\dagger}_{ij} \rangle$ is diagonalized, it was hard to understand the states physically. When the $C_{\vec{n}}$'s are widely distributed, so that various features are mixed and superposed, a conceptual understanding of the state becomes quite obscure.
We therefore transformed the orbital bases unitarily to see the same state from a different point of view and understand it better; see the (b) panels in figs. , , , . As shown there, the distribution of $C_{\vec{n}}$ is quite sharply peaked around the center even when the degree of fragmentation is around 85\% and 15\%, i.e. when, judged from the eigenvalues of the SPDM, the state is regarded as being quite condensed. From these features, we can conclude or argue that this peculiarity is due to the lack of orbitals, i.e. because too small a number of orbitals, in this case $M=2$, is used.
For example, we need at least $M=4$ orbitals to describe two independent, in other words spatially well-separated, condensates when we need at least $M=2$ orbitals to describe each condensate well enough. In the extreme limit of our case, where $h \rightarrow \infty$, the left and right wells are disentangled, and therefore we might need $M=4$ orbitals to represent the ground state properly. Since the ground state in each of the left and right infinite-well potentials is a nearly single condensate, similar to the one in fig. , $M=2$ can be enough in this extreme case. But as the cases we implemented lie in between, it can certainly be argued that at least $M=4$ orbitals are needed to describe the ground state properly.
In conclusion, the peculiar non-fragmented feature above $h \approx 2000$ can be regarded as a side effect of truncating too many orbitals, i.e. too much information about the state, which is generally done to save simulation cost and time. Since the small excitation in each of the left and right potentials, i.e. the excitation of the third and fourth orbitals, cannot be captured in the $M=2$ simulation, these features might be compensated by the mixing of $C_{\vec{n}}$ around $n_1 = n_2 = 50$. Consequently, this causes the decrease of the degree of fragmentation.
Although a more concrete argument requires $M=4$ simulations, these also require exponentially increased cost, effort, and time. For now, looking at the unitarily transformed states and handling the data carefully, we claim that, even with the $M=2$ simulation, many interesting features can be seen well enough to be trustworthy. And for the goal of checking the validity of the simulated time evolution through the instantaneous error measure, this example with $M=2$ is entirely appropriate.
%\colb{\% This is comment: For more complete paper or confirmation on this argument, we gonna need an additional simulations with at least $M=4$ orbitals. But this will take too long and require lots of efforts additionally.}
### Time evolution
Expecting dynamical fragmentation of a single condensate into a two-fold fragmented state, we ramped up a barrier in the middle of the infinite-well potential . With the $M=1$ simulation, the dynamical fragmentation cannot be described. So we can expect that the indicator, i.e. the instantaneous error, of the $M=1$ evolution shows some increase at the point where the fragmentation emerges, whereas the error of the $M=2$ evolution does not show any increase.
In Fig. , we can see the correspondence between the error and the fragmentation. But there is also a troublesome offset problem in this error measure. To resolve this problem, we look at the meaning of the error from a different point of view.
### Physical meaning of the error
The one crucial hypothesis in our method is that most of the bosons of interest reside only within the $M$ orbitals during the time of interest. In mathematical terms, this reads
\langle
\sum_{i=1}^{M}
\hat{n}_{i} (t)
\rangle
\approx N
\quad
\text{when } t<T_{\text{interest}}.
Expanding this in a Taylor series, we can see the short-time tendency of the number of particles in the $M$ orbitals:
\langle
\sum_{i=1}^{M}
\hat{n}_{i} (t)
\rangle
=
N
+
\frac{d \langle
\sum_{i=1}^{M}
\hat{n}_{i} (t)
\rangle}{d t} t
+
\frac{d^2 \langle
\sum_{i=1}^{M}
\hat{n}_{i} (t)
\rangle}{2 d t^2} t^2
+
\cdots .
Since the initial many-body state consists of $M$ orbitals or fewer (at least we can assume that all particles are initially within the $M$ orbitals) and the commutation relation $[\sum_{i=1}^{\infty} \hat{n}_i , \hat{H}] = 0$ holds, the first derivative
\begin{split}
&i \frac{d}{d t}
\langle
\sum_{i=1}^{M}
\hat{n}_{i} (t)
\rangle \\
&=
\langle
\sum_{i=1}^{M} \hat{n}_{i} (t)
\hat{H}
-
\hat{H}
\sum_{i=1}^{M} \hat{n}_{i} (t)
+
\sum_{k=1}^{\infty}
\sum_{i=1}^{M}
( t_{ki} \hat{a}^{\dagger}_{ki}
-
t_{ik} \hat{a}^{\dagger}_{ik} )
\rangle
\end{split}
becomes automatically zero. This means that the total occupation number of the $M$ orbitals does not change instantaneously if the initial state has all bosons within the $M$ orbitals.
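The commutation argument can be illustrated in a minimal two-mode toy model (a two-site Bose-Hubbard Hamiltonian of our own choosing, not the full field-operator Hamiltonian of the paper): the total number operator commutes with $\hat{H}$, while the occupation of a single orbital does not.

```python
import numpy as np
from math import sqrt

# Fixed-N Fock basis |n, N-n> for N bosons in two modes (n = 0..N)
N = 6
dim = N + 1
# hopping a1^dag a2: |n, N-n> -> sqrt((n+1)(N-n)) |n+1, N-n-1>
hop = np.zeros((dim, dim))
for n in range(N):
    hop[n + 1, n] = sqrt((n + 1) * (N - n))

n1 = np.diag(np.arange(dim, dtype=float))        # occupation of mode 1
n2 = np.diag(np.arange(dim, dtype=float)[::-1])  # occupation of mode 2
J, U = 1.0, 0.5                                  # illustrative parameters
H = -J * (hop + hop.T) + 0.5 * U * (n1 @ (n1 - np.eye(dim)) + n2 @ (n2 - np.eye(dim)))

Ntot = n1 + n2
print(np.abs(H @ Ntot - Ntot @ H).max())  # 0: total particle number is conserved
print(np.abs(H @ n1 - n1 @ H).max())      # nonzero: a single mode's number is not
```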
Turning to the second derivative of the particle number in the $M$ orbitals,
\begin{split}
&\frac{d^2}{d t^2}
\langle
\sum_{i=1}^{M}
\hat{n}_{i} (t)
\rangle
=
2
\langle
\hat{H}
\Big(
N -
\sum_{i=1}^{M}
\hat{n}_{i}
\Big)
\hat{H}
\rangle
-
\langle
\big[
\partial_t^2 \sum_{i=1}^{M} \hat{n}_{i} (t)
\big]
\rangle \\
&+
2
\langle
\hat{H}
\big[
i \partial_t \sum_{i=1}^{M} \hat{n}_{i} (t)
\big]
\rangle
-
2
\langle
\big[
i \partial_t \sum_{i=1}^{M} \hat{n}_{i} (t)
\big]
\hat{H}
\rangle .
\end{split}
Then, inserting the change of orbitals given in Eq. and rearranging, we obtain
\begin{split}
\frac{d^2}{d t^2}
\langle \sum_{i=1}^{M} \hat{n}_{i} (t) \rangle
= &2 \sum_{l=M+1}^{\infty}
\Big[ \sum_{i,j,p,q=1}^{M_1} \sum_{k=1}^{\infty} V_{ijkl} V_{lkpq}
\langle \hat{a}_{ijpq}^{\dagger\dagger} \rangle \\
&+ \sum_{i,j,k,s,p,q=1}^{M_1}
V_{ijkl} V_{lspq}
\langle \hat{a}_{ijskpq}^{\dagger\dagger\dagger} \rangle
- \sum_{i,j,k,n,r,s,p,q=1}^{M_1}
V_{ijkl} V_{lspq}
\langle \hat{a}_{ijkn}^{\dagger\dagger} \rangle
\langle \rho \rangle^{-1}_{nr}
\langle \hat{a}_{rspq}^{\dagger\dagger} \rangle
\Big] .
\end{split}
This has a form quite similar to the instantaneous error of Eq. , since it involves a double derivative with respect to time and therefore an $\hat{H}^2$ term. The bosons leaking from the $M$ orbitals can then be approximately counted as
\langle \sum_{i=1}^{M} \hat{n}_{i} (t) \rangle
\approx N + \frac{d^2 \langle \sum_{i=1}^{M} \hat{n}_{i} (t) \rangle}{2 d t^2} t^2 .
Therefore our method with $M$ orbitals will be valid only for time scales $T$ which satisfy
- \frac{d^2 \langle
\sum_{i=1}^{M} \hat{n}_{i} (t) \rangle}{2 d t^2} T^2 \ll N .
This means
T \ll \sqrt{ \frac{2N}{d^2 \langle \sum_{i=1}^{M} \hat{n}_{i} (t) \rangle / d t^2} }
where the many-body state in this equation is taken to be the initial one at first, and the instantaneous state afterwards. The upper bound on the time is then updated continuously using the instantaneous state.
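As a numerical illustration of this bound (with made-up values for $N$ and the measured second derivative; these are not data from our simulations):

```python
import math

def validity_time(N, d2n_dt2):
    # T << sqrt(2N / |d^2 <sum_i n_i> / dt^2|); the second derivative is
    # negative when particles leak out of the M orbitals, so take its magnitude
    return math.sqrt(2.0 * N / abs(d2n_dt2))

# e.g. N = 100 particles and d^2<n>/dt^2 = -0.02 (illustrative numbers only)
print(validity_time(100, -0.02))  # -> 100.0
```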
We have checked this double-derivative term. Comparing the values of the error and of the double derivative of the particle number, we can see that the value of the $d_t^2$ term is just 4 times the error.
\col{But this factor 4 is only when the term below is dominant? You found this numerically to be true or analytically?}
From this hint, we have found that the term $V_{ijkl} V_{lkpq}
\langle \hat{a}_{ijpq}^{\dagger\dagger} \rangle$ is the most dominant one in both Eq. and Eq. . Even though Eq. does not exactly correspond to Eq. , we can carry some physical concepts of Eq. over to the instantaneous error and estimate a valid time scale from it.
Still, the big offset is troublesome. In Fig. , the offset of the error is almost 10 times bigger than the change of the error over time. This might be because the small fraction outside the $M$ orbitals is not appropriately counted; in fact, we ignored that part by setting $n_k = 0$ for the $k>M$ orbitals at all times. The revived small part would compensate the flow from inside the $M$ orbitals to outside by a flow from outside to inside. Since this flow is nearly constant if the small fraction stays steadily small over time, we can argue that we can hopefully ignore the offset and concentrate on the change of the error over time. This statement is also supported by the fact that the offset value of the $M=2$ simulation is significantly diminished compared to the $M=1$ simulation.
\col{Obviously, this crucial argument has to be refined still, as it is not yet accurate. What do you mean by ``revived'' small part?}
Therefore, looking at Fig. and concentrating on the change of the error ($\approx 20 \times 10^{-3}$), we can see that the valid time scale of Eq. ($\approx 100$) is consistent with the time scale at which the fragmentation occurs. Tracking the change of the error, we can actually sense the validity of the simulation.
## Summary
\colb{Theory is established by observing and analyzing experiments on physical systems, and it is continuously verified and modified through successive experiments and endless analysis. Sometimes the theory first predicts interesting and useful results, and the experiments follow the prediction. But the equations of the theory are not always exactly solvable, so we approximate them in a plausible form by fair reasoning to solve the problem. In this process, when a well-designed experiment does not agree with the theoretical anticipation, we have to check whether the theory is wrong and therefore needs to be modified, or whether the approximate treatment of the theory was too harsh and a more accurate calculation is needed. But it will be much better if we can assert that the approximate simulation is essentially the same as the exact manipulation of the theory before a direct comparison with experimental outputs. Since there can be mistakes in experiments too, discriminating theoretical mistakes from experimental ones is crucial to establishing a sound theory.}
\col{What is the point of this very philosophical paragraph? I do not quite see the relation to our paper.
It is more important to rewrite what comes below, and incorporate the discussion on the numerical
implementation and its outcome there. We have no experiment in our paper, so please rewrite the above such that there
is a connection to our paper made.}
Using McLachlan's principle and the methods of Lagrange multipliers and steepest constrained descent, we have found a computationally feasible way to describe the many-body evolution of bosons in a rigorously controlled manner. Writing the many-body state in Hartree form and limiting the size of the Hilbert space by truncating to a finite number of macroscopically occupied field-operator modes, the error relative to the exact evolution can be minimized self-consistently. This gives a variationally optimized solution for the evolution of the truncated many-body state.
We have demonstrated that without two-body interactions, our scheme possesses the desired property
that the evolution of the many-body state can be described exactly, with zero error, cf. Eq. .
When interactions are turned on, the error accumulates during the time evolution. Employing our method, we can evolve the truncated many-body state in an optimized way. Monitoring the error simultaneously, we can increase the accuracy of the evolution by increasing the number of orbitals in a self-consistent way.
By adaptively changing the number of orbitals based on the instantaneous error measure, we can automatically ensure the validity of the result for the many-body state.
We conclude with a brief summary of our approach applied to the well-known and ubiquitous Gross-Pitaevski\v{\i} equation.
We start by evolving the initial trial
state $|\Phi\rangle$ along the $M=1$ version of Eq. , and simultaneously check whether the error Eq. remains small or not. Monitoring the error Eq. , we can determine under which conditions the Gross-Pitaevski\v{\i} equation loses its validity. When this happens, the error becoming large, we increase the number of orbitals to $M=2$ (thus, here, $M_1=1$ and $M=2$). The additional second orbital is found by the method of steepest constrained descent, using Eq. . The subsequent evolution of the quantum many-body state then follows the $M=2$ version of Eqs. , , while the evolution of the (initially singular) second orbital follows Eq. . We monitor the error Eq. again, checking that the error has decreased to a sufficient degree. In a self-consistent manner, the processes described are carried out successively until we obtain a prescribed accuracy.
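The procedure just summarized can be sketched as a control loop. In the following skeleton, `evolve_step`, `instantaneous_error`, and `add_orbital` are hypothetical placeholders (our names, not the paper's) for the equations of motion, the error measure, and the steepest-constrained-descent orbital construction:

```python
def run_adaptive(state, t_final, dt, tol, evolve_step, instantaneous_error,
                 add_orbital, M=1, M_max=4):
    """Evolve a truncated many-body state, enlarging the number of orbitals M
    whenever the instantaneous error exceeds the tolerance."""
    t = 0.0
    while t < t_final:
        state = evolve_step(state, M, dt)       # M-orbital truncated evolution
        t += dt
        if M < M_max and instantaneous_error(state, M) > tol:
            state = add_orbital(state, M)       # new orbital via steepest
            M += 1                              # constrained descent
    return state, M
```

Starting from $M=1$ (the Gross-Pitaevski\v{\i} level), such a loop raises $M$ to 2 as soon as the monitored error grows beyond the tolerance.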
## RRA
Science \textbf{269}, 198 (1995);
Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor;
by M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell;
Phys. Rev. Lett. \textbf{75}, 3969 (1995);
Bose-Einstein Condensation in a Gas of Sodium Atoms;
by K. B. Davis, M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle;
Phys. Rev. Lett. \textbf{78}, 985 (1997);
Bose-Einstein Condensation of Lithium: Observation of Limited Condensate Number;
by C. C. Bradley, C. A. Sackett, and R. G. Hulet;
For an overview, see {\it Ultracold Bosonic and Fermionic Gases}, Eds. K. Levin, A. L. Fetter and D. M. StamperKurn, Contemporary Concepts of Condensed Matter Science Vol. 5, Elsevier, 2012.
Rev. Mod. Phys. {\bf 71}, 463 (1999);
Theory of Bose-Einstein condensation in trapped gases;
by F. Dalfovo, S. Giorgini, L. P. Pitaevski\v{\i}, and S. Stringari;
Rev. Mod. Phys. {\bf 80}, 885 (2008);
Many-body physics with ultracold gases;
by I. Bloch, J. Dalibard, and W. Zwerger;
Proc. Cambridge Philos. Soc. \textbf{26}, 376 (1930);
by P. A. M. Dirac;
J. Frenkel,
{\it Wave Mechanics}, Clarendon Press, Oxford, 1934.
Mol. Phys. \textbf{8}, 39 (1964);
A variational solution of the time-dependent Schr\"odinger equation;
by A. D. McLachlan;
Chem. Phys. Lett. \textbf{149}, 547 (1988);
On the equivalence of time-dependent variational principles;
by J. Broeckhove, L. Lathouwers, E. Kesteloot, and P. Van Leuven;
Phys. Rev. A \textbf{77}, 033613 (2008);
Multiconfigurational time-dependent Hartree method for bosons: Many-body dynamics of bosonic systems;
by O. E. Alon, A. I. Streltsov, and L. S. Cederbaum;
A. I. Streltsov, O. E. Alon, and L. S. Cederbaum,
Phys. Rev. Lett. \textbf{100}, 130401 (2008).
%Formation and dynamics of many-boson fragmented states in one-dimensional attractive ultracold gases
P. Bader and U. R. Fischer,
Phys. Rev. Lett. \textbf{103}, 060402 (2009).
%Fragmented Many-Body Ground States for Scalar Bosons in a Single Trap
K. Sakmann, A. I. Streltsov, O. E. Alon, and L. S. Cederbaum,
Phys. Rev. Lett. \textbf{103}, 220601 (2009).
%Exact Quantum Dynamics of a Bosonic Josephson Junction
A. I. Streltsov, O. E. Alon, and L. S. Cederbaum,
Phys. Rev. Lett. \textbf{106}, 240401 (2011).
%Swift Loss of Coherence of Soliton Trains in Attractive Bose-Einstein Condensates
A. Raab,
Chem. Phys. Lett. \textbf{319}, 674 (2000).
%On the Dirac–Frenkel/McLachlan variational principle.
H.J. Kull and D. Pfirsch,
Phys. Rev. E \textbf{61}, 5940 (2000).
%Generalized variational principle for the time-dependent Hartree-Fock equations for a Slater determinant
Z. Baci\'c and J. C. Light,
J. Chem. Phys. \textbf{85}, 4594 (1986).
%Highly excited vibrational levels of ``floppy'' triatomic molecules: A discrete variable representation — Distributed Gaussian basis approach.
We illustrate this statement as follows. The reason why we can use $df = \frac{\partial f}{\partial z}dz + \frac{\partial f}{\partial z^*}dz^*$ is not because $dz$ and $dz^*$ are linearly {\it independent} differentials, but because $dz$ and $dz^*$ are differentials, that is {\it infinitesimal} quantities; $dz^*$ is in fact 100\% dependent on $dz$ since $dz^*$ is simply the complex conjugate of $dz$. Therefore we cannot simply argue that because $dz$ and $dz^*$ are independent variations, the partial derivatives $\frac{\partial f}{\partial z}$ and $\frac{\partial f}{\partial z^*}$ must be zero for the stationarity of $f$. There is a rather widespread misunderstanding of this mathematical fact found in the literature.
When the variables and constraints are given in complex form, the method of Lagrange multipliers is expressed by
$\frac{\partial f} {\partial z_k^*}
= \sum_l \bigg[
\lambda_l \frac{\partial g_l} {\partial z_k^*}
+ \lambda_l^* \frac{\partial g_l^*} {\partial z_k^*} \bigg]$. Though $g_l (z_k, z_k^*)=c$ and $g_l^* (z_k, z_k^*)=c^*$ are the same constraint, we have to add the complex conjugate of it in the Lagrange equation. Be careful to note that $(\frac{\partial g_l}{\partial z_k^*})^*$ is not equal to $\frac{\partial g_l^*}{\partial z_k^*}$. The above equation gives the condition for the stationarity of the realvalued function $f$.
$|\vec{m}_k^l\rangle$ indicates the shorthand notation $\hat{a}_{l}^{\dagger} \hat{a}_{k} |\vec{m} \rangle = \sqrt{(m_{l}+1)m_{k}} |\vec{m}_k^l \rangle$, which means that, starting from the given configuration $\vec{m}$, one particle is removed from the $k$-orbital and one particle is added to the $l$-orbital, and the state $|\vec{m}_k^l \rangle$ is properly normalized.
A. I. Streltsov, O. E. Alon, and L. S. Cederbaum,
Phys. Rev. A \textbf{73}, 063626 (2006).
%General variational many-body theory with complete self-consistency for trapped bosonic systems.
J. J. McKeown, D. Meegan, and D. Sprevak, {\it An Introduction to Unconstrained Optimization},
IOP Publishing, 1990.
The steepest descent direction in terms of real variables is given by the gradient: $d\vec{x} = - \vec{\nabla} f$. On the other hand, the steepest descent direction in terms of complex variables is given by the differential with respect to the {\it complex conjugate} of the variables: $dz_k = - \frac{\partial f}{\partial z_k^*}$. Then the steepest {\it constrained} descent is given by
$\frac{d z_k} {d \tau}
= -\Delta(\tau) \bigg[ \frac{\partial f} {\partial z_k^*}
- \sum_l \Big[\lambda_l \frac{\partial g_l} {\partial z_k^*} + \lambda_l^* \frac{\partial g_l^*} {\partial z_k^*} \Big] \bigg]$ where $\Delta(\tau)$ can be any arbitrary positive definite function.
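A toy illustration of steepest constrained descent with a single complex variable (our own minimal example, not the orbital optimization of the paper): minimize $f(z) = |z-a|^2$ subject to $|z|^2 = 1$, whose solution is $z = a/|a|$. Here the Lagrange-multiplier term is realized by re-projecting onto the constraint surface after each step:

```python
import numpy as np

a = 2.0 + 1.0j
z = np.exp(0.3j)               # start somewhere on the unit circle
eta = 0.1                      # step size, playing the role of Delta(tau) dtau
for _ in range(200):
    z = z - eta * (z - a)      # descend along -df/dz* = -(z - a)
    z = z / abs(z)             # enforce the constraint |z| = 1
print(z, a / abs(a))           # both approach a/|a|
```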
A. I. Streltsov, O. E. Alon, and L. S. Cederbaum,
Phys. Rev. Lett. \textbf{99}, 030402 (2007).
%Role of Excited States in the Splitting of a Trapped Interacting Bose-Einstein Condensate by a Time-Dependent Barrier