Commit 32ab23c3 authored by Jens Jørgen Mortensen

Ran proselint on the ReST files

parent 7d703747
@@ -126,7 +126,7 @@ pseudo density is simple. For GGA functionals, a nearest neighbor
finite difference stencil is used for the gradient operator. In the
PAW method, there is a correction to the XC-energy inside the
augmentation spheres. The integration is done on a non-linear radial
-grid - very dense close to the nuclei and less dense away from the
+grid - dense close to the nuclei and less dense away from the
nuclei.
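As a rough illustration (an assumption about the functional form; the actual grid used by GPAW may differ), such a non-linear radial grid can be generated like this::

    import numpy as np

    # Hypothetical rational mapping r(g) = beta * g / (N - g): the spacing
    # is small for small g (close to the nucleus) and grows with g.
    N = 1000
    beta = 0.4
    g = np.arange(N)
    r = beta * g / (N - g)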
@@ -78,7 +78,7 @@ Common sources of bugs
class A:
    def __init__(self, a=[]):
-        self.a = a  # All instances get the same list!
+        self.a = a  # all instances get the same list!
- There are subtle differences between ``x == y`` and ``x is y``.
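The standard fix for the mutable default argument (a small illustrative sketch, not taken from the GPAW sources) is to use ``None`` as the default::

    class A:
        def __init__(self, a=None):
            # Each instance now gets its own fresh list:
            self.a = [] if a is None else a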
@@ -90,7 +90,7 @@ execution. Breakpoints can be set also on the source-code window.
C debugging
===========
-First of all, the C-extension should be compiled with the *-g* flag in
+First, the C-extension should be compiled with the *-g* flag in
order to get the debug information into the library. Also, the
optimizations should be switched off, which can be done in
:ref:`siteconfig.py <siteconfig>` as::
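    # Illustrative only -- the actual siteconfig snippet is not shown in
    # this diff.  Keep debug symbols and disable optimization (gcc-style):
    extra_compile_args += ['-g', '-O0']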
@@ -5,7 +5,7 @@ Development
===========
GPAW development can be done by anyone! Just take a look at the
-`issue tracker`_ and find something that suits your talents!
+`issue tracker`_ and find something that suits your talents.
The primary source of information is still the :ref:`manual` and
:ref:`documentation`, but as a developer you might need additional
@@ -15,7 +15,7 @@ As a developer, you should subscribe to the GPAW :ref:`mail list`.
We would also like to encourage you to join our channel for :ref:`irc`.
Now you are ready to perform a :ref:`developer installation` and
-start development!
+start development.
.. _issue tracker: https://gitlab.com/gpaw/gpaw/issues/
@@ -56,7 +56,7 @@ The :class:`gpaw.calculator.GPAW` inherits from:
.. note::
GPAW uses atomic units internally (`\hbar=e=m=1`) and ASE uses
-Angstrom and eV (:mod:`~ase.units`).
+Ångström and eV (:mod:`~ase.units`).
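As a small illustration (not part of the original note), the conversion factors are available from :mod:`ase.units`::

    >>> from ase.units import Bohr, Hartree
    >>> length = 5.0 * Bohr      # 5 atomic units of length, in Angstrom
    >>> energy = 0.5 * Hartree   # 0.5 atomic units of energy, in eV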
Generating a GPAW instance from scratch
@@ -11,7 +11,7 @@ encountering convergence problems:
1) Make sure the geometry and spin state are physically sound.
-Remember that ASE uses Angstrom and not Bohr or nm!
+Remember that ASE uses Ångström and not Bohr or nm.
For spin polarized systems, make sure you have sensible initial magnetic
moments. Don't do spin-paired calculations for molecules with an odd
number of electrons. Before performing calculations of isolated atoms
@@ -176,7 +176,7 @@ scheme describes the propagation of electrons and nuclei in the GPAW implementat
where `{\bf R}`, `M` and `{\bf F}` denote the positions of the nuclei, atomic masses and atomic forces, respectively, and `n` denotes the
electron density. Calculation of the atomic forces is tricky in PAW-based Ehrenfest dynamics due to the atomic position-dependent
-PAW transformation. In the GPAW program the force is derived on the grounds the the total energy of the quantum-classical
+PAW transformation. In the GPAW program the force is derived on the grounds the total energy of the quantum-classical
system is conserved.
The atomic forces in Ehrenfest dynamics are thoroughly analysed and explained
@@ -253,7 +253,7 @@ Memory consumption:
Speed:
For small systems with many **k**-points, PW mode beats everything else.
For larger systems LCAO will be most efficient. Whereas PW beats FD for
-smallish systems, the opposite is true for very large systems where FD
+smallish systems, the opposite is true for large systems where FD
will parallelize better.
Absolute convergence:
@@ -303,7 +303,7 @@ unoccupied bands will improve convergence.
.. tip::
-``nbands='nao'`` will use the the same number of bands as there are
+``nbands='nao'`` will use the same number of bands as there are
atomic orbitals. This corresponds to the maximum ``nbands`` value that
can be used in LCAO mode.
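For example (a minimal sketch; the basis set is an illustrative choice)::

    from gpaw import GPAW

    calc = GPAW(mode='lcao',
                basis='dzp',
                nbands='nao')  # one band per atomic orbital in the basis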
@@ -458,7 +458,7 @@ a grid-point density of `1/h^3`. For more details, see :ref:`grids`.
If you are more used to thinking in terms of plane waves, a conversion
formula between plane wave energy cutoffs and realspace grid spacings
-have been provided by Briggs *et. al* PRB **54**, 14362 (1996). The
+have been provided by Briggs *et al.* PRB **54**, 14362 (1996). The
conversion can be done like this::
>>> from gpaw.utilities.tools import cutoff2gridspacing, gridspacing2cutoff
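>>> # Illustrative only -- the call used in the original text is not shown in
>>> # this diff.  One way to relate the two quantities, assuming atomic units,
>>> # is the Nyquist-type estimate Ecut = (pi / h)**2 / 2, i.e.
>>> # h = pi / sqrt(2 * Ecut):
>>> from math import pi, sqrt
>>> from ase.units import Bohr, Hartree
>>> ecut = 340 / Hartree            # 340 eV converted to Hartree
>>> h = pi / sqrt(2 * ecut) * Bohr  # grid spacing in Angstrom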
@@ -838,7 +838,7 @@ convergence can be obtained with a different eigensolver. One option is the
RMM-DIIS (Residual minimization method - direct inversion in iterative
subspace), (``eigensolver='rmm-diis'``), which performs well when only a few
unoccupied states are calculated. Another option is the conjugate gradient
-method (``eigensolver='cg'``), which is very stable but slower.
+method (``eigensolver='cg'``), which is stable but slower.
If parallelization over bands is necessary, then Davidson or RMM-DIIS must
be used.
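For example (an illustrative sketch; the mode and cutoff are placeholders)::

    from gpaw import GPAW, PW

    calc = GPAW(mode=PW(400),
                eigensolver='rmm-diis',  # or 'cg' for the conjugate gradient method
                txt='rmm-diis.txt')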
@@ -1068,7 +1068,7 @@ at a later time, this can be done as follows:
Everything will be just as before we wrote the :file:`H2.gpw` file.
Often, one wants to restart the calculation with one or two parameters
-changed slightly. This is very simple to do. Suppose you want to
+changed slightly. This is simple to do. Suppose you want to
change the number of grid points:
>>> atoms, calc = restart('H2.gpw', gpts=(20, 20, 20))
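Other keywords can be overridden in the same way, for example (an illustrative change of XC functional):

>>> atoms, calc = restart('H2.gpw', xc='PBE')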
@@ -266,7 +266,7 @@ reproduces the total DOS (more efficiently computed using
electrons contained in the region ascribed to atom `a` (more
efficiently computed using ``calc.get_wigner_seitz_densities(spin)``).
Notice that the domain ascribed to each atom is deduced purely on a
-geometrical criterion. A more advanced scheme for assigning the charge
+geometric criterion. A more advanced scheme for assigning the charge
density to atoms is the :ref:`bader analysis` algorithm (although the
Wigner-Seitz approach is faster).
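For example (a sketch assuming a finished calculation is available as ``calc``):

>>> n_a = calc.get_wigner_seitz_densities(0)  # electrons per atom for spin channel 0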
@@ -9,7 +9,7 @@ of non-equilibrium transport calculations where electron
exchange and correlation effects are treated using many-body perturbation
theory such as Hartree-Fock, second Born and the GW approximation.
It is recommended that you go through the ASE/GPAW electron transport exercise
to get familiar with the general transport setup and definitions used
in ase and gpaw and the KGF code.
-------------------------
......@@ -21,19 +21,19 @@ GPAW and is expected to be part of the GPAW package in the near future.
The latest revision can be obtained from svn::
$ svn checkout https://svn.fysik.dtu.dk/projects/KeldyshGF/trunk KeldyshGF
Installation is completed by adding the path of KeldyshGF to the PYTHONPATH
environment variable.
-----------------------
Doing a KGF calculation
-----------------------
The KGF code can perform finite bias non-equilibrium calculation
of a molecular junction using various electron exchange and
correlation approximations.
It is assumed that interactions are only included in a central region.
-The KGF code can handle both model Hamiltonians of the the Pariser-Parr-Pople
+The KGF code can handle both model Hamiltonians of the Pariser-Parr-Pople
(extended Hubbard) type as well as *ab initio* calculated Hamiltonians.
A KGF calculation normally involves the following steps:
@@ -41,7 +41,7 @@ A KGF calculatoin normally involves the following steps:
- Setting up the non-interacting lead and scattering Hamiltonian.
- Setting up a non-interacting GF
- Setting up various self-energies to handle Hartree, exchange and correlation
- Running the calculation
XXX.
@@ -52,24 +52,24 @@ Example: Pariser-Parr-Model Hamiltonian
To do an electron transport calculation using a model Hamiltonian
the parameters of both the non-interacting
part as well as the interacting part of the Hamiltonian need to
be explicitly specified. The non-interacting part h_ij describes the
kinetic energy, and the electron-electron interaction part in the PPP
approximation is of the form V_ij = v_iijj, where the v_ijkl's are
two-electron Coulomb integrals.
To get started, consider a simple four-site interacting
model. The four x's in the figure below represent the sites where
electron-electron interactions are included.
The o's (dashes) represent non-interacting sites::
Left lead Molecule Right Lead
---------------------------------
o o o o x| x x x x |x o o o o
---------------------------------
0 1 2 3 4 5
The numbers refer to index numbers in the Green functions
- the Green function will be a 6x6 matrix where the subspace corresponding
to the molecule will be the central 4x4 matrix.
Leads are treated as simple nearest neighbour tight-binding chains with
@@ -78,16 +78,16 @@ a principal layer size of one.
The following parameters will be used to simulate a physisorbed molecule:
=================  =========  ==============================
parameter          value      description
=================  =========  ==============================
``t_ll``           -20.0      intra lead hopping
``t_lm``           -1.0       lead-molecule hopping
``t_mm``           -2.4       intra molecule hopping
``V``                         electron-electron interaction
=================  =========  ==============================
where V is the matrix::
V = [[ 0. 7.45 4.54 3.18 2.42 0. ]
[ 7.45 11.26 7.45 4.54 3.18 2.42]
[ 4.54 7.45 11.26 7.45 4.54 3.18]
@@ -101,13 +101,13 @@ In Python code the input parameters can generated like this:
We begin by performing an equilibrium calculation (zero bias).
An equilibrium calculation involves setting the relevant Green's functions and
self-energies. All Green's functions are represented on an energy grid
which should have a grid spacing fine enough to resolve all spectral
features. In practice this is accomplished by choosing an energy grid spacing
about half the size of the infinitesimal eta appearing in the
Green's functions (which is given a finite value
in numerical calculations).
In Python code an equilibrium non-interacting calculation followed
by a Hartree-Fock calculation and a GW calculation looks like this:
@@ -127,4 +127,4 @@ Saving calculated data to a NetCDFFile
--------------------------------------
The GF method ``WriteSpectralInfoToNetCDFFile`` is used to save all
the calculated data such as spectral functions, transmission function etc.
to a NetCDFFile.
@@ -32,7 +32,7 @@ quasiparticle energy (electron addition/removal energies) of the state
`(\mathbf{k}, n)` is given by
.. math::
\epsilon^\text{qp}_{n \mathbf{k}} =
\epsilon_{n \mathbf{k}} + Z_{n \mathbf{k}} \cdot \text{Re}
\left(\Sigma_{n \mathbf{k}}^{\vphantom{\text{XC}}} +
@@ -74,7 +74,7 @@ Note that in order to use the ``diagonalize_full_hamiltonian()`` method, the cal
G0W0 calculation
----------------
-To do a GW calculation is very easy. First we must decide which states we actually want to perform the calculation for. For just finding the band gap we can many times just do with the locations of the conduction band minimum and valence band maximum. However the quasiparticle spectrum might be qualitatively different from the DFT spectrum, so its best to do the calculation for all k-points. Here's a script that does this:
+To do a GW calculation is easy. First we must decide which states we actually want to perform the calculation for. For just finding the band gap we can many times just do with the locations of the conduction band minimum and valence band maximum. However the quasiparticle spectrum might be qualitatively different from the DFT spectrum, so its best to do the calculation for all k-points. Here's a script that does this:
.. literalinclude:: Si_g0w0_ppa.py
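The script itself is not reproduced in this diff, but a minimal sketch of what such a G0W0 run typically looks like (the file name, band indices, cutoff and use of the plasmon-pole approximation below are illustrative assumptions, not the tutorial's actual values) is::

    from gpaw.response.g0w0 import G0W0

    gw = G0W0('Si_groundstate.gpw',  # hypothetical ground-state file
              bands=(3, 5),          # e.g. highest valence and lowest conduction band
              ecut=100.0,            # plane-wave cutoff (eV) for the screened interaction
              ppa=True,              # plasmon-pole approximation
              filename='Si-g0w0')
    result = gw.calculate()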
@@ -97,11 +97,11 @@ The fine features of the dielectric function are often averaged out in the integ
- Try making the frequency grid denser or coarser by setting the parameter ``domega0`` to something different than its default value, which is ``domega0=0.025``. The calculation time scales linearly with the number of frequency points, so making it half as big doubles the time. Do your results depend a lot on the frequency grid? When can you safely say they are converged?
-Next, we need to make sure that we have enough plane waves to properly describe the wavefunctions, by adjusting the plane wave cut-off. This, however, does not come for free; The GW calculation scales quadratically in the number of plane waves and since we set the number of bands to the same number, it actually scales as the **third power** of the number of plane waves!!!
+Next, we need to make sure that we have enough plane waves to properly describe the wavefunctions, by adjusting the plane wave cut-off. This, however, does not come for free; The GW calculation scales quadratically in the number of plane waves and since we set the number of bands to the same number, it actually scales as the **third power** of the number of plane waves.
- Try making a couple of calculations where you change the plane wave cut-off energy from say 25 eV to 150 eV. Just use the default frequency grid. If you wish, you can actually read off the number of plane waves used by looking in the generated log file for the screened potential that ends in ``.w.txt``.
-Lastly, we also need to make sure that the calculations is converged with respect to the k-point sampling. To do this, one must make new ground state calculations with different k-point samplings to be put into the G0W0 calculator. The calculation of the quasiparticle energy of one state scales quadratically in the number of k-points, but if one want the full band structure there's an extra factor of the number of k-points, so this quickly becomes very heavy!
+Lastly, we also need to make sure that the calculations is converged with respect to the k-point sampling. To do this, one must make new ground state calculations with different k-point samplings to be put into the G0W0 calculator. The calculation of the quasiparticle energy of one state scales quadratically in the number of k-points, but if one want the full band structure there's an extra factor of the number of k-points, so this quickly becomes very heavy.
- Make new groundstate calculations with k-point samplings 4x4x4, 6x6x6 and 8x8x8 and so on and find the DFT band gap. When is it safe to say that the DFT band gap is converged?
- Perform GW calculations (parallelize over at least four CPUs) for the different k-point samplings (4, 6 and 8 k-points only) and compare the gaps. How big is the variation in the gaps compared to the variation in the DFT result? When do you think the GW band gap is converged?
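A convergence scan over the plane-wave cutoff, as discussed in the items above, could be scripted along these lines (an illustrative sketch; the file name and values are placeholders)::

    from gpaw.response.g0w0 import G0W0

    for ecut in [25, 50, 100, 150]:  # eV
        gw = G0W0('Si_groundstate.gpw',
                  bands=(3, 5),
                  ecut=ecut,
                  filename='Si-g0w0-ecut%d' % ecut)
        result = gw.calculate()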
@@ -4,8 +4,8 @@
Plane wave mode and Stress tensor
=================================
-The major advantage of running DFT calculations on a real space grid, is a
-very efficient parallelization scheme when dealing with large systems.
+The major advantage of running DFT calculations on a real space grid, is an
+efficient parallelization scheme when dealing with large systems.
However, for small systems it is often faster to use a plane wave basis set
instead. In this case all quantities are represented by their Fourier
transforms on the periodic super cell and periodic boundary conditions are
@@ -4,7 +4,7 @@
ibmsc.csc.fi
============
-Here you find information about the the system
+Here you find information about the system
`<http://www.csc.fi/english/csc/news/customerinfo/IBMSC_phased_out>`_.
Debug like this::
@@ -12,7 +12,7 @@ Debug like this::
p690m ~/gpaw/demo> dbx /p/bin/python2.3
Type 'help' for help.
reading symbolic information ...warning: no source compiled with -g
(dbx) run
Python 2.3.4 (#4, May 28 2004, 15:30:35) [C] on aix5
Type "help", "copyright", "credits" or "license" for more information.
@@ -4,7 +4,7 @@
jump.fz-juelich.de
==================
-Here you find information about the the system
+Here you find information about the system
`<http://www.fz-juelich.de/jsc/service/sco_ibmP6>`_.
The only way we found to compile numpy is using python2.3 and
@@ -4,7 +4,7 @@
seaborg.nersc.gov
=================
-Here you find information about the the system
+Here you find information about the system
`<http://www.nersc.gov/nusers/systems/SP/>`_.
We need to use the mpi-enabled compiler ``mpcc`` and we should link to
@@ -4,13 +4,13 @@
FreeBSD
=======
-Here you find information about the the system
+Here you find information about the system
`<http://freebsd.org/>`_.
To build gpaw add to the ``gpaw/customize.py``::
compiler='gcc44'
extra_compile_args += ['-Wall', '-std=c99']
library_dirs += ['/usr/local/lib']
libraries += ['blas', 'lapack', 'gfortran']
@@ -7,7 +7,7 @@ Curie (BullX cluster, Intel Nehalem, Infiniband QDR, MKL)
.. note::
These instructions are up-to-date as of October 2014
-Here you find information about the the system
+Here you find information about the system
http://www-hpc.cea.fr/en/complexe/tgcc-curie.htm
For large calculations, it is suggested that one utilizes the Scalable Python
@@ -4,7 +4,7 @@
curie.ccc.cea.fr GPU
====================
-Here you find information about the the system
+Here you find information about the system
http://www-hpc.cea.fr/en/complexe/tgcc-curie.htm.
**Warning**: May 14 2013: rpa-gpu-expt branch fails to compile
.. _hermit:
============================
hermit.hww.de (Cray XE6)
============================
-Here you find information about the the system
+Here you find information about the system
`<http://www.hlrs.de/systems/platforms/cray-xe6-hermit/>`_.
.. note::
@@ -14,26 +14,26 @@ Scalable Python
===============
As the Hermit system is intended for simulations with thousands of CPU cores,
a special Python interpreter is used here. The scalable Python reduces the
import time by performing all import related I/O operations with a single CPU
core and uses MPI for broadcasting the data. As a limitation, all the MPI tasks
have to perform the same **import** statements.
As HLRS does not allow general internet access on the compute system, e.g.
version control repositories cannot be accessed directly (it is possible to
set up an ssh tunnel for some services). Here, we download the scalable Python first
to a local machine and then use scp for copying it to Hermit::
git clone git@gitorious.org:scalable-python/scalable-python.git scalable-python-src
scp -r scalable-python-src username@hermit1.hww.de:
We will build scalable Python with GNU compilers (other compilers can be used
for actual GPAW build), so start by changing the default programming
environment on Hermit::
module swap PrgEnv-cray PrgEnv-gnu
Due to the cross-compile environment in Cray XE6, a normal Python interpreter is
built for the front-end nodes and the MPI-enabled one for the compute nodes.
The build can be accomplished by the following ``build_gcc`` script
@@ -46,7 +46,7 @@ NumPy
=====
As the performance of the HOME filesystem is not very good, we install all
components other than the pure Python to a disk within the workspace mechanism
of HLRS (with the disadvantage that the workspaces expire and have to be
manually reallocated). Otherwise, no special tricks are needed for installing
NumPy::
@@ -55,7 +55,7 @@ NumPy::
GPAW
====
On Hermit, the Intel compiler together with the ACML library seemed to give the best
performance for GPAW; in addition, HDF5 will be used for parallel I/O. Thus,
load the following modules::
@@ -63,10 +63,10 @@ load the followgin modules::
module load acml
module load hdf5-parallel
The compilation is relatively straightforward; however, as we build NumPy for
the compute nodes it does not work on the front-end, and one has to specify the
NumPy include dirs in ``customize.py`` and provide the ``--ignore-numpy`` flag
when building. The system NumPy headers seem to work fine, but the safer option
is to
use the headers of our own NumPy installation:
.. literalinclude:: customize_hermit.py
@@ -84,7 +84,7 @@ Users can define their own modules for making it easier to setup environment var
append-path MODULEPATH $HOME/modules
Now, custom modules can be put into the ``modules`` directory, e.g. a GPAW module
in the file ``modules/gpaw``::
#%Module1.0
@@ -4,7 +4,7 @@
jaguar (Cray XT5)
==================
-Here you find information about the the system
+Here you find information about the system
http://www.nccs.gov/computing-resources/jaguar/.
The current operating system in Cray XT4/XT5 compute nodes, Compute Linux