Commit 5718a2bd authored by Brandon, committed by Brian Friesen

make all internal links relative

parent d6567af0
......@@ -41,6 +41,11 @@ mkdocs_build:
    key: ${CI_JOB_NAME}
    paths:
      - .build_cache
+check_internal_links:
+  stage: build
+  script:
+    - bash util/check-internal-links.sh
check_links:
  stage: test
......
......@@ -16,7 +16,7 @@ nersc$ module avail abinit
## Example
-See the [example jobs page](/jobs/examples/) for additional
+See the [example jobs page](../../jobs/examples/index.md) for additional
examples and information about jobs.
### Edison
......
......@@ -18,7 +18,7 @@ nersc$ module avail berkeleygw
## Example
-See the [example jobs page](/jobs/examples/) for additional
+See the [example jobs page](../../jobs/examples/index.md) for additional
examples and information about jobs.
### Edison
......
......@@ -17,7 +17,7 @@ nersc$ module avail paratec
## Example
-See the [example jobs page](/jobs/examples/) for additional
+See the [example jobs page](../../jobs/examples/index.md) for additional
examples and information about jobs.
### Edison
......
......@@ -54,7 +54,7 @@ is currently no efficient OpenMP implementation available.
We have optimized the hybrid DFT calculations in Quantum ESPRESSO
(`pw.x`). These changes are described in our
-[Quantum ESPRESSO case study](/performance/case-studies/quantum-espresso/index.md)
+[Quantum ESPRESSO case study](../../performance/case-studies/quantum-espresso/index.md)
and available in the `espresso/6.1` module we provide.
The following script provides the best `pw.x` performance for hybrid
......
......@@ -18,7 +18,7 @@ nersc$ module avail wannier90
## Example
-See the [example jobs page](/jobs/examples/) for additional
+See the [example jobs page](../../jobs/examples/index.md) for additional
examples and information about jobs.
### Edison
......
# Login Nodes
-Opening an [SSH connection](./ssh.md) to NERSC systems results in a
+Opening an [SSH connection](ssh.md) to NERSC systems results in a
connection to a login node. Systems such as Cori and Edison have
multiple login nodes which sit behind a load balancer. New connections
will be assigned a random node. If an account has recently connected
......@@ -17,7 +17,7 @@ the previous connection.
* compile codes (limit to e.g. `make -j 8`)
* edit files
-* submit [jobs](/jobs/index.md)
+* submit [jobs](../jobs/index.md)
Some workflows require interactive use of applications such as IDL,
MATLAB, NCL, python, and ROOT. For **small** datasets and **short**
......@@ -25,7 +25,7 @@ runtimes it is acceptable to run these on login nodes. For extended
runtimes or large datasets these should be run in the batch queues.
!!! tip
-    An [interactive qos](/jobs/examples/#interactive) is available
+    An [interactive qos](../jobs/examples/index.md#interactive) is available
on Cori for compute and memory intensive interactive work.
### Data transfers
......
......@@ -14,4 +14,4 @@ and support library developed at the National Center for Atmospheric
Research (NCAR). Like HDF5, NetCDF is self-describing and portable
across platforms.
-* [HDF5](/programming/libraries/hdf5/index.md)
+* [HDF5](../programming/libraries/hdf5/index.md)
......@@ -33,10 +33,10 @@ For a full list of options pass the `--help` flag.
## project directories
-The [project](/filesystems/project.md) filesystem allows sharing of
+The [project](../filesystems/project.md) filesystem allows sharing of
data within a project.
## Science Gateways
-* [Science gateways](/services/science-gateways.md)
+* [Science gateways](../services/science-gateways.md)
......@@ -12,4 +12,4 @@ depends on the granularity and size of the Burst Buffer
allocation. Performance is also dependent on access pattern, transfer
size and access method (e.g. MPI I/O, shared files).
-* [Examples](/jobs/examples/index.md)
+* [Examples](../jobs/examples/index.md)
......@@ -52,7 +52,7 @@ of their home directories. Every directory and sub-directory in
Global homes are backed up to [HPSS](archive.md) monthly.
If the snapshot capability does not meet your need
-contact [NERSC Consulting](/help/index.md) with pathnames and
+contact [NERSC Consulting](../help/index.md) with pathnames and
timestamps of the missing data.
!!! note
......
......@@ -59,17 +59,17 @@ storage of data that is not frequently accessed.
### Scratch
-[Edison](/systems/edison/index.md) and [Cori](/systems/cori/index.md)
+[Edison](../systems/edison/index.md) and [Cori](../systems/cori/index.md)
each have dedicated, large, local, parallel scratch file systems based
on Lustre. The scratch file systems are intended for temporary uses
such as storage of checkpoints or application input and output.
-* [Cori scratch](/filesystems/cori-scratch.md)
-* [Edison scratch](/filesystems/edison-scratch.md)
+* [Cori scratch](cori-scratch.md)
+* [Edison scratch](edison-scratch.md)
-### [Burst Buffer](/filesystems/cori-burst-buffer.md)
+### [Burst Buffer](cori-burst-buffer.md)
-Cori's [Burst Buffer](/filesystems/cori-burst-buffer.md) provides very
+Cori's [Burst Buffer](cori-burst-buffer.md) provides very
high performance I/O on a per-job or short-term basis. It is
particularly useful for codes that are I/O-bound, for example, codes
that produce large checkpoint files, or that have small or random I/O
......
......@@ -5,11 +5,9 @@ the world.
## FAQ
-* [Password Resets](/accounts/passwords.md#forgotten-passwords)
-* [Connection problems](/connect/ssh)
-* [Debug Crashed job](#)
-* [File Permissions](#)
-* [Software Installtion](#)
+* [Password Resets](../accounts/passwords.md#forgotten-passwords)
+* [Connection problems](../connect/ssh.md)
+* [File Permissions](../filesystems/unix-file-permissions.md)
## Help Desk
......
......@@ -75,11 +75,11 @@ illustrative, not accurate).
(mostly set to "threads" or "cores"). These are useful for fine
tuning thread affinity.
-1. Use the [Slurm bcast option](/jobs/best-practices/) for large jobs
+1. Use the [Slurm bcast option](../best-practices.md) for large jobs
   to copy executables to the compute nodes before jobs start. See
   details here.
-1. Use the [core specialization](/jobs/best-practices/) feature to
+1. Use the [core specialization](../best-practices.md) feature to
   isolate system overhead to specific cores.
## Job Script Generator
......
......@@ -16,7 +16,7 @@ Simulations which must run for a long period of time achieve the best
throughput when composed of many small jobs utilizing
checkpoint/restart chained together.
-* [Example: job chaining](./examples/index.md#dependencies)
+* [Example: job chaining](examples/index.md#dependencies)
## I/O performance
......@@ -30,7 +30,7 @@ These systems should be referenced with the environment variable
`$SCRATCH`.
!!! tip
-    On Cori the [Burst Buffer](#) offers the best I/O performance.
+    On Cori the [Burst Buffer](examples/index.md#burst-buffer-test) offers the best I/O performance.
!!! warn
Scratch filesystems are not backed up and old files are
......@@ -199,6 +199,6 @@ Cori and Edison.
Users requiring large numbers of serial jobs have several options at
NERSC.
-* [shared qos](/jobs/examples/index.md#shared)
-* [job arrays](/jobs/examples/index.md#job-arrays)
-* [workflow tools](/jobs/workflow-tools.md)
+* [shared qos](examples/index.md#shared)
+* [job arrays](examples/index.md#job-arrays)
+* [workflow tools](workflow-tools.md)
......@@ -706,7 +706,7 @@ the "craynetwork" resource, we use the "--gres" flag available in both
This example is quite similar to the multiple srun jobs shown for
[running simultaneous parallel
-jobs](/jobs/examples#multiple-parallel-jobs-simultaneously), with the
+jobs](#multiple-parallel-jobs-simultaneously), with the
following exceptions:
1. For our sbatch job, we have requested "--gres=craynetwork:2" which
......
......@@ -34,7 +34,7 @@ srun -n 1024 -c 2 --cpu-bind=cores /tmp/my_program.x
For applications with dynamic executables and many libraries
(*especially* python based applications)
-use [shifter](/programming/shifter/overview.md).
+use [shifter](../programming/shifter/overview.md).
## Topology
......
......@@ -162,7 +162,7 @@ values for modules are:
| Module Name | Function |
|-------------|-----------|
|mpich |Allows communication between nodes using the high-speed interconnect|
-|cvmfs |Makes access to DVS shared [CVMFS software stack](/services/cvmfs.md) available at /cvmfs in the image|
+|cvmfs |Makes access to DVS shared [CVMFS software stack](../../services/cvmfs.md) available at /cvmfs in the image|
|none |Turns off all modules|
Modules can be used together. So if you wanted MPI functionality and
......
......@@ -38,13 +38,13 @@ CI platforms -- such as Travis, Jenkins, AppVeyor, etc. -- as it [extensively
simplifies diagnosing CI issues](https://cdash.nersc.gov/viewBuildError.php?type=1&buildid=1358)
and provides a location to log performance history.
-If the project uses CMake to generate a build system, see [CDash submission for CMake projects](/services/cdash/with_cmake.md).
+If the project uses CMake to generate a build system, see [CDash submission for CMake projects](with_cmake.md).
If the project does not use CMake, NERSC provides python bindings to CMake/CTest in a project called
[pyctest](https://github.com/jrmadsen/pyctest). These bindings allow projects to generate CTest
tests and submit to the dashboard regardless of the build system (CMake, autotools, setup.py, etc.).
This package is available with PyPi (source distribution) and Anaconda (pre-compiled distributions).
-For usage, see [CDash submission for non-CMake projects](/services/cdash/without_cmake.md),
+For usage, see [CDash submission for non-CMake projects](without_cmake.md),
the [pyctest documentation](https://pyctest.readthedocs.io/en/latest/).
- Anaconda: `conda install -c conda-forge pyctest`
......
......@@ -33,7 +33,7 @@ above doesn't change that all our mounted repositories appear under
CVMFS is also available on Cori and Edison compute nodes with or
without shifter
-If using [shifter](/programming/shifter/how-to-use.md) you should
+If using [shifter](../programming/shifter/how-to-use.md) you should
specify the `--modules=cvmfs` flag to shifter to make that mount
appear inside your container (unless you want to use your own version
of cvmfs inside the container).
......
......@@ -26,7 +26,7 @@ Eventually these two hubs will be merged and an options form will allow you to s
The two existing hubs are:
* https://jupyter.nersc.gov/
-* Runs as a [Spin](/services/spin/) service and is thus external to NERSC's Cray systems
+* Runs as a [Spin](../services/spin/index.md) service and is thus external to NERSC's Cray systems
* Notebooks spawned by this service have access to GPFS (e.g. `/project`, `$HOME`)
* Python software environments and kernels run in the Spin service, not on Cori
* https://jupyter-dev.nersc.gov/
......
......@@ -62,12 +62,12 @@ all applications before they run in the production environment.
The name **cattle** refers to the container *Orchestrator* which we use to manage
containers and is part of Rancher. Rancher names many of their components with
'Ranch'-themed names, such as 'Longhorn' or 'Wagyu'. To read more information
-on Rancher, please read the [Spin Getting Started Guide overview](/services/spin/getting_started).
+on Rancher, please read the [Spin Getting Started Guide overview](index.md).
### Follow the NERSC Security Requirements
Note that all applications sent to Spin must follow the NERSC security requirements,
-which are outlined in the [Spin Best Practices Guide](/services/spin/best_practices). If
+which are outlined in the [Spin Best Practices Guide](../best_practices.md). If
the application breaks one of the security requirements, Spin will refuse to
run the application and will print an error, such as in the following example:
......@@ -336,7 +336,7 @@ following:
username & group. If you wish to run the Spin containers under your
own Unix username and group instead of a UID & GID. We'll cover how
to do this in
-  the [Spin Best Practices Guide](/services/spin/best_practices).
+  the [Spin Best Practices Guide](../best_practices.md).
* The permissions for all files used by your Spin application must be
readable by this account.
* We are looking into ways to improve this experience, and are waiting
......@@ -492,7 +492,7 @@ major differences:
* Sensitive ports, such as MySQL (port 3306), Postgres (5432) are
restricted to NERSC & LBL networks only.
* A detailed list of Ports and their accessibility can be found in the
-  [Spin Best Practices Guide](/services/spin/best_practices), under "Networking".
+  [Spin Best Practices Guide](../best_practices.md), under "Networking".
* The `retain_ip` parameter helps a service to keep the same IP address during upgrades.
* Users and groups
* The application runs as your UID & GID account, so it can access files
......@@ -513,7 +513,7 @@ major differences:
even more secure. If you needed to add specific capabilities back
to the container, you can add them with the **cap_add:**
parameter, which is discussed more in
-    the [Spin Best Practices Guide](/services/spin/best_practices).
+    the [Spin Best Practices Guide](../best_practices.md).
### Step 3: Sanity checks
......@@ -837,7 +837,7 @@ After running this command, note that the container previously marked as
## Other ways to work with your stack
The Rancher CLI can do many things. Here are some common tasks you can do with
-the Rancher CLI. A more comprehensive list can be found at [Spin: Tips & Examples](/services/spin/tips_and_examples/).
+the Rancher CLI. A more comprehensive list can be found at [Spin: Tips & Examples](../tips_and_examples.md).
### View all of your stacks
......@@ -927,7 +927,7 @@ See the topic [#removing-your-stack-if-you-get-stuck] above.
## Other useful command line tasks
-For more examples of how to use the Rancher CLI, please see [Spin: Tips & Examples](/services/spin/tips_and_examples/).
+For more examples of how to use the Rancher CLI, please see [Spin: Tips & Examples](../tips_and_examples.md).
# Next Steps: Lesson 3
......
......@@ -298,7 +298,7 @@ passwords in a Docker Compose file. We'll rectify that problem here by
converting our passwords into Rancher **Secrets**. Secrets should be
used to store any sensitive data, such as passwords or private keys,
and explained further in
-the [Spin Best Practices Guide](/docs/services/spin/best_practices/).
+the [Spin Best Practices Guide](../best_practices.md).
### Build
......@@ -315,7 +315,7 @@ normally obtain the files using `rancher export`.
Exporting a stack can be useful for collaboration and to store the
configuration files in a version control system. You can only export a
stack that you have permission to do so. For more info on Exporting a
-stack, see the [Spin Tips](/docs/services/spin/tips_and_examples/)
+stack, see the [Spin Tips](../tips_and_examples.md)
guide.
**We've already obtained our Docker Compose files from the Git repo**,
......
......@@ -15,10 +15,10 @@ storage device.
## Filesystems
-* [Cori scratch](/filesystems/cori-scratch.md)
-* [Burst Buffer](/filesystems/cori-burst-buffer.md)
+* [Cori scratch](../../filesystems/cori-scratch.md)
+* [Burst Buffer](../../filesystems/cori-burst-buffer.md)

-[Filesystems at NERSC](/filesystems/index.md)
+[Filesystems at NERSC](../../filesystems/index.md)
## Configuration
......
......@@ -6,8 +6,8 @@ fast Intel Xeon processors, and 64 GB of memory per node.
## Filesystems
-* [Edison scratch](/filesystems/edison-scratch.md)
-* [Cori scratch](/filesystems/cori-scratch.md)[^1]
+* [Edison scratch](../../filesystems/edison-scratch.md)
+* [Cori scratch](../../filesystems/cori-scratch.md)[^1]
[^1]: Cori's scratch filesystem is also mounted on Edison login and
compute nodes
+#!/bin/bash
+grep -nr '\](/' --include="*md" docs
+test $? = 1
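The new `util/check-internal-links.sh` fails the CI job whenever any Markdown file still contains an absolute internal link: `grep` exits with status 1 when it finds no matches, so `test $? = 1` succeeds only on a clean tree. A minimal sketch of the same check against a throwaway `docs/` tree (the file name and layout here are illustrative, not part of the repository):

```shell
#!/bin/bash
# Build a small docs tree containing only a relative link (illustrative).
mkdir -p docs
printf 'See the [example jobs page](../jobs/examples/index.md)\n' > docs/sample.md

# Flag any Markdown link target that begins with "/", i.e. an absolute
# internal link; grep exits 1 when nothing matches.
if ! grep -nr '\](/' --include="*md" docs; then
    echo "no absolute internal links"
fi
```

Because grep reports the file and line number of each offending link (`-n`), a failing CI run points directly at the links that still need to be made relative.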