JupyterHub¶
JupyterHub is the best way to serve Jupyter notebook for multiple users. It can be used in a class of students, a corporate data science group, or a scientific research group. It is a multi-user Hub that spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server.
To make life easier, JupyterHub has distributions. Be sure to take a look at them before continuing with the configuration of the broader, original JupyterHub system. Today, you can find two main cases:
If you need a simple setup for a small number of users (0-100) on a single server, take a look at The Littlest JupyterHub distribution.
If you need to allow for even more users, or a dynamic number of servers running on a cloud, take a look at Zero to JupyterHub with Kubernetes.
Four subsystems make up JupyterHub:
a Hub (tornado process) that is the heart of JupyterHub
a configurable http proxy (node-http-proxy) that receives the requests from the client’s browser
multiple single-user Jupyter notebook servers (Python/IPython/tornado) that are monitored by Spawners
an authentication class that manages how users can access the system
Besides these central pieces, you can add optional configurations through a config.py
file and manage users' kernels on an admin panel. A simplification of the whole system can be seen in the figure below:

JupyterHub performs the following functions:
The Hub launches a proxy
The proxy forwards all requests to the Hub by default
The Hub handles user login and spawns single-user servers on demand
The Hub configures the proxy to forward URL prefixes to the single-user notebook servers
For convenient administration of the Hub, its users, and services, JupyterHub also provides a REST API.
The JupyterHub team and Project Jupyter value our community, and JupyterHub follows the Jupyter Community Guides.
Contents¶
Distributions¶
A JupyterHub distribution is tailored towards a particular set of use cases. These are generally easier to set up than setting up JupyterHub from scratch, assuming they fit your use case.
The two popular ones are:
Zero to JupyterHub on Kubernetes, for running JupyterHub on top of Kubernetes. This can scale to a large number of machines & users.
The Littlest JupyterHub, for an easy to set up & run JupyterHub supporting 1-100 users on a single machine.
Installation Guide¶
Quickstart¶
Prerequisites¶
Before installing JupyterHub, you will need:
a Linux/Unix based system
Python 3.5 or greater. An understanding of using pip or conda for installing Python packages is helpful.
nodejs/npm. Install nodejs/npm, using your operating system's package manager.
If you are using conda, the nodejs and npm dependencies will be installed for you by conda.
If you are using pip, install a recent version of nodejs/npm. For example, install it on Linux (Debian/Ubuntu) using:
sudo apt-get install npm nodejs-legacy
The nodejs-legacy package installs the node executable and is currently required for npm to work on Debian/Ubuntu.
TLS certificate and key for HTTPS communication
Domain name
Before running the single-user notebook servers (which may be on the same system as the Hub or not), you will need:
Jupyter Notebook version 4 or greater
Installation¶
JupyterHub can be installed with pip (and the proxy with npm) or conda:
pip, npm:
python3 -m pip install jupyterhub
npm install -g configurable-http-proxy
python3 -m pip install notebook # needed if running the notebook servers locally
conda (one command installs jupyterhub and proxy):
conda install -c conda-forge jupyterhub # installs jupyterhub and proxy
conda install notebook # needed if running the notebook servers locally
Test your installation. If installed, these commands should return the packages’ help contents:
jupyterhub -h
configurable-http-proxy -h
Start the Hub server¶
To start the Hub server, run the command:
jupyterhub
Visit http://localhost:8000 in your browser, and sign in with your unix credentials.
To allow multiple users to sign in to the Hub server, you must start jupyterhub as a privileged user, such as root:
sudo jupyterhub
The wiki describes how to run the server as a less privileged user. This requires additional configuration of the system.
Using Docker¶
Important
We highly recommend following the Zero to JupyterHub tutorial for installing JupyterHub.
Alternate installation using Docker¶
A ready-to-go Docker image provides a straightforward deployment of JupyterHub.
Note
This jupyterhub/jupyterhub
docker image is only an image for running
the Hub service itself. It does not provide the other Jupyter components,
such as Notebook installation, which are needed by the single-user servers.
To run the single-user servers, which may be on the same system as the Hub or
not, Jupyter Notebook version 4 or greater must be installed.
Starting JupyterHub with docker¶
The JupyterHub docker image can be started with the following command:
docker run -d -p 8000:8000 --name jupyterhub jupyterhub/jupyterhub jupyterhub
This command will create a container named jupyterhub that you can stop and resume with docker stop/start.
The Hub service will be listening on all interfaces at port 8000, which makes this a good choice for testing JupyterHub on your desktop or laptop.
If you want to run Docker on a computer that has a public IP, then you should (as in MUST) secure it with SSL by adding SSL options to your Docker configuration or by using an SSL-enabled proxy.
Mounting volumes will allow you to store data outside the docker image (host system) so it will be persistent, even when you start a new image.
The command docker exec -it jupyterhub bash
will spawn a root shell in your
docker container. You can use the root shell to create system users in the container.
These accounts will be used for authentication in JupyterHub’s default
configuration.
Installation Basics¶
Platform support¶
JupyterHub is supported on Linux/Unix based systems. To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your team on the network. The JupyterHub server can be on an internal network at your organization, or it can run on the public internet (in which case, take care with the Hub’s security).
JupyterHub officially does not support Windows. You may be able to use JupyterHub on Windows if you use a Spawner and Authenticator that work on Windows, but the JupyterHub defaults will not. Bugs reported on Windows will not be accepted, and the test suite will not run on Windows. Small patches that fix minor Windows compatibility issues (such as basic installation) may be accepted, however. For Windows-based systems, we would recommend running JupyterHub in a docker container or Linux VM.
Additional Reference: Tornado’s documentation on Windows platform support
Planning your installation¶
Prior to beginning installation, it’s helpful to consider some of the following:
deployment system (bare metal, Docker)
Authentication (PAM, OAuth, etc.)
Spawner of singleuser notebook servers (Docker, Batch, etc.)
Services (nbgrader, etc.)
JupyterHub database (default SQLite; traditional RDBMS such as PostgreSQL, MySQL, or other databases supported by SQLAlchemy)
Folders and File Locations¶
It is recommended to put all of the files used by JupyterHub into standard UNIX filesystem locations.
/srv/jupyterhub for all security and runtime files
/etc/jupyterhub for all configuration files
/var/log for log files
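As a minimal sketch, jupyterhub_config.py entries pointing at these locations (the paths are the recommended ones above; adjust to your deployment) could look like:
# illustrative jupyterhub_config.py entries using the standard locations
c.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret'
c.JupyterHub.db_url = 'sqlite:////srv/jupyterhub/jupyterhub.sqlite'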
Getting Started¶
Configuration Basics¶
This section contains basic information about configuring settings for a JupyterHub deployment. The Technical Reference documentation provides additional details.
This section will help you learn how to:
generate a default configuration file,
jupyterhub_config.py
start with a specific configuration file
configure JupyterHub using command line options
find information and examples for some common deployments
Generate a default config file¶
On startup, JupyterHub will look by default for a configuration file,
jupyterhub_config.py
, in the current working directory.
To generate a default config file, jupyterhub_config.py
:
jupyterhub --generate-config
This default jupyterhub_config.py
file contains comments and guidance for all
configuration variables and their default values. We recommend storing
configuration files in the standard UNIX filesystem location, i.e.
/etc/jupyterhub
.
Start with a specific config file¶
You can load a specific config file and start JupyterHub using:
jupyterhub -f /path/to/jupyterhub_config.py
If you have stored your configuration file in the recommended UNIX filesystem
location, /etc/jupyterhub
, the following command will start JupyterHub using
the configuration file:
jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
The IPython documentation provides additional information on the config system that Jupyter uses.
Configure using command line options¶
To display all command line options that are available for configuration:
jupyterhub --help-all
Configuration using the command line options is done when launching JupyterHub.
For example, to start JupyterHub on 10.0.1.2:443
with https, you
would enter:
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
All configurable options may technically be set on the command-line,
though some are inconvenient to type. To set a particular configuration
parameter, c.Class.trait
, you would use the command line option,
--Class.trait
, when starting JupyterHub. For example, to configure the
c.Spawner.notebook_dir
trait from the command-line, use the
--Spawner.notebook_dir
option:
jupyterhub --Spawner.notebook_dir='~/assignments'
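The same setting can be made persistent by placing the equivalent line in jupyterhub_config.py:
c.Spawner.notebook_dir = '~/assignments'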
Configure for various deployment environments¶
The default authentication and process spawning mechanisms can be replaced, and specific authenticators and spawners can be set in the configuration file. This enables JupyterHub to be used with a variety of authentication methods or process control and deployment environments. Some examples, meant as illustration, are:
Using GitHub OAuth instead of PAM with OAuthenticator
Spawning single-user servers with Docker, using the DockerSpawner
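For example, a minimal sketch of such a configuration in jupyterhub_config.py (assuming the oauthenticator and dockerspawner packages are installed; the callback URL is illustrative) might look like:
c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = 'https://hub.example.com/hub/oauth_callback'
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'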
Run the proxy separately¶
This is not strictly necessary, but useful in many cases. If you use a custom proxy (e.g. Traefik), this is also not needed.
Connections to user servers go through the proxy, and not the hub itself. If the proxy stays running when the hub restarts (for maintenance, re-configuration, etc.), then user connections are not interrupted. For simplicity, by default the hub starts the proxy automatically, so if the hub restarts, the proxy restarts, and user connections are interrupted. It is easy to run the proxy separately; for more information, see the separate proxy page.
Networking basics¶
This section will help you with basic proxy and network configuration to:
set the proxy’s IP address and port
set the proxy’s REST API URL
configure the Hub if the Proxy or Spawners are remote or isolated
set the hub_connect_ip which services will use to communicate with the hub
Set the Proxy’s IP address and port¶
The Proxy’s main IP address setting determines where JupyterHub is available to users.
By default, JupyterHub is configured to be available on all network interfaces ('') on port 8000. Note: Use of '*' is discouraged for IP configuration; instead, use of '0.0.0.0' is preferred.
Changing the Proxy’s main IP address and port can be done with the following JupyterHub command line options:
jupyterhub --ip=192.168.1.2 --port=443
Or by placing the following lines in a configuration file,
jupyterhub_config.py
:
c.JupyterHub.ip = '192.168.1.2'
c.JupyterHub.port = 443
Port 443 is used in the examples since 443 is the default port for SSL/HTTPS.
Configuring only the main IP and port of JupyterHub should be sufficient for most deployments of JupyterHub. However, more customized scenarios may need additional networking details to be configured.
Note that c.JupyterHub.ip and c.JupyterHub.port are single values, not tuples or lists – JupyterHub listens to only a single IP address and port.
Set the Proxy’s REST API communication URL (optional)¶
The Hub service talks to the proxy via a REST API on a secondary port. By default, this REST API listens on port 8081 of localhost only. The API URL can be configured separately to override the default settings. The URL to access the API, c.ConfigurableHTTPProxy.api_url, is configurable.
An example entry to set the proxy’s API URL in jupyterhub_config.py
is:
c.ConfigurableHTTPProxy.api_url = 'http://10.0.1.4:5432'
If running the Proxy separate from the Hub, configure the REST API communication
IP address and port by adding this to the jupyterhub_config.py
file:
# ideally a private network address
c.JupyterHub.proxy_api_ip = '10.0.1.4'
c.JupyterHub.proxy_api_port = 5432
We recommend using the proxy’s api_url
setting instead of the deprecated
settings, proxy_api_ip
and proxy_api_port
.
Configure the Hub if the Proxy or Spawners are remote or isolated¶
The Hub service listens only on localhost
(port 8081) by default.
The Hub needs to be accessible from both the proxy and all Spawners.
When spawning local servers, an IP address setting of localhost
is fine.
If either the Proxy or (more likely) the Spawners will be remote or isolated in containers, the Hub must listen on an IP that is accessible.
c.JupyterHub.hub_ip = '10.0.1.4'
c.JupyterHub.hub_port = 54321
Added in 0.8: The c.JupyterHub.hub_connect_ip
setting is the ip address or
hostname that other services should use to connect to the Hub. A common
configuration for, e.g. docker, is:
c.JupyterHub.hub_ip = '0.0.0.0' # listen on all interfaces
c.JupyterHub.hub_connect_ip = '10.0.1.4' # ip as seen on the docker network. Can also be a hostname.
Adjusting the hub’s URL¶
The hub will most commonly be running on a hostname of its own. If it
is not – for example, if the hub is being reverse-proxied and being
exposed at a URL such as https://proxy.example.org/jupyter/
– then
you will need to tell JupyterHub the base URL of the service. In such
a case, it is both necessary and sufficient to set
c.JupyterHub.base_url = '/jupyter/'
in the configuration.
Security settings¶
Important
You should not run JupyterHub without SSL encryption on a public network.
Security is the most important aspect of configuring Jupyter. Three configuration settings are the main aspects of security configuration:
SSL encryption (to enable HTTPS)
Cookie secret (a key for encrypting browser cookies)
Proxy authentication token (used for the Hub and other services to authenticate to the Proxy)
The Hub hashes all secrets (e.g., auth tokens) before storing them in its database. A loss of control over read-access to the database should have minimal impact on your deployment; if your database has been compromised, it is still a good idea to revoke existing tokens.
Enabling SSL encryption¶
Since JupyterHub includes authentication and allows arbitrary code execution, you should not run it without SSL (HTTPS).
This will require you to obtain an official, trusted SSL certificate or create a
self-signed certificate. Once you have obtained and installed a key and
certificate you need to specify their locations in the jupyterhub_config.py
configuration file as follows:
c.JupyterHub.ssl_key = '/path/to/my.key'
c.JupyterHub.ssl_cert = '/path/to/my.cert'
Some cert files also contain the key, in which case only the cert is needed. It is important that these files be put in a secure location on your server, where they are not readable by regular users.
If you are using a chain certificate, see also chained certificate for SSL in the JupyterHub Troubleshooting FAQ.
It is also possible to use letsencrypt to obtain
a free, trusted SSL certificate. If you run letsencrypt using the default
options, the needed configuration is (replace mydomain.tld
by your fully
qualified domain name):
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/{mydomain.tld}/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/{mydomain.tld}/fullchain.pem'
If the fully qualified domain name (FQDN) is example.com
, the following
would be the needed configuration:
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
In certain cases, for example if the hub is running behind a reverse proxy, and SSL termination is being provided by NGINX, it is reasonable to run the hub without SSL.
To achieve this, simply omit the configuration settings
c.JupyterHub.ssl_key
and c.JupyterHub.ssl_cert
(setting them to None
does not have the same effect, and is an error).
Cookie secret¶
The cookie secret is an encryption key, used to encrypt the browser cookies which are used for authentication. Three common methods are described for generating and configuring the cookie secret.
The cookie secret should be 32 random bytes, encoded as hex, and is typically
stored in a jupyterhub_cookie_secret
file. An example command to generate the
jupyterhub_cookie_secret
file is:
openssl rand -hex 32 > /srv/jupyterhub/jupyterhub_cookie_secret
In most deployments of JupyterHub, you should point this to a secure location on
the file system, such as /srv/jupyterhub/jupyterhub_cookie_secret
.
The location of the jupyterhub_cookie_secret
file can be specified in the
jupyterhub_config.py
file as follows:
c.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret'
If the cookie secret file doesn’t exist when the Hub starts, a new cookie
secret is generated and stored in the file. The file must not be readable by
group
or other
or the server won’t start. The recommended permissions
for the cookie secret file are 600
(owner-only rw).
If you would like to avoid the need for files, the value can be loaded in the
Hub process from the JPY_COOKIE_SECRET
environment variable, which is a
hex-encoded string. You can set it this way:
export JPY_COOKIE_SECRET=$(openssl rand -hex 32)
For security reasons, this environment variable should only be visible to the Hub. If you set it dynamically as above, all users will be logged out each time the Hub starts.
You can also set the cookie secret in the configuration file
itself, jupyterhub_config.py
, as a binary string:
c.JupyterHub.cookie_secret = bytes.fromhex('64 CHAR HEX STRING')
Important
If the cookie secret value changes for the Hub, all single-user notebook servers must also be restarted.
Proxy authentication token¶
The Hub authenticates its requests to the Proxy using a secret token that
the Hub and Proxy agree upon. The value of this string should be a random
string (for example, generated by openssl rand -hex 32
).
Or you can set the value in the configuration file, jupyterhub_config.py
:
c.JupyterHub.proxy_auth_token = '0bc02bede919e99a26de1e2a7a5aadfaf6228de836ec39a05a6c6942831d8fe5'
You can pass this value of the proxy authentication token to the Hub and Proxy
using the CONFIGPROXY_AUTH_TOKEN
environment variable:
export CONFIGPROXY_AUTH_TOKEN=$(openssl rand -hex 32)
This environment variable needs to be visible to the Hub and Proxy.
If you don’t set the Proxy authentication token, the Hub will generate a random key itself, which means that any time you restart the Hub you must also restart the Proxy. If the proxy is a subprocess of the Hub, this should happen automatically (this is the default configuration).
Authentication and User Basics¶
The default Authenticator uses PAM to authenticate system users with their username and password. With the default Authenticator, any user with an account and password on the system will be allowed to login.
Create a whitelist of users¶
You can restrict which users are allowed to login with a whitelist,
Authenticator.whitelist
:
c.Authenticator.whitelist = {'mal', 'zoe', 'inara', 'kaylee'}
Users in the whitelist are added to the Hub database when the Hub is started.
Configure admins (admin_users)¶
Admin users of JupyterHub, admin_users
, can add and remove users from
the user whitelist
. admin_users
can take actions on other users’
behalf, such as stopping and restarting their servers.
A set of initial admin users, admin_users, can be configured as follows:
c.Authenticator.admin_users = {'mal', 'zoe'}
Users in the admin list are automatically added to the user whitelist, if they are not already present.
Each authenticator may have different ways of determining whether a user is an administrator. By default, JupyterHub uses the PAMAuthenticator, which provides the admin_groups option and can determine administrator status based on a user's group membership. For example, we can let any user in the wheel group be an admin:
c.PAMAuthenticator.admin_groups = {'wheel'}
Give admin access to other users’ notebook servers (admin_access)¶
Since the default JupyterHub.admin_access
setting is False, the admins
do not have permission to log in to the single user notebook servers
owned by other users. If JupyterHub.admin_access
is set to True,
then admins have permission to log in as other users on their
respective machines, for debugging. As a courtesy, you should make
sure your users know if admin_access is enabled.
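For example, to enable it in jupyterhub_config.py:
c.JupyterHub.admin_access = True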
Add or remove users from the Hub¶
Users can be added to and removed from the Hub via either the admin panel or the REST API. When a user is added, the user will be automatically added to the whitelist and database. Restarting the Hub will not require manually updating the whitelist in your config file, as the users will be loaded from the database.
After starting the Hub once, it is not sufficient to remove a user
from the whitelist in your config file. You must also remove the user
from the Hub’s database, either by deleting the user from JupyterHub’s
admin page, or you can clear the jupyterhub.sqlite
database and start
fresh.
Use LocalAuthenticator to create system users¶
The LocalAuthenticator
is a special kind of authenticator that has
the ability to manage users on the local system. When you try to add a
new user to the Hub, a LocalAuthenticator
will check if the user
already exists. If you set the configuration value, create_system_users
,
to True
in the configuration file, the LocalAuthenticator
has
the privileges to add users to the system. The setting in the config
file is:
c.LocalAuthenticator.create_system_users = True
Adding a user to the Hub that doesn’t already exist on the system will
result in the Hub creating that user via the system adduser
command
line tool. This option is typically used on hosted deployments of
JupyterHub, to avoid the need to manually create all your users before
launching the service. This approach is not recommended when running
JupyterHub in situations where JupyterHub users map directly onto the
system’s UNIX users.
Use OAuthenticator to support OAuth with popular service providers¶
JupyterHub’s OAuthenticator currently supports the following popular services:
Auth0
Bitbucket
CILogon
GitHub
GitLab
Globus
Google
MediaWiki
Okpy
OpenShift
A generic implementation, which you can use for OAuth authentication with any provider, is also available.
Use DummyAuthenticator for testing¶
The DummyAuthenticator is a simple authenticator that allows any username/password unless a global password has been set. If set, it will allow any username as long as the correct password is provided.
To set a global password, add this to the config file:
c.DummyAuthenticator.password = "some_password"
Spawners and single-user notebook servers¶
Since the single-user server is an instance of jupyter notebook, an entire separate multi-process application, there are many aspects of that server that can be configured, and a lot of ways to express that configuration.
At the JupyterHub level, you can set some values on the Spawner. The simplest of these is
Spawner.notebook_dir
, which lets you set the root directory for a user’s server. This root
notebook directory is the highest level directory users will be able to access in the notebook
dashboard. In this example, the root notebook directory is set to ~/notebooks
, where ~
is
expanded to the user’s home directory.
c.Spawner.notebook_dir = '~/notebooks'
You can also specify extra command-line arguments to the notebook server with:
c.Spawner.args = ['--debug', '--profile=PHYS131']
This could be used to set the user's default page for the single-user server:
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
Since the single-user server extends the notebook server application,
it still loads configuration from the jupyter_notebook_config.py
config file.
Each user may have one of these files in $HOME/.jupyter/
.
Jupyter also supports loading system-wide config files from /etc/jupyter/
,
which is the place to put configuration that you want to affect all of your users.
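As an illustrative sketch, a single system-wide notebook setting (the option shown is just one example) could be placed in such a file:
# /etc/jupyter/jupyter_notebook_config.py
# applied to every user's single-user notebook server
c.NotebookApp.default_url = '/tree'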
External services¶
When working with JupyterHub, a Service is defined as a process that interacts with the Hub’s REST API. A Service may perform a specific action or task. For example, shutting down individuals’ single-user notebook servers that have been idle for some time is a good example of a task that could be automated by a Service. Let’s look at how the cull_idle_servers script can be used as a Service.
Real-world example to cull idle servers¶
JupyterHub has a REST API that can be used by external services. This document will:
explain some basic information about API tokens
clarify that API tokens can be used to authenticate to single-user servers as of version 0.8.0
show how the cull_idle_servers script can be:
used in a Hub-managed service
run as a standalone script
Both examples for cull_idle_servers
will communicate tasks to the
Hub via the REST API.
API Token basics¶
To run such an external service, an API token must be created and provided to the service.
As of version 0.6.0, the preferred way of doing this is to first generate an API token:
openssl rand -hex 32
In version 0.8.0, a TOKEN request page for generating an API token is available from the JupyterHub user interface:
In the case of cull_idle_servers
, it is passed as the environment
variable called JUPYTERHUB_API_TOKEN
.
While API tokens are often associated with a specific user, API tokens
can be used by services that require external access for activities
that may not correspond to a specific human, e.g. adding users during
setup for a tutorial or workshop. Add a service and its API token to the
JupyterHub configuration file, jupyterhub_config.py
:
c.JupyterHub.services = [
    {'name': 'adding-users', 'api_token': 'super-secret-token'},
]
Upon restarting JupyterHub, you should see a message like below in the logs:
Adding API token for <username>
Authenticating to single-user servers using API token¶
In JupyterHub 0.7, there is no mechanism for token authentication to single-user servers, and only cookies can be used for authentication. 0.8 supports using JupyterHub API tokens to authenticate to single-user servers.
Configure cull-idle to run as a Hub-Managed Service¶
In jupyterhub_config.py
, add the following dictionary for the
cull-idle
Service to the c.JupyterHub.services
list:
import sys  # needed for sys.executable below

c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'admin': True,
        'command': [sys.executable, 'cull_idle_servers.py', '--timeout=3600'],
    }
]
where:
'admin': True indicates that the Service has ‘admin’ permissions, and
'command' indicates that the Service will be launched as a subprocess, managed by the Hub.
Run cull-idle manually as a standalone script¶
Now you can run your script, i.e. cull_idle_servers, by providing it the API token, and it will authenticate through the REST API to interact with the Hub.
This will run cull-idle
manually. cull-idle
can be run as a standalone
script anywhere with access to the Hub, and will periodically check for idle
servers and shut them down via the Hub’s REST API. In order to shutdown the
servers, the token given to cull-idle must have admin privileges.
Generate an API token and store it in the JUPYTERHUB_API_TOKEN
environment
variable. Run cull_idle_servers.py
manually.
export JUPYTERHUB_API_TOKEN='token'
python3 cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
Technical Reference¶
Technical Overview¶
The Technical Overview section gives you a high-level view of:
JupyterHub’s Subsystems: Hub, Proxy, Single-User Notebook Server
how the subsystems interact
the process from JupyterHub access to user login
JupyterHub’s default behavior
customizing JupyterHub
The goal of this section is to share a deeper technical understanding of JupyterHub and how it works.
The Subsystems: Hub, Proxy, Single-User Notebook Server¶
JupyterHub is a set of processes that together provide a single user Jupyter
Notebook server for each person in a group. Three major subsystems are started
by the jupyterhub
command line program:
Hub (Python/Tornado): manages user accounts, authentication, and coordinates Single User Notebook Servers using a Spawner.
Proxy: the public facing part of JupyterHub that uses a dynamic proxy to route HTTP requests to the Hub and Single User Notebook Servers. configurable http proxy (node-http-proxy) is the default proxy.
Single-User Notebook Server (Python/Tornado): a dedicated, single-user, Jupyter Notebook server is started for each user on the system when the user logs in. The object that starts the single-user notebook servers is called a Spawner.
How the Subsystems Interact¶
Users access JupyterHub through a web browser, by going to the IP address or the domain name of the server.
The basic principles of operation are:
The Hub spawns the proxy (in the default JupyterHub configuration)
The proxy forwards all requests to the Hub by default
The Hub handles login, and spawns single-user notebook servers on demand
The Hub configures the proxy to forward url prefixes to single-user notebook servers
The proxy is the only process that listens on a public interface. The Hub sits
behind the proxy at /hub
. Single-user servers sit behind the proxy at
/user/[username]
.
Different authenticators control access to JupyterHub. The default one (PAM) uses the user accounts on the server where JupyterHub is running. If you use this, you will need to create a user account on the system for each user on your team. Using other authenticators, you can allow users to sign in with e.g. a GitHub account, or with any single-sign-on system your organization has.
Next, spawners control how JupyterHub starts the individual notebook server for each user. The default spawner will start a notebook server on the same machine running under their system username. The other main option is to start each server in a separate container, often using Docker.
The Process from JupyterHub Access to User Login¶
When a user accesses JupyterHub, the following events take place:
Login data is handed to the Authenticator instance for validation
The Authenticator returns the username if the login information is valid
A single-user notebook server instance is spawned for the logged-in user
When the single-user notebook server starts, the proxy is notified to forward requests to /user/[username]/* to the single-user notebook server.
A cookie is set on /hub/, containing an encrypted token. (Prior to version 0.8, a cookie for /user/[username] was used too.)
The browser is redirected to /user/[username], and the request is handled by the single-user notebook server.
The single-user server identifies the user with the Hub via OAuth:
on request, the single-user server checks a cookie
if no cookie is set, redirect to the Hub for verification via OAuth
after verification at the Hub, the browser is redirected back to the single-user server
the token is verified and stored in a cookie
if no user is identified, the browser is redirected back to
/hub/login
Default Behavior¶
By default, the Proxy listens on all public interfaces on port 8000. Thus you can reach JupyterHub through either:
http://localhost:8000
or any other public IP or domain pointing to your system.
In their default configuration, the other services, the Hub and Single-User Notebook Servers, all communicate with each other on localhost only.
By default, starting JupyterHub will write two files to disk in the current working directory:
jupyterhub.sqlite is the SQLite database containing all of the state of the Hub. This file allows the Hub to remember which users are running and where, as well as storing other information enabling you to restart parts of JupyterHub separately. It is important to note that this database contains no sensitive information other than Hub usernames.
jupyterhub_cookie_secret is the encryption key used for securing cookies. This file needs to persist so that a Hub server restart will avoid invalidating cookies. Conversely, deleting this file and restarting the server effectively invalidates all login cookies. The cookie secret file is discussed in the Cookie Secret section of the Security Settings document.
The location of these files can be specified via configuration settings. It is
recommended that these files be stored in standard UNIX filesystem locations,
such as /etc/jupyterhub
for all configuration files and /srv/jupyterhub
for
all security and runtime files.
Customizing JupyterHub¶
There are two basic extension points for JupyterHub:
How users are authenticated by Authenticators
How user’s single-user notebook server processes are started by Spawners
Each is governed by a customizable class, and JupyterHub ships with basic defaults for each.
To enable custom authentication and/or spawning, subclass Authenticator
or
Spawner
, and override the relevant methods.
JupyterHub URL scheme¶
This document describes how JupyterHub routes requests.
This does not include the REST API urls.
In general, all URLs can be prefixed with c.JupyterHub.base_url
to
run the whole JupyterHub application on a prefix.
All authenticated handlers redirect to /hub/login
to login users
prior to being redirected back to the originating page.
The returned request should preserve all query parameters.
/
¶
The top-level request is always a simple redirect to /hub/
,
to be handled by the default JupyterHub handler.
In general, all requests to /anything
that do not start with /hub/
but are routed to the Hub, will be redirected to /hub/anything
before being handled by the Hub.
/hub/
¶
This is an authenticated URL.
This handler redirects users to the default URL of the application,
which defaults to the user’s default server.
That is, it redirects to /hub/spawn
if the user’s server is not running,
or the server itself (/user/:name
) if the server is running.
This default url behavior can be customized in two ways:
To redirect users to the JupyterHub home page (/hub/home
)
instead of spawning their server,
set redirect_to_server
to False:
c.JupyterHub.redirect_to_server = False
This might be useful if you have a Hub where you expect users to be managing multiple server configurations and automatic spawning is not desirable.
Second, you can customise the landing page to any page you like, such as a custom service you have deployed e.g. with course information:
c.JupyterHub.default_url = '/services/my-landing-service'
/hub/home
¶
By default, the Hub home page has just one or two buttons for starting and stopping the user’s server.
If named servers are enabled, there will be some additional tools for management of named servers.
Version added: 1.0 named server UI is new in 1.0.
/hub/login
¶
This is the JupyterHub login page. If you have a form-based username+password login, such as the default PAMAuthenticator, this page will render the login form.
If login is handled by an external service, e.g. with OAuth, this page will have a button, declaring “Login with …” which users can click to login with the chosen service.
If you want to skip the user-interaction to initiate logging in via the button, you can set
c.Authenticator.auto_login = True
This can be useful when the user is “already logged in” via some mechanism, but a handshake via redirects is necessary to complete the authentication with JupyterHub.
/hub/logout
¶
Visiting /hub/logout
clears cookies from the current browser.
Note that logging out does not stop a user’s server(s) by default.
If you would like to shutdown user servers on logout, you can enable this behavior with:
c.JupyterHub.shutdown_on_logout = True
Be careful with this setting because logging out one browser does not mean the user is no longer actively using their server from another machine.
/user/:username[/:servername]
¶
If a user’s server is running, this URL is handled by the user’s given server, not the Hub. The username is the first part and, if using named servers, the server name is the second part.
If the user’s server is not running, this will be redirected to /hub/user/:username/...
/hub/user/:username[/:servername]
¶
This URL indicates a request for a user server that is not running
(because /user/...
would have been handled by the notebook server
if the specified server were running).
Handling this URL is the most complicated condition in JupyterHub, because there can be many states:
server is not active (a. user matches, b. user doesn’t match)
server is ready
server is pending, but not ready
If the server is pending spawn,
the browser will be redirected to /hub/spawn-pending/:username/:servername
to see a progress page while waiting for the server to be ready.
If the server is not active at all,
a page will be served with a link to /hub/spawn/:username/:servername
.
Following that link will launch the requested server.
The HTTP status will be 503 in this case because a request has been made for a server that is not running.
If the server is ready, it is assumed that the proxy has not yet registered the route.
Some checks are performed and a delay is added before redirecting back to /user/:username/:servername/...
.
If something is really wrong, this can result in a redirect loop.
Visiting this page will never result in triggering the spawn of servers without additional user action (i.e. clicking the link on the page)
Version changed: 1.0
Prior to 1.0, this URL itself was responsible for spawning servers: it served the progress page while the spawn was pending and redirected to running servers. This was useful because it made sure that requested servers were restarted after they stopped, but it could also be harmful because unused servers would continuously be restarted if, for example, an idle JupyterLab frontend pointed at them kept making polling requests.
Requests to /user/:username[/:servername]/api/...
are assumed to be
from applications connected to stopped servers.
These are failed with 503 and an informative JSON error message
indicating how to spawn the server.
This is meant to help applications such as JupyterLab
that are connected to a server that has stopped.
Version changed: 1.0
JupyterHub 0.9 failed these API requests with status 404, but 1.0 uses 503.
/user-redirect/...
¶
This URL is for sharing a URL that will redirect a user to a path on their own default server. This is useful when users have the same file at the same URL on their servers, and you want a single link to give to any user that will open that file on their server.
e.g. a link to /user-redirect/notebooks/Index.ipynb
will send user hortense
to /user/hortense/notebooks/Index.ipynb
DO NOT share links to your own server with other users. This will not work in general, unless you grant those users access to your server.
Contributions welcome: The JupyterLab “shareable link” should share this link when run with JupyterHub, but it does not. See jupyterlab-hub where this should probably be done and this issue in JupyterLab that is intended to make it possible.
Spawning¶
/hub/spawn[/:username[/:servername]]¶
Requesting /hub/spawn will spawn the default server for the current user.
If username
and optionally servername
are specified,
then the specified server for the specified user will be spawned.
Once spawn has been requested,
the browser is redirected to /hub/spawn-pending/...
.
If Spawner.options_form
is used,
this will render a form,
and a POST request will trigger the actual spawn and redirect.
Version added: 1.0
1.0 adds the ability to specify username and servername.
Prior to 1.0, only /hub/spawn
was recognized for the default server.
Version changed: 1.0
Prior to 1.0, this page redirected back to /hub/user/:username
,
which was responsible for triggering spawn and rendering progress, etc.
/hub/spawn-pending[/:username[/:servername]]¶
Version added: 1.0 this URL is new in JupyterHub 1.0.
This page renders the progress view for the given spawn request.
Once the server is ready,
the browser is redirected to the running server at /user/:username/:servername/...
.
If this page is requested at any time after the specified server is ready, the browser will be redirected to the running server.
Requesting this page will never trigger any side effects.
If the server is not running (e.g. because the spawn has failed),
the spawn failure message (if applicable) will be displayed,
and the page will show a link back to /hub/spawn/...
.
/hub/token
¶
On this page, users can manage their JupyterHub API tokens. They can revoke access and request new tokens for writing scripts against the JupyterHub REST API.
/hub/admin
¶
Administrators can take various administrative actions from this page:
add/remove users
grant admin privileges
start/stop user servers
shutdown JupyterHub itself
Security Overview¶
The Security Overview section helps you learn about:
the design of JupyterHub with respect to web security
the semi-trusted user
the available mitigations to protect untrusted users from each other
the value of periodic security audits.
This overview also helps you obtain a deeper understanding of how JupyterHub works.
Semi-trusted and untrusted users¶
JupyterHub is designed to be a simple multi-user server for modestly sized groups of semi-trusted users. While the design reflects serving semi-trusted users, JupyterHub can also be used to serve untrusted users.
Using JupyterHub with untrusted users does mean more work by the administrator. Much care is required to secure a Hub, with extra caution on protecting users from each other as the Hub is serving untrusted users.
One aspect of JupyterHub’s design simplicity for semi-trusted users is that the Hub and single-user servers are placed in a single domain, behind a proxy. If the Hub is serving untrusted users, many of the web’s cross-site protections are not applied between single-user servers and the Hub, or between single-user servers and each other, since browsers see the whole thing (proxy, Hub, and single user servers) as a single website (i.e. single domain).
Protect users from each other¶
To protect users from each other, a user must never be able to write arbitrary HTML and serve it to another user on the Hub’s domain. JupyterHub’s authentication setup prevents a user writing arbitrary HTML and serving it to another user because only the owner of a given single-user notebook server is allowed to view user-authored pages served by the given single-user notebook server.
To protect all users from each other, JupyterHub administrators must ensure that:
A user does not have permission to modify their single-user notebook server, including:
A user may not install new packages in the Python environment that runs their single-user server.
If the PATH is used to resolve the single-user executable (instead of using an absolute path), a user may not create new files in any PATH directory that precedes the directory containing jupyterhub-singleuser.
A user may not modify environment variables (e.g. PATH, PYTHONPATH) for their single-user server.
A user may not modify the configuration of the notebook server (the ~/.jupyter or JUPYTER_CONFIG_DIR directory).
If any additional services are run on the same domain as the Hub, the services must never display user-authored HTML that is neither sanitized nor sandboxed (e.g. IFramed) to any user that lacks authentication as the author of a file.
Mitigate security issues¶
Several approaches to mitigating these issues with configuration options provided by JupyterHub include:
JupyterHub provides the ability to run single-user servers on their own
subdomains. This means the cross-origin protections between servers have the
desired effect, and user servers and the Hub are protected from each other. A
user’s single-user server will be at username.jupyter.mydomain.com
. This also
requires all user subdomains to point to the same address, which is most easily
accomplished with wildcard DNS. Since this spreads the service across multiple
domains, you will need wildcard SSL, as well. Unfortunately, for many
institutional domains, wildcard DNS and SSL are not available. If you do plan
to serve untrusted users, enabling subdomains is highly encouraged, as it
resolves the cross-site issues.
If subdomains are not available or not desirable, JupyterHub provides a configuration option Spawner.disable_user_config, which can be set to prevent the user-owned configuration files from being loaded. After implementing this option, package installation and PATHs are the other things that the admin must enforce.
For most Spawners, PATH
is not something users can influence, but care should
be taken to ensure that the Spawner does not evaluate shell configuration
files prior to launching the server.
Package isolation is most easily handled by running the single-user server in a virtualenv with disabled system-site-packages. The user should not have permission to install packages into this environment.
It is important to note that the control over the environment only affects the single-user server, and not the environment(s) in which the user’s kernel(s) may run. Installing additional packages in the kernel environment does not pose additional risk to the web application’s security.
By default, all communication on the server, between the proxy, hub, and single
-user notebooks is performed unencrypted. Setting the internal_ssl
flag in
jupyterhub_config.py
secures the aforementioned routes. Turning this
feature on does require that the enabled Spawner
can use the certificates
generated by the Hub
(the default LocalProcessSpawner
can, for instance).
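As a sketch, enabling this in jupyterhub_config.py looks like:
c.JupyterHub.internal_ssl = True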
It is also important to note that this encryption does not (yet) cover the
zmq tcp
sockets between the Notebook client and kernel. While users cannot
submit arbitrary commands to another user’s kernel, they can bind to these
sockets and listen. When serving untrusted users, this eavesdropping can be
mitigated by setting KernelManager.transport
to ipc
. This applies standard
Unix permissions to the communication sockets thereby restricting
communication to the socket owner. The internal_ssl
option will eventually
extend to securing the tcp
sockets as well.
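A sketch of that setting (where exactly it belongs depends on how your kernels are launched; a system-wide notebook config file is one common place):
# e.g. in /etc/jupyter/jupyter_notebook_config.py on the hosts running kernels
c.KernelManager.transport = 'ipc'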
Security audits¶
We recommend that you do periodic reviews of your deployment’s security. It’s good practice to keep JupyterHub, configurable-http-proxy, and nodejs versions up to date.
A handy website for testing your deployment is Qualys’ SSL analyzer tool.
Vulnerability reporting¶
If you believe you’ve found a security vulnerability in JupyterHub, or any Jupyter project, please report it to security@ipython.org. If you prefer to encrypt your security reports, you can use this PGP public key.
Authenticators¶
The Authenticator is the mechanism for authorizing users to use the Hub and single user notebook servers.
The default PAM Authenticator¶
JupyterHub ships with the default PAM-based Authenticator, for logging in with local user accounts via a username and password.
The OAuthenticator¶
Some login mechanisms, such as OAuth, don’t map onto username and password authentication, and instead use tokens. When using these mechanisms, you can override the login handlers.
You can see an example implementation of an Authenticator that uses GitHub OAuth at OAuthenticator.
JupyterHub’s OAuthenticator currently supports the following popular services:
Auth0
Bitbucket
CILogon
GitHub
GitLab
Globus
Google
MediaWiki
Okpy
OpenShift
A generic implementation, which you can use for OAuth authentication with any provider, is also available.
The Dummy Authenticator¶
When testing, it may be helpful to use the DummyAuthenticator. This allows for any username and password unless a global password has been set. Once set, any username will still be accepted but the correct password will need to be provided.
Additional Authenticators¶
A partial list of other authenticators is available on the JupyterHub wiki.
Technical Overview of Authentication¶
The base authenticator uses simple username and password authentication.
The base Authenticator has one central method:
Authenticator.authenticate(handler, data)
This method is passed the Tornado RequestHandler
and the POST data
from JupyterHub’s login form. Unless the login form has been customized,
data
will have two keys:
username
password
The authenticate
method’s job is simple:
return the username (non-empty str) of the authenticated user if authentication is successful
return
None
otherwise
Writing an Authenticator that looks up passwords in a dictionary requires only overriding this one method:
from traitlets import Dict
from jupyterhub.auth import Authenticator

class DictionaryAuthenticator(Authenticator):
    passwords = Dict(config=True,
        help="""dict of username:password for authentication"""
    )

    async def authenticate(self, handler, data):
        if self.passwords.get(data['username']) == data['password']:
            return data['username']
        # returning None (implicitly) means authentication failed
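To use it, point JupyterHub at the class in jupyterhub_config.py; a sketch, assuming the class lives in a hypothetical module named dictauth:
c.JupyterHub.authenticator_class = 'dictauth.DictionaryAuthenticator'
c.DictionaryAuthenticator.passwords = {'mal': 'serenity', 'zoe': 'wash'}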
Since the Authenticator and Spawner both use the same username,
sometimes you want to transform the name coming from the authentication service
(e.g. turning email addresses into local system usernames) before adding them to the Hub service.
Authenticators can define normalize_username, which takes a username. The default normalization is to cast names to lowercase.
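For more involved transformations, normalize_username can be overridden in a subclass; a minimal sketch (the class name and the email-stripping rule are hypothetical):
from jupyterhub.auth import Authenticator

class EmailAuthenticator(Authenticator):
    def normalize_username(self, username):
        # e.g. turn 'Jane.Doe@example.com' into the local name 'jane.doe'
        return username.split('@')[0].lower()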
For simple mappings, a configurable dict Authenticator.username_map
is used to turn one name into another:
c.Authenticator.username_map = {
    'service-name': 'localname'
}
When using PAMAuthenticator
, you can set
c.PAMAuthenticator.pam_normalize_username = True
, which will
normalize usernames using PAM (basically round-tripping them: username
to uid to username), which is useful in case you use some external
service that allows multiple usernames mapping to the same user (such
as ActiveDirectory, yes, this really happens). When
pam_normalize_username
is on, usernames are not normalized to
lowercase.
In most cases, there is a very limited set of acceptable usernames.
Authenticators can define validate_username(username)
,
which should return True for a valid username and False for an invalid one.
The primary effect this has is improving error messages during user creation.
The default behavior is to use configurable Authenticator.username_pattern
,
which is a regular expression string for validation.
To only allow usernames that start with ‘w’:
c.Authenticator.username_pattern = r'w.*'
You can use custom Authenticator subclasses to enable authentication via other mechanisms. One such example is using GitHub OAuth.
Because the username is passed from the Authenticator to the Spawner, a custom Authenticator and Spawner are often used together. For example, the Authenticator methods, pre_spawn_start(user, spawner) and post_spawn_stop(user, spawner), are hooks that can be used to do auth-related startup (e.g. opening PAM sessions) and cleanup (e.g. closing PAM sessions).
See a list of custom Authenticators on the wiki.
If you are interested in writing a custom authenticator, you can read this tutorial.
As of JupyterHub 1.0, custom authenticators can register themselves via
the jupyterhub.authenticators
entry point metadata.
To do this, in your setup.py
add:
setup(
    ...
    entry_points={
        'jupyterhub.authenticators': [
            'myservice = mypackage:MyAuthenticator',
        ],
    },
)
If you have added this metadata to your package, users can select your authenticator with the configuration:
c.JupyterHub.authenticator_class = 'myservice'
instead of the full
c.JupyterHub.authenticator_class = 'mypackage:MyAuthenticator'
previously required.
Additionally, configurable attributes for your authenticator will
appear in jupyterhub help output and auto-generated configuration files
via jupyterhub --generate-config
.
JupyterHub 0.8 adds the ability to persist state related to authentication,
such as auth-related tokens.
If such state should be persisted, .authenticate()
should return a dictionary of the form:
{
    'name': username,
    'auth_state': {
        'key': 'value',
    }
}
where username
is the username that has been authenticated,
and auth_state
is any JSON-serializable dictionary.
Because auth_state
may contain sensitive information,
it is encrypted before being stored in the database.
To store auth_state, two conditions must be met:
persisting auth state must be enabled explicitly via configuration:
c.Authenticator.enable_auth_state = True
encryption must be enabled by the presence of the JUPYTERHUB_CRYPT_KEY environment variable, which should be a hex-encoded 32-byte key. For example:
export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
JupyterHub uses Fernet to encrypt auth_state.
To facilitate key-rotation, JUPYTERHUB_CRYPT_KEY
may be a semicolon-separated list of encryption keys.
If there are multiple keys present, the first key is always used to persist any new auth_state.
Typically, if auth_state is persisted it is desirable to affect the Spawner environment in some way. This may mean defining environment variables, placing certificates in the user’s home directory, etc.
The Authenticator.pre_spawn_start
method can be used to pass information from authenticator state
to Spawner environment:
from tornado import gen
from jupyterhub.auth import Authenticator

# identify_user and token_for_user below are placeholders for calls to your
# authentication service

class MyAuthenticator(Authenticator):
    @gen.coroutine
    def authenticate(self, handler, data=None):
        username = yield identify_user(handler, data)
        upstream_token = yield token_for_user(username)
        return {
            'name': username,
            'auth_state': {
                'upstream_token': upstream_token,
            },
        }

    @gen.coroutine
    def pre_spawn_start(self, user, spawner):
        """Pass upstream_token to spawner via environment variable"""
        auth_state = yield user.get_auth_state()
        if not auth_state:
            # auth_state not enabled
            return
        spawner.environment['UPSTREAM_TOKEN'] = auth_state['upstream_token']
pre_spawn_start and post_spawn_stop hooks¶
Authenticators use two hooks, pre_spawn_start(user, spawner) and post_spawn_stop(user, spawner), to pass additional state information between the authenticator and a spawner. These hooks are typically used for auth-related startup (e.g. opening a PAM session) and auth-related cleanup (e.g. closing a PAM session).
JupyterHub as an OAuth provider¶
Beginning with version 0.8, JupyterHub is an OAuth provider.
Spawners¶
A Spawner starts each single-user notebook server. The Spawner represents an abstract interface to a process, and a custom Spawner needs to be able to take three actions:
start the process
poll whether the process is still running
stop the process
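A bare skeleton of that interface might look like the following sketch (the class name is hypothetical; the concrete examples below fill in the details):
from jupyterhub.spawner import Spawner

class MySpawner(Spawner):
    async def start(self):
        """Start the single-user server; return (ip, port) once it is running."""
        raise NotImplementedError

    async def poll(self):
        """Return None if the server is running, an integer exit status otherwise."""
        raise NotImplementedError

    async def stop(self):
        """Stop the single-user server."""
        raise NotImplementedError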
Examples¶
Custom Spawners for JupyterHub can be found on the JupyterHub wiki. Some examples include:
DockerSpawner for spawning user servers in Docker containers
dockerspawner.DockerSpawner for spawning identical Docker containers for each user
dockerspawner.SystemUserSpawner for spawning Docker containers with an environment and home directory for each user
both DockerSpawner and SystemUserSpawner also work with Docker Swarm for launching containers on remote machines
SudoSpawner enables JupyterHub to run without being root, by spawning an intermediate process via
sudo
BatchSpawner for spawning remote servers using batch systems
YarnSpawner for spawning notebook servers in YARN containers on a Hadoop cluster
RemoteSpawner to spawn notebooks on a remote server and tunnel the port via SSH
Spawner control methods¶
Spawner.start
should start the single-user server for a single user.
Information about the user can be retrieved from self.user
,
an object encapsulating the user’s name, authentication, and server info.
The return value of Spawner.start
should be the (ip, port) of the running server.
NOTE: When writing coroutines, never yield
in between a database change and a commit.
Most Spawner.start
functions will look similar to this example:
def start(self):
    self.ip = '127.0.0.1'
    self.port = random_port()
    # get environment variables,
    # several of which are required for configuring the single-user server
    env = self.get_env()
    cmd = []
    # get jupyterhub command to run,
    # typically ['jupyterhub-singleuser']
    cmd.extend(self.cmd)
    cmd.extend(self.get_args())
    yield self._actually_start_server_somehow(cmd, env)
    return (self.ip, self.port)
When Spawner.start
returns, the single-user server process should actually be running,
not just requested. JupyterHub can handle Spawner.start
being very slow
(such as PBS-style batch queues, or instantiating whole AWS instances)
via relaxing the Spawner.start_timeout
config value.
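For example, a slow spawner can be given more time in jupyterhub_config.py (the value shown is illustrative):
c.Spawner.start_timeout = 300  # seconds (the default is 60)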
Spawner.poll
should check if the spawner is still running.
It should return None
if it is still running,
and an integer exit status, otherwise.
For the local process case, Spawner.poll
uses os.kill(PID, 0)
to check if the local process is still running.
Spawner.stop
should stop the process. It must be a tornado coroutine, which should return when the process has finished exiting.
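For illustration, a minimal sketch of poll and stop for a plain local process, assuming the spawner records the process ID in a pid attribute (as in the state example below). This is only an outline, not the LocalProcessSpawner implementation:
import asyncio
import os
import signal

from jupyterhub.spawner import Spawner


class SimpleProcessSpawner(Spawner):
    # sketch only: pid would be set in start() and persisted via get_state()
    pid = 0

    async def poll(self):
        """Return None while the process is alive, an integer exit status otherwise."""
        if not self.pid:
            return 0
        try:
            os.kill(self.pid, 0)  # signal 0 only checks that the process exists
        except ProcessLookupError:
            return 0  # the process is gone; its real exit status is not recoverable here
        return None

    async def stop(self, now=False):
        """Ask the process to exit, then wait until poll() confirms it has."""
        if (await self.poll()) is not None:
            return  # already stopped
        os.kill(self.pid, signal.SIGTERM)
        while (await self.poll()) is None:
            await asyncio.sleep(0.5)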
Spawner state¶
JupyterHub should be able to stop and restart without tearing down single-user notebook servers. To do this task, a Spawner may need to persist some information that can be restored later. A JSON-able dictionary of state can be used to store persisted information.
Unlike start, stop, and poll methods, the state methods must not be coroutines.
For the single-process case, the Spawner state is only the process ID of the server:
def get_state(self):
    """get the current state"""
    state = super().get_state()
    if self.pid:
        state['pid'] = self.pid
    return state

def load_state(self, state):
    """load state from the database"""
    super().load_state(state)
    if 'pid' in state:
        self.pid = state['pid']

def clear_state(self):
    """clear any state (called after shutdown)"""
    super().clear_state()
    self.pid = 0
Spawner options form¶
(new in 0.4)
Some deployments may want to offer options to users to influence how their servers are started. This may include cluster-based deployments, where users specify what resources should be available, or docker-based deployments where users can select from a list of base images.
This feature is enabled by setting Spawner.options_form, which is an HTML form snippet inserted unmodified into the spawn form.
If Spawner.options_form is defined, when a user tries to start their server, they will be directed to a form page, like this:
If Spawner.options_form is undefined, the user's server is spawned directly, and no spawn page is rendered.
See this example for a form that allows custom CLI args for the local spawner.
Spawner.options_from_form¶
Options from this form will always be a dictionary of lists of strings, e.g.:
{
    'integer': ['5'],
    'text': ['some text'],
    'select': ['a', 'b'],
}
When formdata arrives, it is passed through Spawner.options_from_form(formdata), which is a method to turn the form data into the correct structure.
This method must return a dictionary, and is meant to interpret the lists-of-strings into the correct types. For example, the options_from_form for the above form would look like:
def options_from_form(self, formdata):
    options = {}
    options['integer'] = int(formdata['integer'][0])  # single integer value
    options['text'] = formdata['text'][0]             # single string value
    options['select'] = formdata['select']            # list already correct
    options['notinform'] = 'extra info'               # not in the form at all
    return options
which would return:
{
    'integer': 5,
    'text': 'some text',
    'select': ['a', 'b'],
    'notinform': 'extra info',
}
When Spawner.start is called, this dictionary is accessible as self.user_options.
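For example, a sketch of how a spawner's start method might consume these values (the keys come from the options_from_form example above; how the values are applied is entirely spawner-specific):
def start(self):
    # values produced by options_from_form() above
    n_cpus = self.user_options.get('integer', 1)
    note = self.user_options.get('text', '')
    selections = self.user_options.get('select', [])
    # ... use these values when building the command line, environment,
    # or resource request for the single-user server ...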
Writing a custom spawner¶
If you are interested in building a custom spawner, you can read this tutorial.
As of JupyterHub 1.0, custom Spawners can register themselves via
the jupyterhub.spawners
entry point metadata.
To do this, in your setup.py
add:
setup(
    ...
    entry_points={
        'jupyterhub.spawners': [
            'myservice = mypackage:MySpawner',
        ],
    },
)
If you have added this metadata to your package, users can select your spawner with the configuration:
c.JupyterHub.spawner_class = 'myservice'
instead of the full
c.JupyterHub.spawner_class = 'mypackage:MySpawner'
previously required.
Additionally, configurable attributes for your spawner will appear in jupyterhub help output and auto-generated configuration files via jupyterhub --generate-config.
Spawners, resource limits, and guarantees (Optional)¶
Some spawners of the single-user notebook servers allow setting limits or
guarantees on resources, such as CPU and memory. To provide a consistent
experience for sysadmins and users, we provide a standard way to set and
discover these resource limits and guarantees, such as for memory and CPU.
For the limits and guarantees to be useful, the spawner must implement support for them. For example, LocalProcessSpawner, the default spawner, does not support limits and guarantees. One of the spawners that supports limits and guarantees is the systemdspawner.
c.Spawner.mem_limit: A limit specifies the maximum amount of memory that may be allocated, though there is no promise that the maximum amount will be available. In supported spawners, you can set c.Spawner.mem_limit to limit the total amount of memory that a single-user notebook server can allocate. Attempting to use more memory than this limit will cause errors. The single-user notebook server can discover its own memory limit by looking at the environment variable MEM_LIMIT, which is specified in absolute bytes.
c.Spawner.mem_guarantee: Sometimes, a guarantee of a minimum amount of memory is desirable. In this case, you can set c.Spawner.mem_guarantee to provide a guarantee that at minimum this much memory will always be available for the single-user notebook server to use. The environment variable MEM_GUARANTEE will also be set in the single-user notebook server.
c.Spawner.cpu_limit: In supported spawners, you can set c.Spawner.cpu_limit to limit the total number of cpu-cores that a single-user notebook server can use. These can be fractional - 0.5 means 50% of one CPU core, 4.0 is 4 cpu-cores, etc. This value is also set in the single-user notebook server's environment variable CPU_LIMIT. The limit does not claim that you will be able to use all the CPU up to your limit as other higher priority applications might be taking up CPU.
c.Spawner.cpu_guarantee: You can set c.Spawner.cpu_guarantee to provide a guarantee for CPU usage. The environment variable CPU_GUARANTEE will be set in the single-user notebook server when a guarantee is being provided.
The spawner's underlying system or cluster is responsible for enforcing these limits and providing these guarantees. If these values are set to None, no limits or guarantees are provided, and no environment values are set.
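For example, a sketch of setting these options in jupyterhub_config.py (the values are arbitrary and only take effect with a spawner that supports them):
# jupyterhub_config.py
c.Spawner.mem_limit = '2G'        # exposed to the server as MEM_LIMIT (in bytes)
c.Spawner.mem_guarantee = '512M'  # exposed as MEM_GUARANTEE
c.Spawner.cpu_limit = 2.0         # exposed as CPU_LIMIT
c.Spawner.cpu_guarantee = 0.5     # exposed as CPU_GUARANTEE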
Communication between the Proxy, Hub, and Notebook can be secured by turning on internal_ssl in jupyterhub_config.py. For a custom spawner to utilize these certs, there are two methods of interest on the base Spawner class: .create_certs and .move_certs.
The first method, .create_certs, will sign a key-cert pair using an internally trusted authority for notebooks. During this process, .create_certs can apply ip and dns name information to the cert via an alt_names kwarg.
This is used for certificate authentication (verification). Without proper verification, the Notebook will be unable to communicate with the Hub and vice versa when internal_ssl is enabled. For example, given a deployment using the DockerSpawner, which will start containers with ips from the docker subnet pool, the DockerSpawner would need to instead choose a container ip prior to starting and pass that to .create_certs. In general, though, this method will not need to be changed and the default ip/dns (localhost) info will suffice.
When .create_certs is run, it will create the certs in a default, central location specified by c.JupyterHub.internal_certs_location. For Spawners that need access to these certs elsewhere (i.e. on another host altogether), the .move_certs method can be overridden to move the certs appropriately. Again, using DockerSpawner as an example, this would entail moving certs to a directory that will get mounted into the container this spawner starts.
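As a rough sketch (the exact signature of .move_certs and the destination path used here are assumptions; check the base Spawner class for the actual contract), such an override might copy the certs into a directory that is later mounted into the container:
import os
import shutil

from jupyterhub.spawner import Spawner


class MyContainerSpawner(Spawner):
    async def move_certs(self, paths):
        """Copy internal-ssl certs somewhere the container can see them (sketch)."""
        dest = os.path.join('/srv/jupyterhub/certs', self.user.name)  # assumed path
        os.makedirs(dest, exist_ok=True)
        moved = {}
        for key, src in paths.items():
            moved[key] = os.path.join(dest, os.path.basename(src))
            shutil.copy(src, moved[key])
        return moved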
Services¶
With version 0.7, JupyterHub adds support for Services.
This section provides the following information about Services:
Definition of a Service¶
When working with JupyterHub, a Service is defined as a process that interacts with the Hub’s REST API. A Service may perform a specific action or task. For example, the following tasks can each be a unique Service:
shutting down individuals’ single user notebook servers that have been idle for some time
registering additional web servers which should use the Hub’s authentication and be served behind the Hub’s proxy.
Two key features help define a Service:
Is the Service managed by JupyterHub?
Does the Service have a web server that should be added to the proxy’s table?
Currently, these characteristics distinguish two types of Services:
A Hub-Managed Service which is managed by JupyterHub
An Externally-Managed Service which runs its own web server and communicates operation instructions via the Hub’s API.
Properties of a Service¶
A Service may have the following properties:
name: str - the name of the service
admin: bool (default - false) - whether the service should have administrative privileges
url: str (default - None) - The URL where the service is/should be. If a url is specified for where the Service runs its own web server, the service will be added to the proxy at /services/:name
api_token: str (default - None) - For Externally-Managed Services you need to specify an API token to perform API requests to the Hub
If a service is also to be managed by the Hub, it has a few extra options:
command: (str/Popen list) - Command for JupyterHub to spawn the service. Only use this if the service should be a subprocess. If command is not specified, the Service is assumed to be managed externally. If a command is specified for launching the Service, the Service will be started and managed by the Hub.
environment: dict - additional environment variables for the Service.
user: str - the name of a system user to manage the Service. If unspecified, run as the same user as the Hub.
Hub-Managed Services¶
A Hub-Managed Service is started by the Hub, and the Hub is responsible for the Service’s actions. A Hub-Managed Service can only be a local subprocess of the Hub. The Hub will take care of starting the process and restarts it if it stops.
While Hub-Managed Services share some similarities with notebook Spawners, there are no plans for Hub-Managed Services to support the same spawning abstractions as a notebook Spawner.
If you wish to run a Service in a Docker container or other deployment environments, the Service can be registered as an Externally-Managed Service, as described below.
Launching a Hub-Managed Service¶
A Hub-Managed Service is characterized by its specified command for launching the Service. For example, a 'cull idle' notebook server task configured as a Hub-Managed Service would include:
the Service name,
admin permissions, and
the command to launch the Service which will cull idle servers after a timeout interval
This example would be configured as follows in jupyterhub_config.py:
c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'admin': True,
        'command': [sys.executable, '/path/to/cull-idle.py', '--timeout']
    }
]
A Hub-Managed Service may also be configured with additional optional parameters, which describe the environment needed to start the Service process:
environment: dict - additional environment variables for the Service.
user: str - name of the user to run the server if different from the Hub. Requires Hub to be root.
cwd: path - directory in which to run the Service, if different from the Hub directory.
The Hub will pass the following environment variables to launch the Service:
JUPYTERHUB_SERVICE_NAME: The name of the service
JUPYTERHUB_API_TOKEN: API token assigned to the service
JUPYTERHUB_API_URL: URL for the JupyterHub API (default, http://127.0.0.1:8080/hub/api)
JUPYTERHUB_BASE_URL: Base URL of the Hub (https://mydomain[:port]/)
JUPYTERHUB_SERVICE_PREFIX: URL path prefix of this service (/services/:service-name/)
JUPYTERHUB_SERVICE_URL: Local URL where the service is expected to be listening.
Only for proxied web services.
For the previous ‘cull idle’ Service example, these environment variables would be passed to the Service when the Hub starts the ‘cull idle’ Service:
JUPYTERHUB_SERVICE_NAME: 'cull-idle'
JUPYTERHUB_API_TOKEN: API token assigned to the service
JUPYTERHUB_API_URL: http://127.0.0.1:8080/hub/api
JUPYTERHUB_BASE_URL: https://mydomain[:port]
JUPYTERHUB_SERVICE_PREFIX: /services/cull-idle/
See the JupyterHub GitHub repo for additional information about the
cull-idle
example.
Externally-Managed Services¶
You may prefer to use your own service management tools, such as Docker or systemd, to manage a JupyterHub Service. These Externally-Managed Services, unlike Hub-Managed Services, are not subprocesses of the Hub. You must tell JupyterHub which API token the Externally-Managed Service is using to perform its API requests. Each Externally-Managed Service will need a unique API token, because the Hub authenticates each API request and the API token is used to identify the originating Service or user.
A configuration example of an Externally-Managed Service with admin access and running its own web server is:
c.JupyterHub.services = [
    {
        'name': 'my-web-service',
        'url': 'https://10.0.1.1:1984',
        'api_token': 'super-secret',
    }
]
In this case, the url field will be passed along to the Service as JUPYTERHUB_SERVICE_URL.
Writing your own Services¶
When writing your own services, you have a few decisions to make (in addition to what your service does!):
Does my service need a public URL?
Do I want JupyterHub to start/stop the service?
Does my service need to authenticate users?
When a Service is managed by JupyterHub, the Hub will pass the necessary information to the Service via the environment variables described above. A flexible Service, whether managed by the Hub or not, can make use of these same environment variables.
When you run a service that has a url, it will be accessible under a /services/ prefix, such as https://myhub.horse/services/my-service/. For your service to route proxied requests properly, it must take JUPYTERHUB_SERVICE_PREFIX into account when routing requests. For example, a web service would normally serve its root handler at '/', but the proxied service would need to serve JUPYTERHUB_SERVICE_PREFIX.
Note that JUPYTERHUB_SERVICE_PREFIX will contain a trailing slash. This must be taken into consideration when creating the service routes. If you include an extra slash you might get unexpected behavior. For example, if your service has a /foo endpoint, the route would be JUPYTERHUB_SERVICE_PREFIX + foo, and /foo/bar would be JUPYTERHUB_SERVICE_PREFIX + foo/bar.
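As a concrete illustration, a service might build its routes like this (the endpoint names are made up):
import os

# always ends with a trailing slash, e.g. /services/my-service/
prefix = os.environ['JUPYTERHUB_SERVICE_PREFIX']

route_root = prefix                  # the service's root handler
route_foo = prefix + 'foo'           # e.g. /services/my-service/foo
route_foo_bar = prefix + 'foo/bar'   # e.g. /services/my-service/foo/bar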
Hub Authentication and Services¶
JupyterHub 0.7 introduces some utilities for using the Hub's authentication mechanism to govern access to your service. When a user logs into JupyterHub, the Hub sets a cookie (jupyterhub-services). The service can use this cookie to authenticate requests.
JupyterHub ships with a reference implementation of Hub authentication that can be used by services. You may go beyond this reference implementation and create custom hub-authenticating clients and services. We describe the process below.
The reference, or base, implementation is the HubAuth class, which implements the requests to the Hub.
To use HubAuth, you must set the .api_token, either programmatically when constructing the class, or via the JUPYTERHUB_API_TOKEN environment variable.
Most of the logic for authentication implementation is found in the HubAuth.user_for_cookie and HubAuth.user_for_token methods, which make a request to the Hub and return:
None, if no user could be identified, or
a dict of the following form:
{
    "name": "username",
    "groups": ["list", "of", "groups"],
    "admin": False,  # or True
}
You are then free to use the returned user information to take appropriate action.
HubAuth also caches the Hub's response for a number of seconds, configurable by the cookie_cache_max_age setting (default: five minutes).
For example, suppose you have a Flask service that returns information about a user. JupyterHub's HubAuth class can be used to authenticate requests to the Flask service. See the service-whoami-flask example in the JupyterHub GitHub repo for more details.
from functools import wraps
import json
import os
from urllib.parse import quote

from flask import Flask, redirect, request, Response

from jupyterhub.services.auth import HubAuth

prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')

auth = HubAuth(
    api_token=os.environ['JUPYTERHUB_API_TOKEN'],
    cache_max_age=60,
)

app = Flask(__name__)


def authenticated(f):
    """Decorator for authenticating with the Hub"""
    @wraps(f)
    def decorated(*args, **kwargs):
        cookie = request.cookies.get(auth.cookie_name)
        token = request.headers.get(auth.auth_header_name)
        if cookie:
            user = auth.user_for_cookie(cookie)
        elif token:
            user = auth.user_for_token(token)
        else:
            user = None
        if user:
            return f(user, *args, **kwargs)
        else:
            # redirect to login url on failed auth
            return redirect(auth.login_url + '?next=%s' % quote(request.path))
    return decorated


@app.route(prefix)
@authenticated
def whoami(user):
    return Response(
        json.dumps(user, indent=1, sort_keys=True),
        mimetype='application/json',
    )
Since most Jupyter services are written with tornado, we include a mixin class, HubAuthenticated, for quickly authenticating your own tornado services with JupyterHub.
Tornado's @web.authenticated method calls a Handler's .get_current_user method to identify the user. Mixing in HubAuthenticated defines get_current_user to use HubAuth. If you want to configure the HubAuth instance beyond the default, you'll want to define an initialize method, such as:
class MyHandler(HubAuthenticated, web.RequestHandler):
    hub_users = {'inara', 'mal'}

    def initialize(self, hub_auth):
        self.hub_auth = hub_auth

    @web.authenticated
    def get(self):
        ...
The HubAuth will automatically load the desired configuration from the Service environment variables.
If you want to limit user access, you can whitelist users through either the .hub_users attribute or .hub_groups. These are sets that check against the username and user group list, respectively. If a user matches neither the user list nor the group list, they will not be allowed access. If both are left undefined, then any user will be allowed.
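For example, a handler restricted to a couple of users and one group might look like this (the names are placeholders):
class RestrictedHandler(HubAuthenticated, web.RequestHandler):
    hub_users = {'inara', 'mal'}   # allowed usernames
    hub_groups = {'serenity'}      # allowed group names

    @web.authenticated
    def get(self):
        ...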
If you don’t want to use the reference implementation
(e.g. you find the implementation a poor fit for your Flask app),
you can implement authentication via the Hub yourself.
We recommend looking at the HubAuth
class implementation for reference,
and taking note of the following process:
Retrieve the cookie jupyterhub-services from the request.
Make an API request GET /hub/api/authorizations/cookie/jupyterhub-services/cookie-value, where cookie-value is the url-encoded value of the jupyterhub-services cookie. This request must be authenticated with a Hub API token in the Authorization header. For example, with requests:
r = requests.get(
    '/'.join(["http://127.0.0.1:8081/hub/api",
              "authorizations/cookie/jupyterhub-services",
              quote(encrypted_cookie, safe=''),
    ]),
    headers={
        'Authorization': 'token %s' % api_token,
    },
)
r.raise_for_status()
user = r.json()
On success, the reply will be a JSON model describing the user:
{
    "name": "inara",
    "groups": ["serenity", "guild"],
}
An example of using an Externally-Managed Service and authentication is in the nbviewer README section on securing the notebook viewer, and an example of its configuration is found here. nbviewer can also be run as a Hub-Managed Service, as described in the same section of the nbviewer README.
Writing a custom Proxy implementation¶
JupyterHub 0.8 introduced the ability to write a custom implementation of the proxy. This enables deployments with different needs than the default proxy, configurable-http-proxy (CHP). CHP is a single-process nodejs proxy that the Hub manages by default as a subprocess (it can be run externally, as well, and typically is in production deployments).
The upside to CHP, and why we use it by default, is that it’s easy to install and run (if you have nodejs, you are set!). The downsides are that it’s a single process and does not support any persistence of the routing table. So if the proxy process dies, your whole JupyterHub instance is inaccessible until the Hub notices, restarts the proxy, and restores the routing table. For deployments that want to avoid such a single point of failure, or leverage existing proxy infrastructure in their chosen deployment (such as Kubernetes ingress objects), the Proxy API provides a way to do that.
In general, for a proxy to be usable by JupyterHub, it must:
support websockets without prior knowledge of the URL where websockets may occur
support trie-based routing (i.e. allow different routes on /foo and /foo/bar and route based on specificity)
adding or removing a route should not cause existing connections to drop
Optionally, if the JupyterHub deployment is to use host-based routing, the Proxy must additionally support routing based on the Host of the request.
Subclassing Proxy¶
To start, any Proxy implementation should subclass the base Proxy class, as is done with custom Spawners and Authenticators.
from jupyterhub.proxy import Proxy


class MyProxy(Proxy):
    """My Proxy implementation"""
    ...
Starting and stopping the proxy¶
If your proxy should be launched when the Hub starts, you must define how to start and stop your proxy:
class MyProxy(Proxy):
    ...

    async def start(self):
        """Start the proxy"""

    async def stop(self):
        """Stop the proxy"""
These methods may be coroutines.
c.Proxy.should_start
is a configurable flag that determines whether the
Hub should call these methods when the Hub itself starts and stops.
Encryption¶
When using internal_ssl to encrypt traffic behind the proxy, at minimum, your Proxy will need client ssl certificates which the Hub must be made aware of. These can be generated with the command jupyterhub --generate-certs which will write them to the internal_certs_location in folders named proxy_api and proxy_client. Alternatively, these can be provided to the hub via the jupyterhub_config.py file by providing a dict of named paths to the external_authorities option. The hub will include all certificates provided in that dict in the trust bundle utilized by all internal components.
Most custom proxies will probably be externally managed, such as Kubernetes ingress-based implementations. In this case, you do not need to define start and stop. To disable the methods, you can define should_start = False at the class level:
class MyProxy(Proxy):
    should_start = False
Routes¶
At its most basic, a Proxy implementation defines a mechanism to add, remove, and retrieve routes. A proxy that implements these three methods is complete. Each of these methods may be a coroutine.
Definition: routespec
A routespec, which will appear in these methods, is a string describing a route to be proxied, such as /user/name/. A routespec will:
always end with /
always start with / if it is a path-based route: /proxy/path/
precede the leading / with a host for host-based routing, e.g. host.tld/proxy/path/
When adding a route, JupyterHub may pass a JSON-serializable dict as a data argument that should be attached to the proxy route. When that route is retrieved, the data argument should be returned as well. If your proxy implementation doesn't support storing data attached to routes, then your Python wrapper may have to handle storing the data piece itself, e.g. in a simple file or database.
async def add_route(self, routespec, target, data):
    """Proxy `routespec` to `target`.

    Store `data` associated with the routespec
    for retrieval later.
    """
Adding a route for a user looks like this:
await proxy.add_route('/user/pgeorgiou/', 'http://127.0.0.1:1227',
                      {'user': 'pgeorgiou'})
delete_route() is given a routespec to delete. If there is no such route, delete_route should still succeed, but a warning may be issued.
async def delete_route(self, routespec):
    """Delete the route"""
For retrieval, you only need to implement a single method that retrieves all routes. The return value for this function should be a dictionary, keyed by routespec, of dicts whose keys are the same three arguments passed to add_route (routespec, target, data):
async def get_all_routes(self):
    """Return all routes, keyed by routespec"""

{
    '/proxy/path/': {
        'routespec': '/proxy/path/',
        'target': 'http://...',
        'data': {},
    },
}
Note on activity tracking¶
JupyterHub can track activity of users, for use in services such as culling idle servers. As of JupyterHub 0.8, this activity tracking is the responsibility of the proxy. If your proxy implementation can track activity to endpoints, it may add a last_activity key to the data of routes retrieved in .get_all_routes(). If present, the value of last_activity should be an ISO8601 UTC date string:
{
    '/user/pgeorgiou/': {
        'routespec': '/user/pgeorgiou/',
        'target': 'http://127.0.0.1:1227',
        'data': {
            'user': 'pgeorgiou',
            'last_activity': '2017-10-03T10:33:49.570Z',
        },
    },
}
If the proxy does not track activity, then only activity to the Hub itself is tracked, and services such as cull-idle will not work.
Now that notebook-5.0
tracks activity internally, we can retrieve activity
information from the single-user servers instead, removing the need to track
activity in the proxy. But this is not yet implemented in JupyterHub 0.8.0.
As of JupyterHub 1.0, custom proxy implementations can register themselves via
the jupyterhub.proxies
entry point metadata.
To do this, in your setup.py
add:
setup(
    ...
    entry_points={
        'jupyterhub.proxies': [
            'mything = mypackage:MyProxy',
        ],
    },
)
If you have added this metadata to your package, users can select your proxy with the configuration:
c.JupyterHub.proxy_class = 'mything'
instead of the full
c.JupyterHub.proxy_class = 'mypackage:MyProxy'
previously required.
Additionally, configurable attributes for your proxy will appear in jupyterhub help output and auto-generated configuration files via jupyterhub --generate-config.
Running proxy separately from the hub¶
Background¶
The thing which users directly connect to is the proxy, by default configurable-http-proxy. The proxy either redirects users to the hub (for login and managing servers), or to their own single-user servers. Thus, as long as the proxy stays running, access to existing servers continues, even if the hub itself restarts or goes down.
When you first configure the hub, you may not even realize this because the proxy is automatically managed by the hub. This is great for getting started and even for most use, but every time you restart the hub, all user connections are also restarted. It is, however, simple to run the proxy as a service separate from the hub, so that you are free to reconfigure the hub while only interrupting users whose servers are currently starting.
The default JupyterHub proxy is configurable-http-proxy, and that page has some docs. If you are using a different proxy, such as Traefik, these instructions are probably not relevant to you.
Configuration options¶
c.JupyterHub.cleanup_servers = False should be set, which tells the hub to not stop servers when the hub restarts (this is useful even if you don't run the proxy separately).
c.ConfigurableHTTPProxy.should_start = False should be set, which tells the hub that the proxy should not be started (because you start it yourself).
c.ConfigurableHTTPProxy.auth_token = "CONFIGPROXY_AUTH_TOKEN" should be set to a token for authenticating communication with the proxy.
c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001' should be set to the URL which the hub uses to connect to the proxy's API.
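Putting these options together, a minimal jupyterhub_config.py sketch for running the proxy separately (the token and URL are placeholders):
# jupyterhub_config.py
c.JupyterHub.cleanup_servers = False
c.ConfigurableHTTPProxy.should_start = False
# must match the CONFIGPROXY_AUTH_TOKEN given to the proxy process
c.ConfigurableHTTPProxy.auth_token = 'generate-a-long-random-token'
c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001'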
Proxy configuration¶
You need to configure a service to start the proxy. An example command line for this is configurable-http-proxy --ip=127.0.0.1 --port=8000 --api-ip=127.0.0.1 --api-port=8001 --default-target=http://localhost:8081 --error-target=http://localhost:8081/hub/error. (Details of how to do this are out of scope for this tutorial - for example it might be a systemd service or run within another docker container). The proxy has no configuration files; all configuration is via the command line and environment variables.
--api-ip and --api-port (which tell the proxy where to listen for API requests) should match the hub's ConfigurableHTTPProxy.api_url.
--ip, --port, and other options configure the user connections to the proxy.
--default-target and --error-target should point to the hub, and are used when users navigate to the proxy originally.
You must define the environment variable CONFIGPROXY_AUTH_TOKEN to match the token given to c.ConfigurableHTTPProxy.auth_token.
You should check the configurable-http-proxy options to see what other options are needed, for example SSL options. Note that these are configured in the hub if the hub is starting the proxy - if you run the proxy separately, you need to move these options to the proxy's own command line or environment.
Docker image¶
You can use the jupyterhub/configurable-http-proxy Docker image to run the proxy.
See also¶
Using JupyterHub’s REST API¶
This section will give you information on:
what you can do with the API
how to create an API token
how to add API tokens to the config files
how to make an API request programmatically using the requests library
where to learn more about JupyterHub's API
What you can do with the API¶
Using the JupyterHub REST API, you can perform actions on the Hub, such as:
checking which users are active
adding or removing users
stopping or starting single user notebook servers
authenticating services
A REST API provides a standard way for users to get and send information to the Hub.
Create an API token¶
To send requests using JupyterHub API, you must pass an API token with the request.
As of version 0.6.0, the preferred way of generating an API token is:
openssl rand -hex 32
This openssl command generates a potential token that can then be added to JupyterHub using the .api_tokens configuration setting in jupyterhub_config.py.
Alternatively, use the jupyterhub token
command to generate a token
for a specific hub user by passing the ‘username’:
jupyterhub token <username>
This command generates a random string to use as a token and registers it for the given user with the Hub’s database.
In version 0.8.0, a TOKEN request page for generating an API token is available from the JupyterHub user interface:
Add API tokens to the config file¶
You may also add a dictionary of API tokens and usernames to the hub’s
configuration file, jupyterhub_config.py
(note that
the key is the ‘secret-token’ while the value is the ‘username’):
c.JupyterHub.api_tokens = {
    'secret-token': 'username',
}
Make an API request¶
To authenticate your requests, pass the API token in the request’s Authorization header.
Using the popular Python requests library, here’s example code to make an API request for the users of a JupyterHub deployment. An API GET request is made, and the request sends an API token for authorization. The response contains information about the users:
import requests

api_url = 'http://127.0.0.1:8081/hub/api'

r = requests.get(api_url + '/users',
    headers={
        'Authorization': 'token %s' % token,
    },
)
r.raise_for_status()
users = r.json()
This example provides a slightly more complicated request, yet the process is very similar:
import requests

api_url = 'http://127.0.0.1:8081/hub/api'

data = {'name': 'mygroup', 'users': ['user1', 'user2']}

r = requests.post(api_url + '/groups/formgrade-data301/users',
    headers={
        'Authorization': 'token %s' % token,
    },
    json=data,
)
r.raise_for_status()
r.json()
The same API token can also authorize access to the Jupyter Notebook REST API provided by notebook servers managed by JupyterHub if one of the following is true:
The token is for the same user as the owner of the notebook
The token is tied to an admin user or service and c.JupyterHub.admin_access is set to True
Enabling users to spawn multiple named-servers via the API¶
With JupyterHub version 0.8, support for multiple servers per user has landed. Prior to that, each user could only launch a single default server via the API like this:
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/server"
With the named-server functionality, it's now possible to launch more than one specifically named server for a given user. This could be used, for instance, to launch each server based on a different image.
First you must enable named-servers by including the following setting in the jupyterhub_config.py
file.
c.JupyterHub.allow_named_servers = True
If using the zero-to-jupyterhub-k8s set-up to run JupyterHub,
then instead of editing the jupyterhub_config.py
file directly, you could pass
the following as part of the config.yaml
file, as per the tutorial:
hub:
  extraConfig: |
    c.JupyterHub.allow_named_servers = True
With that setting in place, a new named-server is activated like this:
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>"
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>"
The same servers can be stopped by substituting DELETE
for POST
above.
For named-servers via the API to work, the spawner used to spawn these servers will need to be able to handle the case of multiple servers per user and ensure uniqueness of names, particularly if servers are spawned via docker containers or kubernetes pods.
Learn more about the API¶
You can see the full JupyterHub REST API for details. This REST API Spec can be viewed in a more interactive style on swagger's petstore. Both resources contain the same information and differ only in their display. Note: The Swagger specification is being renamed the OpenAPI Initiative.
The Hub’s Database¶
JupyterHub uses a database to store information about users, services, and other data needed for operating the Hub.
Default SQLite database¶
The default database for JupyterHub is a SQLite database. We have chosen SQLite as JupyterHub’s default for its lightweight simplicity in certain uses such as testing, small deployments and workshops.
For production systems, SQLite has some disadvantages when used with JupyterHub:
upgrade-db may not work, and you may need to start with a fresh database
downgrade-db will not work if you want to rollback to an earlier version, so backup the jupyterhub.sqlite file before upgrading
The sqlite documentation provides a helpful page about when to use SQLite and where traditional RDBMS may be a better choice.
Using an RDBMS (PostgreSQL, MySQL)¶
When running a long term deployment or a production system, we recommend using
a traditional RDBMS database, such as PostgreSQL
or MySQL, that supports the SQL ALTER TABLE
statement.
Notes and Tips¶
The SQLite database should not be used on NFS. SQLite uses reader/writer locks to control access to the database. This locking mechanism might not work correctly if the database file is kept on an NFS filesystem. This is because fcntl() file locking is broken on many NFS implementations. Therefore, you should avoid putting SQLite database files on NFS, which does not handle multiple processes accessing the file at the same time well.
We recommend using PostgreSQL for production if you are unsure whether to use MySQL or PostgreSQL or if you do not have a strong preference. There is additional configuration required for MySQL that is not needed for PostgreSQL.
You should use the pymysql sqlalchemy provider (the other one, MySQLdb, isn't available for py3).
You also need to set pool_recycle to some value (typically 60 - 300) which depends on your MySQL setup. This is necessary since MySQL kills connections serverside if they've been idle for a while, and the connection from the hub will be idle for longer than most connections. This behavior will lead to frustrating 'the connection has gone away' errors from sqlalchemy if pool_recycle is not set.
If you use utf8mb4 collation with MySQL earlier than 5.7.7 or MariaDB earlier than 10.2.1 you may get an 1709, Index column size too large error. To fix this you need to set innodb_large_prefix to enabled and innodb_file_format to Barracuda to allow for the index sizes jupyterhub uses. row_format will be set to DYNAMIC as long as those options are set correctly. Later versions of MariaDB and MySQL should set these values by default, as well as have a default DYNAMIC row_format, and pose no trouble to users.
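For example, a sketch of pointing the Hub at a MySQL database with the settings above (the credentials and host are placeholders; db_kwargs is passed through to sqlalchemy's engine):
# jupyterhub_config.py
c.JupyterHub.db_url = 'mysql+pymysql://jupyterhub:PASSWORD@db.example.com:3306/jupyterhub'
c.JupyterHub.db_kwargs = {'pool_recycle': 300}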
Working with templates and UI¶
The pages of the JupyterHub application are generated from Jinja templates. These allow the header, for example, to be defined once and incorporated into all pages. By providing your own templates, you can have complete control over JupyterHub’s appearance.
Custom Templates¶
JupyterHub will look for custom templates in all of the paths in the
JupyterHub.template_paths
configuration option, falling back on the
default templates
if no custom template with that name is found. This fallback
behavior is new in version 0.9; previous versions searched only those paths
explicitly included in template_paths
. You may override as many
or as few templates as you desire.
Extending Templates¶
Jinja provides a mechanism to extend templates.
A base template can define a block, and child templates can replace or supplement the material in the block. The JupyterHub templates make extensive use of blocks, which allows you to customize parts of the interface easily.
In general, a child template can extend a base template, page.html, by beginning with:
{% extends "page.html" %}
This works, unless you are trying to extend the default template for the same
file name. Starting in version 0.9, you may refer to the base file with a
templates/
prefix. Thus, if you are writing a custom page.html
, start the
file with this block:
{% extends "templates/page.html" %}
By defining blocks with the same name as in the base template, child templates can replace those sections with custom content. The content from the base template can be included with the {{ super() }} directive.
To add an additional message to the spawn-pending page, below the existing
text about the server starting up, place this content in a file named
spawn_pending.html
in a directory included in the
JupyterHub.template_paths
configuration option.
{% extends "templates/spawn_pending.html" %}
{% block message %}
{{ super() }}
<p>Patience is a virtue.</p>
{% endblock %}
Page Announcements¶
To add announcements to be displayed on a page, you have two options:
Extend the page templates as described above
Use configuration variables
If you set the configuration variable JupyterHub.template_vars = {'announcement': 'some_text'}, the given some_text will be placed at the top of all pages. The variables announcement_login, announcement_spawn, announcement_home, and announcement_logout are more specific and only show on their respective pages (overriding the global announcement variable). Note that changing these variables requires a restart, unlike direct template extension.
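For example, a minimal jupyterhub_config.py sketch using these variables (the messages are placeholders):
# jupyterhub_config.py
c.JupyterHub.template_vars = {
    'announcement': 'Scheduled maintenance this Saturday, 08:00-10:00 UTC',
    'announcement_login': 'Sign in with your organization credentials',
}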
You can get the same effect by extending templates, which allows you
to update the messages without restarting. Set
c.JupyterHub.template_paths
as mentioned above, and then create a
template (for example, login.html
) with:
{% extends "templates/login.html" %}
{% set announcement = 'some message' %}
Extending page.html puts the message on all pages, but note that extending page.html takes precedence over an extension of a specific page (unlike the variable-based approach above).
Configuring user environments¶
Deploying JupyterHub means you are providing Jupyter notebook environments for multiple users. Often, this includes a desire to configure the user environment in some way.
Since the jupyterhub-singleuser server extends the standard Jupyter notebook server, most configuration and documentation that applies to Jupyter Notebook applies to the single-user environments. Configuration of user environments typically does not occur through JupyterHub itself, but rather through system-wide configuration of Jupyter, which is inherited by jupyterhub-singleuser.
Tip: When searching for configuration tips for JupyterHub user environments, try removing JupyterHub from your search because there are a lot more people out there configuring Jupyter than JupyterHub and the configuration is the same.
This section will focus on user environments, including:
Installing packages
Configuring Jupyter and IPython
Installing kernelspecs
Using containers vs. multi-user hosts
Installing packages¶
To make packages available to users, you generally will install packages system-wide or in a shared environment.
This installation location should always be in the same environment that
jupyterhub-singleuser
itself is installed in, and must be readable and
executable by your users. If you want users to be able to install additional
packages, it must also be writable by your users.
If you are using a standard system Python install, you would use:
sudo python3 -m pip install numpy
to install the numpy package in the default system Python 3 environment (typically /usr/local).
You may also use conda to install packages. If you do, you should make sure that the conda environment has appropriate permissions for users to be able to run Python code in the env.
Configuring Jupyter and IPython¶
Jupyter and IPython have their own configuration systems.
As a JupyterHub administrator, you will typically want to install and configure environments for all JupyterHub users. For example, you wish for each student in a class to have the same user environment configuration.
Jupyter and IPython support “system-wide” locations for configuration, which is the logical place to put global configuration that you want to affect all users. It’s generally more efficient to configure user environments “system-wide”, and it’s a good idea to avoid creating files in users’ home directories.
The typical locations for these config files are:
system-wide in /etc/{jupyter|ipython}
env-wide (environment wide) in {sys.prefix}/etc/{jupyter|ipython}
For example, to enable the cython IPython extension for all of your users, create the file /etc/ipython/ipython_config.py:
c.InteractiveShellApp.extensions.append("cython")
To enable Jupyter notebook’s internal idle-shutdown behavior (requires
notebook ≥ 5.4), set the following in the /etc/jupyter/jupyter_notebook_config.py
file:
# shutdown the server after no activity for an hour
c.NotebookApp.shutdown_no_activity_timeout = 60 * 60
# shutdown kernels after no activity for 20 minutes
c.MappingKernelManager.cull_idle_timeout = 20 * 60
# check for idle kernels every two minutes
c.MappingKernelManager.cull_interval = 2 * 60
Installing kernelspecs¶
You may have multiple Jupyter kernels installed and want to make sure that
they are available to all of your users. This means installing kernelspecs
either system-wide (e.g. in /usr/local/) or in the sys.prefix
of JupyterHub
itself.
Jupyter kernelspec installation is system wide by default, but some kernels may default to installing kernelspecs in your home directory. These will need to be moved system-wide to ensure that they are accessible.
You can see where your kernelspecs are with:
jupyter kernelspec list
Assuming I have a Python 2 and Python 3 environment that I want to make sure are available, I can install their specs system-wide (in /usr/local) with:
/path/to/python3 -m IPython kernel install --prefix=/usr/local
/path/to/python2 -m IPython kernel install --prefix=/usr/local
Multi-user hosts vs. Containers¶
There are two broad categories of user environments that depend on what Spawner you choose:
Multi-user hosts (shared system)
Container-based
How you configure user environments for each category can differ a bit depending on what Spawner you are using.
The first category is a shared system (multi-user host) where
each user has a JupyterHub account and a home directory as well as being
a real system user. In this example, shared configuration and installation
must be in a ‘system-wide’ location, such as /etc/
or /usr/local
or a custom prefix such as /opt/conda
.
When JupyterHub uses container-based Spawners (e.g. KubeSpawner or DockerSpawner), the ‘system-wide’ environment is really the container image which you are using for users.
In both cases, you want to avoid putting configuration in user home directories because users can change those configuration settings. Also, home directories typically persist once they are created, so they are difficult for admins to update later.
Named servers¶
By default, in a JupyterHub deployment each user has exactly one server.
JupyterHub can, however, have multiple servers per user. This is most useful in deployments where users can configure the environment in which their server will start (e.g. resource requests on an HPC cluster), so that a given user can have multiple configurations running at the same time, without having to stop and restart their one server.
To allow named servers:
c.JupyterHub.allow_named_servers = True
Named servers were implemented in the REST API in JupyterHub 0.8, and JupyterHub 1.0 introduces UI for managing named servers via the user home page:
as well as the admin page:
Named servers can be accessed, created, started, stopped, and deleted from these pages. Activity tracking is now per-server as well.
The number of named servers per user can be limited by setting
c.JupyterHub.named_server_limit_per_user = 5
Configuration examples¶
The following sections provide examples, including configuration files and tips, for the following:
Configuring GitHub OAuth
Using reverse proxy (nginx and Apache)
Run JupyterHub without root privileges using
sudo
Configure GitHub OAuth¶
In this example, we show a configuration file for a fairly standard JupyterHub deployment with the following assumptions:
Running JupyterHub on a single cloud server
Using SSL on the standard HTTPS port 443
Using GitHub OAuth (using oauthenticator) for login
Using the default spawner (to configure other spawners, uncomment and edit spawner_class as well as follow the instructions for your desired spawner)
Users exist locally on the server
Users' notebooks to be served from ~/assignments to allow users to browse for notebooks within other users' home directories
You want the landing page for each user to be a Welcome.ipynb notebook in their assignments directory.
All runtime files are put into /srv/jupyterhub and log files in /var/log.
The jupyterhub_config.py
file would have these settings:
# jupyterhub_config.py file
c = get_config()
import os
pjoin = os.path.join
runtime_dir = os.path.join('/srv/jupyterhub')
ssl_dir = pjoin(runtime_dir, 'ssl')
if not os.path.exists(ssl_dir):
    os.makedirs(ssl_dir)

# Allow multiple single-user servers per user
c.JupyterHub.allow_named_servers = True
# https on :443
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = pjoin(ssl_dir, 'ssl.key')
c.JupyterHub.ssl_cert = pjoin(ssl_dir, 'ssl.cert')
# put the JupyterHub cookie secret and state db
# in /srv/jupyterhub
c.JupyterHub.cookie_secret_file = pjoin(runtime_dir, 'cookie_secret')
c.JupyterHub.db_url = pjoin(runtime_dir, 'jupyterhub.sqlite')
# or `--db=/path/to/jupyterhub.sqlite` on the command-line
# use GitHub OAuthenticator for local users
c.JupyterHub.authenticator_class = 'oauthenticator.LocalGitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']
# create system users that don't exist yet
c.LocalAuthenticator.create_system_users = True
# specify users and admin
c.Authenticator.whitelist = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}
# uses the default spawner
# To use a different spawner, uncomment `spawner_class` and set to desired
# spawner (e.g. SudoSpawner). Follow instructions for desired spawner
# configuration.
# c.JupyterHub.spawner_class = 'sudospawner.SudoSpawner'
# start single-user notebook servers in ~/assignments,
# with ~/assignments/Welcome.ipynb as the default landing page
# this config could also be put in
# /etc/jupyter/jupyter_notebook_config.py
c.Spawner.notebook_dir = '~/assignments'
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
Using the GitHub Authenticator requires a few additional environment variables to be set prior to launching JupyterHub:
export GITHUB_CLIENT_ID=github_id
export GITHUB_CLIENT_SECRET=github_secret
export OAUTH_CALLBACK_URL=https://example.com/hub/oauth_callback
export CONFIGPROXY_AUTH_TOKEN=super-secret
# append log output to log file /var/log/jupyterhub.log
jupyterhub -f /etc/jupyterhub/jupyterhub_config.py &>> /var/log/jupyterhub.log
Using a reverse proxy¶
In the following example, we show configuration files for a JupyterHub server
running locally on port 8000
but accessible from the outside on the standard
SSL port 443
. This could be useful if the JupyterHub server machine is also
hosting other domains or content on 443
. The goal in this example is to
satisfy the following:
JupyterHub is running on a server, accessed only via HUB.DOMAIN.TLD:443
On the same machine, NO_HUB.DOMAIN.TLD strictly serves different content, also on port 443
nginx or apache is used as the public access point (which means that only nginx/apache will bind to 443)
After testing, the server in question should be able to score at least an A on the Qualys SSL Labs SSL Server Test
Let’s start out with needed JupyterHub configuration in jupyterhub_config.py
:
# Force the proxy to only listen to connections to 127.0.0.1 (on port 8000)
c.JupyterHub.bind_url = 'http://127.0.0.1:8000'
(For JupyterHub < 0.9, use c.JupyterHub.ip = '127.0.0.1'.)
For high-quality SSL configuration, we also generate Diffie-Hellman parameters. This can take a few minutes:
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
nginx¶
This nginx
config file is fairly standard fare except for the two
location
blocks within the main section for HUB.DOMAIN.tld.
To create a new site for jupyterhub in your nginx config, make a new file
in sites.enabled
, e.g. /etc/nginx/sites.enabled/jupyterhub.conf
:
# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
    listen 80;
    server_name HUB.DOMAIN.TLD;

    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}

# HTTPS server to handle JupyterHub
server {
    listen 443;
    ssl on;

    server_name HUB.DOMAIN.TLD;

    ssl_certificate /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    # Managing literal requests to the JupyterHub front end
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    # Managing requests to verify letsencrypt host
    location ~ /.well-known {
        allow all;
    }
}
If nginx
is not running on port 443, substitute $http_host
for $host
on
the lines setting the Host
header.
nginx will now be the front facing element of JupyterHub on 443, which means it is also free to bind other servers, like NO_HUB.DOMAIN.TLD, to the same port on the same machine and network interface. In fact, one can simply use the same server blocks as above for NO_HUB and simply add a line for the root directory of the site as well as the applicable location call:
server {
    listen 80;
    server_name NO_HUB.DOMAIN.TLD;

    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}

server {
    listen 443;
    ssl on;

    # INSERT OTHER SSL PARAMETERS HERE AS ABOVE
    # SSL cert may differ

    # Set the appropriate root directory
    root /var/www/html;

    # Set URI handling
    location / {
        try_files $uri $uri/ =404;
    }

    # Managing requests to verify letsencrypt host
    location ~ /.well-known {
        allow all;
    }
}
Now restart nginx
, restart the JupyterHub, and enjoy accessing
https://HUB.DOMAIN.TLD
while serving other content securely on
https://NO_HUB.DOMAIN.TLD
.
Apache¶
As with nginx above, you can use Apache as the reverse proxy. First, we need to enable the Apache modules that we are going to use:
a2enmod ssl rewrite proxy proxy_http proxy_wstunnel
Our Apache configuration is equivalent to the nginx configuration above:
Redirect HTTP to HTTPS
Good SSL Configuration
Support for websockets on any proxied URL
JupyterHub is running locally at http://127.0.0.1:8000
# redirect HTTP to HTTPS
Listen 80
<VirtualHost HUB.DOMAIN.TLD:80>
  ServerName HUB.DOMAIN.TLD
  Redirect / https://HUB.DOMAIN.TLD/
</VirtualHost>

Listen 443
<VirtualHost HUB.DOMAIN.TLD:443>

  ServerName HUB.DOMAIN.TLD

  # configure SSL
  SSLEngine on
  SSLCertificateFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem
  SSLProtocol All -SSLv2 -SSLv3
  SSLOpenSSLConfCmd DHParameters /etc/ssl/certs/dhparam.pem
  SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH

  # Use RewriteEngine to handle websocket connection upgrades
  RewriteEngine On
  RewriteCond %{HTTP:Connection} Upgrade [NC]
  RewriteCond %{HTTP:Upgrade} websocket [NC]
  RewriteRule /(.*) ws://127.0.0.1:8000/$1 [P,L]

  <Location "/">
    # preserve Host header to avoid cross-origin problems
    ProxyPreserveHost on
    # proxy to JupyterHub
    ProxyPass http://127.0.0.1:8000/
    ProxyPassReverse http://127.0.0.1:8000/
  </Location>
</VirtualHost>
If you need to run JupyterHub under /jhub/ or another location, use the configurations below:
JupyterHub running locally at http://127.0.0.1:8000/jhub/ or other location
httpd.conf amendments:
RewriteRule /jhub/(.*) ws://127.0.0.1:8000/jhub/$1 [P,L]
RewriteRule /jhub/(.*) http://127.0.0.1:8000/jhub/$1 [P,L]
ProxyPass /jhub/ http://127.0.0.1:8000/jhub/
ProxyPassReverse /jhub/ http://127.0.0.1:8000/jhub/
jupyterhub_config.py amendments:
# The public facing URL of the whole JupyterHub application.
# This is the address on which the proxy will bind. Sets protocol, ip, base_url.
c.JupyterHub.bind_url = 'http://127.0.0.1:8000/jhub/'
Run JupyterHub without root privileges using sudo
¶
Note: Setting up sudo
permissions involves many pieces of system
configuration. It is quite easy to get wrong and very difficult to debug.
Only do this if you are very sure you must.
Overview¶
There are many Authenticators and Spawners available for JupyterHub. Some, such as DockerSpawner or OAuthenticator, do not need any elevated permissions. This document describes how to get the full default behavior of JupyterHub while running notebook servers as real system users on a shared system without running the Hub itself as root.
Since JupyterHub needs to spawn processes as other users, the simplest way is to run it as root, spawning user servers with setuid. But this isn’t especially safe, because you have a process running on the public web as root.
A more prudent way to run the server while preserving functionality is to
create a dedicated user with sudo
access restricted to launching and
monitoring single-user servers.
Create a user¶
To do this, first create a user that will run the Hub:
sudo useradd rhea
This user shouldn’t have a login shell or password (possible with -r).
Set up sudospawner¶
Next, you will need sudospawner to enable monitoring the single-user servers with sudo:
sudo python3 -m pip install sudospawner
Now we have to configure sudo to allow the Hub user (rhea) to launch the sudospawner script on behalf of our hub users (here zoe and wash). We want to confine these permissions to only what we really need.
Edit /etc/sudoers
¶
To do this we add to /etc/sudoers
(use visudo
for safe editing of sudoers):
specify the list of users JUPYTER_USERS for whom rhea can spawn servers
set the command JUPYTER_CMD that rhea can execute on behalf of users
give rhea permission to run JUPYTER_CMD on behalf of JUPYTER_USERS without entering a password
For example:
# comma-separated whitelist of users that can spawn single-user servers
# this should include all of your Hub users
Runas_Alias JUPYTER_USERS = rhea, zoe, wash
# the command(s) the Hub can run on behalf of the above users without needing a password
# the exact path may differ, depending on how sudospawner was installed
Cmnd_Alias JUPYTER_CMD = /usr/local/bin/sudospawner
# actually give the Hub user permission to run the above command on behalf
# of the above users without prompting for a password
rhea ALL=(JUPYTER_USERS) NOPASSWD:JUPYTER_CMD
It might be useful to modify secure_path to add commands to the path.
As an alternative to adding every user to the /etc/sudoers
file, you can
use a group in the last line above, instead of JUPYTER_USERS
:
rhea ALL=(%jupyterhub) NOPASSWD:JUPYTER_CMD
If the jupyterhub
group exists, there will be no need to edit /etc/sudoers
again. A new user will gain access to the application when added to the group:
$ adduser -G jupyterhub newuser
Test sudo
setup¶
Test that the new user doesn’t need to enter a password to run the sudospawner command.
This should prompt for your password to switch to rhea, but not prompt for any password for the second switch. It should show some help output about logging options:
$ sudo -u rhea sudo -n -u $USER /usr/local/bin/sudospawner --help
Usage: /usr/local/bin/sudospawner [OPTIONS]
Options:
--help show this help information
...
And this should fail:
$ sudo -u rhea sudo -n -u $USER echo 'fail'
sudo: a password is required
Enable PAM for non-root¶
By default, PAM authentication is used by JupyterHub. To use PAM, the process may need to be able to read the shadow password database.
$ ls -l /etc/shadow
-rw-r----- 1 root shadow 2197 Jul 21 13:41 shadow
If there’s already a shadow group, you are set. If its permissions are more like:
$ ls -l /etc/shadow
-rw------- 1 root wheel 2197 Jul 21 13:41 shadow
Then you may want to add a shadow group, and make the shadow file group-readable:
$ sudo groupadd shadow
$ sudo chgrp shadow /etc/shadow
$ sudo chmod g+r /etc/shadow
We want our new user to be able to read the shadow passwords, so add it to the shadow group:
$ sudo usermod -a -G shadow rhea
If you want jupyterhub to serve pages on a restricted port (such as port 80 for http),
then you will need to give node
permission to do so:
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node
However, you may want to further understand the consequences of this.
You may also be interested in limiting the amount of CPU any process can use
on your server. cpulimit is a useful tool that is available in many Linux
distributions’ package managers. It can be used to keep any user’s process
from using too many CPU cycles. You can configure it according to these
instructions.
The following steps apply to FreeBSD, where the shadow password database lives in /etc/spwd.db and /etc/master.passwd. NOTE: This has not been tested and may not work as expected.
$ ls -l /etc/spwd.db /etc/master.passwd
-rw------- 1 root wheel 2516 Aug 22 13:35 /etc/master.passwd
-rw------- 1 root wheel 40960 Aug 22 13:35 /etc/spwd.db
Add a shadow group if there isn’t one, and make the shadow file group-readable:
$ sudo pw group add shadow
$ sudo chgrp shadow /etc/spwd.db
$ sudo chmod g+r /etc/spwd.db
$ sudo chgrp shadow /etc/master.passwd
$ sudo chmod g+r /etc/master.passwd
We want our new user to be able to read the shadow passwords, so add it to the shadow group:
$ sudo pw user mod rhea -G shadow
Test that PAM works¶
We can verify that PAM is working, with:
$ sudo -u rhea python3 -c "import pamela, getpass; print(pamela.authenticate('$USER', getpass.getpass()))"
Password: [enter your unix password]
Make a directory for JupyterHub¶
JupyterHub stores its state in a database, so it needs write access to a directory. The simplest way to deal with this is to make a directory owned by your Hub user, and use that as the CWD when launching the server.
$ sudo mkdir /etc/jupyterhub
$ sudo chown rhea /etc/jupyterhub
Start jupyterhub¶
Finally, start the server as our newly configured user, rhea
:
$ cd /etc/jupyterhub
$ sudo -u rhea jupyterhub --JupyterHub.spawner_class=sudospawner.SudoSpawner
And try logging in.
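Equivalently, instead of passing the spawner class on the command line, you can set it in a config file. A minimal sketch of /etc/jupyterhub/jupyterhub_config.py, assuming sudospawner is installed as above:
# spawn single-user servers through the sudospawner mediator
c.JupyterHub.spawner_class = 'sudospawner.SudoSpawner'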
Troubleshooting: SELinux¶
If you still get a generic Permission denied
PermissionError
, it’s possible SELinux is blocking you.
Here’s how you can make a module to allow this.
First, put this in a file named sudo_exec_selinux.te
:
module sudo_exec_selinux 1.1;
require {
type unconfined_t;
type sudo_exec_t;
class file { read entrypoint };
}
#============= unconfined_t ==============
allow unconfined_t sudo_exec_t:file entrypoint;
Then run all of these commands as root:
$ checkmodule -M -m -o sudo_exec_selinux.mod sudo_exec_selinux.te
$ semodule_package -o sudo_exec_selinux.pp -m sudo_exec_selinux.mod
$ semodule -i sudo_exec_selinux.pp
Troubleshooting: PAM session errors¶
If PAM authentication doesn’t work and you see errors for
login:session-auth, or similar, consider updating to a more recent version
of jupyterhub and disabling the opening of PAM sessions with
c.PAMAuthenticator.open_sessions=False.
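For reference, a minimal jupyterhub_config.py sketch of that setting (assuming a JupyterHub version recent enough to support it):
# skip opening PAM sessions, which often fails on SELinux-enabled systems
c.PAMAuthenticator.open_sessions = False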
Contributing¶
We want you to contribute to JupyterHub in ways that are most exciting & useful to you. We value documentation, testing, bug reporting & code equally, and are glad to have your contributions in whatever form you wish :)
Our Code of Conduct (reporting guidelines) helps keep our community welcoming to as many people as possible.
Community communication channels¶
We use Discourse (https://discourse.jupyter.org) for online discussion.
Everyone in the Jupyter community is welcome to bring ideas and questions there.
In addition, we use Gitter for online, real-time text chat,
a place for more ephemeral discussions.
The primary Gitter channel for JupyterHub is jupyterhub/jupyterhub.
Gitter isn’t archived or searchable, so we recommend going to discourse first
to make sure that discussions are most useful and accessible to the community.
Remember that our community is distributed across the world in various
timezones, so be patient if you do not get an answer immediately!
GitHub issues are used for most long-form project discussions, bug reports and feature requests. Issues related to a specific authenticator or spawner should be directed to the appropriate repository for the authenticator or spawner. If you are using a specific JupyterHub distribution (such as Zero to JupyterHub on Kubernetes or The Littlest JupyterHub), you should open issues directly in their repository. If you cannot find a repository to open your issue in, do not worry! Create it in the main JupyterHub repository and our community will help you figure it out.
A mailing list for all of Project Jupyter exists, along with one for teaching with Jupyter.
Setting up a development install¶
System requirements¶
JupyterHub can only run on MacOS or Linux operating systems. If you are using Windows, we recommend using VirtualBox or a similar system to run Ubuntu Linux for development.
Install Python¶
JupyterHub is written in the Python programming language, and requires you have at least version 3.5 installed locally. If you haven’t installed Python before, the recommended way to install it is to use miniconda. Remember to get the ‘Python 3’ version, and not the ‘Python 2’ version!
Install nodejs¶
configurable-http-proxy
, the default proxy implementation for
JupyterHub, is written in Javascript to run on NodeJS. If you have not installed nodejs before, we
recommend installing it in the miniconda
environment you set up for
Python. You can do so with conda install nodejs
.
Install git¶
JupyterHub uses git & GitHub for development & collaboration. You need to install git to work on JupyterHub. We also recommend getting a free account on GitHub.com.
Setting up a development install¶
When developing JupyterHub, you need to make changes to the code & see their effects quickly. You need to do a developer install to make that happen.
Clone the JupyterHub git repository to your computer.
git clone https://github.com/jupyterhub/jupyterhub
cd jupyterhub
Make sure the python you installed and the npm you installed are available to you on the command line.
python -V
This should return a version number greater than or equal to 3.5.
npm -v
This should return a version number greater than or equal to 5.0.
Install configurable-http-proxy. This is required to run JupyterHub.
npm install -g configurable-http-proxy
If you get an error that says Error: EACCES: permission denied, you might need to prefix the command with sudo. If you do not have access to sudo, you may instead run the following commands:
npm install configurable-http-proxy
export PATH=$PATH:$(pwd)/node_modules/.bin
The second line needs to be run every time you open a new terminal.
Install the python packages required for JupyterHub development.
python3 -m pip install -r dev-requirements.txt
python3 -m pip install -r requirements.txt
Install the development version of JupyterHub. This lets you edit JupyterHub code in a text editor & restart the JupyterHub process to see your code changes immediately.
python3 -m pip install --editable .
You are now ready to start JupyterHub!
jupyterhub
You can now access JupyterHub from your browser at http://localhost:8000.
Happy developing!
Using DummyAuthenticator & SimpleLocalProcessSpawner¶
To simplify testing of JupyterHub, it’s helpful to use
DummyAuthenticator
instead of the default JupyterHub
authenticator and SimpleLocalProcessSpawner instead of the default spawner.
There is a sample configuration file that does this in
testing/jupyterhub_config.py
. To launch jupyterhub with this
configuration:
jupyterhub -f testing/jupyterhub_config.py
The default JupyterHub authenticator & spawner require your system to have user accounts for each user you want to log in to JupyterHub as.
DummyAuthenticator allows you to log in with any username & password, while SimpleLocalProcessSpawner allows you to start servers without having to create a unix user for each JupyterHub user. Together, these make it much easier to test JupyterHub.
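If you prefer to write your own minimal testing configuration rather than use the bundled one, a sketch along these lines should work on JupyterHub 1.0 or later, since 'dummy' and 'simple' are registered entry points:
# testing only: accept any username and password
c.JupyterHub.authenticator_class = 'dummy'
# testing only: spawn servers without requiring real unix users
c.JupyterHub.spawner_class = 'simple'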
Tip: If you are working on parts of JupyterHub that are common to all authenticators & spawners, we recommend using both DummyAuthenticator & SimpleLocalProcessSpawner. If you are working on just authenticator related parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on just spawner related parts, use only DummyAuthenticator.
Troubleshooting¶
This section lists common ways setting up your development environment may fail, and how to fix them. Please add to the list if you encounter yet another way it can fail!
lessc
not found¶
If the python3 -m pip install --editable .
command fails and complains about
lessc
being unavailable, you may need to explicitly install some
additional JavaScript dependencies:
npm install
This will fetch client-side JavaScript dependencies necessary to compile CSS.
You may also need to manually update JavaScript and CSS after some development updates, with:
python3 setup.py js # fetch updated client-side js
python3 setup.py css # recompile CSS from LESS sources
Contributing Documentation¶
Documentation is often more important than code. This page helps you get set up to contribute documentation to JupyterHub.
Building documentation locally¶
We use sphinx to build our documentation. It takes our documentation source files (written in markdown or reStructuredText & stored under the docs/source directory) and converts them into various formats for people to read. To make sure the documentation you write or change renders correctly, it is good practice to test it locally.
Make sure you have successfully completed Setting up a development install.
Install the packages required to build the docs.
python3 -m pip install -r docs/requirements.txt
Build the html version of the docs. This is the most commonly used output format, so verifying that it renders as it should is usually good enough.
cd docs
make html
This step will display any syntax or formatting errors in the documentation, along with the filename / line number in which they occurred. Fix them, and re-run the make html command to re-render the documentation.
View the rendered documentation by opening build/html/index.html in a web browser.
Tip
On macOS, you can open a file from the terminal with open <path-to-file>. On Linux, you can do the same with xdg-open <path-to-file>.
Documentation conventions¶
This section lists various conventions we use in our documentation. This is a living document that grows over time, so feel free to add to it / change it!
Our documentation does not yet fully conform to these conventions, so help in making it so would be appreciated!
pip
invocation¶
There are many ways to invoke a pip command; we recommend the following approach:
python3 -m pip
This invokes pip explicitly using the python3 binary that you are currently using. This is the recommended way to invoke pip in our documentation, since it is least likely to cause problems with python3 and pip being from different environments.
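The same reasoning applies when installing packages from a script: using sys.executable pins pip to the interpreter that is already running. A hedged sketch (the package being upgraded here, pip itself, is just an example):
import subprocess
import sys

# equivalent of "python3 -m pip install --upgrade pip" for the current interpreter
subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", "pip"])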
For more information on how to invoke pip
commands, see
the pip documentation.
Testing JupyterHub¶
Unit tests help validate that JupyterHub works the way we think it does, and continues to do so when changes occur. They also help communicate precisely what we expect our code to do.
JupyterHub uses pytest for all our tests. You can find them under the jupyterhub/tests directory in the git repository.
Running the tests¶
Make sure you have completed Setting up a development install. You should be able to start jupyterhub from the command line & access it from your web browser. This ensures that the dev environment is properly set up for tests to run.
You can run all tests in JupyterHub
pytest --async-test-timeout 15 -v jupyterhub/tests
This should display progress as it runs all the tests, printing information about any test failures as they occur.
The --async-test-timeout parameter is used by pytest-tornado to set the asynchronous test timeout to 15 seconds rather than the default 5, since some of our tests take longer than 5s to execute.
You can also run tests in just a specific file:
pytest --async-test-timeout 15 -v jupyterhub/tests/<test-file-name>
To run a specific test only, you can do:
pytest --async-test-timeout 15 -v jupyterhub/tests/<test-file-name>::<test-name>
This runs the test with function name <test-name> defined in <test-file-name>. This is very useful when you are iteratively developing a single test.
For example, to run the test test_shutdown in the file test_api.py, you would run:
pytest -v jupyterhub/tests/test_api.py::test_shutdown
Troubleshooting Test Failures¶
All the tests are failing¶
Make sure you have completed all the steps in Setting up a development install successfully, and can launch jupyterhub from the terminal.
Tests are timing out¶
The --async-test-timeout
parameter to pytest
is used by
pytest-tornado to set
the asynchronous test timeout to a higher value than the default of 5s,
since some of our tests take longer than 5s to execute. If the tests
are still timing out, try increasing that value even more. You can
also set an environment variable ASYNC_TEST_TIMEOUT
instead of
passing --async-test-timeout
to each invocation of pytest.
The JupyterHub roadmap¶
This roadmap collects “next steps” for JupyterHub. It is about creating a shared understanding of the project’s vision and direction amongst the community of users, contributors, and maintainers. The goal is to communicate priorities and upcoming release plans. It is not aimed at limiting contributions to what is listed here.
Using the roadmap¶
Sharing Feedback on the Roadmap¶
All of the community is encouraged to provide feedback as well as share new ideas with the community. Please do so by submitting an issue. If you want to have an informal conversation first use one of the other communication channels. After submitting the issue, others from the community will probably respond with questions or comments they have to clarify the issue. The maintainers will help identify what a good next step is for the issue.
What do we mean by “next step”?¶
When submitting an issue, think about what “next step” category best describes your issue:
now: a concrete/actionable step that is ready for someone to start work on. These might be items that have a link to an issue, or more abstract items like “decrease typos and dead links in the documentation”
soon: a less concrete/actionable step that is going to happen soon; discussions around the topic are coming close to an end, at which point it can move into the “now” category
later: abstract ideas or tasks that need a lot of discussion or experimentation to shape the idea so that it can be executed. This can also contain concrete/actionable steps that have been postponed on purpose (steps that could be in “now”, but where the decision was taken to work on them later)
Reviewing and Updating the Roadmap¶
The roadmap will get updated as time passes (next review by 1st December) based on discussions and ideas captured as issues. This means the list should not be exhaustive; it should only represent the “top of the stack” of ideas. It should not function as a wish list, collection of feature requests, or todo list. For those, please create a new issue.
The roadmap should give the reader an idea of what is happening next, what needs input and discussion before it can happen and what has been postponed.
The roadmap proper¶
Project vision¶
JupyterHub is a dependable tool used by humans that reduces the complexity of creating the environment in which a piece of software can be executed.
Now¶
These “Now” items are considered active areas of focus for the project:
HubShare - a sharing service for use with JupyterHub.
Users should be able to:
Push a project to other users.
Get a checkout of a project from other users.
Push updates to a published project.
Pull updates from a published project.
Manage conflicts/merges by simply picking a version (our/theirs)
Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files.
Have directories that are managed by git completely separately from our stuff.
Look at pushed content that they have access to without an explicit pull.
Define and manage teams of users.
Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.
Build other services, such as static HTML publishing and dashboarding on top of these things.
Soon¶
These “Soon” items are under discussion. Once an item reaches the point of an actionable plan, the item will be moved to the “Now” section. Typically, these will be moved at a future review of the roadmap.
resource monitoring and management:
(prometheus?) API for resource monitoring
tracking activity on single-user servers instead of the proxy
notes and activity tracking per API token
UI for managing named servers
Later¶
The “Later” items are things that are at the back of the project’s mind. At this time there is no active plan for an item. The project would like to find the resources and time to discuss these ideas.
real-time collaboration
Enter into real-time collaboration mode for a project that starts a shared execution context.
Once the single-user notebook package supports realtime collaboration, implement sharing mechanism integrated into the Hub.
Reporting security issues in Jupyter or JupyterHub¶
If you find a security vulnerability in Jupyter or JupyterHub, whether it is a failure of the security model described in Security Overview or a failure in implementation, please report it to security@ipython.org.
If you prefer to encrypt your security reports, you can use this PGP public key.
Upgrading JupyterHub¶
We try to make upgrades between minor versions as painless as possible.
Upgrading JupyterHub¶
JupyterHub offers easy upgrade pathways between minor versions. This document describes how to do these upgrades.
If you are using a JupyterHub distribution, you should consult the distribution’s documentation on how to upgrade. This document is if you have set up your own JupyterHub without using a distribution.
It is long because it is pretty detailed! Most likely, upgrading JupyterHub is painless, quick, and causes minimal user interruption.
Read the Changelog¶
The changelog contains information on what has changed with the new JupyterHub release, and any deprecation warnings. Read these notes to familiarize yourself with the coming changes. There might be new releases of authenticators & spawners you are using, so read the changelogs for those too!
Notify your users¶
If you are using the default configuration where configurable-http-proxy
is managed by JupyterHub, your users will see service disruption during
the upgrade process. You should notify them, and pick a time to do the
upgrade where they will be least disrupted.
If you are using a different proxy, or running configurable-http-proxy
independent of JupyterHub, your users will be able to continue using notebook
servers they had already launched, but will not be able to launch new servers
nor sign in.
Backup database & config¶
Before doing an upgrade, it is critical to back up:
Your JupyterHub database (sqlite by default, or MySQL / Postgres if you used those). If you are using sqlite (the default), you should back up the jupyterhub.sqlite file.
Your jupyterhub_config.py file.
Your users’ home directories. This is unlikely to be affected directly by a JupyterHub upgrade, but we recommend a backup since user data is very critical.
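For the default sqlite setup, copying the two files is usually sufficient. A minimal sketch, assuming both files live in the current working directory and the Hub is stopped (or you accept a point-in-time copy); adjust paths to your deployment:
import shutil

# back up the Hub database and configuration before upgrading
for name in ("jupyterhub.sqlite", "jupyterhub_config.py"):
    shutil.copy2(name, name + ".bak")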
Shutdown JupyterHub¶
Shutdown the JupyterHub process. This would vary depending on how you
have set up JupyterHub to run. Most likely, it is using a process
supervisor of some sort (systemd
or supervisord
or even docker
).
Use the supervisor specific command to stop the JupyterHub process.
Upgrade JupyterHub packages¶
There are two environments where the jupyterhub
package is installed:
The hub environment, which is where the JupyterHub server process runs. This is started with the jupyterhub command, and is what people generally think of as JupyterHub.
The notebook user environments. This is where the user notebook servers are launched from, and is probably custom to your own installation. This could be just one environment (different from the hub environment) that is shared by all users, one environment per user, or the same environment as the hub environment. The hub launches the jupyterhub-singleuser command in this environment, which in turn starts the notebook server.
You need to make sure the version of the jupyterhub
package matches
in both these environments. If you installed jupyterhub
with pip,
you can upgrade it with:
python3 -m pip install --upgrade jupyterhub==<version>
Where <version>
is the version of JupyterHub you are upgrading to.
If you used conda
to install jupyterhub
, you should upgrade it
with:
conda install -c conda-forge jupyterhub==<version>
Where <version>
is the version of JupyterHub you are upgrading to.
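One way to confirm that the hub environment and the user environments end up with matching versions is to print the installed version from each environment’s Python and compare the output:
# run this with the hub environment's python and each user environment's python
import jupyterhub
print(jupyterhub.__version__)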
You should also check for new releases of the authenticator & spawner you are using. You might wish to upgrade those packages too along with JupyterHub, or upgrade them separately.
Upgrade JupyterHub database¶
Once new packages are installed, you need to upgrade the JupyterHub
database. From the hub environment, in the same directory as your
jupyterhub_config.py
file, you should run:
jupyterhub upgrade-db
This should find the location of your database, and run necessary upgrades for it.
SQLite database disadvantages¶
SQLite has some disadvantages when it comes to upgrading JupyterHub. These are:
upgrade-db may not work, and you may need to delete your database and start with a fresh one.
downgrade-db will not work if you want to roll back to an earlier version, so back up the jupyterhub.sqlite file before upgrading.
What happens if I delete my database?¶
Losing the Hub database is often not a big deal. Information that resides only in the Hub database includes:
active login tokens (user cookies, service tokens)
users added via JupyterHub UI, instead of config files
info about running servers
If the following conditions are true, you should be fine clearing the Hub database and starting over:
users are specified in the config file, or log in using an external authentication provider (Google, GitHub, LDAP, etc.)
user servers are stopped during the upgrade
you don’t mind causing users to log in again after the upgrade
Start JupyterHub¶
Once the database upgrade is completed, start the jupyterhub
process again.
Log in and start the server to make sure things work as expected.
Check the logs for any errors or deprecation warnings. You might have to update your
jupyterhub_config.py
file to deal with any deprecated options.
Congratulations, your JupyterHub has been upgraded!
Changelog¶
For detailed changes from the prior release, click on the version number, and
its link will bring up a GitHub listing of changes. Use git log
on the
command line for details.
1.0¶
1.0.0 2019-05-03¶
JupyterHub 1.0 is a major milestone for JupyterHub. Huge thanks to the many people who have contributed to this release, whether it was through discussion, testing, documentation, or development.
Support TLS encryption and authentication of all internal communication. Spawners must implement the .move_certs method to make certificates available to the notebook server if it is not local to the Hub.
There is now full UI support for managing named servers. With named servers, each jupyterhub user may have access to more than one named server. For example, a professor may access a server named research and another named teaching.
Authenticators can now expire and refresh authentication data by implementing Authenticator.refresh_user(user). This allows things like OAuth data and access tokens to be refreshed. When used together with Authenticator.refresh_pre_spawn = True, auth refresh can be forced prior to Spawn, allowing the Authenticator to require that authentication data is fresh immediately before the user’s server is launched.
allow custom spawners, authenticators, and proxies to register themselves via ‘entry points’, enabling more convenient configuration such as:
c.JupyterHub.authenticator_class = 'github'
c.JupyterHub.spawner_class = 'docker'
c.JupyterHub.proxy_class = 'traefik_etcd'
Spawners are passed the tornado Handler object that requested their spawn (as self.handler), so they can do things like make decisions based on query arguments in the request.
SimpleSpawner and DummyAuthenticator, which are useful for testing, have been merged into JupyterHub itself:
# For testing purposes only. Should not be used in production.
c.JupyterHub.authenticator_class = 'dummy'
c.JupyterHub.spawner_class = 'simple'
These classes are not appropriate for production use. Only testing.
Add health check endpoint at
/hub/health
Several prometheus metrics have been added (thanks to Outreachy applicants!)
A new API for registering user activity. To prepare for the addition of alternate proxy implementations, responsibility for tracking activity is taken away from the proxy and moved to the notebook server (which already has activity tracking features). Activity is now tracked by pushing it to the Hub from user servers instead of polling the proxy API.
Dynamic options_form callables may now return an empty string, which will result in no options form being rendered.
Spawner.user_options is persisted to the database to be re-used, so that a server spawned once via the form can be re-spawned via the API with the same options.
Added c.PAMAuthenticator.pam_normalize_username option for round-tripping usernames through PAM to retrieve the normalized form.
Added c.JupyterHub.named_server_limit_per_user configuration to limit the number of named servers each user can have. The default is 0, for no limit.
API requests to HubAuthenticated services (e.g. single-user servers) may pass a token in the Authorization header, matching authentication with the Hub API itself.
Added Authenticator.is_admin(handler, authentication) method and Authenticator.admin_groups configuration for automatically determining that a member of a group should be considered an admin.
New c.Authenticator.post_auth_hook configuration that can be any callable of the form async def hook(authenticator, handler, authentication=None):. This hook may transform the return value of Authenticator.authenticate() and return a new authentication dictionary, e.g. specifying admin privileges, group membership, or custom white/blacklisting logic. This hook is called after existing normalization and whitelist checking.
Spawner.options_from_form may now be async.
Added JupyterHub.shutdown_on_logout option to trigger shutdown of a user’s servers when they log out.
When Spawner.start raises an Exception, a message can be passed on to the user if the exception has a .jupyterhub_message attribute.
Authentication methods such as check_whitelist should now take an additional authentication argument that will be a dictionary (default: None) of authentication data, as returned by Authenticator.authenticate():
def check_whitelist(self, username, authentication=None):
    ...
authentication should have a default value of None for backward-compatibility with jupyterhub < 1.0.
Prometheus metrics page is now authenticated. Any authenticated user may see the prometheus metrics. To disable prometheus authentication, set JupyterHub.authenticate_prometheus = False.
Visits to /user/:name no longer trigger an implicit launch of the user’s server. Instead, a page is shown indicating that the server is not running with a link to request the spawn.
API requests to /user/:name for a not-running server will have status 503 instead of 404.
Various fixes to improve Windows compatibility (default Authenticator and Spawner still do not support Windows, but other Spawners may)
Fixed compatibility with Oracle db
Fewer redirects following a visit to the default / url
Error when progress is requested before progress is ready
Error when API requests are made to a not-running server without authentication
Avoid logging database password on connect if password is specified in
JupyterHub.db_url
.
There have been several changes to the development process that shouldn’t
generally affect users of JupyterHub, but may affect contributors.
In general, see CONTRIBUTING.md
for contribution info or ask if you have questions.
JupyterHub has adopted black as a code autoformatter and pre-commit as a tool for automatically running code formatting on commit. This is meant to make it easier to contribute to JupyterHub, so let us know if it’s having the opposite effect.
JupyterHub has switched its test suite to using pytest-asyncio from pytest-tornado.
OAuth is now implemented internally using oauthlib instead of python-oauth2. This should have no effect on behavior.
0.9¶
0.9.6 2019-04-01¶
JupyterHub 0.9.6 is a security release.
Fixes an Open Redirect vulnerability (CVE-2019-10255).
JupyterHub 0.9.5 included a partial fix for this issue.
0.9.4 2018-09-24¶
JupyterHub 0.9.4 is a small bugfix release.
Fixes an issue that required all running user servers to be restarted when performing an upgrade from 0.8 to 0.9.
Fixes content-type for API endpoints back to application/json. It was text/html in 0.9.0-0.9.3.
0.9.3 2018-09-12¶
JupyterHub 0.9.3 contains small bugfixes and improvements
Fix token page and model handling of expires_at. This field was missing from the REST API model for tokens and could cause the token page to not render.
Add keep-alive to progress event stream to avoid proxies dropping the connection due to inactivity
Documentation and example improvements
Disable quit button when using notebook 5.6
Prototype new feature (may change prior to 1.0): pass requesting Handler to Spawners during start, accessible as
self.handler
0.9.2 2018-08-10¶
JupyterHub 0.9.2 contains small bugfixes and improvements.
Documentation and example improvements
Add Spawner.consecutive_failure_limit config for aborting the Hub if too many spawns fail in a row.
Fix for handling SIGTERM when run with asyncio (tornado 5)
Windows compatibility fixes
0.9.1 2018-07-04¶
JupyterHub 0.9.1 contains a number of small bugfixes on top of 0.9.
Use a PID file for the proxy to decrease the likelihood that a leftover proxy process will prevent JupyterHub from restarting
c.LocalProcessSpawner.shell_cmd is now configurable
API requests to stopped servers (requests to the hub for /user/:name/api/...) fail with 404 rather than triggering a restart of the server
Compatibility fix for notebook 5.6.0, which will introduce further security checks for local connections
Managed services always use localhost to talk to the Hub if the Hub is listening on all interfaces
When using a URL prefix, the Hub route will be JupyterHub.base_url instead of unconditionally /
additional fixes and improvements
0.9.0 2018-06-15¶
JupyterHub 0.9 is a major upgrade of JupyterHub. There are several changes to the database schema, so make sure to backup your database and run:
jupyterhub upgrade-db
after upgrading jupyterhub.
The biggest change for 0.9 is the switch to asyncio coroutines everywhere instead of tornado coroutines. Custom Spawners and Authenticators are still free to use tornado coroutines for async methods, as they will continue to work. As part of this upgrade, JupyterHub 0.9 drops support for Python < 3.5 and tornado < 5.0.
Require Python >= 3.5
Require tornado >= 5.0
Use asyncio coroutines throughout
Set status 409 for conflicting actions instead of 400, e.g. creating users or groups that already exist.
timestamps in REST API continue to be UTC, but now include ‘Z’ suffix to identify them as such.
REST API User model always includes the servers dict, not just when named servers are enabled.
server info is no longer available to oauth identification endpoints, only user info and group membership.
User.last_activity may be None if a user has not been seen, rather than starting with the user creation time, which is now separately stored as User.created.
static resources are now found in $PREFIX/share/jupyterhub instead of share/jupyter/hub for improved consistency.
Deprecate .extra_log_file config. Use pipe redirection instead: jupyterhub &>> /var/log/jupyterhub.log
Add JupyterHub.bind_url config for setting the full bind URL of the proxy. Sets ip, port, base_url all at once.
Add JupyterHub.hub_bind_url for setting the full host+port of the Hub. hub_bind_url supports unix domain sockets, e.g. unix+http://%2Fsrv%2Fjupyterhub.sock
Deprecate JupyterHub.hub_connect_port config in favor of JupyterHub.hub_connect_url. hub_connect_ip is not deprecated and can still be used in the common case where only the ip address of the hub differs from the bind ip.
Spawners can define a .progress method which should be an async generator. The generator should yield events of the form:
{
    "message": "some-state-message",
    "progress": 50,
}
These messages will be shown with a progress bar on the spawn-pending page. The async_generator package can be used to make async generators compatible with Python 3.5.
track activity of individual API tokens
new REST API for managing API tokens at
/hub/api/user/tokens[/token-id]
allow viewing/revoking tokens via token page
User creation time is available in the REST API as User.created
Server start time is stored as Server.started
Spawner.start may return a URL for connecting to a notebook instead of (ip, port). This enables Spawners to launch servers that set up their own HTTPS.
Optimize database performance by disabling sqlalchemy expire_on_commit by default.
Add python -m jupyterhub.dbutil shell entrypoint for quickly launching an IPython session connected to your JupyterHub database.
Include User.auth_state in user model on single-user REST endpoints for admins only.
Include Server.state in server model on REST endpoints for admins only.
Add Authenticator.blacklist for blacklisting users instead of whitelisting.
Pass c.JupyterHub.tornado_settings['cookie_options'] down to Spawners so that cookie options (e.g. expires_days) can be set globally for the whole application.
SIGINFO (ctrl-t) handler showing the current status of all running threads, coroutines, and CPU/memory/FD consumption.
Add async Spawner.get_options_form alternative to .options_form, so it can be a coroutine.
Add JupyterHub.redirect_to_server config to govern whether users should be sent to their server on login or the JupyterHub home page.
html page templates can be more easily customized and extended.
Allow registering external OAuth clients for using the Hub as an OAuth provider.
Add basic prometheus metrics at /hub/metrics endpoint.
Add session-id cookie, enabling immediate revocation of login tokens.
Authenticators may specify that users are admins by specifying the admin key when returning the user model as a dict.
Added “Start All” button to admin page for launching all user servers at once.
Services have an info field which is a dictionary. This is accessible via the REST API.
JupyterHub.extra_handlers allows defining additional tornado RequestHandlers attached to the Hub.
API tokens may now expire. Expiry is available in the REST model as expires_at, and settable when creating API tokens by specifying expires_in.
Remove green from theme to improve accessibility
Fix error when proxy deletion fails due to route already being deleted
clear ?redirects from URL on successful launch
disable send2trash by default, which is rarely desirable for jupyterhub
Put PAM calls in a thread so they don’t block the main application in cases where PAM is slow (e.g. LDAP).
Remove implicit spawn from login handler, instead relying on subsequent request for /user/:name to trigger spawn.
Fixed several inconsistencies for initial redirects, depending on whether server is running or not and whether the user is logged in or not.
Admin requests for /user/:name (when admin-access is enabled) launch the right server if it’s not running instead of redirecting to their own.
Major performance improvement starting up JupyterHub with many users, especially when most are inactive.
Various fixes in race conditions and performance improvements with the default proxy.
Fixes for CORS headers
Stop setting .form-control on spawner form inputs unconditionally.
Better recovery from database errors and database connection issues without having to restart the Hub.
Fix handling of ~ character in usernames.
Fix jupyterhub startup when getpass.getuser() would fail, e.g. due to missing entry in passwd file in containers.
0.8¶
0.8.1 2017-11-07¶
JupyterHub 0.8.1 is a collection of bugfixes and small improvements on 0.8.
Run tornado with AsyncIO by default
Add jupyterhub --upgrade-db flag for automatically upgrading the database as part of startup. This is useful for cases where manually running jupyterhub upgrade-db as a separate step is unwieldy.
Avoid creating backups of the database when no changes are to be made by jupyterhub upgrade-db.
Add some further validation to usernames - / is not allowed in usernames.
Fix empty logout page when using auto_login
Fix autofill of username field in default login form.
Fix listing of users on the admin page who have not yet started their server.
Fix ever-growing traceback when re-raising Exceptions from spawn failures.
Remove use of deprecated
bower
for javascript client dependencies.
0.8.0 2017-10-03¶
JupyterHub 0.8 is a big release!
Perhaps the biggest change is the use of OAuth to negotiate authentication between the Hub and single-user services. Due to this change, it is important that the single-user server and Hub are both running the same version of JupyterHub. If you are using containers (e.g. via DockerSpawner or KubeSpawner), this means upgrading jupyterhub in your user images at the same time as the Hub. In most cases, a
pip install jupyterhub==version
in your Dockerfile is sufficient.
JupyterHub now defines a Proxy API for custom proxy implementations other than the default. The defaults are unchanged, but configuration of the proxy is now done on the ConfigurableHTTPProxy class instead of the top-level JupyterHub. TODO: docs for writing a custom proxy.
Single-user servers and services (anything that uses HubAuth) can now accept token-authenticated requests via the Authentication header.
Authenticators can now store state in the Hub’s database. To do so, the authenticate method should return a dict of the form:
{ 'username': 'name', 'state': {} }
This data will be encrypted and requires the JUPYTERHUB_CRYPT_KEY environment variable to be set and the Authenticator.enable_auth_state flag to be True. If these are not set, auth_state returned by the Authenticator will not be stored.
There is preliminary support for multiple (named) servers per user in the REST API. Named servers can be created via API requests, but there is currently no UI for managing them.
Add LocalProcessSpawner.popen_kwargs and LocalProcessSpawner.shell_cmd for customizing how user server processes are launched.
Add Authenticator.auto_login flag for skipping the “Login with…” page explicitly.
Add JupyterHub.hub_connect_ip configuration for the ip that should be used when connecting to the Hub. This is promoting (and deprecating) DockerSpawner.hub_ip_connect for use by all Spawners.
Add Spawner.pre_spawn_hook(spawner) hook for customizing pre-spawn events.
Add JupyterHub.active_server_limit and JupyterHub.concurrent_spawn_limit for limiting the total number of running user servers and the number of pending spawns, respectively.
more arguments to spawners are now passed via environment variables (.get_env()) rather than CLI arguments (.get_args())
internally generated tokens no longer get extra hash rounds, significantly speeding up authentication. The hash rounds were deemed unnecessary because the tokens were already generated with high entropy.
JUPYTERHUB_API_TOKEN env is available at all times, rather than being removed during single-user start. The token is now accessible to kernel processes, enabling user kernels to make authenticated API requests to Hub-authenticated services.
pycurl is used by default, if available.
So many things fixed!
Collisions are checked when users are renamed
Fix bug where OAuth authenticators could not logout users due to being redirected right back through the login process.
If there are errors loading your config files, JupyterHub will refuse to start with an informative error. Previously, the bad config would be ignored and JupyterHub would launch with default configuration.
Raise 403 error on unauthorized user rather than redirect to login, which could cause redirect loop.
Set httponly on cookies because it’s prudent.
Improve support for MySQL as the database backend
Many race conditions and performance problems under heavy load have been fixed.
Fix alembic tagging of database schema versions.
End support for Python 3.3
0.7¶
0.7.2 - 2017-01-09¶
Support service environment variables and defaults in jupyterhub-singleuser for easier deployment of notebook servers as a Service.
Add --group parameter for deploying jupyterhub-singleuser as a Service with group authentication.
Include URL parameters when redirecting through /user-redirect/
Fixed¶
Fix group authentication for HubAuthenticated services
0.7.1 - 2017-01-02¶
Spawner.will_resume for signaling that a single-user server is paused instead of stopped. This is needed for cases like DockerSpawner.remove_containers = False, where the first API token is re-used for subsequent spawns.
Warning on startup about single-character usernames, caused by common set('string') typo in config.
Removed spurious warning about empty
next_url
, which is AOK.
0.7.0 - 2016-12-2¶
Implement Services API #705
Add /api/ and /api/info endpoints #675
Add documentation for JupyterLab, pySpark configuration, troubleshooting, and more.
Add logging of error if adding users already in database. #689
Add HubAuth class for authenticating with JupyterHub. This class can be used by any application, even outside tornado.
Add user groups.
Add
/hub/user-redirect/...
URL for redirecting users to a file on their own server.
0.6¶
0.6.1 - 2016-05-04¶
Bugfixes on 0.6:
statsd is an optional dependency, only needed if in use
Notice more quickly when servers have crashed
Better error pages for proxy errors
Add Stop All button to admin panel for stopping all servers at once
0.6.0 - 2016-04-25¶
JupyterHub has moved to a new jupyterhub namespace on GitHub and Docker. What was jupyter/jupyterhub is now jupyterhub/jupyterhub, etc.
The jupyterhub/jupyterhub image on DockerHub no longer loads the jupyterhub_config.py in an ONBUILD step. A new jupyterhub/jupyterhub-onbuild image does this.
Add statsd support, via c.JupyterHub.statsd_{host,port,prefix}
Update to traitlets 4.1 @default, @observe APIs for traits
Allow disabling PAM sessions via c.PAMAuthenticator.open_sessions = False. This may be needed on SELinux-enabled systems, where our PAM session logic often does not work properly.
Add Spawner.environment configurable, for defining extra environment variables to load for single-user servers.
JupyterHub API tokens can be pregenerated and loaded via JupyterHub.api_tokens, a dict of token: username.
JupyterHub API tokens can be requested via the REST API, with a POST request to /api/authorizations/token. This can only be used if the Authenticator has a username and password.
0.5 - 2016-03-07¶
Single-user server must be run with Jupyter Notebook ≥ 4.0
Require --no-ssl confirmation to allow the Hub to be run without SSL (e.g. behind SSL termination in nginx)
Add lengths to text fields for MySQL support
Add Spawner.disable_user_config for preventing user-owned configuration from modifying single-user servers.
Fixes for MySQL support.
Add ability to run each user’s server on its own subdomain. Requires wildcard DNS and wildcard SSL to be feasible. Enable subdomains by setting JupyterHub.subdomain_host = 'https://jupyterhub.domain.tld[:port]'.
Use 127.0.0.1 for local communication instead of localhost, avoiding issues with DNS on some systems.
0.4¶
0.4.0 - 2016-02-01¶
Add Spawner.user_options_form for specifying an HTML form to present to users, allowing users to influence the spawning of their own servers.
Add Authenticator.pre_spawn_start and Authenticator.post_spawn_stop hooks, so that Authenticators can do setup or teardown (e.g. passing credentials to Spawner, mounting data sources, etc.). These methods are typically used with custom Authenticator+Spawner pairs.
0.4 will be the last JupyterHub release where single-user servers running IPython 3 are supported instead of Notebook ≥ 4.0.
0.3 - 2015-11-04¶
No longer make the user starting the Hub an admin
start PAM sessions on login
hooks for Authenticators to fire before spawners start and after they stop, allowing deeper interaction between Spawner/Authenticator pairs.
login redirect fixes
0.2 - 2015-07-12¶
Based on standalone traitlets instead of IPython.utils.traitlets
multiple users in admin panel
Fixes for usernames that require escaping
0.1 - 2015-03-07¶
First preview release
API Reference¶
The JupyterHub API¶
Release: 1.0.1dev
Date: Sep 07, 2019
JupyterHub also provides a REST API for administration of the Hub and users. The documentation on Using JupyterHub’s REST API provides information on:
what you can do with the API
creating an API token
adding API tokens to the config files
making an API request programmatically using the requests library
learning more about JupyterHub’s API
The same JupyterHub API spec, as found here, is available in an interactive form here (on swagger’s petstore). The OpenAPI Initiative (fka Swagger™) is a project used to describe and document RESTful APIs.
JupyterHub API Reference:
Application configuration¶
Module: jupyterhub.app¶
The multi-user notebook application
JupyterHub¶
class jupyterhub.app.JupyterHub(**kwargs)¶
An Application for starting a Multi-User Jupyter Notebook server.
-
active_server_limit
c.JupyterHub.active_server_limit = Int(0)¶ Maximum number of concurrent servers that can be active at a time.
Setting this can limit the total resources your users can consume.
An active server is any server that’s not fully stopped. It is considered active from the time it has been requested until the time that it has completely stopped.
If this many user servers are active, users will not be able to launch new servers until a server is shutdown. Spawn requests will be rejected with a 429 error asking them to try again.
If set to 0, no limit is enforced.
-
active_user_window
c.JupyterHub.active_user_window = Int(1800)¶ Duration (in seconds) to determine the number of active users.
-
activity_resolution
c.JupyterHub.activity_resolution = Int(30)¶ Resolution (in seconds) for updating activity
If activity is registered that is less than activity_resolution seconds more recent than the current value, the new value will be ignored.
This avoids too many writes to the Hub database.
-
admin_access
c.JupyterHub.admin_access = Bool(False)¶ Grant admin users permission to access single-user servers.
Users should be properly informed if this is enabled.
-
admin_users
c.JupyterHub.admin_users = Set()¶ DEPRECATED since version 0.7.2, use Authenticator.admin_users instead.
-
allow_named_servers
c.JupyterHub.allow_named_servers = Bool(False)¶ Allow named single-user servers per user
-
answer_yes
c.JupyterHub.answer_yes = Bool(False)¶ Answer yes to any questions (e.g. confirm overwrite)
-
api_tokens
c.JupyterHub.api_tokens = Dict()¶ PENDING DEPRECATION: consider using services
Dict of token:username to be loaded into the database.
Allows ahead-of-time generation of API tokens for use by externally managed services, which authenticate as JupyterHub users.
Consider using services for general services that talk to the JupyterHub API.
-
authenticate_prometheus
c.JupyterHub.authenticate_prometheus = Bool(True)¶ Authentication for prometheus metrics
-
authenticator_class
c.JupyterHub.authenticator_class = EntryPointType(<class 'jupyterhub.auth.PAMAuthenticator'>)¶ Class for authenticating users.
This should be a subclass of
jupyterhub.auth.Authenticator
with an
authenticate()
method that:is a coroutine (asyncio or tornado)
returns username on success, None on failure
takes two arguments: (handler, data), where
handler
is the calling web.RequestHandler, anddata
is the POST form data from the login page.
Changed in version 1.0: authenticators may be registered via entry points, e.g.
c.JupyterHub.authenticator_class = 'pam'
Currently installed:
-
base_url
c.JupyterHub.base_url = URLPrefix('/')¶ The base URL of the entire application.
Add this to the beginning of all JupyterHub URLs. Use base_url to run JupyterHub within an existing website.
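For example, a jupyterhub_config.py sketch for serving the Hub under the /jhub/ prefix used earlier in this document (the prefix itself is just an example):
# serve all JupyterHub URLs under /jhub/
c.JupyterHub.base_url = '/jhub/'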
-
bind_url
c.JupyterHub.bind_url = Unicode('http://:8000')¶ The public facing URL of the whole JupyterHub application.
This is the address on which the proxy will bind. Sets protocol, ip, base_url
-
cleanup_proxy
c.JupyterHub.cleanup_proxy = Bool(True)¶ Whether to shutdown the proxy when the Hub shuts down.
Disable if you want to be able to teardown the Hub while leaving the proxy running.
Only valid if the proxy was starting by the Hub process.
If both this and cleanup_servers are False, sending SIGINT to the Hub will only shutdown the Hub, leaving everything else running.
The Hub should be able to resume from database state.
-
cleanup_servers
c.JupyterHub.cleanup_servers = Bool(True)¶ Whether to shutdown single-user servers when the Hub shuts down.
Disable if you want to be able to teardown the Hub while leaving the single-user servers running.
If both this and cleanup_proxy are False, sending SIGINT to the Hub will only shutdown the Hub, leaving everything else running.
The Hub should be able to resume from database state.
-
concurrent_spawn_limit
c.JupyterHub.concurrent_spawn_limit = Int(100)¶ Maximum number of concurrent users that can be spawning at a time.
Spawning lots of servers at the same time can cause performance problems for the Hub or the underlying spawning system. Set this limit to prevent bursts of logins from attempting to spawn too many servers at the same time.
This does not limit the number of total running servers. See active_server_limit for that.
If more than this many users attempt to spawn at a time, their requests will be rejected with a 429 error asking them to try again. Users will have to wait for some of the spawning services to finish starting before they can start their own.
If set to 0, no limit is enforced.
-
config_file
c.JupyterHub.config_file = Unicode('jupyterhub_config.py')¶ The config file to load
-
confirm_no_ssl
c.JupyterHub.confirm_no_ssl = Bool(False)¶ DEPRECATED: does nothing
-
cookie_max_age_days
c.JupyterHub.cookie_max_age_days¶ Number of days for a login cookie to be valid. Default is two weeks.
-
cookie_secret
c.JupyterHub.cookie_secret¶ The cookie secret to use to encrypt cookies.
Loaded from the JPY_COOKIE_SECRET env variable by default.
Should be exactly 256 bits (32 bytes).
-
cookie_secret_file
c.JupyterHub.cookie_secret_file¶ File in which to store the cookie secret.
-
data_files_path
c.JupyterHub.data_files_path = Unicode('')¶ The location of jupyterhub data files (e.g. /usr/local/share/jupyterhub)
-
db_kwargs
c.JupyterHub.db_kwargs = Dict()¶ Include any kwargs to pass to the database connection. See sqlalchemy.create_engine for details.
-
db_url
c.JupyterHub.db_url = Unicode('sqlite:///jupyterhub.sqlite')¶ url for the database. e.g.
sqlite:///jupyterhub.sqlite
-
debug_db
c.JupyterHub.debug_db = Bool(False)¶ log all database transactions. This has A LOT of output
-
debug_proxy
c.JupyterHub.debug_proxy = Bool(False)¶ DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.debug
-
default_url
c.JupyterHub.default_url = Unicode('')¶ The default URL for users when they arrive (e.g. when user directs to “/”)
By default, redirects users to their own server.
-
external_ssl_authorities
c.JupyterHub.external_ssl_authorities¶ Dict authority:dict(files). Specify the key, cert, and/or ca file for an authority. This is useful for externally managed proxies that wish to use internal_ssl.
The files dict has this format (you must specify at least a cert):
{ 'key': '/path/to/key.key', 'cert': '/path/to/cert.crt', 'ca': '/path/to/ca.crt' }
The authorities you can override: ‘hub-ca’, ‘notebooks-ca’, ‘proxy-api-ca’, ‘proxy-client-ca’, and ‘services-ca’.
Use with internal_ssl
-
extra_handlers
c.JupyterHub.extra_handlers = List()¶ Register extra tornado Handlers for jupyterhub.
Should be of the form
("<regex>", Handler)
The Hub prefix will be added, so
/my-page
will be served at/hub/my-page
.
-
extra_log_file
c.JupyterHub.extra_log_file = Unicode('')¶ DEPRECATED: use output redirection instead, e.g.
jupyterhub &>> /var/log/jupyterhub.log
-
extra_log_handlers
c.JupyterHub.extra_log_handlers = List()¶ Extra log handlers to set on JupyterHub logger
-
generate_certs
c.JupyterHub.generate_certs = Bool(False)¶ Generate certs used for internal ssl
-
generate_config
c.JupyterHub.generate_config = Bool(False)¶ Generate default config file
-
hub_bind_url
c.JupyterHub.hub_bind_url = Unicode('')¶ The URL on which the Hub will listen. This is a private URL for internal communication. Typically set in combination with hub_connect_url. If a unix socket, hub_connect_url must also be set.
For example:
“http://127.0.0.1:8081” “unix+http://%2Fsrv%2Fjupyterhub%2Fjupyterhub.sock”
New in version 0.9.
-
hub_connect_ip
c.JupyterHub.hub_connect_ip = Unicode('')¶ The ip or hostname for proxies and spawners to use for connecting to the Hub.
Use when the bind address (
hub_ip
) is 0.0.0.0 or otherwise different from the connect address.Default: when
hub_ip
is 0.0.0.0, usesocket.gethostname()
, otherwise usehub_ip
.Note: Some spawners or proxy implementations might not support hostnames. Check your spawner or proxy documentation to see if they have extra requirements.
New in version 0.8.
-
hub_connect_port
c.JupyterHub.hub_connect_port = Int(0)¶ DEPRECATED
Use hub_connect_url
New in version 0.8.
Deprecated since version 0.9: Use hub_connect_url
-
hub_connect_url
c.JupyterHub.hub_connect_url = Unicode('')¶ The URL for connecting to the Hub. Spawners, services, and the proxy will use this URL to talk to the Hub.
Only needs to be specified if the default hub URL is not connectable (e.g. using a unix+http:// bind url).
See also
JupyterHub.hub_connect_ip JupyterHub.hub_bind_url
New in version 0.9.
-
hub_ip
c.JupyterHub.hub_ip = Unicode('127.0.0.1')¶ The ip address for the Hub process to bind to.
By default, the hub listens on localhost only. This address must be accessible from the proxy and user servers. You may need to set this to a public ip or ‘’ for all interfaces if the proxy or user servers are in containers or on a different host.
See
hub_connect_ip
for cases where the bind and connect address should differ, orhub_bind_url
for setting the full bind URL.
-
hub_port
c.JupyterHub.hub_port = Int(8081)¶ The internal port for the Hub process.
This is the internal port of the hub itself. It should never be accessed directly. See JupyterHub.port for the public port to use when accessing jupyterhub. It is rare that this port should be set except in cases of port conflict.
See also hub_ip for the ip and hub_bind_url for setting the full bind URL.
-
internal_certs_location
c.JupyterHub.internal_certs_location = Unicode('internal-ssl')¶ The location to store certificates automatically created by JupyterHub.
Use with internal_ssl
-
internal_ssl
c.JupyterHub.internal_ssl = Bool(False)¶ Enable SSL for all internal communication
This enables end-to-end encryption between all JupyterHub components. JupyterHub will automatically create the necessary certificate authority and sign notebook certificates as they’re created.
-
ip
c.JupyterHub.ip = Unicode('')¶ The public facing ip of the whole JupyterHub application (specifically referred to as the proxy).
This is the address on which the proxy will listen. The default is to listen on all interfaces. This is the only address through which JupyterHub should be accessed by users.
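For example, to bind the proxy to a single public interface (the address is illustrative):
c.JupyterHub.ip = '10.0.1.4'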
-
jinja_environment_options
c.JupyterHub.jinja_environment_options = Dict()¶ Supply extra arguments that will be passed to Jinja environment.
-
last_activity_interval
c.JupyterHub.last_activity_interval = Int(300)¶ Interval (in seconds) at which to update last-activity timestamps.
-
load_groups
c.JupyterHub.load_groups = Dict()¶ Dict of ‘group’: [‘usernames’] to load at startup.
This strictly adds groups and users to groups.
Loading one set of groups, then starting JupyterHub again with a different set will not remove users or groups from previous launches. That must be done through the API.
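A minimal sketch (group and user names are hypothetical):
c.JupyterHub.load_groups = {
    'instructors': ['rania', 'tom'],
    'students': ['ada', 'grace'],
}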
-
log_datefmt
c.JupyterHub.log_datefmt = Unicode('%Y-%m-%d %H:%M:%S')¶ The date format used by logging formatters for %(asctime)s
-
log_format
c.JupyterHub.log_format = Unicode('[%(name)s]%(highlevel)s %(message)s')¶ The Logging format template
-
log_level
c.JupyterHub.log_level = Enum(30)¶ Set the log level by value or name.
-
logo_file
c.JupyterHub.logo_file = Unicode('')¶ Specify path to a logo image to override the Jupyter logo in the banner.
-
named_server_limit_per_user
c.JupyterHub.named_server_limit_per_user = Int(0)¶ Maximum number of concurrent named servers that can be created by a user at a time.
Setting this can limit the total resources a user can consume.
If set to 0, no limit is enforced.
-
pid_file
c.JupyterHub.pid_file = Unicode('')¶ File to write the PID to. Useful for daemonizing JupyterHub.
-
port
c.JupyterHub.port = Int(8000)¶ The public facing port of the proxy.
This is the port on which the proxy will listen. This is the only port through which JupyterHub should be accessed by users.
-
proxy_api_ip
c.JupyterHub.proxy_api_ip = Unicode('')¶ DEPRECATED since version 0.8 : Use ConfigurableHTTPProxy.api_url
-
proxy_api_port
c.JupyterHub.proxy_api_port = Int(0)¶ DEPRECATED since version 0.8 : Use ConfigurableHTTPProxy.api_url
-
proxy_auth_token
c.JupyterHub.proxy_auth_token = Unicode('')¶ DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.auth_token
-
proxy_check_interval
c.JupyterHub.proxy_check_interval = Int(30)¶ Interval (in seconds) at which to check if the proxy is running.
-
proxy_class
c.JupyterHub.proxy_class = EntryPointType(<class 'jupyterhub.proxy.ConfigurableHTTPProxy'>)¶ The class to use for configuring the JupyterHub proxy.
Should be a subclass of jupyterhub.proxy.Proxy.
Changed in version 1.0: proxies may be registered via entry points, e.g. c.JupyterHub.proxy_class = 'traefik'
Currently installed:
-
proxy_cmd
c.JupyterHub.proxy_cmd = Command()¶ DEPRECATED since version 0.8. Use ConfigurableHTTPProxy.command
-
recreate_internal_certs
c.JupyterHub.recreate_internal_certs = Bool(False)¶ Recreate all certificates used within JupyterHub on restart.
Note: enabling this feature requires restarting all notebook servers.
Use with internal_ssl
-
redirect_to_server
c.JupyterHub.redirect_to_server = Bool(True)¶ Redirect user to server (if running), instead of control panel.
-
reset_db
c.JupyterHub.reset_db = Bool(False)¶ Purge and reset the database.
-
service_check_interval
c.JupyterHub.service_check_interval = Int(60)¶ Interval (in seconds) at which to check connectivity of services with web endpoints.
-
service_tokens
c.JupyterHub.service_tokens = Dict()¶ Dict of token:servicename to be loaded into the database.
Allows ahead-of-time generation of API tokens for use by externally managed services.
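For illustration (the token and service name are placeholders; generate a real token yourself, e.g. with openssl rand -hex 32):
c.JupyterHub.service_tokens = {
    'generate-a-long-random-token-here': 'my-external-service',
}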
-
services
c.JupyterHub.services = List()¶ List of service specification dictionaries.
A service
For instance:
services = [
    {
        'name': 'cull_idle',
        'command': ['/path/to/cull_idle_servers.py'],
    },
    {
        'name': 'formgrader',
        'url': 'http://127.0.0.1:1234',
        'api_token': 'super-secret',
        'environment': {},
    },
]
-
shutdown_on_logout
c.JupyterHub.shutdown_on_logout = Bool(False)¶ Shuts down all user servers on logout
-
spawner_class
c.JupyterHub.spawner_class = EntryPointType(<class 'jupyterhub.spawner.LocalProcessSpawner'>)¶ The class to use for spawning single-user servers.
Should be a subclass of jupyterhub.spawner.Spawner.
Changed in version 1.0: spawners may be registered via entry points, e.g. c.JupyterHub.spawner_class = 'localprocess'
Currently installed:
-
ssl_cert
c.JupyterHub.ssl_cert = Unicode('')¶ Path to SSL certificate file for the public facing interface of the proxy
When setting this, you should also set ssl_key
-
ssl_key
c.JupyterHub.ssl_key = Unicode('')¶ Path to SSL key file for the public facing interface of the proxy
When setting this, you should also set ssl_cert
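For example, assuming certificates obtained from your certificate authority (paths are illustrative):
c.JupyterHub.ssl_cert = '/etc/jupyterhub/ssl/hub.example.com.crt'
c.JupyterHub.ssl_key = '/etc/jupyterhub/ssl/hub.example.com.key'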
-
statsd_host
c.JupyterHub.statsd_host = Unicode('')¶ Host to send statsd metrics to. An empty string (the default) disables sending metrics.
-
statsd_port
c.JupyterHub.statsd_port = Int(8125)¶ Port on which to send statsd metrics about the hub
-
statsd_prefix
c.JupyterHub.statsd_prefix = Unicode('jupyterhub')¶ Prefix to use for all metrics sent by jupyterhub to statsd
-
subdomain_host
c.JupyterHub.subdomain_host = Unicode('')¶ Run single-user servers on subdomains of this host.
This should be the full https://hub.domain.tld[:port].
Provides additional cross-site protections for javascript served by single-user servers.
Requires <username>.hub.domain.tld to resolve to the same host as hub.domain.tld.
In general, this is most easily achieved with wildcard DNS.
When using SSL (i.e. always) this also requires a wildcard SSL certificate.
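A sketch, assuming wildcard DNS and a wildcard certificate for *.hub.example.com (the hostname is illustrative):
c.JupyterHub.subdomain_host = 'https://hub.example.com'
# user servers are then served from e.g. https://ada.hub.example.com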
-
template_paths
c.JupyterHub.template_paths = List()¶ Paths to search for jinja templates, before using the default templates.
-
template_vars
c.JupyterHub.template_vars = Dict()¶ Extra variables to be passed into jinja templates
-
tornado_settings
c.JupyterHub.tornado_settings = Dict()¶ Extra settings overrides to pass to the tornado application.
-
trust_user_provided_tokens
c.JupyterHub.trust_user_provided_tokens = Bool(False)¶ Trust user-provided tokens (via JupyterHub.service_tokens) to have good entropy.
If you are not inserting additional tokens via configuration file, this flag has no effect.
In JupyterHub 0.8, internally generated tokens do not pass through additional hashing because the hashing is costly and does not increase the entropy of already-good UUIDs.
User-provided tokens, on the other hand, are not trusted to have good entropy by default, and are passed through many rounds of hashing to stretch the entropy of the key (i.e. user-provided tokens are treated as passwords instead of random keys). These keys are more costly to check.
If your inserted tokens are generated by a good-quality mechanism, e.g. openssl rand -hex 32, then you can set this flag to True to reduce the cost of checking authentication tokens.
-
trusted_alt_names
c.JupyterHub.trusted_alt_names = List()¶ Names to include in the subject alternative name.
These names will be used for server name verification. This is useful if JupyterHub is being run behind a reverse proxy or services using ssl are on different hosts.
Use with internal_ssl
-
trusted_downstream_ips
c.JupyterHub.trusted_downstream_ips = List()¶ Downstream proxy IP addresses to trust.
This sets the list of IP addresses that are trusted and skipped when processing the X-Forwarded-For header. For example, if an external proxy is used for TLS termination, its IP address should be added to this list to ensure the correct client IP addresses are recorded in the logs instead of the proxy server’s IP address.
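For example, if an external TLS-terminating proxy runs at 192.168.1.10 (the address is illustrative):
c.JupyterHub.trusted_downstream_ips = ['192.168.1.10']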
-
upgrade_db
c.JupyterHub.upgrade_db = Bool(False)¶ Upgrade the database automatically on start.
Only safe if database is regularly backed up. Only SQLite databases will be backed up to a local file automatically.
-
Authenticators¶
Module: jupyterhub.auth¶
Base Authenticator class and the default PAM Authenticator
Authenticator¶
-
class jupyterhub.auth.Authenticator(**kwargs)¶ Base class for implementing an authentication provider for JupyterHub
-
admin_users
c.Authenticator.admin_users = Set()¶ Set of users that will have admin rights on this JupyterHub.
- Admin users have extra privileges:
Use the admin panel to see list of users logged in
Add / remove users in some authenticators
Restart / halt the hub
Start / stop users’ single-user servers
Can access each individual users’ single-user server (if configured)
Admin access should be treated the same way root access is.
Defaults to an empty set, in which case no user has admin access.
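For example (usernames are illustrative):
c.Authenticator.admin_users = {'rania', 'tom'}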
-
auth_refresh_age
c.Authenticator.auth_refresh_age = Int(300)¶ The max age (in seconds) of authentication info before forcing a refresh of user auth info.
Refreshing auth info allows, e.g. requesting/re-validating auth tokens.
See
refresh_user()
for what happens when user auth info is refreshed (nothing by default).
-
auto_login
c.Authenticator.auto_login = Bool(False)¶ Automatically begin the login process
rather than starting with a “Login with…” link at /hub/login
To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().
New in version 0.8.
-
blacklist
c.Authenticator.blacklist = Set()¶ Blacklist of usernames that are not allowed to log in.
Use this with supported authenticators to restrict which users can not log in. This is an additional blacklist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
enable_auth_state
c.Authenticator.enable_auth_state = Bool(False)¶ Enable persisting auth_state (if available).
auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.
Encrypting auth_state requires the cryptography package.
Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.
If encryption is unavailable, auth_state cannot be persisted.
New in JupyterHub 0.8
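A sketch of enabling it; generate your own key rather than reusing a placeholder:
# in the shell, before starting the Hub:
#   export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
c.Authenticator.enable_auth_state = True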
-
post_auth_hook
c.Authenticator.post_auth_hook = Any(None)¶ An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.
This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.
This may be a coroutine.
Example:
import os, pwd

def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid),
    }
    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data
    return authentication

c.Authenticator.post_auth_hook = my_hook
-
refresh_pre_spawn
c.Authenticator.refresh_pre_spawn = Bool(False)¶ Force refresh of auth prior to spawn.
This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.
This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.
If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.
-
username_map
c.Authenticator.username_map = Dict()¶ Dictionary mapping authenticator usernames to JupyterHub users.
Primarily used to normalize OAuth user names to local users.
-
username_pattern
c.Authenticator.username_pattern = Unicode('')¶ Regular expression pattern that all valid usernames must match.
If a username does not match the pattern specified here, authentication will not be attempted.
If not set, allow any username.
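For example, to only attempt authentication for short lowercase alphanumeric names (the pattern is illustrative):
c.Authenticator.username_pattern = r'^[a-z][a-z0-9]{2,15}$'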
-
whitelist
c.Authenticator.whitelist = Set()¶ Whitelist of usernames that are allowed to log in.
Use this with supported authenticators to restrict which users can log in. This is an additional whitelist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
add_user
(user)¶ Hook called when a user is added to JupyterHub
- This is called:
When a user first authenticates
When the hub restarts, for all users.
This method may be a coroutine.
By default, this just adds the user to the whitelist.
Subclasses may do more extensive things, such as adding actual unix users, but they should call super to ensure the whitelist is updated.
Note that this should be idempotent, since it is called whenever the hub restarts for all users.
- Parameters
user (User) – The User wrapper object
-
admin_users
c.Authenticator.admin_users = Set() Set of users that will have admin rights on this JupyterHub.
- Admin users have extra privileges:
Use the admin panel to see list of users logged in
Add / remove users in some authenticators
Restart / halt the hub
Start / stop users’ single-user servers
Can access each individual users’ single-user server (if configured)
Admin access should be treated the same way root access is.
Defaults to an empty set, in which case no user has admin access.
-
auth_refresh_age
c.Authenticator.auth_refresh_age = Int(300) The max age (in seconds) of authentication info before forcing a refresh of user auth info.
Refreshing auth info allows, e.g. requesting/re-validating auth tokens.
See
refresh_user()
for what happens when user auth info is refreshed (nothing by default).
-
async
authenticate
(handler, data)¶ Authenticate a user with login form data
This must be a coroutine.
It must return the username on successful authentication, and return None on failed authentication.
Checking the whitelist is handled separately by the caller.
Changed in version 0.8: Allow authenticate to return a dict containing auth_state.
- Parameters
handler (tornado.web.RequestHandler) – the current request handler
data (dict) – The formdata of the login form. The default form has ‘username’ and ‘password’ fields.
- Returns
The username of the authenticated user, or None if Authentication failed.
The Authenticator may return a dict instead, which MUST have a key name holding the username, and MAY have two optional keys set: auth_state, a dictionary of auth state that will be persisted; and admin, the admin setting value for the user.
- Return type
-
auto_login
c.Authenticator.auto_login = Bool(False) Automatically begin the login process
rather than starting with a “Login with…” link at /hub/login
To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().
New in version 0.8.
-
blacklist
c.Authenticator.blacklist = Set() Blacklist of usernames that are not allowed to log in.
Use this with supported authenticators to restrict which users can not log in. This is an additional blacklist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
check_blacklist
(username, authentication=None)¶ Check if a username is blocked to authenticate based on blacklist configuration
Return True if username is allowed, False otherwise. No blacklist means any username is allowed.
Names are normalized before being checked against the blacklist.
Changed in version 1.0: Signature updated to accept authentication data as second argument
-
check_whitelist
(username, authentication=None)¶ Check if a username is allowed to authenticate based on whitelist configuration
Return True if username is allowed, False otherwise. No whitelist means any username is allowed.
Names are normalized before being checked against the whitelist.
Changed in version 1.0: Signature updated to accept authentication data and any future changes
-
delete_user
(user)¶ Hook called when a user is deleted
Removes the user from the whitelist. Subclasses should call super to ensure the whitelist is updated.
- Parameters
user (User) – The User wrapper object
-
enable_auth_state
c.Authenticator.enable_auth_state = Bool(False) Enable persisting auth_state (if available).
auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.
Encrypting auth_state requires the cryptography package.
Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.
If encryption is unavailable, auth_state cannot be persisted.
New in JupyterHub 0.8
-
async
get_authenticated_user
(handler, data)¶ Authenticate the user who is attempting to log in
Returns user dict if successful, None otherwise.
This calls authenticate, which should be overridden in subclasses, normalizes the username if any normalization should be done, and then validates the name in the whitelist.
This is the outer API for authenticating a user. Subclasses should not override this method.
- The various stages can be overridden separately:
authenticate turns formdata into a username
normalize_username normalizes the username
check_whitelist checks against the user whitelist
Changed in version 0.8: return dict instead of username
-
get_handlers
(app)¶ Return any custom handlers the authenticator needs to register
Used in conjunction with login_url and logout_url.
- Parameters
app (JupyterHub Application) – the application object, in case it needs to be accessed for info.
- Returns
list of ('/url', Handler) tuples passed to tornado. The Hub prefix is added to any URLs.
- Return type
handlers (list)
-
is_admin
(handler, authentication)¶ Authentication helper to determine a user’s admin status.
- Parameters
handler (tornado.web.RequestHandler) – the current request handler
authentication – The authentication dict generated by authenticate.
- Returns
The admin status of the user, or None if it could not be determined or should not change.
- Return type
admin_status (Bool or None)
-
login_url
(base_url)¶ Override this when registering a custom login handler
Generally used by authenticators that do not use simple form-based authentication.
The subclass overriding this is responsible for making sure there is a handler available to handle the URL returned from this method, using the
get_handlers
method.
-
logout_url
(base_url)¶ Override when registering a custom logout handler
The subclass overriding this is responsible for making sure there is a handler available to handle the URL returned from this method, using the
get_handlers
method.
-
normalize_username
(username)¶ Normalize the given username and return it
Override in subclasses if usernames need different normalization rules.
The default attempts to lowercase the username and apply
username_map
if it is set.
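A minimal sketch of overriding this in a custom subclass (the class name and email-stripping rule are illustrative):
from jupyterhub.auth import Authenticator

class EmailAuthenticator(Authenticator):
    def normalize_username(self, username):
        # drop an email domain before applying the default lowercasing / username_map
        username = username.split('@')[0]
        return super().normalize_username(username)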
-
post_auth_hook
c.Authenticator.post_auth_hook = Any(None) An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.
This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.
This may be a coroutine.
Example:
import os, pwd

def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid),
    }
    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data
    return authentication

c.Authenticator.post_auth_hook = my_hook
-
post_spawn_stop
(user, spawner)¶ Hook called after stopping a user container
Can be used to do auth-related cleanup, e.g. closing PAM sessions.
-
pre_spawn_start
(user, spawner)¶ Hook called before spawning a user’s server
Can be used to do auth-related startup, e.g. opening PAM sessions.
-
refresh_pre_spawn
c.Authenticator.refresh_pre_spawn = Bool(False) Force refresh of auth prior to spawn.
This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.
This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.
If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.
-
async
refresh_user
(user, handler=None)¶ Refresh auth data for a given user
Allows refreshing or invalidating auth data.
Only override if your authenticator needs to refresh its data about users once in a while.
- Parameters
- Returns
Return True if auth data for the user is up-to-date and no updates are required.
Return False if the user’s auth data has expired, and they should be required to login again.
Return a dict of auth data if some values should be updated. This dict should have the same structure as that returned by authenticate() when it returns a dict. Any fields present will refresh the value for the user. Any fields not present will be left unchanged. This can include updating .admin or .auth_state fields.
- Return type
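A hedged sketch of a custom refresh_user; fetch_new_token and the auth_state layout are hypothetical, not part of JupyterHub:
from jupyterhub.auth import Authenticator

class RefreshingAuthenticator(Authenticator):
    async def refresh_user(self, user, handler=None):
        auth_state = await user.get_auth_state()
        new_token = await fetch_new_token(user.name)  # hypothetical helper
        if new_token is None:
            return False  # expired and not refreshable: force a new login
        if auth_state and auth_state.get('access_token') == new_token:
            return True   # auth info is already up-to-date
        return {'name': user.name, 'auth_state': {'access_token': new_token}}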
-
async
run_post_auth_hook
(handler, authentication)¶ Run the post_auth_hook if defined
- Parameters
handler (tornado.web.RequestHandler) – the current request handler
authentication (dict) – User authentication data dictionary. Contains the username (‘name’), admin status (‘admin’), and auth state dictionary (‘auth_state’).
- Returns
The hook must always return the authentication dict
- Return type
Authentication (dict)
-
username_map
c.Authenticator.username_map = Dict() Dictionary mapping authenticator usernames to JupyterHub users.
Primarily used to normalize OAuth user names to local users.
-
username_pattern
c.Authenticator.username_pattern = Unicode('') Regular expression pattern that all valid usernames must match.
If a username does not match the pattern specified here, authentication will not be attempted.
If not set, allow any username.
-
validate_username
(username)¶ Validate a normalized username
Return True if username is valid, False otherwise.
-
whitelist
c.Authenticator.whitelist = Set() Whitelist of usernames that are allowed to log in.
Use this with supported authenticators to restrict which users can log in. This is an additional whitelist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
LocalAuthenticator¶
-
class jupyterhub.auth.LocalAuthenticator(**kwargs)¶ Base class for Authenticators that work with local Linux/UNIX users
Checks for local users, and can attempt to create them if they do not exist.
-
add_user_cmd
c.LocalAuthenticator.add_user_cmd = Command()¶ The command to use for creating users as a list of strings
For each element in the list, the string USERNAME will be replaced with the user’s username. The username will also be appended as the final argument.
For Linux, the default value is:
['adduser', '-q', '--gecos', '""', '--disabled-password']
To specify a custom home directory, set this to:
['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--disabled-password']
This will run the command:
adduser -q --gecos "" --home /customhome/river --disabled-password river
when the user 'river' is created.
-
admin_users
c.LocalAuthenticator.admin_users = Set()¶ Set of users that will have admin rights on this JupyterHub.
- Admin users have extra privileges:
Use the admin panel to see list of users logged in
Add / remove users in some authenticators
Restart / halt the hub
Start / stop users’ single-user servers
Can access each individual users’ single-user server (if configured)
Admin access should be treated the same way root access is.
Defaults to an empty set, in which case no user has admin access.
-
auth_refresh_age
c.LocalAuthenticator.auth_refresh_age = Int(300)¶ The max age (in seconds) of authentication info before forcing a refresh of user auth info.
Refreshing auth info allows, e.g. requesting/re-validating auth tokens.
See
refresh_user()
for what happens when user auth info is refreshed (nothing by default).
-
auto_login
c.LocalAuthenticator.auto_login = Bool(False)¶ Automatically begin the login process
rather than starting with a “Login with…” link at /hub/login
To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().
New in version 0.8.
-
blacklist
c.LocalAuthenticator.blacklist = Set()¶ Blacklist of usernames that are not allowed to log in.
Use this with supported authenticators to restrict which users can not log in. This is an additional blacklist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
create_system_users
c.LocalAuthenticator.create_system_users = Bool(False)¶ If set to True, will attempt to create local system users if they do not exist already.
Supports Linux and BSD variants only.
-
enable_auth_state
c.LocalAuthenticator.enable_auth_state = Bool(False)¶ Enable persisting auth_state (if available).
auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.
Encrypting auth_state requires the cryptography package.
Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.
If encryption is unavailable, auth_state cannot be persisted.
New in JupyterHub 0.8
-
group_whitelist
c.LocalAuthenticator.group_whitelist = Set()¶ Whitelist all users from this UNIX group.
This makes the username whitelist ineffective.
-
post_auth_hook
c.LocalAuthenticator.post_auth_hook = Any(None)¶ An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.
This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.
This may be a coroutine.
Example:
import os, pwd

def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid),
    }
    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data
    return authentication

c.Authenticator.post_auth_hook = my_hook
-
refresh_pre_spawn
c.LocalAuthenticator.refresh_pre_spawn = Bool(False)¶ Force refresh of auth prior to spawn.
This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.
This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.
If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.
-
username_map
c.LocalAuthenticator.username_map = Dict()¶ Dictionary mapping authenticator usernames to JupyterHub users.
Primarily used to normalize OAuth user names to local users.
-
username_pattern
c.LocalAuthenticator.username_pattern = Unicode('')¶ Regular expression pattern that all valid usernames must match.
If a username does not match the pattern specified here, authentication will not be attempted.
If not set, allow any username.
-
whitelist
c.LocalAuthenticator.whitelist = Set()¶ Whitelist of usernames that are allowed to log in.
Use this with supported authenticators to restrict which users can log in. This is an additional whitelist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
add_system_user
(user)¶ Create a new local UNIX user on the system.
Tested to work on FreeBSD and Linux, at least.
-
async
add_user
(user)¶ Hook called whenever a new user is added
If self.create_system_users, the user will attempt to be created if it doesn’t exist.
-
add_user_cmd
c.LocalAuthenticator.add_user_cmd = Command() The command to use for creating users as a list of strings
For each element in the list, the string USERNAME will be replaced with the user’s username. The username will also be appended as the final argument.
For Linux, the default value is:
['adduser', '-q', '--gecos', '""', '--disabled-password']
To specify a custom home directory, set this to:
['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--disabled-password']
This will run the command:
adduser -q --gecos "" --home /customhome/river --disabled-password river
when the user 'river' is created.
-
check_group_whitelist
(username, authentication=None)¶ If group_whitelist is configured, check if authenticating user is part of group.
-
check_whitelist
(username, authentication=None)¶ Check if a username is allowed to authenticate based on whitelist configuration
Return True if username is allowed, False otherwise. No whitelist means any username is allowed.
Names are normalized before being checked against the whitelist.
Changed in version 1.0: Signature updated to accept authentication data and any future changes
-
create_system_users
c.LocalAuthenticator.create_system_users = Bool(False) If set to True, will attempt to create local system users if they do not exist already.
Supports Linux and BSD variants only.
-
group_whitelist
c.LocalAuthenticator.group_whitelist = Set() Whitelist all users from this UNIX group.
This makes the username whitelist ineffective.
-
system_user_exists
(user)¶ Check if the user exists on the system
-
PAMAuthenticator¶
-
class jupyterhub.auth.PAMAuthenticator(**kwargs)¶ Authenticate local UNIX users with PAM
-
add_user_cmd
c.PAMAuthenticator.add_user_cmd = Command()¶ The command to use for creating users as a list of strings
For each element in the list, the string USERNAME will be replaced with the user’s username. The username will also be appended as the final argument.
For Linux, the default value is:
['adduser', '-q', '--gecos', '""', '--disabled-password']
To specify a custom home directory, set this to:
['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--disabled-password']
This will run the command:
adduser -q --gecos "" --home /customhome/river --disabled-password river
when the user 'river' is created.
-
admin_groups
c.PAMAuthenticator.admin_groups = Set()¶ Authoritative list of user groups that determine admin access. Users not in these groups can still be granted admin status through admin_users.
White/blacklisting rules still apply.
-
admin_users
c.PAMAuthenticator.admin_users = Set()¶ Set of users that will have admin rights on this JupyterHub.
- Admin users have extra privileges:
Use the admin panel to see list of users logged in
Add / remove users in some authenticators
Restart / halt the hub
Start / stop users’ single-user servers
Can access each individual users’ single-user server (if configured)
Admin access should be treated the same way root access is.
Defaults to an empty set, in which case no user has admin access.
-
auth_refresh_age
c.PAMAuthenticator.auth_refresh_age = Int(300)¶ The max age (in seconds) of authentication info before forcing a refresh of user auth info.
Refreshing auth info allows, e.g. requesting/re-validating auth tokens.
See
refresh_user()
for what happens when user auth info is refreshed (nothing by default).
-
auto_login
c.PAMAuthenticator.auto_login = Bool(False)¶ Automatically begin the login process
rather than starting with a “Login with…” link at /hub/login
To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().
New in version 0.8.
-
blacklist
c.PAMAuthenticator.blacklist = Set()¶ Blacklist of usernames that are not allowed to log in.
Use this with supported authenticators to restrict which users can not log in. This is an additional blacklist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
check_account
c.PAMAuthenticator.check_account = Bool(True)¶ Whether to check the user’s account status via PAM during authentication.
The PAM account stack performs non-authentication based account management. It is typically used to restrict/permit access to a service and this step is needed to access the host’s user access control.
Disabling this can be dangerous as authenticated but unauthorized users may be granted access and, therefore, arbitrary execution on the system.
-
create_system_users
c.PAMAuthenticator.create_system_users = Bool(False)¶ If set to True, will attempt to create local system users if they do not exist already.
Supports Linux and BSD variants only.
-
enable_auth_state
c.PAMAuthenticator.enable_auth_state = Bool(False)¶ Enable persisting auth_state (if available).
auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.
Encrypting auth_state requires the cryptography package.
Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.
If encryption is unavailable, auth_state cannot be persisted.
New in JupyterHub 0.8
-
encoding
c.PAMAuthenticator.encoding = Unicode('utf8')¶ The text encoding to use when communicating with PAM
-
group_whitelist
c.PAMAuthenticator.group_whitelist = Set()¶ Whitelist all users from this UNIX group.
This makes the username whitelist ineffective.
-
open_sessions
c.PAMAuthenticator.open_sessions = Bool(True)¶ Whether to open a new PAM session when spawners are started.
This may trigger things like mounting shared filesystems, loading credentials, etc. depending on system configuration, but it does not always work.
If any errors are encountered when opening/closing PAM sessions, this is automatically set to False.
-
pam_normalize_username
c.PAMAuthenticator.pam_normalize_username = Bool(False)¶ Round-trip the username via PAM lookups to make sure it is unique
PAM can accept multiple usernames that map to the same user, for example DOMAIN\username in some cases. To prevent this, convert the username to a uid, then back to a username, to normalize.
-
post_auth_hook
c.PAMAuthenticator.post_auth_hook = Any(None)¶ An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.
This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.
This may be a coroutine.
Example:
import os, pwd

def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid),
    }
    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data
    return authentication

c.Authenticator.post_auth_hook = my_hook
-
refresh_pre_spawn
c.PAMAuthenticator.refresh_pre_spawn = Bool(False)¶ Force refresh of auth prior to spawn.
This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.
This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.
If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.
-
service
c.PAMAuthenticator.service = Unicode('login')¶ The name of the PAM service to use for authentication
-
username_map
c.PAMAuthenticator.username_map = Dict()¶ Dictionary mapping authenticator usernames to JupyterHub users.
Primarily used to normalize OAuth user names to local users.
-
username_pattern
c.PAMAuthenticator.username_pattern = Unicode('')¶ Regular expression pattern that all valid usernames must match.
If a username does not match the pattern specified here, authentication will not be attempted.
If not set, allow any username.
-
whitelist
c.PAMAuthenticator.whitelist = Set()¶ Whitelist of usernames that are allowed to log in.
Use this with supported authenticators to restrict which users can log in. This is an additional whitelist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
DummyAuthenticator¶
-
class jupyterhub.auth.DummyAuthenticator(**kwargs)¶ Dummy Authenticator for testing
By default, any username + password is allowed. If a non-empty password is set, any username will be allowed if it logs in with that password.
New in version 1.0.
-
admin_users
c.DummyAuthenticator.admin_users = Set()¶ Set of users that will have admin rights on this JupyterHub.
- Admin users have extra privileges:
Use the admin panel to see list of users logged in
Add / remove users in some authenticators
Restart / halt the hub
Start / stop users’ single-user servers
Can access each individual users’ single-user server (if configured)
Admin access should be treated the same way root access is.
Defaults to an empty set, in which case no user has admin access.
-
auth_refresh_age
c.DummyAuthenticator.auth_refresh_age = Int(300)¶ The max age (in seconds) of authentication info before forcing a refresh of user auth info.
Refreshing auth info allows, e.g. requesting/re-validating auth tokens.
See
refresh_user()
for what happens when user auth info is refreshed (nothing by default).
-
auto_login
c.DummyAuthenticator.auto_login = Bool(False)¶ Automatically begin the login process
rather than starting with a “Login with…” link at /hub/login
To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().
New in version 0.8.
-
blacklist
c.DummyAuthenticator.blacklist = Set()¶ Blacklist of usernames that are not allowed to log in.
Use this with supported authenticators to restrict which users can not log in. This is an additional blacklist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
enable_auth_state
c.DummyAuthenticator.enable_auth_state = Bool(False)¶ Enable persisting auth_state (if available).
auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.
Encrypting auth_state requires the cryptography package.
Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.
If encryption is unavailable, auth_state cannot be persisted.
New in JupyterHub 0.8
-
password
c.DummyAuthenticator.password = Unicode('')¶ Set a global password for all users wanting to log in.
This allows users with any username to log in with the same static password.
-
post_auth_hook
c.DummyAuthenticator.post_auth_hook = Any(None)¶ An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.
This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.
This may be a coroutine.
Example:
import os, pwd

def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid),
    }
    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data
    return authentication

c.Authenticator.post_auth_hook = my_hook
-
refresh_pre_spawn
c.DummyAuthenticator.refresh_pre_spawn = Bool(False)¶ Force refresh of auth prior to spawn.
This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.
This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.
If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.
-
username_map
c.DummyAuthenticator.username_map = Dict()¶ Dictionary mapping authenticator usernames to JupyterHub users.
Primarily used to normalize OAuth user names to local users.
-
username_pattern
c.DummyAuthenticator.username_pattern = Unicode('')¶ Regular expression pattern that all valid usernames must match.
If a username does not match the pattern specified here, authentication will not be attempted.
If not set, allow any username.
-
whitelist
c.DummyAuthenticator.whitelist = Set()¶ Whitelist of usernames that are allowed to log in.
Use this with supported authenticators to restrict which users can log in. This is an additional whitelist that further restricts users, beyond whatever restrictions the authenticator has in place.
If empty, does not perform any additional restriction.
-
Spawners¶
Module: jupyterhub.spawner¶
Contains base Spawner class & default implementation
Spawner¶
-
class jupyterhub.spawner.Spawner(**kwargs)¶ Base class for spawning single-user notebook servers.
Subclass this, and override the following methods:
load_state
get_state
start
stop
poll
As JupyterHub supports multiple users, an instance of the Spawner subclass is created for each user. If there are 20 JupyterHub users, there will be 20 instances of the subclass.
-
args
c.Spawner.args = List()¶ Extra arguments to be passed to the single-user server.
Some spawners allow shell-style expansion here, allowing you to use environment variables here. Most, including the default, do not. Consult the documentation for your spawner to verify!
-
cmd
c.Spawner.cmd = Command()¶ The command used for starting the single-user server.
Provide either a string or a list containing the path to the startup script command. Extra arguments, other than this path, should be provided via args.
This is usually set if you want to start the single-user server in a different python environment (with virtualenv/conda) than JupyterHub itself.
Some spawners allow shell-style expansion here, allowing you to use environment variables. Most, including the default, do not. Consult the documentation for your spawner to verify!
-
consecutive_failure_limit
c.Spawner.consecutive_failure_limit = Int(0)¶ Maximum number of consecutive failures to allow before shutting down JupyterHub.
This helps JupyterHub recover from a certain class of problem preventing launch in contexts where the Hub is automatically restarted (e.g. systemd, docker, kubernetes).
A limit of 0 means no limit and consecutive failures will not be tracked.
-
cpu_guarantee
c.Spawner.cpu_guarantee = Float(None)¶ Minimum number of cpu-cores a single-user notebook server is guaranteed to have available.
If this value is set to 0.5, allows use of 50% of one CPU. If this value is set to 2, allows use of up to 2 CPUs.
This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner,
LocalProcessSpawner
, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
-
cpu_limit
c.Spawner.cpu_limit = Float(None)¶ Maximum number of cpu-cores a single-user notebook server is allowed to use.
If this value is set to 0.5, allows use of 50% of one CPU. If this value is set to 2, allows use of up to 2 CPUs.
The single-user notebook server will never be scheduled by the kernel to use more cpu-cores than this. There is no guarantee that it can access this many cpu-cores.
This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner,
LocalProcessSpawner
, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
-
debug
c.Spawner.debug = Bool(False)¶ Enable debug-logging of the single-user server
-
default_url
c.Spawner.default_url = Unicode('')¶ The URL the single-user server should start in.
{username} will be expanded to the user’s username.
Example uses:
You can set notebook_dir to / and default_url to /tree/home/{username} to allow people to navigate the whole filesystem from their notebook server, but still start in their home directory.
Start with /notebooks instead of /tree if default_url points to a notebook instead of a directory.
You can set this to /lab to have JupyterLab start by default, rather than Jupyter Notebook.
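For example, to start users in JupyterLab by default (assuming JupyterLab is installed in the single-user environment):
c.Spawner.default_url = '/lab'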
-
disable_user_config
c.Spawner.disable_user_config = Bool(False)¶ Disable per-user configuration of single-user servers.
When starting the user’s single-user server, any config file found in the user’s $HOME directory will be ignored.
Note: a user could circumvent this if the user modifies their Python environment, such as when they have their own conda environments / virtualenvs / containers.
-
env_keep
c.Spawner.env_keep = List()¶ Whitelist of environment variables for the single-user server to inherit from the JupyterHub process.
This whitelist is used to ensure that sensitive information in the JupyterHub process’s environment (such as CONFIGPROXY_AUTH_TOKEN) is not passed to the single-user server’s process.
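For example (note that setting this replaces the default whitelist; the variable names are illustrative):
c.Spawner.env_keep = ['PATH', 'LANG', 'HTTP_PROXY', 'HTTPS_PROXY']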
-
environment
c.Spawner.environment = Dict()¶ Extra environment variables to set for the single-user server’s process.
- Environment variables that end up in the single-user server’s process come from 3 sources:
This environment configurable
The JupyterHub process’ environment variables that are whitelisted in env_keep
Variables to establish contact between the single-user notebook and the hub (such as JUPYTERHUB_API_TOKEN)
The environment configurable should be set by JupyterHub administrators to add installation specific environment variables. It is a dict where the key is the name of the environment variable, and the value can be a string or a callable. If it is a callable, it will be called with one parameter (the spawner instance), and should return a string fairly quickly (no blocking operations please!).
Note that the spawner class’ interface is not guaranteed to be exactly the same across upgrades, so if you are using the callable take care to verify it continues to work after upgrades!
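A sketch mixing a static value with a callable (the variable names and path are illustrative):
def user_data_dir(spawner):
    # called with the spawner instance; should return a string quickly
    return '/data/{}'.format(spawner.user.name)

c.Spawner.environment = {
    'ORG_NAME': 'example-lab',       # static string
    'USER_DATA_DIR': user_data_dir,  # callable, evaluated at spawn time
}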
-
http_timeout
c.Spawner.http_timeout = Int(30)¶ Timeout (in seconds) before giving up on a spawned HTTP server
Once a server has successfully been spawned, this is the amount of time we wait before assuming that the server is unable to accept connections.
-
ip
c.Spawner.ip = Unicode('')¶ The IP address (or hostname) the single-user server should listen on.
The JupyterHub proxy implementation should be able to send packets to this interface.
-
mem_guarantee
c.Spawner.mem_guarantee = ByteSpecification(None)¶ Minimum number of bytes a single-user notebook server is guaranteed to have available.
- Allows the following suffixes:
K -> Kilobytes
M -> Megabytes
G -> Gigabytes
T -> Terabytes
This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner,
LocalProcessSpawner
, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
-
mem_limit
c.Spawner.mem_limit = ByteSpecification(None)¶ Maximum number of bytes a single-user notebook server is allowed to use.
- Allows the following suffixes:
K -> Kilobytes
M -> Megabytes
G -> Gigabytes
T -> Terabytes
If the single user server tries to allocate more memory than this, it will fail. There is no guarantee that the single-user notebook server will be able to allocate this much memory - only that it can not allocate more than this.
This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner,
LocalProcessSpawner
, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
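For example, with a spawner that enforces these settings (the default LocalProcessSpawner does not):
c.Spawner.mem_guarantee = '1G'  # at least 1 gigabyte
c.Spawner.mem_limit = '2G'      # at most 2 gigabytes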
-
notebook_dir
c.Spawner.notebook_dir = Unicode('')¶ Path to the notebook directory for the single-user server.
The user sees a file listing of this directory when the notebook interface is started. The current interface does not easily allow browsing beyond the subdirectories in this directory’s tree.
~
will be expanded to the home directory of the user, and {username} will be replaced with the name of the user.Note that this does not prevent users from accessing files outside of this path! They can do so with many other means.
-
options_form
c.Spawner.options_form = Union()¶ An HTML form for options a user can specify on launching their server.
The surrounding <form> element and the submit button are already provided.
For example:
Set your key:
<input name="key" val="default_key"></input>
<br>
Choose a letter:
<select name="letter" multiple="true">
  <option value="A">The letter A</option>
  <option value="B">The letter B</option>
</select>
The data from this form submission will be passed on to your spawner in
self.user_options
Instead of a form snippet string, this could also be a callable that takes as one parameter the current spawner instance and returns a string. The callable will be called asynchronously if it returns a future, rather than a str. Note that the interface of the spawner class is not deemed stable across versions, so using this functionality might cause your JupyterHub upgrades to break.
-
poll_interval
c.Spawner.poll_interval = Int(30)¶ Interval (in seconds) on which to poll the spawner for single-user server’s status.
At every poll interval, each spawner’s
.poll
method is called, which checks if the single-user server is still running. If it isn’t running, then JupyterHub modifies its own state accordingly and removes appropriate routes from the configurable proxy.
-
port
c.Spawner.port = Int(0)¶ The port for single-user servers to listen on.
Defaults to 0, which uses a randomly allocated port number each time.
If set to a non-zero value, all Spawners will use the same port, which only makes sense if each server is on a different address, e.g. in containers.
New in version 0.7.
-
post_stop_hook
c.Spawner.post_stop_hook = Any(None)¶ An optional hook function that you can implement to do work after the spawner stops.
This can be set independent of any concrete spawner implementation.
-
pre_spawn_hook
c.Spawner.pre_spawn_hook = Any(None)¶ An optional hook function that you can implement to do some bootstrapping work before the spawner starts. For example, create a directory for your user or load initial content.
This can be set independent of any concrete spawner implementation.
This may be a coroutine.
Example:
from subprocess import check_call

def my_hook(spawner):
    username = spawner.user.name
    check_call(['./examples/bootstrap-script/bootstrap.sh', username])

c.Spawner.pre_spawn_hook = my_hook
-
ssl_alt_names
c.Spawner.ssl_alt_names = List()¶ List of SSL alt names
May be set in config if all spawners should have the same value(s), or set at runtime by Spawner that know their names.
-
ssl_alt_names_include_local
c.Spawner.ssl_alt_names_include_local = Bool(True)¶ Whether to include DNS:localhost, IP:127.0.0.1 in alt names
-
start_timeout
c.Spawner.start_timeout = Int(60)¶ Timeout (in seconds) before giving up on starting of single-user server.
This is the timeout for start to return, not the timeout for the server to respond. Callers of spawner.start will assume that startup has failed if it takes longer than this. start should return when the server process is started and its location is known.
-
async
create_certs
()¶ Create and set ownership for the certs to be used for internal ssl
- Keyword Arguments
- Returns
Path to cert files and CA
- Return type
This method creates certs for use with the singleuser notebook. It enables SSL and ensures that the notebook can perform bi-directional SSL auth with the hub (verification based on CA).
If the singleuser host has a name or ip other than localhost, an appropriate alternative name(s) must be passed for ssl verification by the hub to work. For example, for Jupyter hosts with an IP of 10.10.10.10 or DNS name of jupyter.example.com, this would be:
alt_names=["IP:10.10.10.10"]
alt_names=["DNS:jupyter.example.com"]
respectively. The list can contain both the IP and DNS names to refer to the host by either IP or DNS name (note the default_names below).
-
format_string
(s)¶ Render a Python format string
Uses
Spawner.template_namespace()
to populate format namespace.
-
get_args
()¶ Return the arguments to be passed after self.cmd
Doesn’t expect shell expansion to happen.
-
get_env
()¶ Return the environment dict to use for the Spawner.
This applies things like env_keep, anything defined in Spawner.environment, and adds the API token to the env.
When overriding in subclasses, subclasses must call super().get_env(), extend the returned dict and return it.
Use this to access the env in Spawner.start to allow extension in subclasses.
-
get_state
()¶ Save state of spawner into database.
A black box of extra state for custom spawners. The returned value of this is passed to load_state.
Subclasses should call super().get_state(), augment the state returned from there, and return that state.
- Returns
state – a JSONable dict of state
- Return type
-
async
move_certs
(paths)¶ Takes certificate paths and makes them available to the notebook server
- Parameters
paths (dict) – a list of paths for key, cert, and CA. These paths will be resolvable and readable by the Hub process, but not necessarily by the notebook server.
- Returns
a list (potentially altered) of paths for key, cert, and CA. These paths should be resolvable and readable by the notebook server to be launched.
- Return type
.move_certs is called after certs for the singleuser notebook have been created by create_certs.
By default, certs are created in a standard, central location defined by internal_certs_location. For a local, single-host deployment of JupyterHub, this should suffice. If, however, singleuser notebooks are spawned on other hosts, .move_certs should be overridden to move these files appropriately. This could mean using scp to copy them to another host, moving them to a volume mounted in a docker container, or exporting them as a secret in kubernetes.
-
options_from_form
(form_data)¶ Interpret HTTP form data
Form data will always arrive as a dict of lists of strings. Override this function to understand single-values, numbers, etc.
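For illustration, a hedged sketch of such an override, reusing the "letter" field from the options_form example above (the FormSpawner name is hypothetical):
from jupyterhub.spawner import LocalProcessSpawner

class FormSpawner(LocalProcessSpawner):
    def options_from_form(self, form_data):
        # form_data is a dict of lists of strings, e.g. {'letter': ['A']}
        return {'letter': form_data.get('letter', ['A'])[0]}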
This should coerce form data into the structure expected by self.user_options, which must be a dict, and should be JSON-serializeable, though it can contain bytes in addition to standard JSON data types.
This method should not have any side effects. Any handling of
user_options
should be done in.start()
to ensure consistent behavior across servers spawned via the API and form submission page.Instances will receive this data on self.user_options, after passing through this function, prior to
Spawner.start
.Changed in version 1.0: user_options are persisted in the JupyterHub database to be reused on subsequent spawns if no options are given. user_options is serialized to JSON as part of this persistence (with additional support for bytes in case of uploaded file data), and any non-bytes non-jsonable values will be replaced with None if the user_options are re-used.
-
async
poll
()¶ Check if the single-user process is running
- Returns
None if single-user process is running. Integer exit status (0 if unknown), if it is not running.
State transitions, behavior, and return response:
If the Spawner has not been initialized (neither loaded state, nor called start), it should behave as if it is not running (status=0).
If the Spawner has not finished starting, it should behave as if it is running (status=None).
Design assumptions about when poll may be called:
On Hub launch: poll may be called before start when state is loaded on Hub launch. poll should return exit status 0 (unknown) if the Spawner has not been initialized via load_state or start.
If .start() is async: poll may be called during any yielded portions of the start process. poll should return None when start is yielded, indicating that the start process has not yet completed.
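A hedged sketch of a poll implementation for a custom spawner that tracks a local process id in self.pid (an assumed attribute that a start() implementation, not shown, would set):
import os

from jupyterhub.spawner import Spawner

class PidPollingSpawner(Spawner):
    """Sketch only: assumes self.pid is set by start()."""

    async def poll(self):
        if not getattr(self, 'pid', 0):
            # Never started, or state not loaded: behave as "not running, exit status unknown".
            return 0
        try:
            os.kill(self.pid, 0)  # signal 0 only checks that the process exists
        except ProcessLookupError:
            return 0              # the process is gone; real exit status unknown
        return None               # still running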
-
async
start
()¶ Start the single-user server
Changed in version 0.7: Return ip, port instead of setting on self.user.server directly.
-
async
stop
(now=False)¶ Stop the single-user server
If now is False (default), shut down the server as gracefully as possible, e.g. starting with SIGINT, then SIGTERM, then SIGKILL. If now is True, terminate the server immediately.
The coroutine should return when the single-user server process is no longer running.
Must be a coroutine.
-
template_namespace
()¶ Return the template namespace for format-string formatting.
Currently used on default_url and notebook_dir.
Subclasses may add items to the available namespace.
The default implementation includes:
{ 'username': user.name, 'base_url': users_base_url, }
- Returns
namespace for string formatting.
- Return type
ns (dict)
LocalProcessSpawner
¶-
class
jupyterhub.spawner.
LocalProcessSpawner
(**kwargs)¶ A Spawner that uses
subprocess.Popen
to start single-user servers as local processes. Requires local UNIX users matching the authenticated users to exist. Does not work on Windows.
This is the default spawner for JupyterHub.
Note: This spawner does not implement CPU / memory guarantees and limits.
-
args
c.LocalProcessSpawner.args = List()¶ Extra arguments to be passed to the single-user server.
Some spawners allow shell-style expansion here, allowing you to use environment variables here. Most, including the default, do not. Consult the documentation for your spawner to verify!
-
cmd
c.LocalProcessSpawner.cmd = Command()¶ The command used for starting the single-user server.
Provide either a string or a list containing the path to the startup script command. Extra arguments, other than this path, should be provided via
args
.This is usually set if you want to start the single-user server in a different python environment (with virtualenv/conda) than JupyterHub itself.
Some spawners allow shell-style expansion here, allowing you to use environment variables. Most, including the default, do not. Consult the documentation for your spawner to verify!
-
consecutive_failure_limit
c.LocalProcessSpawner.consecutive_failure_limit = Int(0)¶ Maximum number of consecutive failures to allow before shutting down JupyterHub.
This helps JupyterHub recover from a certain class of problem preventing launch in contexts where the Hub is automatically restarted (e.g. systemd, docker, kubernetes).
A limit of 0 means no limit and consecutive failures will not be tracked.
-
cpu_guarantee
c.LocalProcessSpawner.cpu_guarantee = Float(None)¶ Minimum number of cpu-cores a single-user notebook server is guaranteed to have available.
If this value is set to 0.5, allows use of 50% of one CPU. If this value is set to 2, allows use of up to 2 CPUs.
This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner,
LocalProcessSpawner
, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
-
cpu_limit
c.LocalProcessSpawner.cpu_limit = Float(None)¶ Maximum number of cpu-cores a single-user notebook server is allowed to use.
If this value is set to 0.5, allows use of 50% of one CPU. If this value is set to 2, allows use of up to 2 CPUs.
The single-user notebook server will never be scheduled by the kernel to use more cpu-cores than this. There is no guarantee that it can access this many cpu-cores.
This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner,
LocalProcessSpawner
, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
-
debug
c.LocalProcessSpawner.debug = Bool(False)¶ Enable debug-logging of the single-user server
-
default_url
c.LocalProcessSpawner.default_url = Unicode('')¶ The URL the single-user server should start in.
{username} will be expanded to the user’s username.
Example uses:
You can set notebook_dir to / and default_url to /tree/home/{username} to allow people to navigate the whole filesystem from their notebook server, but still start in their home directory.
Start with /notebooks instead of /tree if default_url points to a notebook instead of a directory.
You can set this to /lab to have JupyterLab start by default, rather than Jupyter Notebook.
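A hedged jupyterhub_config.py sketch of the first use case above:
c.Spawner.notebook_dir = '/'
c.Spawner.default_url = '/tree/home/{username}'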
-
disable_user_config
c.LocalProcessSpawner.disable_user_config = Bool(False)¶ Disable per-user configuration of single-user servers.
When starting the user’s single-user server, any config file found in the user’s $HOME directory will be ignored.
Note: a user could circumvent this if the user modifies their Python environment, such as when they have their own conda environments / virtualenvs / containers.
-
env_keep
c.LocalProcessSpawner.env_keep = List()¶ Whitelist of environment variables for the single-user server to inherit from the JupyterHub process.
This whitelist is used to ensure that sensitive information in the JupyterHub process’s environment (such as
CONFIGPROXY_AUTH_TOKEN
) is not passed to the single-user server’s process.
-
environment
c.LocalProcessSpawner.environment = Dict()¶ Extra environment variables to set for the single-user server’s process.
- Environment variables that end up in the single-user server’s process come from 3 sources:
This environment configurable
The JupyterHub process’s environment variables that are whitelisted in env_keep
Variables to establish contact between the single-user notebook and the hub (such as JUPYTERHUB_API_TOKEN)
The environment configurable should be set by JupyterHub administrators to add installation-specific environment variables. It is a dict where the key is the name of the environment variable, and the value can be a string or a callable. If it is a callable, it will be called with one parameter (the spawner instance), and should return a string fairly quickly (no blocking operations please!).
Note that the spawner class’s interface is not guaranteed to be exactly the same across upgrades, so if you are using the callable take care to verify it continues to work after upgrades!
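For example, a hedged sketch with one plain string and one callable (the variable names and path are made up):
def scratch_dir(spawner):
    # Called with the spawner instance; must return a string quickly.
    return '/scratch/' + spawner.user.name

c.Spawner.environment = {
    'EXAMPLE_ORG_FLAG': 'on',    # plain string value
    'SCRATCH_DIR': scratch_dir,  # callable, evaluated for each spawn
}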
-
http_timeout
c.LocalProcessSpawner.http_timeout = Int(30)¶ Timeout (in seconds) before giving up on a spawned HTTP server
Once a server has successfully been spawned, this is the amount of time we wait before assuming that the server is unable to accept connections.
-
interrupt_timeout
c.LocalProcessSpawner.interrupt_timeout = Int(10)¶ Seconds to wait for single-user server process to halt after SIGINT.
If the process has not exited cleanly after this many seconds, a SIGTERM is sent.
-
ip
c.LocalProcessSpawner.ip = Unicode('')¶ The IP address (or hostname) the single-user server should listen on.
The JupyterHub proxy implementation should be able to send packets to this interface.
-
kill_timeout
c.LocalProcessSpawner.kill_timeout = Int(5)¶ Seconds to wait for process to halt after SIGKILL before giving up.
If the process does not exit cleanly after this many seconds of SIGKILL, it becomes a zombie process. The hub process will log a warning and then give up.
-
mem_guarantee
c.LocalProcessSpawner.mem_guarantee = ByteSpecification(None)¶ Minimum number of bytes a single-user notebook server is guaranteed to have available.
- Allows the following suffixes:
K -> Kilobytes
M -> Megabytes
G -> Gigabytes
T -> Terabytes
This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner,
LocalProcessSpawner
, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
-
mem_limit
c.LocalProcessSpawner.mem_limit = ByteSpecification(None)¶ Maximum number of bytes a single-user notebook server is allowed to use.
- Allows the following suffixes:
K -> Kilobytes
M -> Megabytes
G -> Gigabytes
T -> Terabytes
If the single user server tries to allocate more memory than this, it will fail. There is no guarantee that the single-user notebook server will be able to allocate this much memory - only that it can not allocate more than this.
This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner,
LocalProcessSpawner
, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
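For reference, a hedged jupyterhub_config.py sketch; as noted above, these values are only enforced by spawners that implement resource limits, not by the default LocalProcessSpawner:
c.Spawner.mem_limit = '2G'        # cap each single-user server at 2 gigabytes
c.Spawner.mem_guarantee = '512M'  # request at least 512 megabytes be reserved
c.Spawner.cpu_limit = 1.5         # cap at one and a half CPU cores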
-
notebook_dir
c.LocalProcessSpawner.notebook_dir = Unicode('')¶ Path to the notebook directory for the single-user server.
The user sees a file listing of this directory when the notebook interface is started. The current interface does not easily allow browsing beyond the subdirectories in this directory’s tree.
~
will be expanded to the home directory of the user, and {username} will be replaced with the name of the user. Note that this does not prevent users from accessing files outside of this path! They can do so with many other means.
-
options_form
c.LocalProcessSpawner.options_form = Union()¶ An HTML form for options a user can specify on launching their server.
The surrounding
<form>
element and the submit button are already provided. For example:
Set your key: <input name="key" val="default_key"></input>
<br>
Choose a letter:
<select name="letter" multiple="true">
  <option value="A">The letter A</option>
  <option value="B">The letter B</option>
</select>
The data from this form submission will be passed on to your spawner in
self.user_options.
Instead of a form snippet string, this could also be a callable that takes as one parameter the current spawner instance and returns a string. The callable will be called asynchronously if it returns a future, rather than a str. Note that the interface of the spawner class is not deemed stable across versions, so using this functionality might cause your JupyterHub upgrades to break.
-
poll_interval
c.LocalProcessSpawner.poll_interval = Int(30)¶ Interval (in seconds) on which to poll the spawner for single-user server’s status.
At every poll interval, each spawner’s
.poll
method is called, which checks if the single-user server is still running. If it isn’t running, then JupyterHub modifies its own state accordingly and removes appropriate routes from the configurable proxy.
-
popen_kwargs
c.LocalProcessSpawner.popen_kwargs = Dict()¶ Extra keyword arguments to pass to Popen
when spawning single-user servers.
For example:
popen_kwargs = dict(shell=True)
-
port
c.LocalProcessSpawner.port = Int(0)¶ The port for single-user servers to listen on.
Defaults to
0
, which uses a randomly allocated port number each time. If set to a non-zero value, all Spawners will use the same port, which only makes sense if each server is on a different address, e.g. in containers.
New in version 0.7.
-
post_stop_hook
c.LocalProcessSpawner.post_stop_hook = Any(None)¶ An optional hook function that you can implement to do work after the spawner stops.
This can be set independent of any concrete spawner implementation.
-
pre_spawn_hook
c.LocalProcessSpawner.pre_spawn_hook = Any(None)¶ An optional hook function that you can implement to do some bootstrapping work before the spawner starts. For example, create a directory for your user or load initial content.
This can be set independent of any concrete spawner implementation.
This may be a coroutine.
Example:
from subprocess import check_call

def my_hook(spawner):
    username = spawner.user.name
    check_call(['./examples/bootstrap-script/bootstrap.sh', username])

c.Spawner.pre_spawn_hook = my_hook
-
shell_cmd
c.LocalProcessSpawner.shell_cmd = Command()¶ Specify a shell command to launch.
The single-user command will be appended to this list, so it should end with -c (for bash) or equivalent.
For example:
c.LocalProcessSpawner.shell_cmd = ['bash', '-l', '-c']
to launch with a bash login shell, which would set up the user’s own complete environment.
Warning
Using shell_cmd gives users control over PATH, etc., which could change what the jupyterhub-singleuser launch command does. Only use this for trusted users.
-
ssl_alt_names
c.LocalProcessSpawner.ssl_alt_names = List()¶ List of SSL alt names
May be set in config if all spawners should have the same value(s), or set at runtime by Spawners that know their names.
-
ssl_alt_names_include_local
c.LocalProcessSpawner.ssl_alt_names_include_local = Bool(True)¶ Whether to include DNS:localhost, IP:127.0.0.1 in alt names
-
start_timeout
c.LocalProcessSpawner.start_timeout = Int(60)¶ Timeout (in seconds) before giving up on starting of single-user server.
This is the timeout for start to return, not the timeout for the server to respond. Callers of spawner.start will assume that startup has failed if it takes longer than this. start should return when the server process is started and its location is known.
-
term_timeout
c.LocalProcessSpawner.term_timeout = Int(5)¶ Seconds to wait for single-user server process to halt after SIGTERM.
If the process does not exit cleanly after this many seconds of SIGTERM, a SIGKILL is sent.
-
Proxies¶
Module: jupyterhub.proxy
¶
API for JupyterHub’s proxy.
Custom proxy implementations can subclass Proxy
and register in JupyterHub config:
from mymodule import MyProxy
c.JupyterHub.proxy_class = MyProxy
Route Specification:
A routespec is a URL prefix ([host]/path/), e.g. ‘host.tld/path/’ for host-based routing or ‘/path/’ for default routing.
Route paths should be normalized to always start and end with ‘/’
Proxy
¶-
class
jupyterhub.proxy.
Proxy
(**kwargs)¶ Base class for configurable proxies that JupyterHub can use.
A proxy implementation should subclass this and must define the following methods:
get_all_routes(): return a dictionary of all JupyterHub-related routes
add_route(): add a route
delete_route(): delete a route
In addition to these, the following method(s) may need to be implemented:
start(): start the proxy, if it should be launched by the Hub instead of externally managed. If the proxy is externally managed, it should set should_start to False.
And the following method(s) are optional, but can be provided:
get_route(): gets a single route. There is a default implementation that extracts data from get_all_routes(), but implementations may choose to provide a more efficient implementation of fetching a single route.
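As a minimal, hedged sketch of the required interface (the DictProxy name is made up, and it only records routes in memory rather than configuring a real proxy server), a subclass could look like:
from jupyterhub.proxy import Proxy

class DictProxy(Proxy):
    """Sketch only: records routes in memory instead of driving a real proxy."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._routes = {}

    async def add_route(self, routespec, target, data):
        routespec = self.validate_routespec(routespec)  # normalize trailing slashes
        self._routes[routespec] = {'routespec': routespec, 'target': target, 'data': data}
        # A real implementation would also tell the proxy server about this route.

    async def delete_route(self, routespec):
        self._routes.pop(self.validate_routespec(routespec), None)

    async def get_all_routes(self):
        return dict(self._routes)

A deployment would register it with c.JupyterHub.proxy_class (as shown above) and either implement start()/stop() or set should_start to False in config.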
-
should_start
c.Proxy.should_start = Bool(True)¶ Should the Hub start the proxy
If True, the Hub will start the proxy and stop it. Set to False if the proxy is managed externally, such as by systemd, docker, or another service manager.
-
async
add_all_services
(service_dict)¶ Update the proxy table from the database.
Used when loading up a new proxy.
-
async
add_all_users
(user_dict)¶ Update the proxy table from the database.
Used when loading up a new proxy.
-
add_hub_route
(hub)¶ Add the default route for the Hub
-
async
add_route
(routespec, target, data)¶ Add a route to the proxy.
Subclasses must define this method
- Parameters
routespec (str) – A URL prefix ([host]/path/) for which this route will be matched, e.g. host.name/path/
target (str) – A full URL that will be the target of this route.
data (dict) – A JSONable dict that will be associated with this route, and will be returned when retrieving information about this route.
Will raise an appropriate Exception (FIXME: find what?) if the route could not be added.
The proxy implementation should also have a way to associate the fact that a route came from JupyterHub.
-
async
add_service
(service, client=None)¶ Add a service’s server to the proxy table.
-
async
add_user
(user, server_name='', client=None)¶ Add a user’s server to the proxy table.
-
async
check_routes
(user_dict, service_dict, routes=None)¶ Check that all users are properly routed on the proxy.
-
async
delete_route
(routespec)¶ Delete a route with a given routespec if it exists.
Subclasses must define this method
-
async
delete_service
(service, client=None)¶ Remove a service’s server from the proxy table.
-
async
delete_user
(user, server_name='')¶ Remove a user’s server from the proxy table.
-
async
get_all_routes
()¶ Fetch and return all the routes associated with JupyterHub from the proxy.
Subclasses must define this method
Should return a dictionary of routes, where the keys are routespecs and each value is a dict of the form:
{
  'routespec': the route specification ([host]/path/),
  'target': the target host URL (proto://host) for this route,
  'data': the attached data dict for this route (as specified in add_route),
}
-
async
get_route
(routespec)¶ Return the route info for a given routespec.
- Parameters
routespec (str) – A URI that was used to add this route, e.g.
host.tld/path/
- Returns
dict with the following keys:
'routespec': The normalized route specification passed in to add_route ([host]/path/)
'target': The target host for this route (proto://host)
'data': The arbitrary data dict that was passed in by JupyterHub when adding this route.
None: if there are no routes matching the given routespec
- Return type
result (dict)
-
start
()¶ Start the proxy.
Will be called during startup if should_start is True.
Subclasses must define this method if the proxy is to be started by the Hub
-
stop
()¶ Stop the proxy.
Will be called during teardown if should_start is True.
Subclasses must define this method if the proxy is to be started by the Hub
-
validate_routespec
(routespec)¶ Validate a routespec
Checks host value vs host-based routing.
Ensures trailing slash on path.
ConfigurableHTTPProxy
¶-
class
jupyterhub.proxy.
ConfigurableHTTPProxy
(**kwargs)¶ Proxy implementation for the default configurable-http-proxy.
This is the default proxy implementation for running the nodejs proxy
configurable-http-proxy
. If the proxy should not be run as a subprocess of the Hub (e.g. in a separate container), set:
c.ConfigurableHTTPProxy.should_start = False
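A hedged sketch of an externally managed proxy configuration (the host name and token are placeholders) that also tells the Hub where the proxy's API lives:
c.ConfigurableHTTPProxy.should_start = False
c.ConfigurableHTTPProxy.api_url = 'http://proxy.internal.example.com:8001'
c.ConfigurableHTTPProxy.auth_token = 'super-secret'  # must match CONFIGPROXY_AUTH_TOKEN given to the proxy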
-
api_url
c.ConfigurableHTTPProxy.api_url = Unicode('')¶ The ip (or hostname) of the proxy’s API endpoint
-
auth_token
c.ConfigurableHTTPProxy.auth_token = Unicode('')¶ The Proxy auth token
Loaded from the CONFIGPROXY_AUTH_TOKEN env variable by default.
-
command
c.ConfigurableHTTPProxy.command = Command()¶ The command to start the proxy
-
concurrency
c.ConfigurableHTTPProxy.concurrency = Int(10)¶ The number of requests allowed to be concurrently outstanding to the proxy
Limiting this number avoids potential timeout errors caused by sending too many requests to update the proxy at once
-
debug
c.ConfigurableHTTPProxy.debug = Bool(False)¶ Add debug-level logging to the Proxy.
-
pid_file
c.ConfigurableHTTPProxy.pid_file = Unicode('jupyterhub-proxy.pid')¶ File in which to write the PID of the proxy process.
-
should_start
c.ConfigurableHTTPProxy.should_start = Bool(True)¶ Should the Hub start the proxy
If True, the Hub will start the proxy and stop it. Set to False if the proxy is managed externally, such as by systemd, docker, or another service manager.
-
Users¶
Module: jupyterhub.user
¶
UserDict
¶-
class
jupyterhub.user.
UserDict
(db_factory, settings)¶ Like defaultdict, but for users
Getting by a user id OR an orm.User instance returns a User wrapper around the orm user.
-
add
(orm_user)¶ Add a user to the UserDict
-
count_active_users
()¶ Count the number of user servers that are active/pending/ready
Returns dict with counts of active/pending/ready servers
-
delete
(key)¶ Delete a user from the cache and the database
-
Services¶
Module: jupyterhub.services.service
¶
A service is a process that talks to JupyterHub.
- Types of services:
- Managed:
managed by JupyterHub (always subprocess, no custom Spawners)
always a long-running process
managed services are restarted automatically if they exit unexpectedly
- Unmanaged:
managed by external service (docker, systemd, etc.)
do not need to be long-running processes, or processes at all
- URL: needs a route added to the proxy.
Public route will always be /services/service-name
url specified in config
if port is 0, Hub will select a port
- API access:
admin: tokens will have admin-access to the API
not admin: tokens will only have non-admin access (not much they can do other than defer to Hub for auth)
An externally managed service running on a URL:
{
'name': 'my-service',
'url': 'https://host:8888',
'admin': True,
'api_token': 'super-secret',
}
A hub-managed service with no URL:
{
'name': 'cull-idle',
'command': ['python', '/path/to/cull-idle'],
'admin': True,
}
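In a deployment, these specifications typically go in the c.JupyterHub.services list in jupyterhub_config.py; a hedged sketch combining both examples above (names, URL, and token are placeholders):
c.JupyterHub.services = [
    {
        'name': 'my-service',
        'url': 'https://host:8888',
        'admin': True,
        'api_token': 'super-secret',  # shared with the externally managed service
    },
    {
        'name': 'cull-idle',
        'command': ['python', '/path/to/cull-idle'],
        'admin': True,
    },
]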
Service
¶-
class
jupyterhub.services.service.
Service
(**kwargs)¶ An object wrapping a service specification for Hub API consumers.
A service has inputs:
- name: str
the name of the service
- admin: bool(false)
whether the service should have administrative privileges
- url: str (None)
The URL where the service is/should be. If specified, the service will be added to the proxy at /services/:name
If a service is to be managed by the Hub, it has a few extra options:
- command: (str/Popen list)
Command for JupyterHub to spawn the service. Only use this if the service should be a subprocess. If command is not specified, the service is assumed to be managed externally.
- environment: dict
Additional environment variables for the service.
- user: str
The name of a system user to become. If unspecified, run as the same user as the Hub.
-
property
kind
¶ The name of the kind of service as a string
‘managed’ for managed services
‘external’ for external services
-
property
managed
¶ Am I managed by the Hub?
Services Authentication¶
Module: jupyterhub.services.auth
¶
Authenticating services with JupyterHub.
Cookies are sent to the Hub for verification. The Hub replies with a JSON model describing the authenticated user.
HubAuth
can be used in any application, even outside tornado.
HubAuthenticated
is a mixin class for tornado handlers that should
authenticate with the Hub.
HubAuth
¶-
class
jupyterhub.services.auth.
HubAuth
(**kwargs)¶ A class for authenticating with JupyterHub
This can be used by any application.
If using tornado, use via
HubAuthenticated
mixin. If using manually, use the .user_for_cookie(cookie_value)
method to identify the user corresponding to a given cookie value.
The following config must be set:
api_token (token for authenticating with JupyterHub API), fetched from the JUPYTERHUB_API_TOKEN env by default.
The following config MAY be set:
api_url: the base URL of the Hub’s internal API, fetched from JUPYTERHUB_API_URL by default.
cookie_cache_max_age: the number of seconds responses from the Hub should be cached.
login_url (the public
/hub/login
URL of the Hub).
cookie_name: the name of the cookie I should be using, if different from the default (unlikely).
-
api_token
c.HubAuth.api_token = Unicode('')¶ API key for accessing Hub API.
Generate with
jupyterhub token [username]
or add to JupyterHub.services config.
-
api_url
c.HubAuth.api_url = Unicode('http://127.0.0.1:8081/hub/api')¶ The base API URL of the Hub.
Typically
http://hub-ip:hub-port/hub/api
-
base_url
c.HubAuth.base_url = Unicode('/')¶ The base URL prefix of this application
e.g. /services/service-name/ or /user/name/
Default: get from JUPYTERHUB_SERVICE_PREFIX
-
cache_max_age
c.HubAuth.cache_max_age = Int(300)¶ The maximum time (in seconds) to cache the Hub’s responses for authentication.
A larger value reduces load on the Hub and occasional response lag. A smaller value reduces propagation time of changes on the Hub (rare).
Default: 300 (five minutes)
-
certfile
c.HubAuth.certfile = Unicode('')¶ The ssl cert to use for requests
Use with keyfile
-
client_ca
c.HubAuth.client_ca = Unicode('')¶ The ssl certificate authority to use to verify requests
Use with keyfile and certfile
-
cookie_name
c.HubAuth.cookie_name = Unicode('jupyterhub-services')¶ The name of the cookie I should be looking for
-
cookie_options
c.HubAuth.cookie_options = Dict()¶ Additional options to pass when setting cookies.
Can include things like expires_days=None for session-expiry or secure=True if served on HTTPS and default HTTPS discovery fails (e.g. behind some proxies).
-
hub_host
c.HubAuth.hub_host = Unicode('')¶ The public host of JupyterHub
Only used if JupyterHub is spreading servers across subdomains.
-
hub_prefix
c.HubAuth.hub_prefix = Unicode('/hub/')¶ The URL prefix for the Hub itself.
Typically /hub/
-
keyfile
c.HubAuth.keyfile = Unicode('')¶ The ssl key to use for requests
Use with certfile
-
login_url
c.HubAuth.login_url = Unicode('/hub/login')¶ The login URL to use
Typically /hub/login
-
get_session_id
(handler)¶ Get the jupyterhub session id
from the jupyterhub-session-id cookie.
-
get_token
(handler)¶ Get the user token from a request
in URL parameters: ?token=<token>
in header: Authorization: token <token>
-
get_user
(handler)¶ Get the Hub user for a given tornado handler.
Checks cookie with the Hub to identify the current user.
- Parameters
handler (tornado.web.RequestHandler) – the current request handler
- Returns
The user model, if a user is identified, None if authentication fails.
The ‘name’ field contains the user’s name.
- Return type
user_model (dict)
-
user_for_cookie
(encrypted_cookie, use_cache=True, session_id='')¶ Ask the Hub to identify the user for a given cookie.
- Parameters
- Returns
The user model, if a user is identified, None if authentication fails.
The ‘name’ field contains the user’s name.
- Return type
user_model (dict)
-
user_for_token
(token, use_cache=True, session_id='')¶ Ask the Hub to identify the user for a given token.
HubOAuth
¶-
class
jupyterhub.services.auth.
HubOAuth
(**kwargs)¶ HubAuth using OAuth for login instead of cookies set by the Hub.
-
api_token
c.HubOAuth.api_token = Unicode('')¶ API key for accessing Hub API.
Generate with
jupyterhub token [username]
or add to JupyterHub.services config.
-
api_url
c.HubOAuth.api_url = Unicode('http://127.0.0.1:8081/hub/api')¶ The base API URL of the Hub.
Typically
http://hub-ip:hub-port/hub/api
-
base_url
c.HubOAuth.base_url = Unicode('/')¶ The base URL prefix of this application
e.g. /services/service-name/ or /user/name/
Default: get from JUPYTERHUB_SERVICE_PREFIX
-
cache_max_age
c.HubOAuth.cache_max_age = Int(300)¶ The maximum time (in seconds) to cache the Hub’s responses for authentication.
A larger value reduces load on the Hub and occasional response lag. A smaller value reduces propagation time of changes on the Hub (rare).
Default: 300 (five minutes)
-
certfile
c.HubOAuth.certfile = Unicode('')¶ The ssl cert to use for requests
Use with keyfile
-
client_ca
c.HubOAuth.client_ca = Unicode('')¶ The ssl certificate authority to use to verify requests
Use with keyfile and certfile
-
cookie_options
c.HubOAuth.cookie_options = Dict()¶ Additional options to pass when setting cookies.
Can include things like expires_days=None for session-expiry or secure=True if served on HTTPS and default HTTPS discovery fails (e.g. behind some proxies).
-
hub_host
c.HubOAuth.hub_host = Unicode('')¶ The public host of JupyterHub
Only used if JupyterHub is spreading servers across subdomains.
-
hub_prefix
c.HubOAuth.hub_prefix = Unicode('/hub/')¶ The URL prefix for the Hub itself.
Typically /hub/
-
keyfile
c.HubOAuth.keyfile = Unicode('')¶ The ssl key to use for requests
Use with certfile
-
login_url
c.HubOAuth.login_url = Unicode('/hub/login')¶ The login URL to use
Typically /hub/login
-
oauth_authorization_url
c.HubOAuth.oauth_authorization_url = Unicode('/hub/api/oauth2/authorize')¶ The URL to redirect to when starting the OAuth process
-
oauth_client_id
c.HubOAuth.oauth_client_id = Unicode('')¶ The OAuth client ID for this application.
Use JUPYTERHUB_CLIENT_ID by default.
-
oauth_redirect_uri
c.HubOAuth.oauth_redirect_uri = Unicode('')¶ OAuth redirect URI
Should generally be /base_url/oauth_callback
-
oauth_token_url
c.HubOAuth.oauth_token_url = Unicode('')¶ The URL for requesting an OAuth token from JupyterHub
Clear the OAuth cookie
Use OAuth client_id for cookie name
because we don’t want to use the same cookie name across OAuth clients.
-
generate_state
(next_url=None, **extra_state)¶ Generate a state string, given a next_url redirect target
- Parameters
next_url (str) –
- Returns
The base64-encoded state string.
- Return type
state (str)
-
get_next_url
(b64_state='')¶ Get the next_url for redirection, given an encoded OAuth state
Get the cookie name for oauth state, given an encoded OAuth state
Cookie name is stored in the state itself because the cookie name is randomized to deal with races between concurrent oauth sequences.
Set a cookie recording OAuth result
Generate an OAuth state and store it in a cookie
- Parameters
handler (RequestHandler) –
next_url (str) –
- Returns
The OAuth state that has been stored in the cookie (url safe, base64-encoded)
- Return type
state (str)
The cookie name for storing OAuth state
This cookie is only live for the duration of the OAuth handshake.
-
HubAuthenticated
¶-
class
jupyterhub.services.auth.
HubAuthenticated
¶ Mixin for tornado handlers that are authenticated with JupyterHub
A handler that mixes this in must have the following attributes/properties:
.hub_auth: A HubAuth instance
.hub_users: A set of usernames to allow. If left unspecified or None, username will not be checked.
.hub_groups: A set of group names to allow. If left unspecified or None, groups will not be checked.
Examples:
class MyHandler(HubAuthenticated, web.RequestHandler):
    hub_users = {'inara', 'mal'}

    def initialize(self, hub_auth):
        self.hub_auth = hub_auth

    @web.authenticated
    def get(self):
        ...
-
property
allow_all
¶ Property indicating that all successfully identified users or services should be allowed.
-
check_hub_user
(model)¶ Check whether Hub-authenticated user or service should be allowed.
Returns the input if the user should be allowed, None otherwise.
Override if you want to check anything other than the username’s presence in hub_users list.
-
get_current_user
()¶ Tornado’s authentication method
- Returns
The user model, if a user is identified, None if authentication fails.
- Return type
user_model (dict)
-
get_login_url
()¶ Return the Hub’s login URL
HubOAuthenticated
¶-
class
jupyterhub.services.auth.
HubOAuthenticated
¶ Simple subclass of HubAuthenticated using OAuth instead of old shared cookies
HubOAuthCallbackHandler
¶-
class
jupyterhub.services.auth.
HubOAuthCallbackHandler
(application: tornado.web.Application, request: tornado.httputil.HTTPServerRequest, **kwargs)¶ OAuth Callback handler
Finishes the OAuth flow, setting a cookie to record the user’s info.
Should be registered at
SERVICE_PREFIX/oauth_callback
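As a hedged sketch (not an official example), a small tornado service could combine HubOAuthenticated and HubOAuthCallbackHandler roughly like this, assuming the Hub has set JUPYTERHUB_SERVICE_PREFIX, JUPYTERHUB_API_TOKEN, and JUPYTERHUB_CLIENT_ID in the service's environment; the handler name and port are made up:
import json
import os

from tornado import ioloop, web

from jupyterhub.services.auth import HubOAuthCallbackHandler, HubOAuthenticated

prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')

class WhoAmIHandler(HubOAuthenticated, web.RequestHandler):
    @web.authenticated
    def get(self):
        # get_current_user() returns the user model dict provided by the Hub
        self.write(json.dumps(self.get_current_user()))

app = web.Application(
    [
        (prefix, WhoAmIHandler),
        (prefix + 'oauth_callback', HubOAuthCallbackHandler),
    ],
    cookie_secret=os.urandom(32),  # used for the OAuth state/session cookies
)

if __name__ == '__main__':
    app.listen(10101)  # the port is arbitrary for this sketch
    ioloop.IOLoop.current().start()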
Troubleshooting¶
Troubleshooting¶
When troubleshooting, you may see unexpected behaviors or receive an error message. This section provides links for identifying the cause of the problem and how to resolve it.
JupyterHub proxy fails to start
sudospawner fails to run
What is the default behavior when none of the lists (admin, whitelist, group whitelist) are set?
JupyterHub Docker container not accessible at localhost
500 error after spawning my single-user server
Use a chained SSL certificate
Install JupyterHub without a network connection
I want access to the whole filesystem, but still default users to their home directory
How do I increase the number of pySpark executors on YARN?
How do I use JupyterLab’s prerelease version with JupyterHub?
How do I set up JupyterHub for a workshop (when users are not known ahead of time)?
How do I set up rotating daily logs?
Toree integration with HDFS rack awareness script
Where do I find Docker images and Dockerfiles related to JupyterHub?
Behavior¶
JupyterHub proxy fails to start¶
If you have tried to start the JupyterHub proxy and it fails to start:
check if the JupyterHub IP configuration setting is
c.JupyterHub.ip = '*'
; if it is, tryc.JupyterHub.ip = ''
Try starting with
jupyterhub --ip=0.0.0.0
Note: If this occurs on Ubuntu/Debian, check that you are using a recent version of node. Some versions of Ubuntu/Debian come with a very old version of node, and it is necessary to update node.
sudospawner fails to run¶
If the sudospawner script is not found in the path, sudospawner will not run. To avoid this, specify sudospawner’s absolute path. For example, start jupyterhub with:
jupyterhub --SudoSpawner.sudospawner_path='/absolute/path/to/sudospawner'
or add:
c.SudoSpawner.sudospawner_path = '/absolute/path/to/sudospawner'
to the config file, jupyterhub_config.py
.
What is the default behavior when none of the lists (admin, whitelist, group whitelist) are set?¶
When nothing is given for these lists, there will be no admins, and all users who can authenticate on the system (i.e. all the unix users on the server with a password) will be allowed to start a server. The whitelist lets you limit this to a particular set of users, and the admin_users lets you specify who among them may use the admin interface (not necessary, unless you need to do things like inspect other users’ servers, or modify the userlist at runtime).
JupyterHub Docker container not accessible at localhost¶
Even though the command to start your Docker container exposes port 8000
(docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub
),
it is possible that the IP address itself is not accessible/visible. As a result
when you try http://localhost:8000 in your browser, you are unable to connect
even though the container is running properly. One workaround is to explicitly
tell JupyterHub to start at 0.0.0.0,
which is visible to everyone. Try this
command:
docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub --ip 0.0.0.0 --port 8000
Errors¶
500 error after spawning my single-user server¶
You receive a 500 error when accessing the URL /user/<your_name>/...
.
This is often seen when your single-user server cannot verify your user cookie
with the Hub.
There are two likely reasons for this:
The single-user server cannot connect to the Hub’s API (networking configuration problems)
The single-user server cannot authenticate its requests (invalid token)
The main symptom is a failure to load any page served by the single-user
server, met with a 500 error. This is typically the first page at /user/<your_name>
after logging in or clicking “Start my server”. When a single-user notebook server
receives a request, the notebook server makes an API request to the Hub to
check if the cookie corresponds to the right user. This request is logged.
If everything is working, the response logged will be similar to this:
200 GET /hub/api/authorizations/cookie/jupyterhub-token-name/[secret] (@10.0.1.4) 6.10ms
You should see a similar 200 message, as above, in the Hub log when you first visit your single-user notebook server. If you don’t see this message in the log, it may mean that your single-user notebook server isn’t connecting to your Hub.
If you see 403 (forbidden) like this, it’s a token problem:
403 GET /hub/api/authorizations/cookie/jupyterhub-token-name/[secret] (@10.0.1.4) 4.14ms
Check the logs of the single-user notebook server, which may have more detailed information on the cause.
If you make an API request and it is not received by the server, you likely
have a network configuration issue. Often, this happens when the Hub is only
listening on 127.0.0.1 (default) and the single-user servers are not on the
same ‘machine’ (can be physically remote, or in a docker container or VM). The
fix for this case is to make sure that c.JupyterHub.hub_ip
is an address
that all single-user servers can connect to, e.g.:
c.JupyterHub.hub_ip = '10.0.0.1'
If you receive a 403 error, the API token for the single-user server is likely invalid. Commonly, the 403 error is caused by resetting the JupyterHub database (either removing jupyterhub.sqlite or some other action) while leaving single-user servers running. This happens most frequently when using DockerSpawner, because Docker’s default behavior is to stop/start containers which resets the JupyterHub database, rather than destroying and recreating the container every time. This means that the same API token is used by the server for its whole life, until the container is rebuilt.
The fix for this Docker case is to remove any Docker containers seeing this issue (typically all containers created before a certain point in time):
docker rm -f jupyter-name
After this, when you start your server via JupyterHub, it will build a new container. If this was the underlying cause of the issue, you should see your server again.
How do I…?¶
Use a chained SSL certificate¶
Some certificate providers, i.e. Entrust, may provide you with a chained certificate that contains multiple files. If you are using a chained certificate you will need to concatenate the individual files by appending the chain cert and root cert to your host cert:
cat your_host.crt chain.crt root.crt > your_host-chained.crt
You would then set in your jupyterhub_config.py
file the ssl_key
and
ssl_cert
as follows:
c.JupyterHub.ssl_cert = 'your_host-chained.crt'
c.JupyterHub.ssl_key = 'your_host.key'
Your certificate provider gives you the following files: example_host.crt
,
Entrust_L1Kroot.txt
and Entrust_Root.txt
.
Concatenate the files appending the chain cert and root cert to your host cert:
cat example_host.crt Entrust_L1Kroot.txt Entrust_Root.txt > example_host-chained.crt
You would then use the example_host-chained.crt
as the value for
JupyterHub’s ssl_cert
. You may pass this value as a command line option
when starting JupyterHub or more conveniently set the ssl_cert
variable in
JupyterHub’s configuration file, jupyterhub_config.py
. In jupyterhub_config.py
,
set:
c.JupyterHub.ssl_cert = '/path/to/example_host-chained.crt'
c.JupyterHub.ssl_key = '/path/to/example_host.key'
where ssl_cert
is example-chained.crt and ssl_key to your private key.
Then restart JupyterHub.
See also JupyterHub SSL encryption.
Install JupyterHub without a network connection¶
Both conda and pip can be used without a network connection. You can make your own repository (directory) of conda packages and/or wheels, and then install from there instead of the internet.
For instance, you can install JupyterHub with pip and configurable-http-proxy with npmbox:
python3 -m pip wheel jupyterhub
npmbox configurable-http-proxy
I want access to the whole filesystem, but still default users to their home directory¶
Setting the following in jupyterhub_config.py
will configure access to
the entire filesystem and set the default to the user’s home directory.
c.Spawner.notebook_dir = '/'
c.Spawner.default_url = '/tree/home/{username}'  # {username} will be replaced with the username
How do I increase the number of pySpark executors on YARN?¶
From the command line, pySpark executors can be configured using a command similar to this one:
pyspark --total-executor-cores 2 --executor-memory 1G
Cloudera documentation for configuring spark on YARN applications provides additional information. The pySpark configuration documentation is also helpful for programmatic configuration examples.
How do I use JupyterLab’s prerelease version with JupyterHub?¶
While JupyterLab is still under active development, we have had users ask about how to try out JupyterLab with JupyterHub.
You need to install and enable the JupyterLab extension system-wide,
then you can change the default URL to /lab
.
For instance:
python3 -m pip install jupyterlab
jupyter serverextension enable --py jupyterlab --sys-prefix
The important thing is that jupyterlab is installed and enabled in the single-user notebook server environment. For system users, this means system-wide, as indicated above. For Docker containers, it means inside the single-user docker image, etc.
In jupyterhub_config.py
, configure the Spawner to tell the single-user
notebook servers to default to JupyterLab:
c.Spawner.default_url = '/lab'
How do I set up JupyterHub for a workshop (when users are not known ahead of time)?¶
Set up JupyterHub using OAuthenticator for GitHub authentication
Configure whitelist to be an empty list in
jupyterhub_config.py
Configure admin list to have workshop leaders be listed with administrator privileges.
Users will need a GitHub account to login and be authenticated by the Hub.
How do I set up rotating daily logs?¶
You can do this with logrotate,
or pipe to logger
to use syslog instead of directly to a file.
For example, with this logrotate config file:
/var/log/jupyterhub.log {
copytruncate
daily
}
and run this daily by putting a script in /etc/cron.daily/
:
logrotate /path/to/above-config
Or use syslog:
jupyterhub | logger -t jupyterhub
Troubleshooting commands¶
The following commands provide additional detail about installed packages, versions, and system information that may be helpful when troubleshooting a JupyterHub deployment. The commands are:
System and deployment information
jupyter troubleshooting
Kernel information
jupyter kernelspec list
Debug logs when running JupyterHub
jupyterhub --debug
Toree integration with HDFS rack awareness script¶
The Apache Toree kernel will have an issue when running with JupyterHub if the standard HDFS rack awareness script is used. This will materialize in the logs as a repeated WARN:
16/11/29 16:24:20 WARN ScriptBasedMapping: Exception running /etc/hadoop/conf/topology_script.py some.ip.address
ExitCodeException exitCode=1: File "/etc/hadoop/conf/topology_script.py", line 63
print rack
^
SyntaxError: Missing parentheses in call to 'print'
at `org.apache.hadoop.util.Shell.runCommand(Shell.java:576)`
To resolve this issue, there are two potential options:
Update HDFS core-site.xml so that the parameter “net.topology.script.file.name” points to a custom script (e.g. /etc/hadoop/conf/custom_topology_script.py). Copy the original script and change the first line to point to a Python 2 installation (e.g. /usr/bin/python).
In spark-env.sh, add a Python 2 installation to your path (e.g. export PATH=/opt/anaconda2/bin:$PATH).
About JupyterHub¶
Contributors¶
Project Jupyter thanks the following people for their help and contribution on JupyterHub:
adelcast
Analect
anderbubble
anikitml
ankitksharma
apetresc
athornton
barrachri
BerserkerTroll
betatim
Carreau
cfournie
charnpreetsingh
chicovenancio
cikao
ckald
cmoscardi
consideRatio
cqzlxl
CRegenschein
cwaldbieser
danielballen
danoventa
daradib
darky2004
datapolitan
dblockow-d2dcrc
DeepHorizons
DerekHeldtWerle
dhirschfeld
dietmarw
dingc3
dmartzol
DominicFollettSmith
dsblank
dtaniwaki
echarles
ellisonbg
emmanuel
evanlinde
Fokko
fperez
franga2000
GladysNalvarte
glenak1911
gweis
iamed18
jamescurtin
JamiesHQ
JasonJWilliamsNY
jbweston
jdavidheiser
jencabral
jhamrick
jkinkead
johnkpark
josephtate
jzf2101
karfai
kinuax
KrishnaPG
kroq-gar78
ksolan
mbmilligan
mgeplf
minrk
mistercrunch
Mistobaan
mpacer
mwmarkland
ndly
nthiery
nxg
ObiWahn
ozancaglayan
paccorsi
parente
PeterDaveHello
peterruppel
phill84
pjamason
prasadkatti
rafael-ladislau
rcthomas
rgbkrk
rkdarst
robnagler
rschroll
ryanlovett
sangramga
Scrypy
schon
shreddd
Siecje
smiller5678
spoorthyv
ssanderson
summerswallow
syutbai
takluyver
temogen
ThomasMChen
Thoralf Gutierrez
timfreund
TimShawver
tklever
Todd-Z-Li
toobaz
tsaeger
tschaume
vilhelmen
whitead
willingc
YannBrrd
yuvipanda
zoltan-fedor
zonca
A Gallery of JupyterHub Deployments¶
A JupyterHub Community Resource
We’ve compiled this list of JupyterHub deployments to help the community see the breadth and growth of JupyterHub’s use in education, research, and high performance computing.
Please submit pull requests to update information or to add new institutions or uses.
Academic Institutions, Research Labs, and Supercomputer Centers¶
University of California Berkeley¶
University of California Davis¶
Although not technically a JupyterHub deployment, this tutorial setup may be helpful to others in the Jupyter community.
Thank you C. Titus Brown for sharing this with the Software Carpentry mailing list.
* I started a big Amazon machine;
* I installed Docker and built a custom image containing my software of
interest;
* I ran multiple containers, one connected to port 8000, one on 8001,
etc. and gave each student a different port;
* students could connect in and use the Terminal program in Jupyter to
execute commands, and could upload/download files via the Jupyter
console interface;
* in theory I could have used notebooks too, but for this I didn’t have
need.
I am aware that JupyterHub can probably do all of this including manage
the containers, but I’m still a bit shy of diving into that; this was
fairly straightforward, gave me disposable containers that were isolated
for each individual student, and worked almost flawlessly. Should be
easy to do with RStudio too.
Cal Poly San Luis Obispo¶
jupyterhub-deploy-teaching based on work by Brian Granger for Cal Poly’s Data Science 301 Course
Clemson University¶
Advanced Computing
University of Colorado Boulder¶
(CU Research Computing) CURC
-
Slurm job dispatched on Crestone compute cluster
log troubleshooting
Profiles in IPython Clusters tab
-
Earth Lab at CU
George Washington University¶
Jupyter Hub with university single-sign-on. Deployed early 2017.
HTCondor¶
University of Illinois¶
https://datascience.business.illinois.edu (currently down; checked 04/26/19)
IllustrisTNG Simulation Project¶
MIT and Lincoln Labs¶
https://supercloud.mit.edu/
Michigan State University¶
University of Minnesota¶
University of Missouri¶
https://dsa.missouri.edu/faq/
Paderborn University¶
-
nbgraderutils: Use JupyterHub + nbgrader + iJava kernel for online Java exercises. Used in lecture Statistical Natural Language Processing.
Penn State University¶
Press release: “New open-source web apps available for students and faculty” (but Hub is currently down; checked 04/26/19)
University of Rochester CIRC¶
JupyterHub Userguide - Slurm, beehive
University of California San Diego¶
San Diego Supercomputer Center - Andrea Zonca
Educational Technology Services - Paul Jamason
TACC University of Texas¶
Texas A&M¶
Kristen Thyng - Oceanography
Service Providers¶
AWS¶
Google Cloud Platform¶
Everware¶
Everware Reproducible and reusable science powered by jupyterhub and docker. Like nbviewer, but executable. CERN, Geneva website
Microsoft Azure¶
https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-linux-dsvm-intro
Rackspace Carina¶
https://getcarina.com/blog/learning-how-to-whale/
http://carolynvanslyck.com/talk/carina/jupyterhub/#/
Hadoop¶
Miscellaneous¶
https://medium.com/@ybarraud/setting-up-jupyterhub-with-sudospawner-and-anaconda-844628c0dbee#.rm3yt87e1
https://groups.google.com/forum/#!topic/jupyter/nkPSEeMr8c0 Mailing list UT deployment
JupyterHub setup on Centos https://gist.github.com/johnrc/604971f7d41ebf12370bf5729bf3e0a4
Deploy JupyterHub to Docker Swarm https://jupyterhub.surge.sh/#/welcome
http://www.laketide.com/building-your-lab-part-3/
http://estrellita.hatenablog.com/entry/2015/07/31/083202
http://www.walkingrandomly.com/?p=5734
https://wrdrd.com/docs/consulting/education-technology
https://bitbucket.org/jackhale/fenics-jupyter
Network Technology Spark Cluster on OpenStack with Multi-User Jupyter Notebook