Mirror of https://github.com/caddyserver/caddy.git (synced 2025-05-24)

Commit 225c3e8b37 (parent fb3a9aed0f): Created v2: Documentation (v2:-Documentation.md, 629 lines)

This is a copy of the document we built very quickly for early testers of Caddy 2. It is a lot to take in, but it should help orient you! (We will likely be revising and improving these docs rapidly.)

Here's the CLI at a glance:

```
$ caddy start --config "path/to/caddy.json"

    Starts the Caddy process, optionally bootstrapped with an
    initial config file. Blocks until the server is successfully
    running (or fails to run), then returns.

$ caddy run --config "path/to/caddy.json"

    Like start, but blocks indefinitely.

$ caddy stop

    Stops the running Caddy process. (Note: this will stop
    any process named the same as the executable file.)

$ caddy version

    Prints the version.

$ caddy list-modules

    Prints the modules that are installed.

$ caddy environ

    Prints the environment as seen by caddy.
```

After starting Caddy, you can set/update its configuration by POSTing a
new JSON payload to it, for example:

```
$ curl -X POST \
    -d @caddy.json \
    -H "Content-Type: application/json" \
    "http://localhost:2019/load"
```

To configure Caddy, you need to know how Caddy 2 is structured:

```
{
  "apps": {
    "tls": {...},
    "http": {
      "servers": {
        "my_server": {
          ...
        },
        "your_server": {
          ...
        },
        ...
      }
    }
  }
}
```

At the top level of the config, there are process-wide options such as which storage
to use. Then there are "apps". Apps are like server types in Caddy 1. The HTTP server
is an app. In it, you define a list of servers, which you name. Each server has
listeners, routes, and other configuration.

There are two apps currently: "tls" and "http". We will discuss the "http" app first.

HTTP App
=========

Routes are not your traditional notion of routes (i.e. "GET /foo/bar" -> someFunc).
Routes in Caddy 2 are much more powerful. They are given in an ordered list, and each
route has three parts: match, apply, respond. All parts are optional. Match is the "if"
statement of each route. Each route takes a list of matcher sets. A matcher set is
comprised of matchers of various types. A matcher may have multiple values.
The boolean logic of request matching goes like this:

- Matcher sets are OR'ed (the first matching matcher set is sufficient)
- Matchers within a set are AND'ed (all matchers in the set must match)
- Values within a specific matcher are OR'ed (but this can vary depending on
  the matcher; some don't allow multiple values)

This design enables moderately complex logic such as:

    IF (Host = "example.com") OR (Host = "sub.example.com" AND Path = "/foo/bar")

The expressions in parentheses are matcher sets. Even more advanced logic can be
expressed through the Starlark expression matcher.

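For illustration, that condition could be written as a route's "match" list with two
matcher sets (a sketch using only the "host" and "path" matcher fields shown in the
big example later in this document):

```
"match": [
    {"host": ["example.com"]},
    {"host": ["sub.example.com"], "path": ["/foo/bar"]}
]
```
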
If a request matches a route, the route's middleware are applied to the request.
Unlike Caddy 1, middleware in Caddy 2 are chained in the order you specify, rather
than in a hard-coded order. (So be careful!) Then a responder, if defined, is what
actually writes the response.

All matching routes cascade on top of each other to create a "composite route" that is
customized for each request. Crucially, if multiple responders match a request, only the
first responder is used; the rest are ignored. This way it is impossible to corrupt the
response with multiple writes solely by configuration (a common bug in Caddy 1).

A good rule of thumb for building routes: keep middleware that deal with the request
near the beginning, and middleware that deal with the response near the end. Generally,
this will help ensure you put things in the right order (e.g. the encode middleware
must wrap the response writer, but you wouldn't want to execute templates on a
compressed bitstream, so you'd put the templates middleware later in the chain).

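Following that rule of thumb, encode comes earlier in the chain and templates later.
A sketch of such an "apply" chain, using fields from the middleware docs further down
(the specific values are illustrative):

```
"apply": [
    {"middleware": "encode", "encodings": {"gzip": {}}},
    {"middleware": "templates", "mime_types": ["text/html"]}
]
```
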
If a route returns an error, the error, along with its recommended status code, is
bubbled back to the HTTP server, which executes a separate error route, if specified.
The error routes work exactly like the normal routes, making error handling very
powerful and expressive.

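Error routes use the same structure as normal routes; for example, a catch-all error
page using the static responder and the error status placeholder described later
(a hypothetical sketch):

```
"errors": {
    "routes": [
        {
            "respond": {
                "responder": "static",
                "status_code_str": "{http.error.status_code}",
                "body": "Oops: {http.error.status_code}"
            }
        }
    ]
}
```
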
There is more to routing, such as grouping routes for exclusivity (i.e. radio buttons
instead of checkboxes); making routes terminal so they don't match any more later in
the list; and rehandling, which is like an internal redirect that restarts handling
the (likely modified) request. You can also omit matchers in a route to have it match
all requests. Phew! Lots to know.

Now then. The contents of caddy.json are up to you. Here's a contrived example to
demonstrate the fields you can use:

```
{
  "apps": {
    "http": {
      "http_port": 80,
      "https_port": 443,
      "grace_period": "10s",
      "servers": {
        "myserver": {
          "listen": [":8080"],
          "routes": [
            {
              "match": [{
                "host": ["example.com"],
                "path": ["/foo/bar", "*.ext"],
                "path_regexp": {
                  "name": "myre",
                  "pattern": "/foo/(.*)/bar"
                },
                "method": ["GET"],
                "query": {"param": ["value"]},
                "header": {"Field": ["foo"]},
                "header_regexp": {
                  "Field": {
                    "pattern": "foo(.*)-bar",
                    "name": "other"
                  }
                },
                "protocol": "grpc",
                "not": {
                  "path": ["/foo/bar"],
                  "...": "(any matchers in here will be negated)"
                },
                "remote_ip": {
                  "ranges": ["127.0.0.1", "192.168.0.1/24"]
                },
                "starlark_expr": "req.host == 'foo.com' || (req.host != 'example.com' && req.host != 'sub.example.com')"
              }],
              "apply": [
                {
                  "middleware": "rewrite",
                  "method": "FOO",
                  "uri": "/test/abc"
                },
                {
                  "middleware": "headers",
                  "response": {
                    "set": {
                      "Foo": ["bar"],
                      "Regexp": ["{http.matchers.path_regexp.myre.0}"]
                    }
                  }
                }
              ],
              "respond": {
                "responder": "static",
                "body": "Booyah: {http.request.method} {http.request.uri} Foo: {http.response.header.foo}"
              },
              "group": "exclusive",
              "terminal": true
            }
          ],
          "errors": {
            "routes": [ ... ]
          },
          "tls_connection_policies": [
            {
              "match": {
                "host": ["example.com"]
              },
              "alpn": ["..."],
              "cipher_suites": ["..."],
              "certificate_selection": {
                "policy": "enterprise",
                "subject.organization": "O1",
                "tag": "company1"
              }
            }
          ],
          "automatic_https": {
            "disabled": false,
            "disable_redirects": false,
            "skip": ["exclude", "these", "domains"],
            "skip_certificates": ["doesn't provision certs for these domains but still does redirects"]
          },
          "max_rehandles": 3
        }
      }
    }
  }
}
```

You can update the config any time by POSTing the updated payload to that endpoint. Try it!

Fun fact: the enterprise version allows GET/POST/PUT/PATCH/DELETE requests to any path
within your config structure to mutate (or get) only that part. For example:

```
PUT /config/apps/http/servers/myserver/routes/0/match/hosts "foo.example.com"
```

would add "foo.example.com" to the host matcher for the first route in myserver. This
makes Caddy's config truly dynamic, even for hand-crafted changes on the fly.

Here are some makeshift docs for what you can do. In general, we show all the parameters
that are available, but you can often omit parameters that you don't want or need to use.

Middleware:

- headers

```
{
  "middleware": "headers",
  "request": {
    "set": {
      "Field": ["overwrites"]
    },
    "add": {
      "Field": ["appends"]
    },
    "delete": ["Goodbye-Field"]
  },
  "response": {
    "set": {
      "Field": ["overwrites"]
    },
    "add": {
      "Field": ["appends"]
    },
    "delete": ["Goodbye-Field"],
    "deferred": true,
    "require": {
      "status_code": [2, 301],
      "headers": {
        "Foo": ["bar"]
      }
    }
  }
}
```

Changes to headers are applied immediately, except for the
response headers when "deferred" is true or when "require"
is set. In those cases, the changes are applied when the
headers are written to the response. Note that deferred
changes do not take effect if an error occurs later in the
middleware chain. The "require" property allows you to
conditionally manipulate response headers based on the
response that is about to be written.


- rewrite

- rewrite
|
||||
|
||||
{
|
||||
"middleware": "rewrite",
|
||||
"method": "FOO",
|
||||
"uri": "/new/path?param=val",
|
||||
"rehandle": true
|
||||
}
|
||||
|
||||
Rewrites the request URI or method. If "rehandle" is true,
|
||||
the request is rehandled after the rewrite (as if it had
|
||||
originally been received like that).
|
||||
|
||||
|
||||
- markdown
|
||||
|
||||
{
|
||||
"middleware": "markdown"
|
||||
}
|
||||
|
||||
Does nothing so far: very plain/simple markdown rendering
|
||||
using Blackfriday. Will be configurable. But unlike Caddy 1,
|
||||
this already allows rendering *any* response body as
|
||||
Markdown, whether it be from a proxied upstream or a static
|
||||
file server. This needs a lot more testing and development.
|
||||
|
||||
|
||||
- request_body
|
||||
|
||||
{
|
||||
"middleware": "request_body",
|
||||
"max_size": 1048576
|
||||
}
|
||||
|
||||
Limits the size of the request body if read by a later
|
||||
handler.
|
||||
|
||||
|
||||
- encode
|
||||
|
||||
{
|
||||
"middleware": "encode",
|
||||
"encodings": {
|
||||
"gzip": {"level": 5},
|
||||
"zstd": {}
|
||||
},
|
||||
"minimum_length": 512
|
||||
}
|
||||
|
||||
Compresses responses on-the-fly.
|
||||
|
||||
- templates
|
||||
|
||||
{
|
||||
"middleware": "templates",
|
||||
"file_root": "/var/www/mysite",
|
||||
"mime_types": ["text/html", "text/plain", "text/markdown"],
|
||||
"delimiters": ["{{", "}}"]
|
||||
}
|
||||
|
||||
Interprets the response as a template body, then
|
||||
executes the template and writes the response to
|
||||
the client. Template functions will be documented
|
||||
soon. There are functions to include other files,
|
||||
make sub-requests (virtual HTTP requests), render
|
||||
Markdown, manipulate headers, access the request
|
||||
fields, manipulate strings, do math, work with
|
||||
data structures, and more.
|
||||
|
||||
|
||||
Responders:

- reverse_proxy

```
"respond": {
  "responder": "reverse_proxy",
  "try_interval": "20s",
  "load_balance_type": "round_robin",
  "upstreams": [
    {
      "host": "http://localhost:8080",
      "fast_health_check_dur": "100ms",
      "health_check_dur": "10s"
    },
    {
      "host": "http://localhost:8081",
      "health_check_dur": "2s"
    },
    {
      "host": "http://localhost:8082",
      "health_check_path": "health"
    },
    {
      "host": "http://localhost:8083",
      "circuit_breaker": {
        "type": "status_ratio",
        "threshold": 0.5
      }
    }
  ]
}
```


- file_server

```
"respond": {
  "responder": "file_server",
  "root": "/path/to/site/root",
  "hide": ["/pretend/these/don't/exist"],
  "index_names": ["index.html", "index.txt"],
  "files": ["try", "these", "files"],
  "selection_policy": "largest_size",
  "rehandle": true,
  "fallback": [
    // more routes!
  ],
  "browse": {
    "template_file": "optional.tpl"
  }
}
```

The file server uses the request URI to build the target file
path by appending it to the root path.

The "files" parameter is like nginx's "try_files", except
you can specify how to select one from that list. The default
is "first_existing", but there are also "largest_size",
"smallest_size", and "most_recently_modified".

If "rehandle" is true and the request was mapped to a different
file than the URI path originally pointed to, the request will
be sent for rehandling (internal redirect). This includes using
an index file when a directory was requested, or using "files" to
try a file different from the URI path.

If no files were found to handle the request, "fallback" is
compiled and executed. These are routes just like what you're
used to defining.

If "browse" is specified, directory browsing will be enabled. It
should honor the "hide" list. To use the default template, just
leave it empty: {}.

The "hide" list can also use glob patterns like "*.hidden"
or "/foo/*/bar".


- static

```
"respond": {
  "responder": "static",
  "status_code": 307,
  "status_code_str": "{http.error.status_code}",
  "headers": {
    "Location": ["https://example.com/foo"]
  },
  "body": "Response body",
  "close": true
}
```

Responds to the request with a static/hard-coded response.
The status code can be expressed either as an integer or as
a string. Expressing it as a string allows you to use a
placeholder (variable). TODO: Should we just consolidate it
so it's always a string (we can convert "301" to an int)?
Only one representation should be used, not both.

You can set response headers, and you can also specify the
response body. If "close" is true, the connection with the
client will be closed after responding.

This is a great way to do HTTP redirects.

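For instance, a permanent redirect might look like this (a sketch using only the
fields shown above; the target URL is just an example):

```
"respond": {
    "responder": "static",
    "status_code": 301,
    "headers": {
        "Location": ["https://example.com/new-location"]
    },
    "close": true
}
```
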
You can use placeholders (variables) in many values, as well.
Depending on the context, these may be available:

```
{system.hostname}
{system.os}
{system.arch}
{system.slash}
{http.request.hostport}
{http.request.host}
{http.request.host.labels.N} (N is the index, counted from the right; i.e. with foo.example.com, 2 is "foo" and 1 is "example")
{http.request.port}
{http.request.scheme}
{http.request.uri}
{http.request.uri.path}
{http.request.uri.path.file}
{http.request.uri.path.dir}
{http.request.uri.query.param}
{http.request.header.field-name} (lower-cased field name)
{http.request.cookie.cookie_name}
{http.response.header.field-name} (lower-cased field name)
```

If using regexp matchers, capture groups (both named and numeric) are available as well:

```
{http.matchers.path_regexp.pattern_name.capture_group_name}
```

(Replace pattern_name with the name you gave the pattern, and replace capture_group_name
with the name or index number of the capture group.)

Placeholder performance needs to be improved. We are looking into this.

|
||||
Listeners can be defined as follows:
|
||||
|
||||
network/host:port-range
|
||||
|
||||
For example:
|
||||
|
||||
:8080
|
||||
127.0.0.1:8080
|
||||
localhost:8080
|
||||
localhost:8080-8085
|
||||
tcp/localhost:8080
|
||||
udp/localhost:9005
|
||||
unix//path/to/socket
|
||||
|
||||
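In a server config, these addresses go in the "listen" array, e.g. (a sketch using
addresses from the list above):

```
"listen": [":8080", "udp/localhost:9005"]
```
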
Oh! That reminds me: if you use a "host" matcher in your HTTP routes, Caddy 2
will use that to enable automatic HTTPS. Tada!

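So a minimal HTTPS-enabled config could be as small as this (a sketch combining fields
shown earlier; "mysite" and example.com stand in for your own names):

```
{
    "apps": {
        "http": {
            "servers": {
                "mysite": {
                    "listen": [":443"],
                    "routes": [
                        {
                            "match": [{"host": ["example.com"]}],
                            "respond": {
                                "responder": "static",
                                "body": "Hello, HTTPS!"
                            }
                        }
                    ]
                }
            }
        }
    }
}
```
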
TLS App
=========

Caddy's TLS app is an immensely powerful way to configure your server's security and
privacy policies. It enables you to load TLS certificates into the cache so they can
be used to complete TLS handshakes. You can customize how certificates are managed or
automated, and you can even configure how TLS session tickets are handled.

Most users will not need to configure the TLS app at all, since HTTPS is automatic
and on by default.

TLS is structured like this:

```
"tls": {
  "certificates": {},
  "automation": {},
  "session_tickets": {}
}
```

Here is a contrived example showing all fields:

```
"tls": {
  "certificates": {
    "load_files": [
      {"certificate": "cert.pem", "key": "key.pem", "format": "pem"}
    ],
    "load_folders": ["/var/all_my_certs"],
    "load_pem": [
      {
        "certificate": "-----BEGIN CERTIFICATE-----\nMIIFNTCCBB2gAw...",
        "key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEogIBAAKCA..."
      }
    ],
    "automate": ["example.com", "example.net"]
  },
  "automation": {
    "policies": [
      {
        "hosts": ["example.com"],
        "management": {
          "module": "acme",
          "ca": "https://acme-endpoint-here/",
          "email": "foo@bar.com",
          "key_type": "p256",
          "acme_timeout": "1m",
          "must_staple": false,
          "challenges": {
            "http": {
              "disabled": false,
              "alternate_port": 2080
            },
            "tls-alpn": {
              "disabled": false,
              "alternate_port": 2443
            },
            "dns": {
              "provider": "cloudflare",
              "auth_email": "me@mine.com",
              "auth_key": "foobar1234",
              "ttl": 3600,
              "propagation_timeout": "5m"
            }
          },
          "on_demand": false,
          "storage": { ... }
        }
      }
    ],
    "on_demand": {
      "rate_limit": {
        "interval": "1m",
        "burst": 3
      },
      "ask": "http://localhost:8123/cert_allowed"
    }
  },
  "session_tickets": {
    "disabled": false,
    "max_keys": 4,
    "key_source": {
      "provider": "distributed",
      "storage": { ... }
    },
    "disable_rotation": false,
    "rotation_interval": "12h"
  }
}
```

As you can see, there are 4 ways to load certificates:

- from individual files
- from folders
- from hard-coded/in-memory values (enterprise feature)
- from automated management

All loaded certificates get pooled into the same cache and may be used to complete
TLS handshakes for the relevant server names (SNI). Certificates loaded manually
(anything other than "automate") are not automatically managed and will have to
be refreshed manually before they expire.

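If you only need to load one manually-managed certificate, a minimal sketch (file
names are placeholders):

```
"tls": {
    "certificates": {
        "load_files": [
            {"certificate": "cert.pem", "key": "key.pem", "format": "pem"}
        ]
    }
}
```
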
Automation policies are an expressive way to configure how automated certificates
should be managed. You can optionally scope a policy to certain hostnames (SANs);
omitting the "hosts" field causes the policy to apply to all certificates. The first
matching policy is used.

Right now the only management module is "acme", which uses Let's Encrypt by default.
There are many fields you can configure, including the ability to customize or toggle
each challenge type, enable On-Demand TLS (which defers certificate management until
TLS-handshake-time), or customize the storage unit for those certificates only.

You can also configure On-Demand TLS which, because it obtains certificates for
arbitrary hostnames, should have some restrictions in place to prevent abuse. You can
customize rate limits, or give a URL to be queried to ask whether a domain name may
get a certificate. The URL will be augmented with a "domain" query string parameter
which specifies the hostname in question. The endpoint must return 200 OK if a
certificate is allowed; anything else will cause it to be denied. Redirects are not
followed.

Finally, you can customize TLS session ticket ephemeral keys (STEKs), including their
rotation and source. Enterprise users can have distributed STEKs, which improve TLS
performance across a cluster of load-balanced instances, since TLS sessions can always
be resumed no matter which instance handles the connection.


FIN.