Compare commits

55 Commits

5a57515c6b, 3490b9d35d, dd0a20a668, d13cef6ff6, 9f149977d6, e65544f414, 8010758b29, 75d92c6967,
0f67cc7818, f428a752a5, 3fe3784b6c, e4d63331cf, 8ae521db64, 24fde36b20, 7c55a4fd0c, 7b1c0a97f7,
8392846bd4, 07db6a2df1, cc9bb4b1f8, de91b99639, c582ea5ff0, b5de37e722, ee9032f0bf, 11a68ce7f9,
0dbac0af0d, 9541ee4cf6, 2958a67ce5, eebbf6ebbb, 0133178b84, 9af5ba34ae, 9b24e13418, 8fd880a3f1,
89700666b9, 343e0547df, a7d5e6ce4f, d342ae68d8, a87cd3fc2c, 28a4d65b38, 253dbfab94, 727f79f432,
6a56eb480b, 155d0900bb, a59a165cd7, 5d540be81b, 05f2449d84, 44fa364cdd, 04a25b841f, efbbff671c,
27c2c721ed, abe16d4480, 445bb8189b, ff7dfdcd43, 5e662c9dbd, 78d60b3651, 7a7992a639

@@ -1,3 +1,5 @@
dist/
vendor/
!dist/traefik
site/
**/*.test

2  .gitattributes  (vendored)
@@ -1 +1 @@
# vendor/github.com/go-acme/lego/providers/dns/cloudxns/cloudxns.go eol=crlf
glide.lock binary

24  .github/CODEOWNERS  (vendored)
@@ -1,24 +0,0 @@
provider/kubernetes/** @containous/kubernetes
provider/rancher/** @containous/rancher
provider/marathon/** @containous/marathon
provider/docker/** @containous/docker

docs/user-guide/kubernetes.md @containous/kubernetes
docs/user-guide/marathon.md @containous/marathon
docs/user-guide/swarm.md @containous/docker
docs/user-guide/swarm-mode.md @containous/docker

docs/configuration/backends/docker.md @containous/docker
docs/configuration/backends/kubernetes.md @containous/kubernetes
docs/configuration/backends/marathon.md @containous/marathon
docs/configuration/backends/rancher.md @containous/rancher

examples/k8s/ @containous/kubernetes
examples/compose-k8s.yaml @containous/kubernetes
examples/k8s.namespace.yaml @containous/kubernetes
examples/compose-rancher.yml @containous/rancher
examples/compose-marathon.yml @containous/marathon

vendor/github.com/gambol99/go-marathon @containous/marathon
vendor/github.com/rancher @containous/rancher
vendor/k8s.io/ @containous/kubernetes

146  .github/CONTRIBUTING.md  (vendored, new file)
@@ -0,0 +1,146 @@
# Contributing

### Building

You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` and `glide` (Method 2) in order to build traefik.

#### Method 1: Using `Docker` and `Makefile`

You need to run the `binary` target. This will create binaries for the Linux platform in the `dist` folder.

```bash
$ make binary
docker build -t "traefik-dev:no-more-godep-ever" -f build.Dockerfile .
Sending build context to Docker daemon 295.3 MB
Step 0 : FROM golang:1.7
 ---> 8c6473912976
Step 1 : RUN go get github.com/Masterminds/glide
[...]
docker run --rm -v "/var/run/docker.sock:/var/run/docker.sock" -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/emile/dev/go/src/github.com/containous/traefik/"dist":/go/src/github.com/containous/traefik/"dist"" "traefik-dev:no-more-godep-ever" ./script/make.sh generate binary
---> Making bundle: generate (in .)
removed 'gen.go'

---> Making bundle: binary (in .)

$ ls dist/
traefik*
```

#### Method 2: Using `go` and `glide`

###### Setting up your `go` environment

- You need `go` v1.7+
- It is recommended you clone Træfɪk into a directory like `~/go/src/github.com/containous/traefik` (This is the official golang workspace hierarchy, and will allow dependencies to resolve properly)
- This will allow your `GOPATH` and `PATH` variables to be set to `~/go` via:
```
$ export GOPATH=~/go
$ export PATH=$PATH:$GOPATH/bin
```
  This can be verified via `$ go env`
- You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`
- You need `go-bindata` to be able to use the `go generate` command (needed to build): `$ go get github.com/jteeuwen/go-bindata/...` (Please note, the ellipses are required)
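
Putting the bullet points above together, a first-time environment setup might look like the following shell session (a sketch only; the workspace location is simply the recommended default):

```bash
# create the recommended workspace layout and point Go at it
export GOPATH=~/go
export PATH=$PATH:$GOPATH/bin

# clone Traefik into the official workspace hierarchy
mkdir -p $GOPATH/src/github.com/containous
git clone https://github.com/containous/traefik.git $GOPATH/src/github.com/containous/traefik

# go-bindata is required by `go generate`
go get github.com/jteeuwen/go-bindata/...
```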

###### Setting up your `glide` environment

- Glide can be installed either via homebrew: `$ brew install glide` or via the official glide script: `$ curl https://glide.sh/get | sh`

The idea behind `glide` is the following:

- when checking out a project, run `$ glide install -v` from the cloned directory to install (`go get …`) the dependencies in your `GOPATH`.
- if you need another dependency, import and use it in the source, and run `$ glide get github.com/Masterminds/cookoo` to save it in `vendor` and add it to your `glide.yaml`.

```bash
$ glide install --strip-vendor
# generate (Only required to integrate other components such as web dashboard)
$ go generate
# Standard go build
$ go build
# Using gox to build multiple platforms
$ gox "linux darwin" "386 amd64 arm" \
    -output="dist/traefik_{{.OS}}-{{.Arch}}"
# run other commands like tests
```

### Tests

##### Method 1: `Docker` and `make`

You can run unit tests using the `test-unit` target and the integration tests using the `test-integration` target.

```bash
$ make test-unit
docker build -t "traefik-dev:your-feature-branch" -f build.Dockerfile .
# […]
docker run --rm -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/vincent/src/github/vdemeester/traefik/dist:/go/src/github.com/containous/traefik/dist" "traefik-dev:your-feature-branch" ./script/make.sh generate test-unit
---> Making bundle: generate (in .)
removed 'gen.go'

---> Making bundle: test-unit (in .)
+ go test -cover -coverprofile=cover.out .
ok github.com/containous/traefik 0.005s coverage: 4.1% of statements

Test success
```

For development purposes, you can specify which tests to run by using:
```
# Run every test in the MyTest suite
TESTFLAGS="-check.f MyTestSuite" make test-integration

# Run the test "MyTest" in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.MyTest" make test-integration

# Run every test starting with "My", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.My" make test-integration

# Run every test ending with "Test", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
```

More: https://labix.org/gocheck

##### Method 2: `go` and `glide`

- Tests can be run from the cloned directory, by `$ go test ./...` which should return `ok`, similar to:
```
ok _/home/vincent/src/github/vdemeester/traefik 0.004s
```
- Note that `$ go test ./...` will run all tests (including the ones in the vendor directory for the dependencies that glide has fetched). If you only want to run the tests for traefik, use `$ go test $(glide novendor)` instead.

### Documentation

The [documentation site](http://docs.traefik.io/) is built with [mkdocs](http://mkdocs.org/).

First make sure you have python and pip installed:

```
$ python --version
Python 2.7.2
$ pip --version
pip 1.5.2
```

Then install mkdocs with pip:

```
$ pip install mkdocs
```

To test the documentation locally, run `mkdocs serve` in the root directory; this should start a server locally to preview your changes.

```
$ mkdocs serve
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
[I 160505 22:31:24 server:281] Serving on http://127.0.0.1:8000
[I 160505 22:31:24 handlers:59] Start watching changes
[I 160505 22:31:24 handlers:61] Start detecting changes
```

13  .github/ISSUE_TEMPLATE  (vendored, new file)
@@ -0,0 +1,13 @@
### What version of Traefik are you using (`traefik version`)?


### What is your environment & configuration (arguments, toml...)?


### What did you do?


### What did you expect to see?


### What did you see instead?

82
.github/ISSUE_TEMPLATE.md
vendored
@@ -1,82 +0,0 @@
|
||||
<!--
|
||||
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
|
||||
|
||||
The issue tracker is for reporting bugs and feature requests only.
|
||||
For end-user related support questions, please refer to one of the following:
|
||||
|
||||
- the Traefik community forum: https://community.containo.us/
|
||||
|
||||
-->
|
||||
|
||||
|
||||
### Do you want to request a *feature* or report a *bug*?
|
||||
|
||||
<!--
|
||||
If you intend to ask a support question: DO NOT FILE AN ISSUE.
|
||||
-->
|
||||
|
||||
### Did you try using a 1.7.x configuration for the version 2.0?
|
||||
|
||||
- [ ] Yes
|
||||
- [ ] No
|
||||
|
||||
<!--
|
||||
|
||||
If you just checked the "Yes" box, be aware that this is probably not a bug. The configurations between 1.X and 2.X are NOT compatible. Please have a look here https://docs.traefik.io/v2.0/getting-started/configuration-overview/.
|
||||
|
||||
-->
|
||||
|
||||
### What did you do?
|
||||
|
||||
<!--
|
||||
|
||||
HOW TO WRITE A GOOD ISSUE?
|
||||
|
||||
- Respect the issue template as much as possible.
|
||||
- The title should be short and descriptive.
|
||||
- Explain the conditions which led you to report this issue: the context.
|
||||
- The context should lead to something, an idea or a problem that you’re facing.
|
||||
- Remain clear and concise.
|
||||
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
|
||||
|
||||
-->
|
||||
|
||||
### What did you expect to see?
|
||||
|
||||
|
||||
|
||||
### What did you see instead?
|
||||
|
||||
|
||||
|
||||
### Output of `traefik version`: (_What version of Traefik are you using?_)
|
||||
|
||||
<!--
|
||||
For the Traefik Docker image:
|
||||
docker run [IMAGE] version
|
||||
ex: docker run traefik version
|
||||
|
||||
For the alpine Traefik Docker image:
|
||||
docker run [IMAGE] traefik version
|
||||
ex: docker run traefik traefik version
|
||||
-->
|
||||
|
||||
```
|
||||
(paste your output here)
|
||||
```
|
||||
|
||||
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
|
||||
|
||||
```toml
|
||||
# (paste your configuration here)
|
||||
```
|
||||
<!--
|
||||
Add more configuration information here.
|
||||
-->
|
||||
|
||||
|
||||
### If applicable, please paste the log output at DEBUG level (`--log.level=DEBUG` switch)
|
||||
|
||||
```
|
||||
(paste your output here)
|
||||
```
|
87
.github/ISSUE_TEMPLATE/Bug_report.md
vendored
@@ -1,87 +0,0 @@
|
||||
---
|
||||
name: Bug report
|
||||
about: Create a report to help us improve
|
||||
|
||||
---
|
||||
|
||||
<!--
|
||||
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
|
||||
|
||||
The issue tracker is for reporting bugs and feature requests only.
|
||||
For end-user related support questions, please refer to one of the following:
|
||||
|
||||
- the Traefik community forum: https://community.containo.us/
|
||||
|
||||
-->
|
||||
|
||||
|
||||
### Do you want to request a *feature* or report a *bug*?
|
||||
|
||||
Bug
|
||||
|
||||
### Did you try using a 1.7.x configuration for the version 2.0?
|
||||
|
||||
- [ ] Yes
|
||||
- [ ] No
|
||||
|
||||
<!--
|
||||
|
||||
If you just checked the "Yes" box, be aware that this is probably not a bug. The configurations between 1.X and 2.X are NOT compatible. Please have a look here https://docs.traefik.io/v2.0/getting-started/configuration-overview/.
|
||||
|
||||
-->
|
||||
|
||||
### What did you do?
|
||||
|
||||
<!--
|
||||
|
||||
HOW TO WRITE A GOOD BUG REPORT?
|
||||
|
||||
- Respect the issue template as much as possible.
|
||||
- The title should be short and descriptive.
|
||||
- Explain the conditions which led you to report this issue: the context.
|
||||
- The context should lead to something, an idea or a problem that you’re facing.
|
||||
- Remain clear and concise.
|
||||
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
|
||||
|
||||
-->
|
||||
|
||||
### What did you expect to see?
|
||||
|
||||
|
||||
|
||||
### What did you see instead?
|
||||
|
||||
|
||||
|
||||
### Output of `traefik version`: (_What version of Traefik are you using?_)
|
||||
|
||||
<!--
|
||||
For the Traefik Docker image:
|
||||
docker run [IMAGE] version
|
||||
ex: docker run traefik version
|
||||
|
||||
For the alpine Traefik Docker image:
|
||||
docker run [IMAGE] traefik version
|
||||
ex: docker run traefik traefik version
|
||||
-->
|
||||
|
||||
```
|
||||
(paste your output here)
|
||||
```
|
||||
|
||||
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
|
||||
|
||||
```toml
|
||||
# (paste your configuration here)
|
||||
```
|
||||
|
||||
<!--
|
||||
Add more configuration information here.
|
||||
-->
|
||||
|
||||
|
||||
### If applicable, please paste the log output in DEBUG level (`--log.level=DEBUG` switch)
|
||||
|
||||
```
|
||||
(paste your output here)
|
||||
```
|
35
.github/ISSUE_TEMPLATE/Feature_request.md
vendored
@@ -1,35 +0,0 @@
|
||||
---
|
||||
name: Feature request
|
||||
about: Suggest an idea for this project
|
||||
|
||||
---
|
||||
|
||||
<!--
|
||||
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
|
||||
|
||||
The issue tracker is for reporting bugs and feature requests only.
|
||||
For end-user related support questions, please refer to one of the following:
|
||||
|
||||
- the Traefik community forum: https://community.containo.us/
|
||||
|
||||
-->
|
||||
|
||||
|
||||
### Do you want to request a *feature* or report a *bug*?
|
||||
|
||||
Feature
|
||||
|
||||
### What did you expect to see?
|
||||
|
||||
<!--
|
||||
|
||||
HOW TO WRITE A GOOD ISSUE?
|
||||
|
||||
- Respect the issue template as much as possible.
|
||||
- The title should be short and descriptive.
|
||||
- Explain the conditions which led you to report this issue: the context.
|
||||
- The context should lead to something, an idea or a problem that you’re facing.
|
||||
- Remain clear and concise.
|
||||
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
|
||||
|
||||
-->
|
36
.github/PULL_REQUEST_TEMPLATE.md
vendored
@@ -1,36 +0,0 @@
|
||||
<!--
|
||||
PLEASE READ THIS MESSAGE.
|
||||
|
||||
HOW TO WRITE A GOOD PULL REQUEST?
|
||||
|
||||
- Make it small.
|
||||
- Do only one thing.
|
||||
- Avoid re-formatting.
|
||||
- Make sure the code builds.
|
||||
- Make sure all tests pass.
|
||||
- Add tests.
|
||||
- Write useful descriptions and titles.
|
||||
- Address review comments in terms of additional commits.
|
||||
- Do not amend/squash existing ones unless the PR is trivial.
|
||||
- Read the contributing guide: https://github.com/containous/traefik/blob/master/CONTRIBUTING.md.
|
||||
|
||||
-->
|
||||
|
||||
### What does this PR do?
|
||||
|
||||
<!-- A brief description of the change being made with this pull request. -->
|
||||
|
||||
|
||||
### Motivation
|
||||
|
||||
<!-- What inspired you to submit this pull request? -->
|
||||
|
||||
|
||||
### More
|
||||
|
||||
- [ ] Added/updated tests
|
||||
- [ ] Added/updated documentation
|
||||
|
||||
### Additional Notes
|
||||
|
||||
<!-- Anything else we should know when reviewing? -->
|
7  .github/PULL_REQUEST_TEMPLATE/mergeback.md  (vendored)
@@ -1,7 +0,0 @@
### What does this PR do?

Merge v{{.Version}} into master

### Motivation

Be sync.

7  .github/PULL_REQUEST_TEMPLATE/release.md  (vendored)
@@ -1,7 +0,0 @@
### What does this PR do?

Prepare release v{{.Version}}.

### Motivation

Create a new release.

26  .github/cpr.sh  (vendored, executable file)
@@ -0,0 +1,26 @@
#!/bin/sh
#
# git config --global alias.cpr '!sh .github/cpr.sh'

set -e # stop on error

usage="$(basename "$0") pr -- Checkout a Pull Request locally"

if [ "$#" -ne 1 ]; then
    echo "Illegal number of parameters"
    echo "$usage" >&2
    exit 1
fi

command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }

set -x # echo on

initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)

git remote add $remote git@github.com:$remote/traefik.git
git fetch $remote $branch
git checkout -t $remote/$branch

27  .github/rmpr.sh  (vendored, executable file)
@@ -0,0 +1,27 @@
#!/bin/sh
#
# git config --global alias.rmpr '!sh .github/rmpr.sh'

set -e # stop on error

usage="$(basename "$0") pr -- remove a Pull Request local branch & remote"

if [ "$#" -ne 1 ]; then
    echo "Illegal number of parameters"
    echo "$usage" >&2
    exit 1
fi

command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }

set -x # echo on

initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)

# clean
git checkout $initial
git branch -D $branch
git remote remove $remote

36  .github/rpr.sh  (vendored, executable file)
@@ -0,0 +1,36 @@
#!/bin/sh
#
# git config --global alias.rpr '!sh .github/rpr.sh'

set -e # stop on error

usage="$(basename "$0") pr remote/branch -- rebase a Pull Request against a remote branch"

if [ "$#" -ne 2 ]; then
    echo "Illegal number of parameters"
    echo "$usage" >&2
    exit 1
fi

command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }

set -x # echo on

initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
base=$2
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)

clean ()
{
    git checkout $initial
    .github/rmpr.sh $pr
}

trap clean EXIT

.github/cpr.sh $pr

git rebase $base
git push -f $remote $branch
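
The three helper scripts above are meant to be wired up as git aliases, as the comment at the top of each file shows. A typical session might then look like the sketch below; the PR number and base branch are placeholders:

```bash
# one-time alias setup (taken from the script headers)
git config --global alias.cpr '!sh .github/cpr.sh'
git config --global alias.rmpr '!sh .github/rmpr.sh'
git config --global alias.rpr '!sh .github/rpr.sh'

# check out pull request 1234 locally, review it, then clean up again
git cpr 1234
git rmpr 1234

# or rebase pull request 1234 onto a base branch and force-push it back;
# rpr.sh checks the PR out via cpr.sh and cleans up via rmpr.sh on exit
git rpr 1234 upstream/master
```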
27  .gitignore  (vendored)
@@ -1,18 +1,15 @@
.idea/
.intellij/
*.iml
.vscode/
.DS_Store
/dist
/webui/.tmp/
/site/
/docs/site/
/static/
/autogen/
/traefik
/traefik.toml
/traefik.yml
gen.go
.idea
.intellij
*.iml
traefik
traefik.toml
*.test
vendor/
static/
.vscode/
site/
*.log
*.exe
cover.out
vendor/
.DS_Store
|
@@ -1,92 +0,0 @@
|
||||
[run]
|
||||
deadline = "10m"
|
||||
skip-files = []
|
||||
|
||||
[linters-settings]
|
||||
|
||||
[linters-settings.govet]
|
||||
check-shadowing = false
|
||||
|
||||
[linters-settings.golint]
|
||||
min-confidence = 0.0
|
||||
|
||||
[linters-settings.gocyclo]
|
||||
min-complexity = 14.0
|
||||
|
||||
[linters-settings.maligned]
|
||||
suggest-new = true
|
||||
|
||||
[linters-settings.goconst]
|
||||
min-len = 3.0
|
||||
min-occurrences = 4.0
|
||||
|
||||
[linters-settings.misspell]
|
||||
locale = "US"
|
||||
|
||||
[linters-settings.funlen]
|
||||
lines = 230 # default 60
|
||||
statements = 120 # default 40
|
||||
|
||||
[linters]
|
||||
enable-all = true
|
||||
disable = [
|
||||
"gocyclo", # FIXME must be fixed
|
||||
"gosec",
|
||||
"dupl",
|
||||
"maligned",
|
||||
"lll",
|
||||
"unparam",
|
||||
"prealloc",
|
||||
"scopelint",
|
||||
"gochecknoinits",
|
||||
"gochecknoglobals",
|
||||
# "godox", # manage TODO FIXME ## wait for https://github.com/golangci/golangci-lint/issues/337
|
||||
"bodyclose", # Too many false-positive and panics.
|
||||
# "stylecheck", # skip because report issues related to some generated files. ## wait for https://github.com/golangci/golangci-lint/issues/337
|
||||
]
|
||||
|
||||
[issues]
|
||||
exclude-use-default = false
|
||||
max-per-linter = 0
|
||||
max-same-issues = 0
|
||||
exclude = [
|
||||
"SA1019: http.CloseNotifier is deprecated: the CloseNotifier interface predates Go's context package. New code should use Request.Context instead.", # FIXME must be fixed
|
||||
"Error return value of .((os\\.)?std(out|err)\\..*|.*Close|.*Flush|os\\.Remove(All)?|.*printf?|os\\.(Un)?Setenv). is not checked",
|
||||
"should have a package comment, unless it's in another file for this package",
|
||||
]
|
||||
[[issues.exclude-rules]]
|
||||
path = "(.+)_test.go"
|
||||
linters = ["goconst", "funlen"]
|
||||
[[issues.exclude-rules]]
|
||||
path = "integration/.+_test.go"
|
||||
text = "Error return value of `cmd\\.Process\\.Kill` is not checked"
|
||||
[[issues.exclude-rules]]
|
||||
path = "integration/(consul_catalog_test|constraint_test).go"
|
||||
text = "Error return value of `(s.deregisterService|s.deregisterAgentService)` is not checked"
|
||||
[[issues.exclude-rules]]
|
||||
path = "integration/grpc_test.go"
|
||||
text = "Error return value of `closer` is not checked"
|
||||
[[issues.exclude-rules]]
|
||||
path = "pkg/h2c/h2c.go"
|
||||
text = "Error return value of `rw.Write` is not checked"
|
||||
[[issues.exclude-rules]]
|
||||
path = "pkg/middlewares/recovery/recovery.go"
|
||||
text = "`logger` can be `github.com/stretchr/testify/assert.TestingT`"
|
||||
[[issues.exclude-rules]]
|
||||
path = "pkg/provider/docker/builder_test.go"
|
||||
text = "(U1000: func )?`(.+)` is unused"
|
||||
[[issues.exclude-rules]]
|
||||
path = "pkg/provider/kubernetes/builder_(endpoint|service)_test.go"
|
||||
text = "(U1000: func )?`(.+)` is unused"
|
||||
[[issues.exclude-rules]]
|
||||
path = "pkg/config/parser/.+_test.go"
|
||||
text = "U1000: field `(foo|fuu)` is unused"
|
||||
[[issues.exclude-rules]]
|
||||
path = "pkg/server/service/bufferpool.go"
|
||||
text = "SA6002: argument should be pointer-like to avoid allocations"
|
||||
[[issues.exclude-rules]]
|
||||
path = "cmd/configuration.go"
|
||||
text = "string `traefik` has (\\d) occurrences, make it a constant"
|
||||
[[issues.exclude-rules]] # FIXME must be fixed
|
||||
path = "cmd/context.go"
|
||||
text = "S1000: should use a simple channel send/receive instead of `select` with a single case"
|
@@ -1,58 +0,0 @@
|
||||
project_name: traefik
|
||||
|
||||
before:
|
||||
hooks:
|
||||
- go generate
|
||||
|
||||
builds:
|
||||
- binary: traefik
|
||||
|
||||
main: ./cmd/traefik/traefik.go
|
||||
env:
|
||||
- CGO_ENABLED=0
|
||||
ldflags:
|
||||
- -s -w -X github.com/containous/traefik/v2/pkg/version.Version={{.Version}} -X github.com/containous/traefik/v2/pkg/version.Codename={{.Env.CODENAME}} -X github.com/containous/traefik/v2/pkg/version.BuildDate={{.Date}}
|
||||
|
||||
goos:
|
||||
- linux
|
||||
- darwin
|
||||
- windows
|
||||
- freebsd
|
||||
- openbsd
|
||||
goarch:
|
||||
- amd64
|
||||
- 386
|
||||
- arm
|
||||
- arm64
|
||||
- ppc64le
|
||||
goarm:
|
||||
- 7
|
||||
- 6
|
||||
- 5
|
||||
ignore:
|
||||
- goos: darwin
|
||||
goarch: 386
|
||||
- goos: openbsd
|
||||
goarch: arm
|
||||
- goos: freebsd
|
||||
goarch: arm
|
||||
|
||||
changelog:
|
||||
skip: true
|
||||
|
||||
archives:
|
||||
- id: traefik
|
||||
name_template: '{{ .ProjectName }}_v{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ if .Arm }}v{{ .Arm }}{{ end }}'
|
||||
format: tar.gz
|
||||
format_overrides:
|
||||
- goos: windows
|
||||
format: zip
|
||||
files:
|
||||
- LICENSE.md
|
||||
- CHANGELOG.md
|
||||
|
||||
checksum:
|
||||
name_template: "{{ .ProjectName }}_v{{ .Version }}_checksums.txt"
|
||||
|
||||
release:
|
||||
disable: true
|
10  .pre-commit-config.yaml  (new file)
@@ -0,0 +1,10 @@
- repo: git://github.com/pre-commit/pre-commit-hooks
  sha: 44e1753f98b0da305332abe26856c3e621c5c439
  hooks:
    - id: detect-private-key
- repo: git://github.com/containous/pre-commit-hooks
  sha: 35e641b5107671e94102b0ce909648559e568d61
  hooks:
    - id: goFmt
    - id: goLint
    - id: goErrcheck
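
Assuming the standard [pre-commit](https://pre-commit.com) tool is installed, these hooks are enabled per clone with the usual commands; a minimal sketch:

```bash
pip install pre-commit        # the pre-commit framework itself
pre-commit install            # register the git hook in this clone
pre-commit run --all-files    # run detect-private-key, goFmt, goLint and goErrcheck once over the repo
```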
@@ -1,4 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
sudo rm -rf static
|
@@ -1,20 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -e
|
||||
|
||||
curl -O https://dl.google.com/go/go"${GO_VERSION}".linux-amd64.tar.gz
|
||||
|
||||
tar -xvf go"${GO_VERSION}".linux-amd64.tar.gz
|
||||
rm -rf go"${GO_VERSION}".linux-amd64.tar.gz
|
||||
|
||||
sudo mkdir -p /usr/local/golang/"${GO_VERSION}"/go
|
||||
sudo mv go /usr/local/golang/"${GO_VERSION}"/
|
||||
|
||||
sudo rm /usr/local/bin/go
|
||||
sudo chmod +x /usr/local/golang/"${GO_VERSION}"/go/bin/go
|
||||
sudo ln -s /usr/local/golang/"${GO_VERSION}"/go/bin/go /usr/local/bin/go
|
||||
|
||||
export GOROOT="/usr/local/golang/${GO_VERSION}/go"
|
||||
export GOTOOLDIR="/usr/local/golang/${GO_VERSION}/go/pkg/tool/linux_amd64"
|
||||
|
||||
go version
|
@@ -1,6 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
if [ -n "$SHOULD_TEST" ]; then ci_retry make pull-images; fi
|
||||
|
||||
if [ -n "$SHOULD_TEST" ]; then ci_retry make test-integration; fi
|
@@ -1,8 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
ci_retry make validate
|
||||
|
||||
if [ -n "$SHOULD_TEST" ]; then ci_retry make test-unit; fi
|
||||
|
||||
if [ -n "$SHOULD_TEST" ]; then make -j"${N_MAKE_JOBS}" crossbinary-default-parallel; fi
|
@@ -1,35 +0,0 @@
|
||||
# For personnal CI
|
||||
# mv /home/runner/workspace/src/github.com/<username>/ /home/runner/workspace/src/github.com/containous/
|
||||
# cd /home/runner/workspace/src/github.com/containous/traefik/
|
||||
for s in apache2 cassandra elasticsearch memcached mysql mongod postgresql sphinxsearch rethinkdb rabbitmq-server redis-server; do sudo service $s stop; done
|
||||
sudo swapoff -a
|
||||
sudo dd if=/dev/zero of=/swapfile bs=1M count=3072
|
||||
sudo mkswap /swapfile
|
||||
sudo swapon /swapfile
|
||||
sudo rm -rf /home/runner/.rbenv
|
||||
sudo rm -rf /usr/local/golang/{1.4.3,1.5.4,1.6.4,1.7.6,1.8.6,1.9.7,1.10.3,1.11}
|
||||
#export DOCKER_VERSION=18.06.3
|
||||
source .semaphoreci/vars
|
||||
if [ -z "${PULL_REQUEST_NUMBER}" ]; then SHOULD_TEST="-*-"; else TEMP_STORAGE=$(curl --silent https://patch-diff.githubusercontent.com/raw/containous/traefik/pull/${PULL_REQUEST_NUMBER}.diff | patch --dry-run -p1 -R || true); fi
|
||||
echo ${SHOULD_TEST}
|
||||
if [ -n "$TEMP_STORAGE" ]; then SHOULD_TEST=$(echo "$TEMP_STORAGE" | grep -Ev '(.md|.yaml|.yml)' || :); fi
|
||||
echo ${TEMP_STORAGE}
|
||||
echo ${SHOULD_TEST}
|
||||
#if [ -n "$SHOULD_TEST" ]; then sudo -E apt-get -yq update; fi
|
||||
#if [ -n "$SHOULD_TEST" ]; then sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*; fi
|
||||
if [ -n "$SHOULD_TEST" ]; then docker version; fi
|
||||
export GO_VERSION=1.12
|
||||
if [ -f "./go.mod" ]; then GO_VERSION="$(grep '^go .*' go.mod | awk '{print $2}')"; export GO_VERSION; fi
|
||||
#if [ "${GO_VERSION}" == '1.13' ]; then export GO_VERSION=1.13rc2; fi
|
||||
echo "Selected Go version: ${GO_VERSION}"
|
||||
|
||||
if [ -f "./.semaphoreci/golang.sh" ]; then ./.semaphoreci/golang.sh; fi
|
||||
if [ -f "./.semaphoreci/golang.sh" ]; then export GOROOT="/usr/local/golang/${GO_VERSION}/go"; fi
|
||||
if [ -f "./.semaphoreci/golang.sh" ]; then export GOTOOLDIR="/usr/local/golang/${GO_VERSION}/go/pkg/tool/linux_amd64"; fi
|
||||
go version
|
||||
|
||||
if [ -f "./go.mod" ]; then export GO111MODULE=on; fi
|
||||
if [ -f "./go.mod" ]; then export GOPROXY=https://proxy.golang.org; fi
|
||||
if [ -f "./go.mod" ]; then go mod download; fi
|
||||
|
||||
df
|
@@ -1,36 +0,0 @@
#!/usr/bin/env bash
set -e

export REPO='containous/traefik'

if VERSION=$(git describe --exact-match --abbrev=0 --tags);
then
    export VERSION
else
    export VERSION=''
fi

export CODENAME=montdor

export N_MAKE_JOBS=2


function ci_retry {

    local NRETRY=3
    local NSLEEP=5
    local n=0

    until [ $n -ge $NRETRY ]
    do
        "$@" && break
        n=$((n+1))
        echo "${*} failed, attempt ${n}/${NRETRY}"
        sleep $NSLEEP
    done

    [ $n -lt $NRETRY ]

}

export -f ci_retry
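
For context, `ci_retry` simply re-runs the given command up to `NRETRY` times, sleeping `NSLEEP` seconds between attempts; the other Semaphore CI scripts in this changeset wrap their flaky steps with it, for example:

```bash
ci_retry make validate       # retried up to 3 times with a 5 second pause
ci_retry make test-unit
ci_retry make pull-images
```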
67
.travis.yml
@@ -1,39 +1,51 @@
|
||||
sudo: required
|
||||
dist: trusty
|
||||
|
||||
git:
|
||||
depth: false
|
||||
|
||||
services:
|
||||
- docker
|
||||
|
||||
env:
|
||||
global:
|
||||
- REPO=$TRAVIS_REPO_SLUG
|
||||
- VERSION=$TRAVIS_TAG
|
||||
- CODENAME=montdor
|
||||
- GO111MODULE=on
|
||||
- secure: btt4r13t09gQlHb6gYrvGC2yGCMMHfnp1Mz1RQedc4Mpf/FfT8aE6xmK2a2i9CCvskjrP0t/BFaS4yxIURjnFRn+ugQIEa0pLspB9UJArW/vgOSpIWM9/OQ/fg8z5XuMxN6Md4DL1/iLypMNSageA1x0TRdt89+D1N1dALpg5XRCXLFbC84TLi0gjlFuib9ibPKzEhLT+anCRJ6iZMzeupDSoaCVbAtJMoDvXw4+4AcRZ1+k4MybBLyCib5boaEOt4pTT88mz4Kk0YaMwPVJyg9Qv36VqyUcPS09Yd95LuyVQ4+tZt8Y1ccbIzULsK+sLM3hLCzxlmlpN3dQBlZJiiRtQde0mgGAKyC0P0A1XjuDTywcsa5edB+fTk1Dsewz9xZ9V0NmMz8t+UNZnaSsAPga9i86jULbXUUwMVSzVRc+Xgx02liB/8qI1xYC9FM6ilStt7rn7mF0k3KbiWhcptgeXjO6Lah9FjEKd5w4MXsdUSTi/86rQaLo+kj+XdaTrXCTulKHyRyQEUj+8V1w0oVz7pcGjePHd7y5oU9ByifVQy6sytuFBfRZvugM5bKHo+i0pcWvixrZS42DrzwxZJsspANOvqSe5ifVbvOkfUppQdCBIwptxV5N1b49XPKU3W/w34QJ8xGmKp3TFA7WwVCztriFHjPgiRpB3EG99Bg=
|
||||
- REPO: $TRAVIS_REPO_SLUG
|
||||
- VERSION: $TRAVIS_TAG
|
||||
- CODENAME: morbier
|
||||
|
||||
matrix:
|
||||
fast_finish: true
|
||||
include:
|
||||
- env: DOCKER_VERSION=1.10.3
|
||||
- env: DOCKER_VERSION=1.12.6
|
||||
|
||||
before_install:
|
||||
- sudo -E apt-get -yq update
|
||||
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-engine=${DOCKER_VERSION}*
|
||||
install:
|
||||
- docker version
|
||||
- pip install --user -r requirements.txt
|
||||
before_script:
|
||||
- make validate
|
||||
- make binary
|
||||
script:
|
||||
- echo "Skipping tests... (Tests are executed on SemaphoreCI)"
|
||||
- if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then make docs; fi
|
||||
|
||||
- travis_retry make test-unit
|
||||
- travis_retry make test-integration
|
||||
after_failure:
|
||||
- docker ps
|
||||
after_success:
|
||||
- make crossbinary
|
||||
- make image
|
||||
before_deploy:
|
||||
- >
|
||||
if ! [ "$BEFORE_DEPLOY_RUN" ]; then
|
||||
export BEFORE_DEPLOY_RUN=1;
|
||||
sudo -E apt-get -yq update;
|
||||
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*;
|
||||
docker version;
|
||||
make build-image;
|
||||
if [ "$TRAVIS_TAG" ]; then
|
||||
make release-packages;
|
||||
fi;
|
||||
curl -sfL https://raw.githubusercontent.com/containous/structor/master/godownloader.sh | bash -s -- -b "${GOPATH}/bin" ${STRUCTOR_VERSION}
|
||||
structor -o containous -r traefik --dockerfile-url="https://raw.githubusercontent.com/containous/traefik/v1.7/docs.Dockerfile" --menu.js-url="https://raw.githubusercontent.com/containous/structor/master/traefik-menu.js.gotmpl" --rqts-url="https://raw.githubusercontent.com/containous/structor/master/requirements-override.txt" --force-edit-url --exp-branch=master --debug;
|
||||
fi
|
||||
|
||||
- mkdocs build --clean
|
||||
- tar cfz dist/traefik-${VERSION}.src.tar.gz --exclude-vcs --exclude dist .
|
||||
deploy:
|
||||
- provider: pages
|
||||
edge: true
|
||||
github_token: ${GITHUB_TOKEN}
|
||||
local_dir: site
|
||||
skip_cleanup: true
|
||||
on:
|
||||
repo: containous/traefik
|
||||
tags: true
|
||||
- provider: releases
|
||||
api_key: ${GITHUB_TOKEN}
|
||||
file: dist/traefik*
|
||||
@@ -48,11 +60,8 @@ deploy:
|
||||
on:
|
||||
repo: containous/traefik
|
||||
tags: true
|
||||
- provider: pages
|
||||
edge: false
|
||||
github_token: ${GITHUB_TOKEN}
|
||||
local_dir: site
|
||||
- provider: script
|
||||
script: sh script/deploy-docker.sh
|
||||
skip_cleanup: true
|
||||
on:
|
||||
repo: containous/traefik
|
||||
all_branches: true
|
||||
|
BIN
.travis/traefik.id_rsa.enc
Normal file
3197
CHANGELOG.md
@@ -2,11 +2,17 @@
|
||||
|
||||
## Our Pledge
|
||||
|
||||
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience,nationality, personal appearance, race, religion, or sexual identity and orientation.
|
||||
In the interest of fostering an open and welcoming environment, we as
|
||||
contributors and maintainers pledge to making participation in our project and
|
||||
our community a harassment-free experience for everyone, regardless of age, body
|
||||
size, disability, ethnicity, gender identity and expression, level of experience,
|
||||
nationality, personal appearance, race, religion, or sexual identity and
|
||||
orientation.
|
||||
|
||||
## Our Standards
|
||||
|
||||
Examples of behavior that contributes to creating a positive environment include:
|
||||
Examples of behavior that contributes to creating a positive environment
|
||||
include:
|
||||
|
||||
* Using welcoming and inclusive language
|
||||
* Being respectful of differing viewpoints and experiences
|
||||
@@ -16,36 +22,53 @@ Examples of behavior that contributes to creating a positive environment include
|
||||
|
||||
Examples of unacceptable behavior by participants include:
|
||||
|
||||
* The use of sexualized language or imagery and unwelcome sexual attention or advances
|
||||
* The use of sexualized language or imagery and unwelcome sexual attention or
|
||||
advances
|
||||
* Trolling, insulting/derogatory comments, and personal or political attacks
|
||||
* Public or private harassment
|
||||
* Publishing others' private information, such as a physical or electronic address, without explicit permission
|
||||
* Other conduct which could reasonably be considered inappropriate in a professional setting
|
||||
* Publishing others' private information, such as a physical or electronic
|
||||
address, without explicit permission
|
||||
* Other conduct which could reasonably be considered inappropriate in a
|
||||
professional setting
|
||||
|
||||
## Our Responsibilities
|
||||
|
||||
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
|
||||
Project maintainers are responsible for clarifying the standards of acceptable
|
||||
behavior and are expected to take appropriate and fair corrective action in
|
||||
response to any instances of unacceptable behavior.
|
||||
|
||||
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
|
||||
Project maintainers have the right and responsibility to remove, edit, or
|
||||
reject comments, commits, code, wiki edits, issues, and other contributions
|
||||
that are not aligned to this Code of Conduct, or to ban temporarily or
|
||||
permanently any contributor for other behaviors that they deem inappropriate,
|
||||
threatening, offensive, or harmful.
|
||||
|
||||
## Scope
|
||||
|
||||
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community.
|
||||
Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
|
||||
Representation of a project may be further defined and clarified by project maintainers.
|
||||
This Code of Conduct applies both within project spaces and in public spaces
|
||||
when an individual is representing the project or its community. Examples of
|
||||
representing a project or community include using an official project e-mail
|
||||
address, posting via an official social media account, or acting as an appointed
|
||||
representative at an online or offline event. Representation of a project may be
|
||||
further defined and clarified by project maintainers.
|
||||
|
||||
## Enforcement
|
||||
|
||||
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at contact@containo.us
|
||||
All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances.
|
||||
The project team is obligated to maintain confidentiality with regard to the reporter of an incident.
|
||||
Instances of abusive, harassing, or otherwise unacceptable behavior may be
|
||||
reported by contacting the project team at contact@containo.us
|
||||
All complaints will be reviewed and investigated and will result in a response that
|
||||
is deemed necessary and appropriate to the circumstances. The project team is
|
||||
obligated to maintain confidentiality with regard to the reporter of an incident.
|
||||
Further details of specific enforcement policies may be posted separately.
|
||||
|
||||
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
|
||||
Project maintainers who do not follow or enforce the Code of Conduct in good
|
||||
faith may face temporary or permanent repercussions as determined by other
|
||||
members of the project's leadership.
|
||||
|
||||
## Attribution
|
||||
|
||||
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
|
||||
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
|
||||
available at [http://contributor-covenant.org/version/1/4][version]
|
||||
|
||||
[homepage]: http://contributor-covenant.org
|
||||
[version]: http://contributor-covenant.org/version/1/4/
|
||||
[version]: http://contributor-covenant.org/version/1/4/
|
||||
|
@@ -1,3 +0,0 @@
# Contributing

See <https://docs.traefik.io/v2.0/contributing/thank-you/>.

@@ -2,5 +2,4 @@ FROM scratch
COPY script/ca-certificates.crt /etc/ssl/certs/
COPY dist/traefik /
EXPOSE 80
VOLUME ["/tmp"]
ENTRYPOINT ["/traefik"]

@@ -1,6 +1,6 @@
The MIT License (MIT)

Copyright (c) 2016-2018 Containous SAS
Copyright (c) 2016 Containous SAS, Emile Vauge, emile@vauge.com

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
||||
|
179
Makefile
@@ -1,22 +1,4 @@
|
||||
.PHONY: all docs docs-serve
|
||||
|
||||
SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/')
|
||||
|
||||
TAG_NAME := $(shell git tag -l --contains HEAD)
|
||||
SHA := $(shell git rev-parse HEAD)
|
||||
VERSION_GIT := $(if $(TAG_NAME),$(TAG_NAME),$(SHA))
|
||||
VERSION := $(if $(VERSION),$(VERSION),$(VERSION_GIT))
|
||||
|
||||
BIND_DIR := "dist"
|
||||
|
||||
GIT_BRANCH := $(subst heads/,,$(shell git rev-parse --abbrev-ref HEAD 2>/dev/null))
|
||||
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(subst /,-,$(GIT_BRANCH)))
|
||||
|
||||
REPONAME := $(shell echo $(REPO) | tr '[:upper:]' '[:lower:]')
|
||||
TRAEFIK_IMAGE := $(if $(REPONAME),$(REPONAME),"containous/traefik")
|
||||
|
||||
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -e "TEST_CONTAINER=1" -v "/var/run/docker.sock:/var/run/docker.sock")
|
||||
DOCKER_BUILD_ARGS := $(if $(DOCKER_VERSION), "--build-arg=DOCKER_VERSION=$(DOCKER_VERSION)",)
|
||||
.PHONY: all
|
||||
|
||||
TRAEFIK_ENVS := \
|
||||
-e OS_ARCH_ARG \
|
||||
@@ -24,128 +6,81 @@ TRAEFIK_ENVS := \
|
||||
-e TESTFLAGS \
|
||||
-e VERBOSE \
|
||||
-e VERSION \
|
||||
-e CODENAME \
|
||||
-e TESTDIRS \
|
||||
-e CI \
|
||||
-e CONTAINER=DOCKER # Indicator for integration tests that we are running inside a container.
|
||||
-e CODENAME
|
||||
|
||||
SRCS = $(shell git ls-files '*.go' | grep -v '^external/')
|
||||
|
||||
BIND_DIR := "dist"
|
||||
TRAEFIK_MOUNT := -v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/containous/traefik/$(BIND_DIR)"
|
||||
DOCKER_RUN_OPTS := $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
|
||||
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(DOCKER_RUN_OPTS)
|
||||
DOCKER_RUN_TRAEFIK_NOTTY := docker run $(INTEGRATION_OPTS) -i $(DOCKER_RUN_OPTS)
|
||||
|
||||
PRE_TARGET ?= build-dev-image
|
||||
GIT_BRANCH := $(subst heads/,,$(shell git rev-parse --abbrev-ref HEAD 2>/dev/null))
|
||||
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
|
||||
REPONAME := $(shell echo $(REPO) | tr '[:upper:]' '[:lower:]')
|
||||
TRAEFIK_IMAGE := $(if $(REPONAME),$(REPONAME),"containous/traefik")
|
||||
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -v "/var/run/docker.sock:/var/run/docker.sock")
|
||||
|
||||
DOCKER_BUILD_ARGS := $(if $(DOCKER_VERSION), "--build-arg=DOCKER_VERSION=$(DOCKER_VERSION)",)
|
||||
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
|
||||
|
||||
print-%: ; @echo $*=$($*)
|
||||
|
||||
default: binary
|
||||
|
||||
## Build Dev Docker image
|
||||
build-dev-image: dist
|
||||
all: generate-webui build ## validate all checks, build linux binary, run all tests\ncross non-linux binaries
|
||||
$(DOCKER_RUN_TRAEFIK) ./script/make.sh
|
||||
|
||||
binary: generate-webui build ## build the linux binary
|
||||
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary
|
||||
|
||||
crossbinary: generate-webui build ## cross build the non-linux binaries
|
||||
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate crossbinary
|
||||
|
||||
test: build ## run the unit and integration tests
|
||||
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit binary test-integration
|
||||
|
||||
test-unit: build ## run the unit tests
|
||||
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit
|
||||
|
||||
test-integration: build ## run the integration tests
|
||||
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-integration
|
||||
|
||||
validate: build ## validate gofmt, golint and go vet
|
||||
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-glide validate-gofmt validate-govet validate-golint validate-misspell
|
||||
|
||||
build: dist
|
||||
docker build $(DOCKER_BUILD_ARGS) -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
|
||||
|
||||
## Build Dev Docker image without cache
|
||||
build-dev-image-no-cache: dist
|
||||
build-webui:
|
||||
docker build -t traefik-webui -f webui/Dockerfile webui
|
||||
|
||||
build-no-cache: dist
|
||||
docker build --no-cache -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
|
||||
|
||||
## Create the "dist" directory
|
||||
shell: build ## start a shell inside the build env
|
||||
$(DOCKER_RUN_TRAEFIK) /bin/bash
|
||||
|
||||
image: build ## build a docker traefik image
|
||||
docker build -t $(TRAEFIK_IMAGE) .
|
||||
|
||||
dist:
|
||||
mkdir dist
|
||||
|
||||
## Build WebUI Docker image
|
||||
build-webui-image:
|
||||
docker build -t traefik-webui -f webui/Dockerfile webui
|
||||
run-dev:
|
||||
go generate
|
||||
go build
|
||||
./traefik
|
||||
|
||||
## Generate WebUI
|
||||
generate-webui: build-webui-image
|
||||
generate-webui: build-webui
|
||||
if [ ! -d "static" ]; then \
|
||||
mkdir -p static; \
|
||||
docker run --rm -v "$$PWD/static":'/src/static' traefik-webui npm run build:nc; \
|
||||
docker run --rm -v "$$PWD/static":'/src/static' traefik-webui chown -R $(shell id -u):$(shell id -g) ../static; \
|
||||
docker run --rm -v "$$PWD/static":'/src/static' traefik-webui npm run build; \
|
||||
echo 'For more informations show `webui/readme.md`' > $$PWD/static/DONT-EDIT-FILES-IN-THIS-DIRECTORY.md; \
|
||||
fi
|
||||
|
||||
## Build the linux binary
|
||||
binary: generate-webui $(PRE_TARGET)
|
||||
$(if $(PRE_TARGET),$(DOCKER_RUN_TRAEFIK)) ./script/make.sh generate binary
|
||||
lint:
|
||||
script/validate-golint
|
||||
|
||||
## Build the binary for the standard plaforms (linux, darwin, windows)
|
||||
crossbinary-default: generate-webui build-dev-image
|
||||
$(DOCKER_RUN_TRAEFIK_NOTTY) ./script/make.sh generate crossbinary-default
|
||||
|
||||
## Build the binary for the standard plaforms (linux, darwin, windows) in parallel
|
||||
crossbinary-default-parallel:
|
||||
$(MAKE) generate-webui
|
||||
$(MAKE) build-dev-image crossbinary-default
|
||||
|
||||
## Run the unit and integration tests
|
||||
test: build-dev-image
|
||||
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit binary test-integration
|
||||
|
||||
## Run the unit tests
|
||||
test-unit: $(PRE_TARGET)
|
||||
$(if $(PRE_TARGET),$(DOCKER_RUN_TRAEFIK)) ./script/make.sh generate test-unit
|
||||
|
||||
## Pull all images for integration tests
|
||||
pull-images:
|
||||
grep --no-filename -E '^\s+image:' ./integration/resources/compose/*.yml | awk '{print $$2}' | sort | uniq | xargs -P 6 -n 1 docker pull
|
||||
|
||||
## Run the integration tests
|
||||
test-integration: $(PRE_TARGET)
|
||||
$(if $(PRE_TARGET),$(DOCKER_RUN_TRAEFIK),TEST_CONTAINER=1) ./script/make.sh generate binary test-integration
|
||||
TEST_HOST=1 ./script/make.sh test-integration
|
||||
|
||||
## Validate code and docs
|
||||
validate-files: $(PRE_TARGET)
|
||||
$(if $(PRE_TARGET),$(DOCKER_RUN_TRAEFIK)) ./script/make.sh generate validate-lint validate-misspell
|
||||
bash $(CURDIR)/script/validate-shell-script.sh
|
||||
|
||||
## Validate code, docs, and vendor
|
||||
validate: $(PRE_TARGET)
|
||||
$(if $(PRE_TARGET),$(DOCKER_RUN_TRAEFIK)) ./script/make.sh generate validate-lint validate-misspell validate-vendor
|
||||
bash $(CURDIR)/script/validate-shell-script.sh
|
||||
|
||||
## Clean up static directory and build a Docker Traefik image
|
||||
build-image: binary
|
||||
rm -rf static
|
||||
docker build -t $(TRAEFIK_IMAGE) .
|
||||
|
||||
## Build a Docker Traefik image
|
||||
build-image-dirty: binary
|
||||
docker build -t $(TRAEFIK_IMAGE) .
|
||||
|
||||
## Start a shell inside the build env
|
||||
shell: build-dev-image
|
||||
$(DOCKER_RUN_TRAEFIK) /bin/bash
|
||||
|
||||
## Build documentation site
|
||||
docs:
|
||||
make -C ./docs docs
|
||||
|
||||
## Serve the documentation site localy
|
||||
docs-serve:
|
||||
make -C ./docs docs-serve
|
||||
|
||||
## Generate CRD clientset
|
||||
generate-crd:
|
||||
./script/update-generated-crd-code.sh
|
||||
|
||||
## Create packages for the release
|
||||
release-packages: generate-webui build-dev-image
|
||||
rm -rf dist
|
||||
$(DOCKER_RUN_TRAEFIK_NOTTY) goreleaser release --skip-publish
|
||||
$(DOCKER_RUN_TRAEFIK_NOTTY) tar cfz dist/traefik-${VERSION}.src.tar.gz \
|
||||
--exclude-vcs \
|
||||
--exclude .idea \
|
||||
--exclude .travis \
|
||||
--exclude .semaphoreci \
|
||||
--exclude .github \
|
||||
--exclude dist .
|
||||
$(DOCKER_RUN_TRAEFIK_NOTTY) chown -R $(shell id -u):$(shell id -g) dist/
|
||||
|
||||
## Format the Code
|
||||
fmt:
|
||||
gofmt -s -l -w $(SRCS)
|
||||
|
||||
run-dev:
|
||||
go generate
|
||||
GO111MODULE=on go build ./cmd/traefik
|
||||
./traefik
|
||||
help: ## this help
|
||||
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
|
||||
|
216  README.md
@@ -1,169 +1,169 @@

<p align="center">
<img src="docs/content/assets/img/traefik.logo.png" alt="Traefik" title="Traefik" />
<img src="docs/img/traefik.logo.png" alt="Træfɪk" title="Træfɪk" />
</p>

[](https://semaphoreci.com/containous/traefik)
[](https://travis-ci.org/containous/traefik)
[](https://docs.traefik.io)
[](http://goreportcard.com/report/containous/traefik)
[](https://microbadger.com/images/traefik)
[](https://github.com/containous/traefik/blob/master/LICENSE.md)
[](https://community.containo.us/)
[](https://twitter.com/intent/follow?screen_name=traefik)
[](https://traefik.herokuapp.com)
[](https://twitter.com/intent/follow?screen_name=traefikproxy)

Traefik (pronounced _traffic_) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy.
Traefik integrates with your existing infrastructure components ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), ...) and configures itself automatically and dynamically.
Pointing Traefik at your orchestrator should be the _only_ configuration step you need.

---

. **[Overview](#overview)** .
**[Features](#features)** .
**[Supported backends](#supported-backends)** .
**[Quickstart](#quickstart)** .
**[Web UI](#web-ui)** .
**[Documentation](#documentation)** .

. **[Support](#support)** .
**[Release cycle](#release-cycle)** .
**[Contributing](#contributing)** .
**[Maintainers](#maintainers)** .
**[Credits](#credits)** .

---

:warning: Please be aware that the old configurations for Traefik v1.X are NOT compatible with the v2.X config as of now. If you're running v2, please ensure you are using a [v2 configuration](https://docs.traefik.io/).
Træfɪk (pronounced like [traffic](https://speak-ipa.bearbin.net/speak.cgi?speak=%CB%88tr%C3%A6f%C9%AAk)) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Kubernetes](http://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Mesos](https://github.com/apache/mesos), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), [Eureka](https://github.com/Netflix/eureka), Rest API, file...) to manage its configuration automatically and dynamically.

## Overview

Imagine that you have deployed a bunch of microservices with the help of an orchestrator (like Swarm or Kubernetes) or a service registry (like etcd or consul).
Now you want users to access these microservices, and you need a reverse proxy.
Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
If you want your users to access some of your microservices from the Internet, you will have to use a reverse proxy and configure it using virtual hosts or prefix paths:

Traditional reverse-proxies require that you configure _each_ route that will connect paths and subdomains to _each_ microservice.
In an environment where you add, remove, kill, upgrade, or scale your services _many_ times a day, the task of keeping the routes up to date becomes tedious.
- domain `api.domain.com` will point the microservice `api` in your private network
- path `domain.com/web` will point the microservice `web` in your private network
- domain `backoffice.domain.com` will point the microservices `backoffice` in your private network, load-balancing between your multiple instances

**This is when Traefik can help you!**
But a microservices architecture is dynamic... Services are added, removed, killed or upgraded often, eventually several times a day.

Traefik listens to your service registry/orchestrator API and instantly generates the routes so your microservices are connected to the outside world -- without further intervention from your part.
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.

**Run Traefik and let it do the work for you!**
_(But if you'd rather configure some of your routes manually, Traefik supports that too!)_
Here enters Træfɪk.



Træfɪk can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
Routes to your services will be created instantly.

Run it and forget it!
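
To make that idea concrete, here is a minimal sketch using the v1-style Docker provider; the flags, the label syntax, and the `containous/whoami` demo image are illustrative assumptions, not part of this changeset:

```bash
# start Traefik and let it watch the local Docker daemon
docker run -d -p 80:80 -p 8080:8080 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    traefik --api --docker

# start a backend container; the label alone is enough for Traefik to create the route
docker run -d --label "traefik.frontend.rule=Host:whoami.docker.localhost" containous/whoami

# the new route is picked up immediately, with no reload or restart
curl -H "Host: whoami.docker.localhost" http://localhost
```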
|
||||
|
||||
|
||||
|
||||

|
||||
|
||||
## Features
|
||||
|
||||
- Continuously updates its configuration (No restarts!)
|
||||
- Supports multiple load balancing algorithms
|
||||
- Provides HTTPS to your microservices by leveraging [Let's Encrypt](https://letsencrypt.org) (wildcard certificates support)
|
||||
- Circuit breakers, retry
|
||||
- See the magic through its clean web UI
|
||||
- Websocket, HTTP/2, GRPC ready
|
||||
- Provides metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB)
|
||||
- Keeps access logs (JSON, CLF)
|
||||
- Fast
|
||||
- Exposes a Rest API
|
||||
- Packaged as a single binary file (made with :heart: with go) and available as a [tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image
|
||||
|
||||
|
||||
## Supported Backends
|
||||
|
||||
- [Docker](https://docs.traefik.io/providers/docker/) / [Swarm mode](https://docs.traefik.io/providers/docker/)
|
||||
- [Kubernetes](https://docs.traefik.io/providers/kubernetes-crd/)
|
||||
- [Marathon](https://docs.traefik.io/providers/marathon/)
|
||||
- [Rancher](https://docs.traefik.io/providers/rancher/) (Metadata)
|
||||
- [File](https://docs.traefik.io/configuration/backends/file)
|
||||
- [It's fast](http://docs.traefik.io/benchmarks)
|
||||
- No dependency hell, single binary made with go
|
||||
- Rest API
|
||||
- Multiple backends supported: Docker, Swarm, Kubernetes, Marathon, Mesos, Consul, Etcd, and more to come
|
||||
- Watchers for backends, can listen for changes in backends to apply a new configuration automatically
|
||||
- Hot-reloading of configuration. No need to restart the process
|
||||
- Graceful shutdown http connections
|
||||
- Circuit breakers on backends
|
||||
- Round Robin, rebalancer load-balancers
|
||||
- Rest Metrics
|
||||
- [Tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image included
|
||||
- SSL backends support
|
||||
- SSL frontend support (with SNI)
|
||||
- Clean AngularJS Web UI
|
||||
- Websocket support
|
||||
- HTTP/2 support
|
||||
- Retries requests on network errors
|
||||
- [Let's Encrypt](https://letsencrypt.org) support (Automatic HTTPS with renewal)
|
||||
- High Availability with cluster mode
|
||||
|
||||
## Quickstart
|
||||
|
||||
To get your hands on Traefik, you can use the [5-Minute Quickstart](http://docs.traefik.io/#the-traefik-quickstart-using-docker) in our documentation (you will need Docker).
|
||||
You can have a quick look at Træfɪk in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.
|
||||
|
||||
Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
|
||||
You will learn fundamental Træfɪk features and see some demos with Kubernetes.
|
||||
|
||||
[](https://www.youtube.com/watch?v=aFtpIShV60I)
|
||||
|
||||
Here is a talk (in French) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
|
||||
You will learn fundamental Træfɪk features and see some demos with Docker, Mesos/Marathon and Let's Encrypt.
|
||||
|
||||
[](http://www.youtube.com/watch?v=QvAz9mVx5TI)
|
||||
|
||||
## Web UI
|
||||
|
||||
You can access the simple HTML frontend of Traefik.
|
||||
You can access a simple HTML frontend of Træfik.
|
||||
|
||||

|
||||

|
||||

|
||||
|
||||
## Documentation
|
||||
## Plumbing
|
||||
|
||||
You can find the complete documentation at [https://docs.traefik.io](https://docs.traefik.io).
|
||||
A collection of contributions around Traefik can be found at [https://awesome.traefik.io](https://awesome.traefik.io).
|
||||
- [Oxy](https://github.com/vulcand/oxy): an awesome proxy library made by Mailgun guys
|
||||
- [Gorilla mux](https://github.com/gorilla/mux): famous request router
|
||||
- [Negroni](https://github.com/codegangsta/negroni): web middlewares made simple
|
||||
- [Manners](https://github.com/mailgun/manners): graceful shutdown of http.Handler servers
|
||||
- [Lego](https://github.com/xenolf/lego): the best [Let's Encrypt](https://letsencrypt.org) library in Go
|
||||
|
||||
:warning: If you're testing out v2, please ensure you are using the [v2 documentation](https://docs.traefik.io/).
|
||||
## Test it
|
||||
|
||||
## Support
|
||||
|
||||
To get community support, you can:
|
||||
- join the Traefik community forum: [](https://community.containo.us/)
|
||||
|
||||
If you need commercial support, please contact [Containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
|
||||
|
||||
## Download
|
||||
|
||||
- Grab the latest binary from the [releases](https://github.com/containous/traefik/releases) page and run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml) (a minimal configuration sketch is shown right after this list):
|
||||
- The simple way: grab the latest binary from the [releases](https://github.com/containous/traefik/releases) page and just run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
|
||||
|
||||
```shell
|
||||
./traefik --configFile=traefik.toml
|
||||
```
|
||||
|
||||
- Or use the official tiny Docker image and run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
|
||||
- Use the tiny Docker image:
|
||||
|
||||
```shell
|
||||
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
|
||||
```
|
||||
|
||||
- Or get the sources:
|
||||
- From sources:
|
||||
|
||||
```shell
|
||||
git clone https://github.com/containous/traefik
|
||||
```
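To make the sample configuration file mentioned above more concrete, here is a minimal, illustrative sketch of a v1-style `traefik.toml`. The keys below are assumptions chosen for illustration; the linked sample file remains the authoritative reference.

```toml
# Illustrative sketch only (not the official sample file):
# one HTTP entrypoint, the web UI on :8080, and the Docker provider.
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

[web]
address = ":8080"

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "docker.localhost"
watch = true
```

With the Docker provider enabled, Traefik v1 typically discovers containers through labels such as `traefik.frontend.rule=Host:api.domain.com` and starts routing to them without a restart.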
|
||||
|
||||
## Introductory Videos
|
||||
## Documentation
|
||||
|
||||
:warning: Please be aware that these videos are for v1.X. The old configurations for Traefik v1.X are NOT compatible with Traefik v2. If you're running v2, please ensure you are using a [v2 configuration](https://docs.traefik.io/).
|
||||
|
||||
Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at GopherCon 2017.
|
||||
You will learn Traefik basics in less than 10 minutes.
|
||||
|
||||
[](https://www.youtube.com/watch?v=RgudiksfL-k)
|
||||
|
||||
Here is a talk given by [Ed Robinson](https://github.com/errm) at [ContainerCamp UK](https://container.camp) conference.
|
||||
You will learn fundamental Traefik features and see some demos with Kubernetes.
|
||||
|
||||
[](https://www.youtube.com/watch?v=aFtpIShV60I)
|
||||
|
||||
## Maintainers
|
||||
|
||||
[Information about process and maintainers](docs/content/contributing/maintainers.md)
|
||||
You can find the complete documentation [here](https://docs.traefik.io).
|
||||
|
||||
## Contributing
|
||||
|
||||
If you'd like to contribute to the project, refer to the [contributing documentation](CONTRIBUTING.md).
|
||||
Please refer to [this section](.github/CONTRIBUTING.md).
|
||||
|
||||
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md).
|
||||
By participating in this project, you agree to abide by its terms.
|
||||
## Code Of Conduct
|
||||
|
||||
## Release Cycle
|
||||
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.
|
||||
|
||||
- We release a new version (e.g. 1.1.0, 1.2.0, 1.3.0) every other month.
|
||||
- Release Candidates are available before the release (e.g. 1.1.0-rc1, 1.1.0-rc2, 1.1.0-rc3, 1.1.0-rc4, before 1.1.0)
|
||||
- Bug-fixes (e.g. 1.1.1, 1.1.2, 1.2.1, 1.2.3) are released as needed (no additional features are delivered in those versions, bug-fixes only)
|
||||
## Support
|
||||
|
||||
Each version is supported until the next one is released (e.g. 1.1.x will be supported until 1.2.0 is out)
|
||||
You can join [](https://traefik.herokuapp.com) to get basic support.
|
||||
If you prefer commercial support, please contact [containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
|
||||
|
||||
We use [Semantic Versioning](http://semver.org/)
|
||||
## Træfɪk here and there
|
||||
|
||||
## Mailing lists
|
||||
These projects use Træfɪk internally. If your company uses Træfɪk, we would be glad to get your feedback :) Contact us on [](https://traefik.herokuapp.com)
|
||||
|
||||
- General announcements, new releases: mail at news+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/news)
|
||||
- Security announcements: mail at security+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/security).
|
||||
- Project [Mantl](https://mantl.io/) from Cisco
|
||||
|
||||

|
||||
> Mantl is a modern platform for rapidly deploying globally distributed services. A container orchestrator, docker, a network stack, something to pool your logs, something to monitor health, a sprinkle of service discovery and some automation.
|
||||
|
||||
- Project [Apollo](http://capgemini.github.io/devops/apollo/) from Cap Gemini
|
||||
|
||||

|
||||
> Apollo is an open source project to aid with building and deploying IAAS and PAAS services. It is particularly geared towards managing containerized applications across multiple hosts, and big data type workloads. Apollo leverages other open source components to provide basic mechanisms for deployment, maintenance, and scaling of infrastructure and applications.
|
||||
|
||||
## Partners
|
||||
|
||||
[](https://zenika.com)
|
||||
|
||||
Zenika is one of the leading providers of professional Open Source services and agile methodologies in
|
||||
Europe. We provide consulting, development, training and support for the world’s leading Open Source
|
||||
software products.
|
||||
|
||||
|
||||
[](https://aster.is)
|
||||
|
||||
Founded in 2014, Asteris creates next-generation infrastructure software for the modern datacenter. Asteris writes software that makes it easy for companies to implement continuous delivery and realtime data pipelines. We support the HashiCorp stack, along with Kubernetes, Apache Mesos, Spark and Kafka. We're core committers on mantl.io, consul-cli and mesos-consul.
|
||||
|
||||
## Maintainers
|
||||
|
||||
- Emile Vauge [@emilevauge](https://github.com/emilevauge)
|
||||
- Vincent Demeester [@vdemeester](https://github.com/vdemeester)
|
||||
- Russell Clare [@Russell-IO](https://github.com/Russell-IO)
|
||||
- Ed Robinson [@errm](https://github.com/errm)
|
||||
- Daniel Tomcej [@dtomcej](https://github.com/dtomcej)
|
||||
- Manuel Laufenberg [@SantoDE](https://github.com/SantoDE)
|
||||
|
||||
## Credits
|
||||
|
||||
Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the logo.
|
||||
|
||||
Traefik's logo is licensed under the Creative Commons 3.0 Attributions license.
|
||||
|
||||
Traefik's logo was inspired by the gopher stickers made by Takuya Ueda (https://twitter.com/tenntenn).
|
||||
The original Go gopher was designed by Renee French (http://reneefrench.blogspot.com/).
|
||||
Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the logo 
|
||||
|
245
acme/account.go
Normal file
@@ -0,0 +1,245 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"crypto"
|
||||
"crypto/rand"
|
||||
"crypto/rsa"
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"errors"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/xenolf/lego/acme"
|
||||
)
|
||||
|
||||
// Account is used to store lets encrypt registration info
|
||||
type Account struct {
|
||||
Email string
|
||||
Registration *acme.RegistrationResource
|
||||
PrivateKey []byte
|
||||
DomainsCertificate DomainsCertificates
|
||||
ChallengeCerts map[string]*ChallengeCert
|
||||
}
|
||||
|
||||
// ChallengeCert stores a challenge certificate
|
||||
type ChallengeCert struct {
|
||||
Certificate []byte
|
||||
PrivateKey []byte
|
||||
certificate *tls.Certificate
|
||||
}
|
||||
|
||||
// Init inits account struct
|
||||
func (a *Account) Init() error {
|
||||
err := a.DomainsCertificate.Init()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for _, cert := range a.ChallengeCerts {
|
||||
if cert.certificate == nil {
|
||||
certificate, err := tls.X509KeyPair(cert.Certificate, cert.PrivateKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
cert.certificate = &certificate
|
||||
}
|
||||
if cert.certificate.Leaf == nil {
|
||||
leaf, err := x509.ParseCertificate(cert.certificate.Certificate[0])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
cert.certificate.Leaf = leaf
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// NewAccount creates an account
|
||||
func NewAccount(email string) (*Account, error) {
|
||||
// Create a user. New accounts need an email and private key to start
|
||||
privateKey, err := rsa.GenerateKey(rand.Reader, 4096)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
domainsCerts := DomainsCertificates{Certs: []*DomainsCertificate{}}
|
||||
domainsCerts.Init()
|
||||
return &Account{
|
||||
Email: email,
|
||||
PrivateKey: x509.MarshalPKCS1PrivateKey(privateKey),
|
||||
DomainsCertificate: DomainsCertificates{Certs: domainsCerts.Certs},
|
||||
ChallengeCerts: map[string]*ChallengeCert{}}, nil
|
||||
}
|
||||
|
||||
// GetEmail returns email
|
||||
func (a *Account) GetEmail() string {
|
||||
return a.Email
|
||||
}
|
||||
|
||||
// GetRegistration returns lets encrypt registration resource
|
||||
func (a *Account) GetRegistration() *acme.RegistrationResource {
|
||||
return a.Registration
|
||||
}
|
||||
|
||||
// GetPrivateKey returns private key
|
||||
func (a *Account) GetPrivateKey() crypto.PrivateKey {
|
||||
if privateKey, err := x509.ParsePKCS1PrivateKey(a.PrivateKey); err == nil {
|
||||
return privateKey
|
||||
}
|
||||
log.Errorf("Cannot unmarshall private key %+v", a.PrivateKey)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Certificate is used to store certificate info
|
||||
type Certificate struct {
|
||||
Domain string
|
||||
CertURL string
|
||||
CertStableURL string
|
||||
PrivateKey []byte
|
||||
Certificate []byte
|
||||
}
|
||||
|
||||
// DomainsCertificates stores a certificate for multiple domains
|
||||
type DomainsCertificates struct {
|
||||
Certs []*DomainsCertificate
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) Len() int {
|
||||
return len(dc.Certs)
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) Swap(i, j int) {
|
||||
dc.Certs[i], dc.Certs[j] = dc.Certs[j], dc.Certs[i]
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) Less(i, j int) bool {
|
||||
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[j].Domains) {
|
||||
return dc.Certs[i].tlsCert.Leaf.NotAfter.After(dc.Certs[j].tlsCert.Leaf.NotAfter)
|
||||
}
|
||||
if dc.Certs[i].Domains.Main == dc.Certs[j].Domains.Main {
|
||||
return strings.Join(dc.Certs[i].Domains.SANs, ",") < strings.Join(dc.Certs[j].Domains.SANs, ",")
|
||||
}
|
||||
return dc.Certs[i].Domains.Main < dc.Certs[j].Domains.Main
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) removeDuplicates() {
|
||||
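// Sorting puts, for identical domain sets, the certificate with the latest expiry first, so the loop below always discards the shorter-lived duplicates.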
sort.Sort(dc)
|
||||
for i := 0; i < len(dc.Certs); i++ {
|
||||
for i2 := i + 1; i2 < len(dc.Certs); i2++ {
|
||||
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[i2].Domains) {
|
||||
// delete
|
||||
log.Warnf("Remove duplicate cert: %+v, expiration :%s", dc.Certs[i2].Domains, dc.Certs[i2].tlsCert.Leaf.NotAfter.String())
|
||||
dc.Certs = append(dc.Certs[:i2], dc.Certs[i2+1:]...)
|
||||
i2--
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Init inits DomainsCertificates
|
||||
func (dc *DomainsCertificates) Init() error {
|
||||
dc.lock.Lock()
|
||||
defer dc.lock.Unlock()
|
||||
for _, domainsCertificate := range dc.Certs {
|
||||
tlsCert, err := tls.X509KeyPair(domainsCertificate.Certificate.Certificate, domainsCertificate.Certificate.PrivateKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
domainsCertificate.tlsCert = &tlsCert
|
||||
if domainsCertificate.tlsCert.Leaf == nil {
|
||||
leaf, err := x509.ParseCertificate(domainsCertificate.tlsCert.Certificate[0])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
domainsCertificate.tlsCert.Leaf = leaf
|
||||
}
|
||||
}
|
||||
dc.removeDuplicates()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) renewCertificates(acmeCert *Certificate, domain Domain) error {
|
||||
dc.lock.Lock()
|
||||
defer dc.lock.Unlock()
|
||||
|
||||
for _, domainsCertificate := range dc.Certs {
|
||||
if reflect.DeepEqual(domain, domainsCertificate.Domains) {
|
||||
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
domainsCertificate.Certificate = acmeCert
|
||||
domainsCertificate.tlsCert = &tlsCert
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return errors.New("Certificate to renew not found for domain " + domain.Main)
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) addCertificateForDomains(acmeCert *Certificate, domain Domain) (*DomainsCertificate, error) {
|
||||
dc.lock.Lock()
|
||||
defer dc.lock.Unlock()
|
||||
|
||||
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cert := DomainsCertificate{Domains: domain, Certificate: acmeCert, tlsCert: &tlsCert}
|
||||
dc.Certs = append(dc.Certs, &cert)
|
||||
return &cert, nil
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) getCertificateForDomain(domainToFind string) (*DomainsCertificate, bool) {
|
||||
dc.lock.RLock()
|
||||
defer dc.lock.RUnlock()
|
||||
for _, domainsCertificate := range dc.Certs {
|
||||
domains := []string{}
|
||||
domains = append(domains, domainsCertificate.Domains.Main)
|
||||
domains = append(domains, domainsCertificate.Domains.SANs...)
|
||||
for _, domain := range domains {
|
||||
if domain == domainToFind {
|
||||
return domainsCertificate, true
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) exists(domainToFind Domain) (*DomainsCertificate, bool) {
|
||||
dc.lock.RLock()
|
||||
defer dc.lock.RUnlock()
|
||||
for _, domainsCertificate := range dc.Certs {
|
||||
if reflect.DeepEqual(domainToFind, domainsCertificate.Domains) {
|
||||
return domainsCertificate, true
|
||||
}
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
// DomainsCertificate contains a certificate for multiple domains
|
||||
type DomainsCertificate struct {
|
||||
Domains Domain
|
||||
Certificate *Certificate
|
||||
tlsCert *tls.Certificate
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificate) needRenew() bool {
|
||||
for _, c := range dc.tlsCert.Certificate {
|
||||
crt, err := x509.ParseCertificate(c)
|
||||
if err != nil {
|
||||
// If there's an error, we assume the cert is broken, and needs update
|
||||
return true
|
||||
}
|
||||
// <= 30 days left, renew certificate
|
||||
if crt.NotAfter.Before(time.Now().Add(time.Duration(24 * 30 * time.Hour))) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
607
acme/acme.go
Normal file
@@ -0,0 +1,607 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/tls"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
fmtlog "log"
|
||||
"os"
|
||||
"regexp"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/BurntSushi/ty/fun"
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/staert"
|
||||
"github.com/containous/traefik/cluster"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/eapache/channels"
|
||||
"github.com/xenolf/lego/acme"
|
||||
"github.com/xenolf/lego/providers/dns"
|
||||
)
|
||||
|
||||
var (
|
||||
// OSCPMustStaple enables OCSP stapling as from https://github.com/xenolf/lego/issues/270
|
||||
OSCPMustStaple = false
|
||||
)
|
||||
|
||||
// ACME allows to connect to lets encrypt and retrieve certs
|
||||
type ACME struct {
|
||||
Email string `description:"Email address used for registration"`
|
||||
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
|
||||
Storage string `description:"File or key used for certificates storage."`
|
||||
StorageFile string // deprecated
|
||||
OnDemand bool `description:"Enable on demand certificate. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."`
|
||||
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
|
||||
CAServer string `description:"CA server to use."`
|
||||
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
|
||||
DNSProvider string `description:"Use a DNS based challenge provider rather than HTTPS."`
|
||||
DelayDontCheckDNS int `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."`
|
||||
ACMELogging bool `description:"Enable debug logging of ACME actions."`
|
||||
client *acme.Client
|
||||
defaultCertificate *tls.Certificate
|
||||
store cluster.Store
|
||||
challengeProvider *challengeProvider
|
||||
checkOnDemandDomain func(domain string) bool
|
||||
jobs *channels.InfiniteChannel
|
||||
TLSConfig *tls.Config `description:"TLS config in case wildcard certs are used"`
|
||||
}
|
||||
|
||||
//Domains parse []Domain
|
||||
type Domains []Domain
|
||||
|
||||
//Set []Domain
|
||||
func (ds *Domains) Set(str string) error {
|
||||
fargs := func(c rune) bool {
|
||||
return c == ',' || c == ';'
|
||||
}
|
||||
// get function
|
||||
slice := strings.FieldsFunc(str, fargs)
|
||||
if len(slice) < 1 {
|
||||
return fmt.Errorf("Parse error ACME.Domain. Imposible to parse %s", str)
|
||||
}
|
||||
d := Domain{
|
||||
Main: slice[0],
|
||||
SANs: []string{},
|
||||
}
|
||||
if len(slice) > 1 {
|
||||
d.SANs = slice[1:]
|
||||
}
|
||||
*ds = append(*ds, d)
|
||||
return nil
|
||||
}
|
||||
|
||||
//Get []Domain
|
||||
func (ds *Domains) Get() interface{} { return []Domain(*ds) }
|
||||
|
||||
//String returns []Domain in string
|
||||
func (ds *Domains) String() string { return fmt.Sprintf("%+v", *ds) }
|
||||
|
||||
//SetValue sets []Domain into the parser
|
||||
func (ds *Domains) SetValue(val interface{}) {
|
||||
*ds = Domains(val.([]Domain))
|
||||
}
|
||||
|
||||
// Domain holds a domain name with SANs
|
||||
type Domain struct {
|
||||
Main string
|
||||
SANs []string
|
||||
}
|
||||
|
||||
func (a *ACME) init() error {
|
||||
if a.ACMELogging {
|
||||
acme.Logger = fmtlog.New(os.Stderr, "legolog: ", fmtlog.LstdFlags)
|
||||
} else {
|
||||
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
|
||||
}
|
||||
// no certificates in TLS config, so we add a default one
|
||||
cert, err := generateDefaultCertificate()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
a.defaultCertificate = cert
|
||||
// TODO: to remove in the future
|
||||
if len(a.StorageFile) > 0 && len(a.Storage) == 0 {
|
||||
log.Warnf("ACME.StorageFile is deprecated, use ACME.Storage instead")
|
||||
a.Storage = a.StorageFile
|
||||
}
|
||||
a.jobs = channels.NewInfiniteChannel()
|
||||
return nil
|
||||
}
|
||||
|
||||
// CreateClusterConfig creates a tls.config using ACME configuration in cluster mode
|
||||
func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
|
||||
err := a.init()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(a.Storage) == 0 {
|
||||
return errors.New("Empty Store, please provide a key for certs storage")
|
||||
}
|
||||
a.checkOnDemandDomain = checkOnDemandDomain
|
||||
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
|
||||
tlsConfig.GetCertificate = a.getCertificate
|
||||
a.TLSConfig = tlsConfig
|
||||
listener := func(object cluster.Object) error {
|
||||
account := object.(*Account)
|
||||
account.Init()
|
||||
if !leadership.IsLeader() {
|
||||
a.client, err = a.buildACMEClient(account)
|
||||
if err != nil {
|
||||
log.Errorf("Error building ACME client %+v: %s", object, err.Error())
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
datastore, err := cluster.NewDataStore(
|
||||
leadership.Pool.Ctx(),
|
||||
staert.KvSource{
|
||||
Store: leadership.Store,
|
||||
Prefix: a.Storage,
|
||||
},
|
||||
&Account{},
|
||||
listener)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
a.store = datastore
|
||||
a.challengeProvider = &challengeProvider{store: a.store}
|
||||
|
||||
ticker := time.NewTicker(24 * time.Hour)
|
||||
leadership.Pool.AddGoCtx(func(ctx context.Context) {
|
||||
log.Infof("Starting ACME renew job...")
|
||||
defer log.Infof("Stopped ACME renew job...")
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
case <-ticker.C:
|
||||
a.renewCertificates()
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
leadership.AddListener(func(elected bool) error {
|
||||
if elected {
|
||||
object, err := a.store.Load()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
transaction, object, err := a.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account := object.(*Account)
|
||||
account.Init()
|
||||
var needRegister bool
|
||||
if account == nil || len(account.Email) == 0 {
|
||||
account, err = NewAccount(a.Email)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
needRegister = true
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
a.client, err = a.buildACMEClient(account)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if needRegister {
|
||||
// New users will need to register; be sure to save it
|
||||
log.Debugf("Register...")
|
||||
reg, err := a.client.Register()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
}
|
||||
// The client has a URL to the current Let's Encrypt Subscriber
|
||||
// Agreement. The user will need to agree to it.
|
||||
log.Debugf("AgreeToTOS...")
|
||||
err = a.client.AgreeToTOS()
|
||||
if err != nil {
|
||||
// Let's Encrypt Subscriber Agreement renew ?
|
||||
reg, err := a.client.QueryRegistration()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
err = a.client.AgreeToTOS()
|
||||
if err != nil {
|
||||
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
|
||||
}
|
||||
}
|
||||
err = transaction.Commit(account)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
a.retrieveCertificates()
|
||||
a.renewCertificates()
|
||||
a.runJobs()
|
||||
}
|
||||
return nil
|
||||
})
|
||||
return nil
|
||||
}
|
||||
|
||||
// CreateLocalConfig creates a tls.config using local ACME configuration
|
||||
func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
|
||||
err := a.init()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(a.Storage) == 0 {
|
||||
return errors.New("Empty Store, please provide a filename for certs storage")
|
||||
}
|
||||
a.checkOnDemandDomain = checkOnDemandDomain
|
||||
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
|
||||
tlsConfig.GetCertificate = a.getCertificate
|
||||
a.TLSConfig = tlsConfig
|
||||
localStore := NewLocalStore(a.Storage)
|
||||
a.store = localStore
|
||||
a.challengeProvider = &challengeProvider{store: a.store}
|
||||
|
||||
var needRegister bool
|
||||
var account *Account
|
||||
|
||||
if fileInfo, fileErr := os.Stat(a.Storage); fileErr == nil && fileInfo.Size() != 0 {
|
||||
log.Infof("Loading ACME Account...")
|
||||
// load account
|
||||
object, err := localStore.Load()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account = object.(*Account)
|
||||
} else {
|
||||
log.Infof("Generating ACME Account...")
|
||||
account, err = NewAccount(a.Email)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
needRegister = true
|
||||
}
|
||||
|
||||
a.client, err = a.buildACMEClient(account)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if needRegister {
|
||||
// New users will need to register; be sure to save it
|
||||
log.Infof("Register...")
|
||||
reg, err := a.client.Register()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
}
|
||||
|
||||
// The client has a URL to the current Let's Encrypt Subscriber
|
||||
// Agreement. The user will need to agree to it.
|
||||
log.Debugf("AgreeToTOS...")
|
||||
err = a.client.AgreeToTOS()
|
||||
if err != nil {
|
||||
// Let's Encrypt Subscriber Agreement renew ?
|
||||
reg, err := a.client.QueryRegistration()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
err = a.client.AgreeToTOS()
|
||||
if err != nil {
|
||||
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
|
||||
}
|
||||
}
|
||||
// save account
|
||||
transaction, _, err := a.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = transaction.Commit(account)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
a.retrieveCertificates()
|
||||
a.renewCertificates()
|
||||
a.runJobs()
|
||||
|
||||
ticker := time.NewTicker(24 * time.Hour)
|
||||
safe.Go(func() {
|
||||
for range ticker.C {
|
||||
a.renewCertificates()
|
||||
}
|
||||
|
||||
})
|
||||
return nil
|
||||
}
|
||||
|
||||
func (a *ACME) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
|
||||
domain := types.CanonicalDomain(clientHello.ServerName)
|
||||
account := a.store.Get().(*Account)
|
||||
//use regex to test for wildcard certs that might have been added into TLSConfig
|
||||
for k := range a.TLSConfig.NameToCertificate {
|
||||
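// Turn a wildcard certificate name such as "*.example.com" into an anchored pattern (^.*\.?example.com$) so that both sub-domains and the bare domain can match.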
selector := "^" + strings.Replace(k, "*.", ".*\\.?", -1) + "$"
|
||||
match, _ := regexp.MatchString(selector, domain)
|
||||
if match {
|
||||
return a.TLSConfig.NameToCertificate[k], nil
|
||||
}
|
||||
}
|
||||
if challengeCert, ok := a.challengeProvider.getCertificate(domain); ok {
|
||||
log.Debugf("ACME got challenge %s", domain)
|
||||
return challengeCert, nil
|
||||
}
|
||||
if domainCert, ok := account.DomainsCertificate.getCertificateForDomain(domain); ok {
|
||||
log.Debugf("ACME got domain cert %s", domain)
|
||||
return domainCert.tlsCert, nil
|
||||
}
|
||||
if a.OnDemand {
|
||||
if a.checkOnDemandDomain != nil && !a.checkOnDemandDomain(domain) {
|
||||
return nil, nil
|
||||
}
|
||||
return a.loadCertificateOnDemand(clientHello)
|
||||
}
|
||||
log.Debugf("ACME got nothing %s", domain)
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (a *ACME) retrieveCertificates() {
|
||||
a.jobs.In() <- func() {
|
||||
log.Infof("Retrieving ACME certificates...")
|
||||
for _, domain := range a.Domains {
|
||||
// check if cert isn't already loaded
|
||||
account := a.store.Get().(*Account)
|
||||
if _, exists := account.DomainsCertificate.exists(domain); !exists {
|
||||
domains := []string{}
|
||||
domains = append(domains, domain.Main)
|
||||
domains = append(domains, domain.SANs...)
|
||||
certificateResource, err := a.getDomainsCertificates(domains)
|
||||
if err != nil {
|
||||
log.Errorf("Error getting ACME certificate for domain %s: %s", domains, err.Error())
|
||||
continue
|
||||
}
|
||||
transaction, object, err := a.store.Begin()
|
||||
if err != nil {
|
||||
log.Errorf("Error creating ACME store transaction from domain %s: %s", domain, err.Error())
|
||||
continue
|
||||
}
|
||||
account = object.(*Account)
|
||||
_, err = account.DomainsCertificate.addCertificateForDomains(certificateResource, domain)
|
||||
if err != nil {
|
||||
log.Errorf("Error adding ACME certificate for domain %s: %s", domains, err.Error())
|
||||
continue
|
||||
}
|
||||
|
||||
if err = transaction.Commit(account); err != nil {
|
||||
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
||||
log.Infof("Retrieved ACME certificates")
|
||||
}
|
||||
}
|
||||
|
||||
func (a *ACME) renewCertificates() {
|
||||
a.jobs.In() <- func() {
|
||||
log.Debugf("Testing certificate renew...")
|
||||
account := a.store.Get().(*Account)
|
||||
for _, certificateResource := range account.DomainsCertificate.Certs {
|
||||
if certificateResource.needRenew() {
|
||||
log.Debugf("Renewing certificate %+v", certificateResource.Domains)
|
||||
renewedCert, err := a.client.RenewCertificate(acme.CertificateResource{
|
||||
Domain: certificateResource.Certificate.Domain,
|
||||
CertURL: certificateResource.Certificate.CertURL,
|
||||
CertStableURL: certificateResource.Certificate.CertStableURL,
|
||||
PrivateKey: certificateResource.Certificate.PrivateKey,
|
||||
Certificate: certificateResource.Certificate.Certificate,
|
||||
}, true, OSCPMustStaple)
|
||||
if err != nil {
|
||||
log.Errorf("Error renewing certificate: %v", err)
|
||||
continue
|
||||
}
|
||||
log.Debugf("Renewed certificate %+v", certificateResource.Domains)
|
||||
renewedACMECert := &Certificate{
|
||||
Domain: renewedCert.Domain,
|
||||
CertURL: renewedCert.CertURL,
|
||||
CertStableURL: renewedCert.CertStableURL,
|
||||
PrivateKey: renewedCert.PrivateKey,
|
||||
Certificate: renewedCert.Certificate,
|
||||
}
|
||||
transaction, object, err := a.store.Begin()
|
||||
if err != nil {
|
||||
log.Errorf("Error renewing certificate: %v", err)
|
||||
continue
|
||||
}
|
||||
account = object.(*Account)
|
||||
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
|
||||
if err != nil {
|
||||
log.Errorf("Error renewing certificate: %v", err)
|
||||
continue
|
||||
}
|
||||
|
||||
if err = transaction.Commit(account); err != nil {
|
||||
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func dnsOverrideDelay(delay int) error {
|
||||
var err error
|
||||
if delay > 0 {
|
||||
log.Debugf("Delaying %d seconds rather than validating DNS propagation", delay)
|
||||
acme.PreCheckDNS = func(_, _ string) (bool, error) {
|
||||
time.Sleep(time.Duration(delay) * time.Second)
|
||||
return true, nil
|
||||
}
|
||||
} else if delay < 0 {
|
||||
err = fmt.Errorf("Invalid negative DelayDontCheckDNS: %d", delay)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func (a *ACME) buildACMEClient(account *Account) (*acme.Client, error) {
|
||||
log.Debugf("Building ACME client...")
|
||||
caServer := "https://acme-v01.api.letsencrypt.org/directory"
|
||||
if len(a.CAServer) > 0 {
|
||||
caServer = a.CAServer
|
||||
}
|
||||
client, err := acme.NewClient(caServer, account, acme.RSA4096)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if len(a.DNSProvider) > 0 {
|
||||
log.Debugf("Using DNS Challenge provider: %s", a.DNSProvider)
|
||||
|
||||
err = dnsOverrideDelay(a.DelayDontCheckDNS)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var provider acme.ChallengeProvider
|
||||
provider, err = dns.NewDNSChallengeProviderByName(a.DNSProvider)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.TLSSNI01})
|
||||
err = client.SetChallengeProvider(acme.DNS01, provider)
|
||||
} else {
|
||||
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.DNS01})
|
||||
err = client.SetChallengeProvider(acme.TLSSNI01, a.challengeProvider)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return client, nil
|
||||
}
|
||||
|
||||
func (a *ACME) loadCertificateOnDemand(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
|
||||
domain := types.CanonicalDomain(clientHello.ServerName)
|
||||
account := a.store.Get().(*Account)
|
||||
if certificateResource, ok := account.DomainsCertificate.getCertificateForDomain(domain); ok {
|
||||
return certificateResource.tlsCert, nil
|
||||
}
|
||||
certificate, err := a.getDomainsCertificates([]string{domain})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
log.Debugf("Got certificate on demand for domain %s", domain)
|
||||
|
||||
transaction, object, err := a.store.Begin()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
account = object.(*Account)
|
||||
cert, err := account.DomainsCertificate.addCertificateForDomains(certificate, Domain{Main: domain})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err = transaction.Commit(account); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return cert.tlsCert, nil
|
||||
}
|
||||
|
||||
// LoadCertificateForDomains loads certificates from ACME for given domains
|
||||
func (a *ACME) LoadCertificateForDomains(domains []string) {
|
||||
a.jobs.In() <- func() {
|
||||
log.Debugf("LoadCertificateForDomains %s...", domains)
|
||||
domains = fun.Map(types.CanonicalDomain, domains).([]string)
|
||||
operation := func() error {
|
||||
if a.client == nil {
|
||||
return fmt.Errorf("ACME client still not built")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Errorf("Error getting ACME client: %v, retrying in %s", err, time)
|
||||
}
|
||||
ebo := backoff.NewExponentialBackOff()
|
||||
ebo.MaxElapsedTime = 30 * time.Second
|
||||
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
|
||||
if err != nil {
|
||||
log.Errorf("Error getting ACME client: %v", err)
|
||||
return
|
||||
}
|
||||
account := a.store.Get().(*Account)
|
||||
var domain Domain
|
||||
if len(domains) == 0 {
|
||||
// no domain
|
||||
return
|
||||
|
||||
} else if len(domains) > 1 {
|
||||
domain = Domain{Main: domains[0], SANs: domains[1:]}
|
||||
} else {
|
||||
domain = Domain{Main: domains[0]}
|
||||
}
|
||||
if _, exists := account.DomainsCertificate.exists(domain); exists {
|
||||
// domain already exists
|
||||
return
|
||||
}
|
||||
certificate, err := a.getDomainsCertificates(domains)
|
||||
if err != nil {
|
||||
log.Errorf("Error getting ACME certificates %+v : %v", domains, err)
|
||||
return
|
||||
}
|
||||
log.Debugf("Got certificate for domains %+v", domains)
|
||||
transaction, object, err := a.store.Begin()
|
||||
|
||||
if err != nil {
|
||||
log.Errorf("Error creating transaction %+v : %v", domains, err)
|
||||
return
|
||||
}
|
||||
account = object.(*Account)
|
||||
_, err = account.DomainsCertificate.addCertificateForDomains(certificate, domain)
|
||||
if err != nil {
|
||||
log.Errorf("Error adding ACME certificates %+v : %v", domains, err)
|
||||
return
|
||||
}
|
||||
if err = transaction.Commit(account); err != nil {
|
||||
log.Errorf("Error Saving ACME account %+v: %v", account, err)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (a *ACME) getDomainsCertificates(domains []string) (*Certificate, error) {
|
||||
domains = fun.Map(types.CanonicalDomain, domains).([]string)
|
||||
log.Debugf("Loading ACME certificates %s...", domains)
|
||||
bundle := true
|
||||
certificate, failures := a.client.ObtainCertificate(domains, bundle, nil, OSCPMustStaple)
|
||||
if len(failures) > 0 {
|
||||
log.Error(failures)
|
||||
return nil, fmt.Errorf("Cannot obtain certificates %s+v", failures)
|
||||
}
|
||||
log.Debugf("Loaded ACME certificates %s", domains)
|
||||
return &Certificate{
|
||||
Domain: certificate.Domain,
|
||||
CertURL: certificate.CertURL,
|
||||
CertStableURL: certificate.CertStableURL,
|
||||
PrivateKey: certificate.PrivateKey,
|
||||
Certificate: certificate.Certificate,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (a *ACME) runJobs() {
|
||||
safe.Go(func() {
|
||||
for job := range a.jobs.Out() {
|
||||
function := job.(func())
|
||||
function()
|
||||
}
|
||||
})
|
||||
}
|
279
acme/acme_test.go
Normal file
@@ -0,0 +1,279 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"encoding/base64"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"reflect"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/xenolf/lego/acme"
|
||||
)
|
||||
|
||||
func TestDomainsSet(t *testing.T) {
|
||||
checkMap := map[string]Domains{
|
||||
"": {},
|
||||
"foo.com": {Domain{Main: "foo.com", SANs: []string{}}},
|
||||
"foo.com,bar.net": {Domain{Main: "foo.com", SANs: []string{"bar.net"}}},
|
||||
"foo.com,bar1.net,bar2.net,bar3.net": {Domain{Main: "foo.com", SANs: []string{"bar1.net", "bar2.net", "bar3.net"}}},
|
||||
}
|
||||
for in, check := range checkMap {
|
||||
ds := Domains{}
|
||||
ds.Set(in)
|
||||
if !reflect.DeepEqual(check, ds) {
|
||||
t.Errorf("Expected %+v\nGot %+v", check, ds)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestDomainsSetAppend(t *testing.T) {
|
||||
inSlice := []string{
|
||||
"",
|
||||
"foo1.com",
|
||||
"foo2.com,bar.net",
|
||||
"foo3.com,bar1.net,bar2.net,bar3.net",
|
||||
}
|
||||
checkSlice := []Domains{
|
||||
{},
|
||||
{
|
||||
Domain{
|
||||
Main: "foo1.com",
|
||||
SANs: []string{}}},
|
||||
{
|
||||
Domain{
|
||||
Main: "foo1.com",
|
||||
SANs: []string{}},
|
||||
Domain{
|
||||
Main: "foo2.com",
|
||||
SANs: []string{"bar.net"}}},
|
||||
{
|
||||
Domain{
|
||||
Main: "foo1.com",
|
||||
SANs: []string{}},
|
||||
Domain{
|
||||
Main: "foo2.com",
|
||||
SANs: []string{"bar.net"}},
|
||||
Domain{Main: "foo3.com",
|
||||
SANs: []string{"bar1.net", "bar2.net", "bar3.net"}}},
|
||||
}
|
||||
ds := Domains{}
|
||||
for i, in := range inSlice {
|
||||
ds.Set(in)
|
||||
if !reflect.DeepEqual(checkSlice[i], ds) {
|
||||
t.Errorf("Expected %s %+v\nGot %+v", in, checkSlice[i], ds)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCertificatesRenew(t *testing.T) {
|
||||
foo1Cert, foo1Key, _ := generateKeyPair("foo1.com", time.Now())
|
||||
foo2Cert, foo2Key, _ := generateKeyPair("foo2.com", time.Now())
|
||||
domainsCertificates := DomainsCertificates{
|
||||
lock: sync.RWMutex{},
|
||||
Certs: []*DomainsCertificate{
|
||||
{
|
||||
Domains: Domain{
|
||||
Main: "foo1.com",
|
||||
SANs: []string{}},
|
||||
Certificate: &Certificate{
|
||||
Domain: "foo1.com",
|
||||
CertURL: "url",
|
||||
CertStableURL: "url",
|
||||
PrivateKey: foo1Key,
|
||||
Certificate: foo1Cert,
|
||||
},
|
||||
},
|
||||
{
|
||||
Domains: Domain{
|
||||
Main: "foo2.com",
|
||||
SANs: []string{}},
|
||||
Certificate: &Certificate{
|
||||
Domain: "foo2.com",
|
||||
CertURL: "url",
|
||||
CertStableURL: "url",
|
||||
PrivateKey: foo2Key,
|
||||
Certificate: foo2Cert,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
foo1Cert, foo1Key, _ = generateKeyPair("foo1.com", time.Now())
|
||||
newCertificate := &Certificate{
|
||||
Domain: "foo1.com",
|
||||
CertURL: "url",
|
||||
CertStableURL: "url",
|
||||
PrivateKey: foo1Key,
|
||||
Certificate: foo1Cert,
|
||||
}
|
||||
|
||||
err := domainsCertificates.renewCertificates(
|
||||
newCertificate,
|
||||
Domain{
|
||||
Main: "foo1.com",
|
||||
SANs: []string{}})
|
||||
if err != nil {
|
||||
t.Errorf("Error in renewCertificates :%v", err)
|
||||
}
|
||||
if len(domainsCertificates.Certs) != 2 {
|
||||
t.Errorf("Expected domainsCertificates length %d %+v\nGot %+v", 2, domainsCertificates.Certs, len(domainsCertificates.Certs))
|
||||
}
|
||||
if !reflect.DeepEqual(domainsCertificates.Certs[0].Certificate, newCertificate) {
|
||||
t.Errorf("Expected new certificate %+v \nGot %+v", newCertificate, domainsCertificates.Certs[0].Certificate)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoveDuplicates(t *testing.T) {
|
||||
now := time.Now()
|
||||
fooCert, fooKey, _ := generateKeyPair("foo.com", now)
|
||||
foo24Cert, foo24Key, _ := generateKeyPair("foo.com", now.Add(24*time.Hour))
|
||||
foo48Cert, foo48Key, _ := generateKeyPair("foo.com", now.Add(48*time.Hour))
|
||||
barCert, barKey, _ := generateKeyPair("bar.com", now)
|
||||
domainsCertificates := DomainsCertificates{
|
||||
lock: sync.RWMutex{},
|
||||
Certs: []*DomainsCertificate{
|
||||
{
|
||||
Domains: Domain{
|
||||
Main: "foo.com",
|
||||
SANs: []string{}},
|
||||
Certificate: &Certificate{
|
||||
Domain: "foo.com",
|
||||
CertURL: "url",
|
||||
CertStableURL: "url",
|
||||
PrivateKey: foo24Key,
|
||||
Certificate: foo24Cert,
|
||||
},
|
||||
},
|
||||
{
|
||||
Domains: Domain{
|
||||
Main: "foo.com",
|
||||
SANs: []string{}},
|
||||
Certificate: &Certificate{
|
||||
Domain: "foo.com",
|
||||
CertURL: "url",
|
||||
CertStableURL: "url",
|
||||
PrivateKey: foo48Key,
|
||||
Certificate: foo48Cert,
|
||||
},
|
||||
},
|
||||
{
|
||||
Domains: Domain{
|
||||
Main: "foo.com",
|
||||
SANs: []string{}},
|
||||
Certificate: &Certificate{
|
||||
Domain: "foo.com",
|
||||
CertURL: "url",
|
||||
CertStableURL: "url",
|
||||
PrivateKey: fooKey,
|
||||
Certificate: fooCert,
|
||||
},
|
||||
},
|
||||
{
|
||||
Domains: Domain{
|
||||
Main: "bar.com",
|
||||
SANs: []string{}},
|
||||
Certificate: &Certificate{
|
||||
Domain: "bar.com",
|
||||
CertURL: "url",
|
||||
CertStableURL: "url",
|
||||
PrivateKey: barKey,
|
||||
Certificate: barCert,
|
||||
},
|
||||
},
|
||||
{
|
||||
Domains: Domain{
|
||||
Main: "foo.com",
|
||||
SANs: []string{}},
|
||||
Certificate: &Certificate{
|
||||
Domain: "foo.com",
|
||||
CertURL: "url",
|
||||
CertStableURL: "url",
|
||||
PrivateKey: foo48Key,
|
||||
Certificate: foo48Cert,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
domainsCertificates.Init()
|
||||
|
||||
if len(domainsCertificates.Certs) != 2 {
|
||||
t.Errorf("Expected domainsCertificates length %d %+v\nGot %+v", 2, domainsCertificates.Certs, len(domainsCertificates.Certs))
|
||||
}
|
||||
|
||||
for _, cert := range domainsCertificates.Certs {
|
||||
switch cert.Domains.Main {
|
||||
case "bar.com":
|
||||
continue
|
||||
case "foo.com":
|
||||
if !cert.tlsCert.Leaf.NotAfter.Equal(now.Add(48 * time.Hour).Truncate(1 * time.Second)) {
|
||||
t.Errorf("Bad expiration %s date for domain %+v, now %s", cert.tlsCert.Leaf.NotAfter.String(), cert, now.Add(48*time.Hour).Truncate(1*time.Second).String())
|
||||
}
|
||||
default:
|
||||
t.Errorf("Unknown domain %+v", cert)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNoPreCheckOverride(t *testing.T) {
|
||||
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
|
||||
err := dnsOverrideDelay(0)
|
||||
if err != nil {
|
||||
t.Errorf("Error in dnsOverrideDelay :%v", err)
|
||||
}
|
||||
if acme.PreCheckDNS != nil {
|
||||
t.Errorf("Unexpected change to acme.PreCheckDNS when leaving DNS verification as is.")
|
||||
}
|
||||
}
|
||||
|
||||
func TestSillyPreCheckOverride(t *testing.T) {
|
||||
err := dnsOverrideDelay(-5)
|
||||
if err == nil {
|
||||
t.Errorf("Missing expected error in dnsOverrideDelay!")
|
||||
}
|
||||
}
|
||||
|
||||
func TestPreCheckOverride(t *testing.T) {
|
||||
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
|
||||
err := dnsOverrideDelay(5)
|
||||
if err != nil {
|
||||
t.Errorf("Error in dnsOverrideDelay :%v", err)
|
||||
}
|
||||
if acme.PreCheckDNS == nil {
|
||||
t.Errorf("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
|
||||
}
|
||||
}
|
||||
|
||||
func TestAcmeClientCreation(t *testing.T) {
|
||||
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
|
||||
// Lengthy setup to avoid external web requests - oh for easier golang testing!
|
||||
account := &Account{Email: "f@f"}
|
||||
account.PrivateKey, _ = base64.StdEncoding.DecodeString(`
|
||||
MIIBPAIBAAJBAMp2Ni92FfEur+CAvFkgC12LT4l9D53ApbBpDaXaJkzzks+KsLw9zyAxvlrfAyTCQ
|
||||
7tDnEnIltAXyQ0uOFUUdcMCAwEAAQJAK1FbipATZcT9cGVa5x7KD7usytftLW14heQUPXYNV80r/3
|
||||
lmnpvjL06dffRpwkYeN8DATQF/QOcy3NNNGDw/4QIhAPAKmiZFxA/qmRXsuU8Zhlzf16WrNZ68K64
|
||||
asn/h3qZrAiEA1+wFR3WXCPIolOvd7AHjfgcTKQNkoMPywU4FYUNQ1AkCIQDv8yk0qPjckD6HVCPJ
|
||||
llJh9MC0svjevGtNlxJoE3lmEQIhAKXy1wfZ32/XtcrnENPvi6lzxI0T94X7s5pP3aCoPPoJAiEAl
|
||||
cijFkALeQp/qyeXdFld2v9gUN3eCgljgcl0QweRoIc=---`)
|
||||
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.Write([]byte(`{
|
||||
"new-authz": "https://foo/acme/new-authz",
|
||||
"new-cert": "https://foo/acme/new-cert",
|
||||
"new-reg": "https://foo/acme/new-reg",
|
||||
"revoke-cert": "https://foo/acme/revoke-cert"
|
||||
}`))
|
||||
}))
|
||||
defer ts.Close()
|
||||
a := ACME{DNSProvider: "manual", DelayDontCheckDNS: 10, CAServer: ts.URL}
|
||||
|
||||
client, err := a.buildACMEClient(account)
|
||||
if err != nil {
|
||||
t.Errorf("Error in buildACMEClient: %v", err)
|
||||
}
|
||||
if client == nil {
|
||||
t.Errorf("No client from buildACMEClient!")
|
||||
}
|
||||
if acme.PreCheckDNS == nil {
|
||||
t.Errorf("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
|
||||
}
|
||||
}
|
97
acme/challengeProvider.go
Normal file
@@ -0,0 +1,97 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"fmt"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/traefik/cluster"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/xenolf/lego/acme"
|
||||
)
|
||||
|
||||
var _ acme.ChallengeProviderTimeout = (*challengeProvider)(nil)
|
||||
|
||||
type challengeProvider struct {
|
||||
store cluster.Store
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
func (c *challengeProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
|
||||
log.Debugf("Challenge GetCertificate %s", domain)
|
||||
if !strings.HasSuffix(domain, ".acme.invalid") {
|
||||
return nil, false
|
||||
}
|
||||
c.lock.RLock()
|
||||
defer c.lock.RUnlock()
|
||||
account := c.store.Get().(*Account)
|
||||
if account.ChallengeCerts == nil {
|
||||
return nil, false
|
||||
}
|
||||
account.Init()
|
||||
var result *tls.Certificate
|
||||
operation := func() error {
|
||||
for _, cert := range account.ChallengeCerts {
|
||||
for _, dns := range cert.certificate.Leaf.DNSNames {
|
||||
if domain == dns {
|
||||
result = cert.certificate
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
return fmt.Errorf("Cannot find challenge cert for domain %s", domain)
|
||||
}
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Errorf("Error getting cert: %v, retrying in %s", err, time)
|
||||
}
|
||||
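// Retry the lookup with exponential backoff (up to 60 seconds) before concluding that no matching challenge certificate exists.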
ebo := backoff.NewExponentialBackOff()
|
||||
ebo.MaxElapsedTime = 60 * time.Second
|
||||
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
|
||||
if err != nil {
|
||||
log.Errorf("Error getting cert: %v", err)
|
||||
return nil, false
|
||||
}
|
||||
return result, true
|
||||
}
|
||||
|
||||
func (c *challengeProvider) Present(domain, token, keyAuth string) error {
|
||||
log.Debugf("Challenge Present %s", domain)
|
||||
cert, _, err := TLSSNI01ChallengeCert(keyAuth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
transaction, object, err := c.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account := object.(*Account)
|
||||
if account.ChallengeCerts == nil {
|
||||
account.ChallengeCerts = map[string]*ChallengeCert{}
|
||||
}
|
||||
account.ChallengeCerts[domain] = &cert
|
||||
return transaction.Commit(account)
|
||||
}
|
||||
|
||||
func (c *challengeProvider) CleanUp(domain, token, keyAuth string) error {
|
||||
log.Debugf("Challenge CleanUp %s", domain)
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
transaction, object, err := c.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account := object.(*Account)
|
||||
delete(account.ChallengeCerts, domain)
|
||||
return transaction.Commit(account)
|
||||
}
|
||||
|
||||
func (c *challengeProvider) Timeout() (timeout, interval time.Duration) {
|
||||
return 60 * time.Second, 5 * time.Second
|
||||
}
|
135
acme/crypto.go
Normal file
@@ -0,0 +1,135 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"crypto"
|
||||
"crypto/ecdsa"
|
||||
"crypto/rand"
|
||||
"crypto/rsa"
|
||||
"crypto/sha256"
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"crypto/x509/pkix"
|
||||
"encoding/hex"
|
||||
"encoding/pem"
|
||||
"fmt"
|
||||
"math/big"
|
||||
"time"
|
||||
)
|
||||
|
||||
func generateDefaultCertificate() (*tls.Certificate, error) {
|
||||
randomBytes := make([]byte, 100)
|
||||
_, err := rand.Read(randomBytes)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
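// Derive a random placeholder domain of the form <hex>.<hex>.traefik.default so the fallback certificate never collides with a real host name.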
zBytes := sha256.Sum256(randomBytes)
|
||||
z := hex.EncodeToString(zBytes[:sha256.Size])
|
||||
domain := fmt.Sprintf("%s.%s.traefik.default", z[:32], z[32:])
|
||||
|
||||
certPEM, keyPEM, err := generateKeyPair(domain, time.Time{})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
certificate, err := tls.X509KeyPair(certPEM, keyPEM)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &certificate, nil
|
||||
}
|
||||
|
||||
func generateKeyPair(domain string, expiration time.Time) ([]byte, []byte, error) {
|
||||
rsaPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(rsaPrivKey)})
|
||||
|
||||
certPEM, err := generatePemCert(rsaPrivKey, domain, expiration)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
return certPEM, keyPEM, nil
|
||||
}
|
||||
|
||||
func generatePemCert(privKey *rsa.PrivateKey, domain string, expiration time.Time) ([]byte, error) {
|
||||
derBytes, err := generateDerCert(privKey, expiration, domain)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derBytes}), nil
|
||||
}
|
||||
|
||||
func generateDerCert(privKey *rsa.PrivateKey, expiration time.Time, domain string) ([]byte, error) {
|
||||
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
|
||||
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if expiration.IsZero() {
|
||||
expiration = time.Now().Add(365 * 24 * time.Hour)
|
||||
}
|
||||
|
||||
template := x509.Certificate{
|
||||
SerialNumber: serialNumber,
|
||||
Subject: pkix.Name{
|
||||
CommonName: "TRAEFIK DEFAULT CERT",
|
||||
},
|
||||
NotBefore: time.Now(),
|
||||
NotAfter: expiration,
|
||||
|
||||
KeyUsage: x509.KeyUsageKeyEncipherment,
|
||||
BasicConstraintsValid: true,
|
||||
DNSNames: []string{domain},
|
||||
}
|
||||
|
||||
return x509.CreateCertificate(rand.Reader, &template, &template, &privKey.PublicKey, privKey)
|
||||
}
|
||||
|
||||
// TLSSNI01ChallengeCert returns a certificate and target domain for the `tls-sni-01` challenge
|
||||
func TLSSNI01ChallengeCert(keyAuth string) (ChallengeCert, string, error) {
|
||||
// generate a new RSA key for the certificates
|
||||
var tempPrivKey crypto.PrivateKey
|
||||
tempPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
rsaPrivKey := tempPrivKey.(*rsa.PrivateKey)
|
||||
rsaPrivPEM := pemEncode(rsaPrivKey)
|
||||
|
||||
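// Per the tls-sni-01 challenge, the certificate's SAN is the hex-encoded SHA-256 of the key authorization, split into two 32-character labels under .acme.invalid.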
zBytes := sha256.Sum256([]byte(keyAuth))
|
||||
z := hex.EncodeToString(zBytes[:sha256.Size])
|
||||
domain := fmt.Sprintf("%s.%s.acme.invalid", z[:32], z[32:])
|
||||
tempCertPEM, err := generatePemCert(rsaPrivKey, domain, time.Time{})
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
|
||||
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
|
||||
return ChallengeCert{Certificate: tempCertPEM, PrivateKey: rsaPrivPEM, certificate: &certificate}, domain, nil
|
||||
}
|
||||
func pemEncode(data interface{}) []byte {
|
||||
var pemBlock *pem.Block
|
||||
switch key := data.(type) {
|
||||
case *ecdsa.PrivateKey:
|
||||
keyBytes, _ := x509.MarshalECPrivateKey(key)
|
||||
pemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}
|
||||
case *rsa.PrivateKey:
|
||||
pemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
|
||||
break
|
||||
case *x509.CertificateRequest:
|
||||
pemBlock = &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: key.Raw}
|
||||
break
|
||||
case []byte:
|
||||
pemBlock = &pem.Block{Type: "CERTIFICATE", Bytes: []byte(data.([]byte))}
|
||||
}
|
||||
|
||||
return pem.EncodeToMemory(pemBlock)
|
||||
}
|
97
acme/localStore.go
Normal file
@@ -0,0 +1,97 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"sync"
|
||||
|
||||
"github.com/containous/traefik/cluster"
|
||||
"github.com/containous/traefik/log"
|
||||
)
|
||||
|
||||
var _ cluster.Store = (*LocalStore)(nil)
|
||||
|
||||
// LocalStore is a store using a file as storage
|
||||
type LocalStore struct {
|
||||
file string
|
||||
storageLock sync.RWMutex
|
||||
account *Account
|
||||
}
|
||||
|
||||
// NewLocalStore create a LocalStore
|
||||
func NewLocalStore(file string) *LocalStore {
|
||||
return &LocalStore{
|
||||
file: file,
|
||||
}
|
||||
}
|
||||
|
||||
// Get atomically a struct from the file storage
|
||||
func (s *LocalStore) Get() cluster.Object {
|
||||
s.storageLock.RLock()
|
||||
defer s.storageLock.RUnlock()
|
||||
return s.account
|
||||
}
|
||||
|
||||
// Load loads file into store
|
||||
func (s *LocalStore) Load() (cluster.Object, error) {
|
||||
s.storageLock.Lock()
|
||||
defer s.storageLock.Unlock()
|
||||
account := &Account{}
|
||||
|
||||
err := checkPermissions(s.file)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
f, err := os.Open(s.file)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer f.Close()
|
||||
file, err := ioutil.ReadAll(f)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := json.Unmarshal(file, &account); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
account.Init()
|
||||
s.account = account
|
||||
log.Infof("Loaded ACME config from store %s", s.file)
|
||||
return account, nil
|
||||
}
|
||||
|
||||
// Begin creates a transaction with the KV store.
|
||||
func (s *LocalStore) Begin() (cluster.Transaction, cluster.Object, error) {
|
||||
s.storageLock.Lock()
|
||||
return &localTransaction{LocalStore: s}, s.account, nil
|
||||
}
|
||||
|
||||
var _ cluster.Transaction = (*localTransaction)(nil)
|
||||
|
||||
type localTransaction struct {
|
||||
*LocalStore
|
||||
dirty bool
|
||||
}
|
||||
|
||||
// Commit allows to set an object in the file storage
|
||||
func (t *localTransaction) Commit(object cluster.Object) error {
|
||||
t.LocalStore.account = object.(*Account)
|
||||
defer t.storageLock.Unlock()
|
||||
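// A transaction may only be committed once; the dirty flag rejects any reuse.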
if t.dirty {
|
||||
return fmt.Errorf("transaction already used, please begin a new one")
|
||||
}
|
||||
|
||||
// write account to file
|
||||
data, err := json.MarshalIndent(object, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = ioutil.WriteFile(t.file, data, 0600)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
t.dirty = true
|
||||
return nil
|
||||
}
|
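To make the intended call pattern concrete, here is a hedged sketch of driving a LocalStore through the Store/Transaction contract above; the storage path and the error handling are illustrative only:

```go
package acme // hypothetical caller in the same package, not part of the diff

import "github.com/containous/traefik/log"

func exampleLocalStore() {
	store := NewLocalStore("/etc/traefik/acme.json") // example path

	// Load checks permissions, reads the JSON file and caches the account.
	if _, err := store.Load(); err != nil {
		log.Errorf("Unable to load ACME account: %v", err)
		return
	}

	// Begin takes the write lock; Commit persists the account with 0600
	// permissions and releases the lock via its deferred Unlock.
	transaction, obj, err := store.Begin()
	if err != nil {
		log.Errorf("Unable to begin transaction: %v", err)
		return
	}
	if err := transaction.Commit(obj.(*Account)); err != nil {
		log.Errorf("Unable to commit ACME account: %v", err)
	}
}
```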
25
acme/localStore_unix.go
Normal file
@@ -0,0 +1,25 @@
// +build !windows

package acme

import (
	"fmt"
	"os"
)

// Check file permissions
func checkPermissions(name string) error {
	f, err := os.Open(name)
	if err != nil {
		return err
	}
	defer f.Close()
	fi, err := f.Stat()
	if err != nil {
		return err
	}
	if fi.Mode().Perm()&0077 != 0 {
		return fmt.Errorf("permissions %o for %s are too open, please use 600", fi.Mode().Perm(), name)
	}
	return nil
}
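For completeness, a small sketch of the contract enforced above: on Unix the storage file must be readable by its owner only, so a caller (or operator) typically does the equivalent of the following before loading it. The file name is just an example:

```go
package acme // hypothetical helper, not part of the diff

import "os"

func exampleEnsurePermissions() error {
	const storageFile = "acme.json" // example file name

	// Restrict the file to owner read/write, then verify it passes the check.
	if err := os.Chmod(storageFile, 0600); err != nil {
		return err
	}
	return checkPermissions(storageFile) // defined above; returns nil on success
}
```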
6
acme/localStore_windows.go
Normal file
@@ -0,0 +1,6 @@
package acme

// Do not check file permissions on Windows right now
func checkPermissions(name string) error {
	return nil
}
34
adapters.go
Normal file
@@ -0,0 +1,34 @@
/*
Copyright
*/
package main

import (
	"net/http"

	"github.com/containous/traefik/log"
)

// OxyLogger implements oxy Logger interface with logrus.
type OxyLogger struct {
}

// Infof logs the specified string at Debug level in logrus.
func (oxylogger *OxyLogger) Infof(format string, args ...interface{}) {
	log.Debugf(format, args...)
}

// Warningf logs the specified string at Warning level in logrus.
func (oxylogger *OxyLogger) Warningf(format string, args ...interface{}) {
	log.Warningf(format, args...)
}

// Errorf logs the specified string at Warning level in logrus.
func (oxylogger *OxyLogger) Errorf(format string, args ...interface{}) {
	log.Warningf(format, args...)
}

func notFoundHandler(w http.ResponseWriter, r *http.Request) {
	http.NotFound(w, r)
	//templatesRenderer.HTML(w, http.StatusNotFound, "notFound", nil)
}
@@ -1,37 +1,36 @@
|
||||
FROM golang:1.13-alpine
|
||||
FROM golang:1.7
|
||||
|
||||
RUN apk --update upgrade \
|
||||
&& apk --no-cache --no-progress add git mercurial bash gcc musl-dev curl tar ca-certificates tzdata \
|
||||
&& update-ca-certificates \
|
||||
&& rm -rf /var/cache/apk/*
|
||||
RUN go get github.com/jteeuwen/go-bindata/... \
|
||||
&& go get github.com/golang/lint/golint \
|
||||
&& go get github.com/kisielk/errcheck \
|
||||
&& go get github.com/client9/misspell/cmd/misspell \
|
||||
&& go get github.com/mattfarina/glide-hash
|
||||
|
||||
# Which docker version to test on
|
||||
ARG DOCKER_VERSION=18.09.7
|
||||
ARG DOCKER_VERSION=1.10.3
|
||||
|
||||
|
||||
# Which glide version to test on
|
||||
ARG GLIDE_VERSION=v0.12.3
|
||||
|
||||
# Download glide
|
||||
RUN mkdir -p /usr/local/bin \
|
||||
&& curl -fL https://github.com/Masterminds/glide/releases/download/${GLIDE_VERSION}/glide-${GLIDE_VERSION}-linux-amd64.tar.gz \
|
||||
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
|
||||
|
||||
# Download docker
|
||||
RUN mkdir -p /usr/local/bin \
|
||||
&& curl -fL https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz \
|
||||
&& curl -fL https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz \
|
||||
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
|
||||
|
||||
# Download go-bindata binary to bin folder in $GOPATH
|
||||
RUN mkdir -p /usr/local/bin \
|
||||
&& curl -fsSL -o /usr/local/bin/go-bindata https://github.com/containous/go-bindata/releases/download/v1.0.0/go-bindata \
|
||||
&& chmod +x /usr/local/bin/go-bindata
|
||||
|
||||
# Download golangci-lint binary to bin folder in $GOPATH
|
||||
RUN curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | bash -s -- -b $GOPATH/bin v1.18.0
|
||||
|
||||
# Download golangci-lint and misspell binary to bin folder in $GOPATH
|
||||
RUN GO111MODULE=off go get github.com/client9/misspell/cmd/misspell
|
||||
|
||||
# Download goreleaser binary to bin folder in $GOPATH
|
||||
RUN curl -sfL https://install.goreleaser.com/github.com/goreleaser/goreleaser.sh | sh
|
||||
|
||||
WORKDIR /go/src/github.com/containous/traefik
|
||||
|
||||
# Download go modules
|
||||
COPY go.mod .
|
||||
COPY go.sum .
|
||||
RUN GO111MODULE=on GOPROXY=https://proxy.golang.org go mod download
|
||||
COPY glide.yaml glide.yaml
|
||||
COPY glide.lock glide.lock
|
||||
RUN glide install -v
|
||||
|
||||
COPY integration/glide.yaml integration/glide.yaml
|
||||
COPY integration/glide.lock integration/glide.lock
|
||||
RUN cd integration && glide install
|
||||
|
||||
COPY . /go/src/github.com/containous/traefik
|
||||
|
36
circle.yml
Normal file
@@ -0,0 +1,36 @@
machine:
  pre:
    - sudo docker -d -e lxc -s btrfs -H tcp://0.0.0.0:2375:
        background: true
    - curl --retry 15 --retry-delay 3 -v http://172.17.42.1:2375/version
  environment:
    REPO: $CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME
    DOCKER_HOST: tcp://172.17.42.1:2375
    MAKE_DOCKER_HOST: $DOCKER_HOST
    VERSION: v1.0.alpha.$CIRCLE_BUILD_NUM

dependencies:
  pre:
    - docker version
    - go get github.com/tcnksm/ghr
    - make validate
  override:
    - make binary

test:
  override:
    - make test-unit
    - make test-integration
  post:
    - make crossbinary
    - make image

deployment:
  hub:
    branch: master
    commands:
      - ghr -t $GITHUB_TOKEN -u $CIRCLE_PROJECT_USERNAME -r $CIRCLE_PROJECT_REPONAME --prerelease ${VERSION} dist/
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push ${REPO,,}:latest
      - docker tag ${REPO,,}:latest ${REPO,,}:${VERSION}
      - docker push ${REPO,,}:${VERSION}
255
cluster/datastore.go
Normal file
@@ -0,0 +1,255 @@
|
||||
package cluster
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/staert"
|
||||
"github.com/containous/traefik/job"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/docker/libkv/store"
|
||||
"github.com/satori/go.uuid"
|
||||
)
|
||||
|
||||
// Metadata stores Object plus metadata
|
||||
type Metadata struct {
|
||||
object Object
|
||||
Object []byte
|
||||
Lock string
|
||||
}
|
||||
|
||||
// NewMetadata returns new Metadata
|
||||
func NewMetadata(object Object) *Metadata {
|
||||
return &Metadata{object: object}
|
||||
}
|
||||
|
||||
// Marshall marshalls object
|
||||
func (m *Metadata) Marshall() error {
|
||||
var err error
|
||||
m.Object, err = json.Marshal(m.object)
|
||||
return err
|
||||
}
|
||||
|
||||
func (m *Metadata) unmarshall() error {
|
||||
if len(m.Object) == 0 {
|
||||
return nil
|
||||
}
|
||||
return json.Unmarshal(m.Object, m.object)
|
||||
}
|
||||
|
||||
// Listener is called when Object has been changed in KV store
|
||||
type Listener func(Object) error
|
||||
|
||||
var _ Store = (*Datastore)(nil)
|
||||
|
||||
// Datastore holds a struct synced in a KV store
|
||||
type Datastore struct {
|
||||
kv staert.KvSource
|
||||
ctx context.Context
|
||||
localLock *sync.RWMutex
|
||||
meta *Metadata
|
||||
lockKey string
|
||||
listener Listener
|
||||
}
|
||||
|
||||
// NewDataStore creates a Datastore
|
||||
func NewDataStore(ctx context.Context, kvSource staert.KvSource, object Object, listener Listener) (*Datastore, error) {
|
||||
datastore := Datastore{
|
||||
kv: kvSource,
|
||||
ctx: ctx,
|
||||
meta: &Metadata{object: object},
|
||||
lockKey: kvSource.Prefix + "/lock",
|
||||
localLock: &sync.RWMutex{},
|
||||
listener: listener,
|
||||
}
|
||||
err := datastore.watchChanges()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &datastore, nil
|
||||
}
|
||||
|
||||
func (d *Datastore) watchChanges() error {
|
||||
stopCh := make(chan struct{})
|
||||
kvCh, err := d.kv.Watch(d.lockKey, stopCh)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
go func() {
|
||||
ctx, cancel := context.WithCancel(d.ctx)
|
||||
operation := func() error {
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
stopCh <- struct{}{}
|
||||
return nil
|
||||
case _, ok := <-kvCh:
|
||||
if !ok {
|
||||
cancel()
|
||||
return err
|
||||
}
|
||||
err = d.reload()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// log.Debugf("Datastore object change received: %+v", d.meta)
|
||||
if d.listener != nil {
|
||||
err := d.listener(d.meta.object)
|
||||
if err != nil {
|
||||
log.Errorf("Error calling datastore listener: %s", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Errorf("Error in watch datastore: %+v, retrying in %s", err, time)
|
||||
}
|
||||
err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
|
||||
if err != nil {
|
||||
log.Errorf("Error in watch datastore: %v", err)
|
||||
}
|
||||
}()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *Datastore) reload() error {
|
||||
log.Debugf("Datastore reload")
|
||||
d.localLock.Lock()
|
||||
err := d.kv.LoadConfig(d.meta)
|
||||
if err != nil {
|
||||
d.localLock.Unlock()
|
||||
return err
|
||||
}
|
||||
err = d.meta.unmarshall()
|
||||
if err != nil {
|
||||
d.localLock.Unlock()
|
||||
return err
|
||||
}
|
||||
d.localLock.Unlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
// Begin creates a transaction with the KV store.
|
||||
func (d *Datastore) Begin() (Transaction, Object, error) {
|
||||
id := uuid.NewV4().String()
|
||||
log.Debugf("Transaction %s begins", id)
|
||||
remoteLock, err := d.kv.NewLock(d.lockKey, &store.LockOptions{TTL: 20 * time.Second, Value: []byte(id)})
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
stopCh := make(chan struct{})
|
||||
ctx, cancel := context.WithCancel(d.ctx)
|
||||
var errLock error
|
||||
go func() {
|
||||
_, errLock = remoteLock.Lock(stopCh)
|
||||
cancel()
|
||||
}()
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
if errLock != nil {
|
||||
return nil, nil, errLock
|
||||
}
|
||||
case <-d.ctx.Done():
|
||||
stopCh <- struct{}{}
|
||||
return nil, nil, d.ctx.Err()
|
||||
}
|
||||
|
||||
// we got the lock! Now make sure we are synced with KV store
|
||||
operation := func() error {
|
||||
meta := d.get()
|
||||
if meta.Lock != id {
|
||||
return fmt.Errorf("Object lock value: expected %s, got %s", id, meta.Lock)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Errorf("Datastore sync error: %v, retrying in %s", err, time)
|
||||
err = d.reload()
|
||||
if err != nil {
|
||||
log.Errorf("Error reloading: %+v", err)
|
||||
}
|
||||
}
|
||||
ebo := backoff.NewExponentialBackOff()
|
||||
ebo.MaxElapsedTime = 60 * time.Second
|
||||
err = backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("Datastore cannot sync: %v", err)
|
||||
}
|
||||
|
||||
// we synced with KV store, we can now return Setter
|
||||
return &datastoreTransaction{
|
||||
Datastore: d,
|
||||
remoteLock: remoteLock,
|
||||
id: id,
|
||||
}, d.meta.object, nil
|
||||
}
|
||||
|
||||
func (d *Datastore) get() *Metadata {
|
||||
d.localLock.RLock()
|
||||
defer d.localLock.RUnlock()
|
||||
return d.meta
|
||||
}
|
||||
|
||||
// Load atomically loads the struct from the KV store
|
||||
func (d *Datastore) Load() (Object, error) {
|
||||
d.localLock.Lock()
|
||||
defer d.localLock.Unlock()
|
||||
err := d.kv.LoadConfig(d.meta)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = d.meta.unmarshall()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return d.meta.object, nil
|
||||
}
|
||||
|
||||
// Get atomically returns the struct from the KV store
|
||||
func (d *Datastore) Get() Object {
|
||||
d.localLock.RLock()
|
||||
defer d.localLock.RUnlock()
|
||||
return d.meta.object
|
||||
}
|
||||
|
||||
var _ Transaction = (*datastoreTransaction)(nil)
|
||||
|
||||
type datastoreTransaction struct {
|
||||
*Datastore
|
||||
remoteLock store.Locker
|
||||
dirty bool
|
||||
id string
|
||||
}
|
||||
|
||||
// Commit sets the object in the KV store
|
||||
func (s *datastoreTransaction) Commit(object Object) error {
|
||||
s.localLock.Lock()
|
||||
defer s.localLock.Unlock()
|
||||
if s.dirty {
|
||||
return fmt.Errorf("Transaction already used, please begin a new one")
|
||||
}
|
||||
s.Datastore.meta.object = object
|
||||
err := s.Datastore.meta.Marshall()
|
||||
if err != nil {
|
||||
return fmt.Errorf("Marshall error: %s", err)
|
||||
}
|
||||
err = s.kv.StoreConfig(s.Datastore.meta)
|
||||
if err != nil {
|
||||
return fmt.Errorf("StoreConfig error: %s", err)
|
||||
}
|
||||
|
||||
err = s.remoteLock.Unlock()
|
||||
if err != nil {
|
||||
return fmt.Errorf("Unlock error: %s", err)
|
||||
}
|
||||
|
||||
s.dirty = true
|
||||
log.Debugf("Transaction committed %s", s.id)
|
||||
return nil
|
||||
}
|
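The datastore above is easier to follow with a usage sketch. Everything below is hypothetical glue (the stored type, the KV source and the listener are placeholders), shown only to illustrate the Begin/Commit cycle and the change listener:

```go
package cluster // hypothetical example, not part of the diff

import (
	"context"

	"github.com/containous/staert"
	"github.com/containous/traefik/log"
)

// exampleObject stands in for whatever struct is synced through the KV store.
type exampleObject struct {
	Value string
}

func exampleDatastore(kv staert.KvSource) error {
	// The listener is invoked whenever another node commits a change.
	listener := func(object Object) error {
		log.Debugf("Datastore object changed: %+v", object)
		return nil
	}

	datastore, err := NewDataStore(context.Background(), kv, &exampleObject{}, listener)
	if err != nil {
		return err
	}

	// Begin acquires the distributed lock and returns the current object;
	// Commit marshals it, stores it and releases the lock.
	transaction, obj, err := datastore.Begin()
	if err != nil {
		return err
	}
	current := obj.(*exampleObject)
	current.Value = "updated"
	return transaction.Commit(current)
}
```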
104
cluster/leadership.go
Normal file
@@ -0,0 +1,104 @@
|
||||
package cluster
|
||||
|
||||
import (
|
||||
"context"
|
||||
"time"
|
||||
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/docker/leadership"
|
||||
)
|
||||
|
||||
// Leadership allows leadership election using a KV store
|
||||
type Leadership struct {
|
||||
*safe.Pool
|
||||
*types.Cluster
|
||||
candidate *leadership.Candidate
|
||||
leader *safe.Safe
|
||||
listeners []LeaderListener
|
||||
}
|
||||
|
||||
// NewLeadership creates a leadership
|
||||
func NewLeadership(ctx context.Context, cluster *types.Cluster) *Leadership {
|
||||
return &Leadership{
|
||||
Pool: safe.NewPool(ctx),
|
||||
Cluster: cluster,
|
||||
candidate: leadership.NewCandidate(cluster.Store, cluster.Store.Prefix+"/leader", cluster.Node, 20*time.Second),
|
||||
listeners: []LeaderListener{},
|
||||
leader: safe.New(false),
|
||||
}
|
||||
}
|
||||
|
||||
// LeaderListener is called when leadership has changed
|
||||
type LeaderListener func(elected bool) error
|
||||
|
||||
// Participate tries to be a leader
|
||||
func (l *Leadership) Participate(pool *safe.Pool) {
|
||||
pool.GoCtx(func(ctx context.Context) {
|
||||
log.Debugf("Node %s running for election", l.Cluster.Node)
|
||||
defer log.Debugf("Node %s no more running for election", l.Cluster.Node)
|
||||
backOff := backoff.NewExponentialBackOff()
|
||||
operation := func() error {
|
||||
return l.run(ctx, l.candidate)
|
||||
}
|
||||
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Errorf("Leadership election error %+v, retrying in %s", err, time)
|
||||
}
|
||||
err := backoff.RetryNotify(safe.OperationWithRecover(operation), backOff, notify)
|
||||
if err != nil {
|
||||
log.Errorf("Cannot elect leadership %+v", err)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// AddListener adds a leadership listener
|
||||
func (l *Leadership) AddListener(listener LeaderListener) {
|
||||
l.listeners = append(l.listeners, listener)
|
||||
}
|
||||
|
||||
// Resign resigns from being a leader
|
||||
func (l *Leadership) Resign() {
|
||||
l.candidate.Resign()
|
||||
log.Infof("Node %s resigned", l.Cluster.Node)
|
||||
}
|
||||
|
||||
func (l *Leadership) run(ctx context.Context, candidate *leadership.Candidate) error {
|
||||
electedCh, errCh := candidate.RunForElection()
|
||||
for {
|
||||
select {
|
||||
case elected := <-electedCh:
|
||||
l.onElection(elected)
|
||||
case err := <-errCh:
|
||||
return err
|
||||
case <-ctx.Done():
|
||||
l.candidate.Resign()
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (l *Leadership) onElection(elected bool) {
|
||||
if elected {
|
||||
log.Infof("Node %s elected leader ♚", l.Cluster.Node)
|
||||
l.leader.Set(true)
|
||||
l.Start()
|
||||
} else {
|
||||
log.Infof("Node %s elected slave ♝", l.Cluster.Node)
|
||||
l.leader.Set(false)
|
||||
l.Stop()
|
||||
}
|
||||
for _, listener := range l.listeners {
|
||||
err := listener(elected)
|
||||
if err != nil {
|
||||
log.Errorf("Error calling Leadership listener: %s", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// IsLeader returns true if current node is leader
|
||||
func (l *Leadership) IsLeader() bool {
|
||||
return l.leader.Get().(bool)
|
||||
}
|
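A hedged sketch of how this election type is wired up (the cluster configuration and the pool come from elsewhere in the server; the listener body is illustrative):

```go
package cluster // hypothetical example, not part of the diff

import (
	"context"

	"github.com/containous/traefik/log"
	"github.com/containous/traefik/safe"
	"github.com/containous/traefik/types"
)

func exampleLeadership(ctx context.Context, clusterConfig *types.Cluster, pool *safe.Pool) {
	leadership := NewLeadership(ctx, clusterConfig)

	// Listeners fire on every change of leadership status.
	leadership.AddListener(func(elected bool) error {
		log.Infof("Leadership changed, elected: %v", elected)
		return nil
	})

	// Participate runs the election loop (with exponential backoff) in the pool.
	leadership.Participate(pool)
}
```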
16
cluster/store.go
Normal file
@@ -0,0 +1,16 @@
package cluster

// Object is the struct to store
type Object interface{}

// Store is a generic interface to represent a storage
type Store interface {
	Load() (Object, error)
	Get() Object
	Begin() (Transaction, Object, error)
}

// Transaction allows setting a struct in the KV store
type Transaction interface {
	Commit(object Object) error
}
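Both the file-backed `LocalStore` and the KV-backed `Datastore` above satisfy this contract, which also makes it easy to fake in tests. A minimal in-memory sketch (an assumption, not code from this change):

```go
package cluster // illustrative test double, not part of the diff

type memoryStore struct {
	object Object
}

func (m *memoryStore) Load() (Object, error) { return m.object, nil }
func (m *memoryStore) Get() Object           { return m.object }
func (m *memoryStore) Begin() (Transaction, Object, error) {
	return &memoryTransaction{store: m}, m.object, nil
}

type memoryTransaction struct {
	store *memoryStore
}

func (t *memoryTransaction) Commit(object Object) error {
	t.store.object = object
	return nil
}

// Compile-time checks, mirroring the pattern used elsewhere in this package.
var _ Store = (*memoryStore)(nil)
var _ Transaction = (*memoryTransaction)(nil)
```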
111
cmd/bug.go
Normal file
@@ -0,0 +1,111 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"os/exec"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"text/template"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/mvdan/xurls"
|
||||
)
|
||||
|
||||
var (
|
||||
bugtracker = "https://github.com/containous/traefik/issues/new"
|
||||
bugTemplate = `### What version of Traefik are you using?
|
||||
` + "```" + `
|
||||
{{.Version}}
|
||||
` + "```" + `
|
||||
|
||||
### What is your environment & configuration (arguments, toml...)?
|
||||
` + "```" + `
|
||||
{{.Configuration}}
|
||||
` + "```" + `
|
||||
|
||||
### What did you do?
|
||||
|
||||
|
||||
### What did you expect to see?
|
||||
|
||||
|
||||
### What did you see instead?
|
||||
`
|
||||
)
|
||||
|
||||
// NewBugCmd builds a new Bug command
|
||||
func NewBugCmd(traefikConfiguration interface{}, traefikPointersConfiguration interface{}) *flaeg.Command {
|
||||
|
||||
//version Command init
|
||||
return &flaeg.Command{
|
||||
Name: "bug",
|
||||
Description: `Report an issue on Traefik bugtracker`,
|
||||
Config: traefikConfiguration,
|
||||
DefaultPointersConfig: traefikPointersConfiguration,
|
||||
Run: func() error {
|
||||
var version bytes.Buffer
|
||||
if err := getVersionPrint(&version); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
tmpl, err := template.New("").Parse(bugTemplate)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
configJSON, err := json.MarshalIndent(traefikConfiguration, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
v := struct {
|
||||
Version string
|
||||
Configuration string
|
||||
}{
|
||||
Version: version.String(),
|
||||
Configuration: anonymize(string(configJSON)),
|
||||
}
|
||||
|
||||
var bug bytes.Buffer
|
||||
if err := tmpl.Execute(&bug, v); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
body := bug.String()
|
||||
url := bugtracker + "?body=" + url.QueryEscape(body)
|
||||
if err := openBrowser(url); err != nil {
|
||||
fmt.Print("Please file a new issue at " + bugtracker + " using this template:\n\n")
|
||||
fmt.Print(body)
|
||||
}
|
||||
|
||||
return nil
|
||||
},
|
||||
Metadata: map[string]string{
|
||||
"parseAllSources": "true",
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func openBrowser(url string) error {
|
||||
var err error
|
||||
switch runtime.GOOS {
|
||||
case "linux":
|
||||
err = exec.Command("xdg-open", url).Start()
|
||||
case "windows":
|
||||
err = exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
|
||||
case "darwin":
|
||||
err = exec.Command("open", url).Start()
|
||||
default:
|
||||
err = fmt.Errorf("unsupported platform")
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func anonymize(input string) string {
|
||||
replace := "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
|
||||
mailExp := regexp.MustCompile(`\w[-._\w]*\w@\w[-._\w]*\w\.\w{2,3}"`)
|
||||
return xurls.Relaxed.ReplaceAllString(mailExp.ReplaceAllString(input, replace), replace)
|
||||
}
|
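To show what `anonymize` actually masks before the issue body is opened in the browser, a small hedged example (the input is invented):

```go
package cmd // hypothetical example, not part of the diff

import "fmt"

func exampleAnonymize() {
	// Email-like tokens and URLs in the rendered configuration are replaced
	// by a fixed placeholder string before the report is submitted.
	input := `{"endpoint": "http://127.0.0.1:8080", "email": "admin@example.org"}`
	fmt.Println(anonymize(input))
}
```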
@@ -1,34 +0,0 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/containous/traefik/v2/pkg/config/static"
|
||||
"github.com/containous/traefik/v2/pkg/types"
|
||||
)
|
||||
|
||||
// TraefikCmdConfiguration wraps the static configuration and extra parameters.
|
||||
type TraefikCmdConfiguration struct {
|
||||
static.Configuration `export:"true"`
|
||||
// ConfigFile is the path to the configuration file.
|
||||
ConfigFile string `description:"Configuration file to use. If specified all other flags are ignored." export:"true"`
|
||||
}
|
||||
|
||||
// NewTraefikConfiguration creates a TraefikCmdConfiguration with default values.
|
||||
func NewTraefikConfiguration() *TraefikCmdConfiguration {
|
||||
return &TraefikCmdConfiguration{
|
||||
Configuration: static.Configuration{
|
||||
Global: &static.Global{
|
||||
CheckNewVersion: true,
|
||||
},
|
||||
EntryPoints: make(static.EntryPoints),
|
||||
Providers: &static.Providers{
|
||||
ProvidersThrottleDuration: types.Duration(2 * time.Second),
|
||||
},
|
||||
ServersTransport: &static.ServersTransport{
|
||||
MaxIdleConnsPerHost: 200,
|
||||
},
|
||||
},
|
||||
ConfigFile: "",
|
||||
}
|
||||
}
|
@@ -1,22 +0,0 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"os/signal"
|
||||
"syscall"
|
||||
)
|
||||
|
||||
// ContextWithSignal creates a context canceled when SIGINT or SIGTERM are notified
|
||||
func ContextWithSignal(ctx context.Context) context.Context {
|
||||
newCtx, cancel := context.WithCancel(ctx)
|
||||
signals := make(chan os.Signal)
|
||||
signal.Notify(signals, syscall.SIGINT, syscall.SIGTERM)
|
||||
go func() {
|
||||
select {
|
||||
case <-signals:
|
||||
cancel()
|
||||
}
|
||||
}()
|
||||
return newCtx
|
||||
}
|
@@ -1,74 +0,0 @@
|
||||
package healthcheck
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/containous/traefik/v2/pkg/cli"
|
||||
"github.com/containous/traefik/v2/pkg/config/static"
|
||||
)
|
||||
|
||||
// NewCmd builds a new HealthCheck command.
|
||||
func NewCmd(traefikConfiguration *static.Configuration, loaders []cli.ResourceLoader) *cli.Command {
|
||||
return &cli.Command{
|
||||
Name: "healthcheck",
|
||||
Description: `Calls Traefik /ping to check the health of Traefik (the API must be enabled).`,
|
||||
Configuration: traefikConfiguration,
|
||||
Run: runCmd(traefikConfiguration),
|
||||
Resources: loaders,
|
||||
}
|
||||
}
|
||||
|
||||
func runCmd(traefikConfiguration *static.Configuration) func(_ []string) error {
|
||||
return func(_ []string) error {
|
||||
traefikConfiguration.SetEffectiveConfiguration()
|
||||
|
||||
resp, errPing := Do(*traefikConfiguration)
|
||||
if resp != nil {
|
||||
resp.Body.Close()
|
||||
}
|
||||
if errPing != nil {
|
||||
fmt.Printf("Error calling healthcheck: %s\n", errPing)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
fmt.Printf("Bad healthcheck status: %s\n", resp.Status)
|
||||
os.Exit(1)
|
||||
}
|
||||
fmt.Printf("OK: %s\n", resp.Request.URL)
|
||||
os.Exit(0)
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// Do tries to call the Traefik healthcheck endpoint
|
||||
func Do(staticConfiguration static.Configuration) (*http.Response, error) {
|
||||
if staticConfiguration.Ping == nil {
|
||||
return nil, errors.New("please enable `ping` to use health check")
|
||||
}
|
||||
|
||||
pingEntryPoint, ok := staticConfiguration.EntryPoints["traefik"]
|
||||
if !ok {
|
||||
return nil, errors.New("missing `ping` entrypoint")
|
||||
}
|
||||
|
||||
client := &http.Client{Timeout: 5 * time.Second}
|
||||
protocol := "http"
|
||||
|
||||
// FIXME Handle TLS on ping etc...
|
||||
// if pingEntryPoint.TLS != nil {
|
||||
// protocol = "https"
|
||||
// tr := &http.Transport{
|
||||
// TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
|
||||
// }
|
||||
// client.Transport = tr
|
||||
// }
|
||||
|
||||
path := "/"
|
||||
|
||||
return client.Head(protocol + "://" + pingEntryPoint.Address + path + "ping")
|
||||
}
|
@@ -1,314 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
stdlog "log"
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/containous/traefik/v2/autogen/genstatic"
|
||||
"github.com/containous/traefik/v2/cmd"
|
||||
"github.com/containous/traefik/v2/cmd/healthcheck"
|
||||
cmdVersion "github.com/containous/traefik/v2/cmd/version"
|
||||
"github.com/containous/traefik/v2/pkg/cli"
|
||||
"github.com/containous/traefik/v2/pkg/collector"
|
||||
"github.com/containous/traefik/v2/pkg/config/dynamic"
|
||||
"github.com/containous/traefik/v2/pkg/config/static"
|
||||
"github.com/containous/traefik/v2/pkg/log"
|
||||
"github.com/containous/traefik/v2/pkg/provider/acme"
|
||||
"github.com/containous/traefik/v2/pkg/provider/aggregator"
|
||||
"github.com/containous/traefik/v2/pkg/safe"
|
||||
"github.com/containous/traefik/v2/pkg/server"
|
||||
"github.com/containous/traefik/v2/pkg/server/router"
|
||||
traefiktls "github.com/containous/traefik/v2/pkg/tls"
|
||||
"github.com/containous/traefik/v2/pkg/version"
|
||||
"github.com/coreos/go-systemd/daemon"
|
||||
assetfs "github.com/elazarl/go-bindata-assetfs"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/vulcand/oxy/roundrobin"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// traefik config inits
|
||||
tConfig := cmd.NewTraefikConfiguration()
|
||||
|
||||
loaders := []cli.ResourceLoader{&cli.FileLoader{}, &cli.FlagLoader{}, &cli.EnvLoader{}}
|
||||
|
||||
cmdTraefik := &cli.Command{
|
||||
Name: "traefik",
|
||||
Description: `Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
|
||||
Complete documentation is available at https://traefik.io`,
|
||||
Configuration: tConfig,
|
||||
Resources: loaders,
|
||||
Run: func(_ []string) error {
|
||||
return runCmd(&tConfig.Configuration)
|
||||
},
|
||||
}
|
||||
|
||||
err := cmdTraefik.AddCommand(healthcheck.NewCmd(&tConfig.Configuration, loaders))
|
||||
if err != nil {
|
||||
stdlog.Println(err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
err = cmdTraefik.AddCommand(cmdVersion.NewCmd())
|
||||
if err != nil {
|
||||
stdlog.Println(err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
err = cli.Execute(cmdTraefik)
|
||||
if err != nil {
|
||||
stdlog.Println(err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
func runCmd(staticConfiguration *static.Configuration) error {
|
||||
configureLogging(staticConfiguration)
|
||||
|
||||
http.DefaultTransport.(*http.Transport).Proxy = http.ProxyFromEnvironment
|
||||
|
||||
if err := roundrobin.SetDefaultWeight(0); err != nil {
|
||||
log.WithoutContext().Errorf("Could not set roundrobin default weight: %v", err)
|
||||
}
|
||||
|
||||
staticConfiguration.SetEffectiveConfiguration()
|
||||
if err := staticConfiguration.ValidateConfiguration(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.WithoutContext().Infof("Traefik version %s built on %s", version.Version, version.BuildDate)
|
||||
|
||||
jsonConf, err := json.Marshal(staticConfiguration)
|
||||
if err != nil {
|
||||
log.WithoutContext().Errorf("Could not marshal static configuration: %v", err)
|
||||
log.WithoutContext().Debugf("Static configuration loaded [struct] %#v", staticConfiguration)
|
||||
} else {
|
||||
log.WithoutContext().Debugf("Static configuration loaded %s", string(jsonConf))
|
||||
}
|
||||
|
||||
if staticConfiguration.API != nil && staticConfiguration.API.Dashboard {
|
||||
staticConfiguration.API.DashboardAssets = &assetfs.AssetFS{Asset: genstatic.Asset, AssetInfo: genstatic.AssetInfo, AssetDir: genstatic.AssetDir, Prefix: "static"}
|
||||
}
|
||||
|
||||
if staticConfiguration.Global.CheckNewVersion {
|
||||
checkNewVersion()
|
||||
}
|
||||
|
||||
stats(staticConfiguration)
|
||||
|
||||
providerAggregator := aggregator.NewProviderAggregator(*staticConfiguration.Providers)
|
||||
|
||||
tlsManager := traefiktls.NewManager()
|
||||
|
||||
acmeProviders := initACMEProvider(staticConfiguration, &providerAggregator, tlsManager)
|
||||
|
||||
serverEntryPointsTCP := make(server.TCPEntryPoints)
|
||||
for entryPointName, config := range staticConfiguration.EntryPoints {
|
||||
ctx := log.With(context.Background(), log.Str(log.EntryPointName, entryPointName))
|
||||
serverEntryPointsTCP[entryPointName], err = server.NewTCPEntryPoint(ctx, config)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error while building entryPoint %s: %v", entryPointName, err)
|
||||
}
|
||||
serverEntryPointsTCP[entryPointName].RouteAppenderFactory = router.NewRouteAppenderFactory(*staticConfiguration, entryPointName, acmeProviders)
|
||||
}
|
||||
|
||||
svr := server.NewServer(*staticConfiguration, providerAggregator, serverEntryPointsTCP, tlsManager)
|
||||
|
||||
resolverNames := map[string]struct{}{}
|
||||
|
||||
for _, p := range acmeProviders {
|
||||
resolverNames[p.ResolverName] = struct{}{}
|
||||
svr.AddListener(p.ListenConfiguration)
|
||||
}
|
||||
|
||||
svr.AddListener(func(config dynamic.Configuration) {
|
||||
for rtName, rt := range config.HTTP.Routers {
|
||||
if rt.TLS == nil || rt.TLS.CertResolver == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
if _, ok := resolverNames[rt.TLS.CertResolver]; !ok {
|
||||
log.WithoutContext().Errorf("the router %s uses a non-existent resolver: %s", rtName, rt.TLS.CertResolver)
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
ctx := cmd.ContextWithSignal(context.Background())
|
||||
|
||||
if staticConfiguration.Ping != nil {
|
||||
staticConfiguration.Ping.WithContext(ctx)
|
||||
}
|
||||
|
||||
svr.Start(ctx)
|
||||
defer svr.Close()
|
||||
|
||||
sent, err := daemon.SdNotify(false, "READY=1")
|
||||
if !sent && err != nil {
|
||||
log.WithoutContext().Errorf("Failed to notify: %v", err)
|
||||
}
|
||||
|
||||
t, err := daemon.SdWatchdogEnabled(false)
|
||||
if err != nil {
|
||||
log.WithoutContext().Errorf("Could not enable Watchdog: %v", err)
|
||||
} else if t != 0 {
|
||||
// Send a ping at half the given watchdog interval
|
||||
t /= 2
|
||||
log.WithoutContext().Infof("Watchdog activated with timer duration %s", t)
|
||||
safe.Go(func() {
|
||||
tick := time.Tick(t)
|
||||
for range tick {
|
||||
resp, errHealthCheck := healthcheck.Do(*staticConfiguration)
|
||||
if resp != nil {
|
||||
resp.Body.Close()
|
||||
}
|
||||
|
||||
if staticConfiguration.Ping == nil || errHealthCheck == nil {
|
||||
if ok, _ := daemon.SdNotify(false, "WATCHDOG=1"); !ok {
|
||||
log.WithoutContext().Error("Fail to tick watchdog")
|
||||
}
|
||||
} else {
|
||||
log.WithoutContext().Error(errHealthCheck)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
svr.Wait()
|
||||
log.WithoutContext().Info("Shutting down")
|
||||
logrus.Exit(0)
|
||||
return nil
|
||||
}
|
||||
|
||||
// initACMEProvider creates an acme provider from the ACME part of globalConfiguration
|
||||
func initACMEProvider(c *static.Configuration, providerAggregator *aggregator.ProviderAggregator, tlsManager *traefiktls.Manager) []*acme.Provider {
|
||||
challengeStore := acme.NewLocalChallengeStore()
|
||||
localStores := map[string]*acme.LocalStore{}
|
||||
|
||||
var resolvers []*acme.Provider
|
||||
for name, resolver := range c.CertificatesResolvers {
|
||||
if resolver.ACME != nil {
|
||||
if localStores[resolver.ACME.Storage] == nil {
|
||||
localStores[resolver.ACME.Storage] = acme.NewLocalStore(resolver.ACME.Storage)
|
||||
}
|
||||
|
||||
p := &acme.Provider{
|
||||
Configuration: resolver.ACME,
|
||||
Store: localStores[resolver.ACME.Storage],
|
||||
ChallengeStore: challengeStore,
|
||||
ResolverName: name,
|
||||
}
|
||||
|
||||
if err := providerAggregator.AddProvider(p); err != nil {
|
||||
log.WithoutContext().Errorf("Unable to add ACME provider to the providers list: %v", err)
|
||||
continue
|
||||
}
|
||||
p.SetTLSManager(tlsManager)
|
||||
if p.TLSChallenge != nil {
|
||||
tlsManager.TLSAlpnGetter = p.GetTLSALPNCertificate
|
||||
}
|
||||
p.SetConfigListenerChan(make(chan dynamic.Configuration))
|
||||
resolvers = append(resolvers, p)
|
||||
}
|
||||
}
|
||||
return resolvers
|
||||
}
|
||||
|
||||
func configureLogging(staticConfiguration *static.Configuration) {
|
||||
// configure default log flags
|
||||
stdlog.SetFlags(stdlog.Lshortfile | stdlog.LstdFlags)
|
||||
|
||||
// configure log level
|
||||
// an explicitly defined log level always has precedence. if none is
|
||||
// given and debug mode is disabled, the default is ERROR, and DEBUG
|
||||
// otherwise.
|
||||
levelStr := "error"
|
||||
if staticConfiguration.Log != nil && staticConfiguration.Log.Level != "" {
|
||||
levelStr = strings.ToLower(staticConfiguration.Log.Level)
|
||||
}
|
||||
|
||||
level, err := logrus.ParseLevel(levelStr)
|
||||
if err != nil {
|
||||
log.WithoutContext().Errorf("Error getting level: %v", err)
|
||||
}
|
||||
log.SetLevel(level)
|
||||
|
||||
var logFile string
|
||||
if staticConfiguration.Log != nil && len(staticConfiguration.Log.FilePath) > 0 {
|
||||
logFile = staticConfiguration.Log.FilePath
|
||||
}
|
||||
|
||||
// configure log format
|
||||
var formatter logrus.Formatter
|
||||
if staticConfiguration.Log != nil && staticConfiguration.Log.Format == "json" {
|
||||
formatter = &logrus.JSONFormatter{}
|
||||
} else {
|
||||
disableColors := len(logFile) > 0
|
||||
formatter = &logrus.TextFormatter{DisableColors: disableColors, FullTimestamp: true, DisableSorting: true}
|
||||
}
|
||||
log.SetFormatter(formatter)
|
||||
|
||||
if len(logFile) > 0 {
|
||||
dir := filepath.Dir(logFile)
|
||||
|
||||
if err := os.MkdirAll(dir, 0755); err != nil {
|
||||
log.WithoutContext().Errorf("Failed to create log path %s: %s", dir, err)
|
||||
}
|
||||
|
||||
err = log.OpenFile(logFile)
|
||||
logrus.RegisterExitHandler(func() {
|
||||
if err := log.CloseFile(); err != nil {
|
||||
log.WithoutContext().Errorf("Error while closing log: %v", err)
|
||||
}
|
||||
})
|
||||
if err != nil {
|
||||
log.WithoutContext().Errorf("Error while opening log file %s: %v", logFile, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func checkNewVersion() {
|
||||
ticker := time.Tick(24 * time.Hour)
|
||||
safe.Go(func() {
|
||||
for time.Sleep(10 * time.Minute); ; <-ticker {
|
||||
version.CheckNewVersion()
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func stats(staticConfiguration *static.Configuration) {
|
||||
logger := log.WithoutContext()
|
||||
|
||||
if staticConfiguration.Global.SendAnonymousUsage {
|
||||
logger.Info(`Stats collection is enabled.`)
|
||||
logger.Info(`Many thanks for contributing to Traefik's improvement by allowing us to receive anonymous information from your configuration.`)
|
||||
logger.Info(`Help us improve Traefik by leaving this feature on :)`)
|
||||
logger.Info(`More details on: https://docs.traefik.io/v2.0/contributing/data-collection/`)
|
||||
collect(staticConfiguration)
|
||||
} else {
|
||||
logger.Info(`
|
||||
Stats collection is disabled.
|
||||
Help us improve Traefik by turning this feature on :)
|
||||
More details on: https://docs.traefik.io/v2.0/contributing/data-collection/
|
||||
`)
|
||||
}
|
||||
}
|
||||
|
||||
func collect(staticConfiguration *static.Configuration) {
|
||||
ticker := time.Tick(24 * time.Hour)
|
||||
safe.Go(func() {
|
||||
for time.Sleep(10 * time.Minute); ; <-ticker {
|
||||
if err := collector.Collect(staticConfiguration); err != nil {
|
||||
log.WithoutContext().Debug(err)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
@@ -1,4 +1,4 @@
|
||||
package version
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
@@ -7,8 +7,8 @@ import (
|
||||
"runtime"
|
||||
"text/template"
|
||||
|
||||
"github.com/containous/traefik/v2/pkg/cli"
|
||||
"github.com/containous/traefik/v2/pkg/version"
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik/version"
|
||||
)
|
||||
|
||||
var versionTemplate = `Version: {{.Version}}
|
||||
@@ -17,24 +17,27 @@ Go version: {{.GoVersion}}
|
||||
Built: {{.BuildTime}}
|
||||
OS/Arch: {{.Os}}/{{.Arch}}`
|
||||
|
||||
// NewCmd builds a new Version command
|
||||
func NewCmd() *cli.Command {
|
||||
return &cli.Command{
|
||||
Name: "version",
|
||||
Description: `Shows the current Traefik version.`,
|
||||
Configuration: nil,
|
||||
Run: func(_ []string) error {
|
||||
if err := GetPrint(os.Stdout); err != nil {
|
||||
// NewVersionCmd builds a new Version command
|
||||
func NewVersionCmd() *flaeg.Command {
|
||||
|
||||
//version Command init
|
||||
return &flaeg.Command{
|
||||
Name: "version",
|
||||
Description: `Print version`,
|
||||
Config: struct{}{},
|
||||
DefaultPointersConfig: struct{}{},
|
||||
Run: func() error {
|
||||
if err := getVersionPrint(os.Stdout); err != nil {
|
||||
return err
|
||||
}
|
||||
fmt.Print("\n")
|
||||
fmt.Printf("\n")
|
||||
return nil
|
||||
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// GetPrint write Printable version
|
||||
func GetPrint(wr io.Writer) error {
|
||||
func getVersionPrint(wr io.Writer) error {
|
||||
tmpl, err := template.New("").Parse(versionTemplate)
|
||||
if err != nil {
|
||||
return err
|
456
configuration.go
Normal file
@@ -0,0 +1,456 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"regexp"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/containous/traefik/acme"
|
||||
"github.com/containous/traefik/provider"
|
||||
"github.com/containous/traefik/types"
|
||||
)
|
||||
|
||||
// TraefikConfiguration holds GlobalConfiguration and other stuff
|
||||
type TraefikConfiguration struct {
|
||||
GlobalConfiguration `mapstructure:",squash"`
|
||||
ConfigFile string `short:"c" description:"Configuration file to use (TOML)."`
|
||||
}
|
||||
|
||||
// GlobalConfiguration holds global configuration (with providers, etc.).
|
||||
// It's populated from the traefik configuration file passed as an argument to the binary.
|
||||
type GlobalConfiguration struct {
|
||||
GraceTimeOut int64 `short:"g" description:"Duration to give active requests a chance to finish during hot-reload"`
|
||||
Debug bool `short:"d" description:"Enable debug mode"`
|
||||
CheckNewVersion bool `description:"Periodically check if a new version has been released"`
|
||||
AccessLogsFile string `description:"Access logs file"`
|
||||
TraefikLogsFile string `description:"Traefik logs file"`
|
||||
LogLevel string `short:"l" description:"Log level"`
|
||||
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key;prod/traefik.crt,prod/traefik.key'"`
|
||||
Cluster *types.Cluster `description:"Enable clustering"`
|
||||
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags"`
|
||||
ACME *acme.ACME `description:"Enable ACME (Let's Encrypt): automatic SSL"`
|
||||
DefaultEntryPoints DefaultEntryPoints `description:"Entrypoints to be used by frontends that do not specify any entrypoint"`
|
||||
ProvidersThrottleDuration time.Duration `description:"Backends throttle duration: minimum duration between 2 events from providers before applying a new configuration. It avoids unnecessary reloads if multiples events are sent in a short amount of time."`
|
||||
MaxIdleConnsPerHost int `description:"If non-zero, controls the maximum idle (keep-alive) to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used"`
|
||||
InsecureSkipVerify bool `description:"Disable SSL certificate verification"`
|
||||
Retry *Retry `description:"Enable retry sending request if network error"`
|
||||
Docker *provider.Docker `description:"Enable Docker backend"`
|
||||
File *provider.File `description:"Enable File backend"`
|
||||
Web *WebProvider `description:"Enable Web backend"`
|
||||
Marathon *provider.Marathon `description:"Enable Marathon backend"`
|
||||
Consul *provider.Consul `description:"Enable Consul backend"`
|
||||
ConsulCatalog *provider.ConsulCatalog `description:"Enable Consul catalog backend"`
|
||||
Etcd *provider.Etcd `description:"Enable Etcd backend"`
|
||||
Zookeeper *provider.Zookepper `description:"Enable Zookeeper backend"`
|
||||
Boltdb *provider.BoltDb `description:"Enable Boltdb backend"`
|
||||
Kubernetes *provider.Kubernetes `description:"Enable Kubernetes backend"`
|
||||
Mesos *provider.Mesos `description:"Enable Mesos backend"`
|
||||
Eureka *provider.Eureka `description:"Enable Eureka backend"`
|
||||
ECS *provider.ECS `description:"Enable ECS backend"`
|
||||
Rancher *provider.Rancher `description:"Enable Rancher backend"`
|
||||
}
|
||||
|
||||
// DefaultEntryPoints holds default entry points
|
||||
type DefaultEntryPoints []string
|
||||
|
||||
// String is the method to format the flag's value, part of the flag.Value interface.
|
||||
// The String method's output will be used in diagnostics.
|
||||
func (dep *DefaultEntryPoints) String() string {
|
||||
return strings.Join(*dep, ",")
|
||||
}
|
||||
|
||||
// Set is the method to set the flag value, part of the flag.Value interface.
|
||||
// Set's argument is a string to be parsed to set the flag.
|
||||
// It's a comma-separated list, so we split it.
|
||||
func (dep *DefaultEntryPoints) Set(value string) error {
|
||||
entrypoints := strings.Split(value, ",")
|
||||
if len(entrypoints) == 0 {
|
||||
return errors.New("Bad DefaultEntryPoints format: " + value)
|
||||
}
|
||||
for _, entrypoint := range entrypoints {
|
||||
*dep = append(*dep, entrypoint)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Get returns the DefaultEntryPoints
|
||||
func (dep *DefaultEntryPoints) Get() interface{} {
|
||||
return DefaultEntryPoints(*dep)
|
||||
}
|
||||
|
||||
// SetValue sets the EntryPoints map with val
|
||||
func (dep *DefaultEntryPoints) SetValue(val interface{}) {
|
||||
*dep = DefaultEntryPoints(val.(DefaultEntryPoints))
|
||||
}
|
||||
|
||||
// Type is type of the struct
|
||||
func (dep *DefaultEntryPoints) Type() string {
|
||||
return fmt.Sprint("defaultentrypoints")
|
||||
}
|
||||
|
||||
// EntryPoints holds entry points configuration of the reverse proxy (ip, port, TLS...)
|
||||
type EntryPoints map[string]*EntryPoint
|
||||
|
||||
// String is the method to format the flag's value, part of the flag.Value interface.
|
||||
// The String method's output will be used in diagnostics.
|
||||
func (ep *EntryPoints) String() string {
|
||||
return fmt.Sprintf("%+v", *ep)
|
||||
}
|
||||
|
||||
// Set is the method to set the flag value, part of the flag.Value interface.
|
||||
// Set's argument is a string to be parsed to set the flag.
|
||||
// It's a comma-separated list, so we split it.
|
||||
func (ep *EntryPoints) Set(value string) error {
|
||||
regex := regexp.MustCompile("(?:Name:(?P<Name>\\S*))\\s*(?:Address:(?P<Address>\\S*))?\\s*(?:TLS:(?P<TLS>\\S*))?\\s*((?P<TLSACME>TLS))?\\s*(?:CA:(?P<CA>\\S*))?\\s*(?:Redirect.EntryPoint:(?P<RedirectEntryPoint>\\S*))?\\s*(?:Redirect.Regex:(?P<RedirectRegex>\\S*))?\\s*(?:Redirect.Replacement:(?P<RedirectReplacement>\\S*))?\\s*(?:Compress:(?P<Compress>\\S*))?")
|
||||
match := regex.FindAllStringSubmatch(value, -1)
|
||||
if match == nil {
|
||||
return errors.New("Bad EntryPoints format: " + value)
|
||||
}
|
||||
matchResult := match[0]
|
||||
result := make(map[string]string)
|
||||
for i, name := range regex.SubexpNames() {
|
||||
if i != 0 {
|
||||
result[name] = matchResult[i]
|
||||
}
|
||||
}
|
||||
var tls *TLS
|
||||
if len(result["TLS"]) > 0 {
|
||||
certs := Certificates{}
|
||||
if err := certs.Set(result["TLS"]); err != nil {
|
||||
return err
|
||||
}
|
||||
tls = &TLS{
|
||||
Certificates: certs,
|
||||
}
|
||||
} else if len(result["TLSACME"]) > 0 {
|
||||
tls = &TLS{
|
||||
Certificates: Certificates{},
|
||||
}
|
||||
}
|
||||
if len(result["CA"]) > 0 {
|
||||
files := strings.Split(result["CA"], ",")
|
||||
tls.ClientCAFiles = files
|
||||
}
|
||||
var redirect *Redirect
|
||||
if len(result["RedirectEntryPoint"]) > 0 || len(result["RedirectRegex"]) > 0 || len(result["RedirectReplacement"]) > 0 {
|
||||
redirect = &Redirect{
|
||||
EntryPoint: result["RedirectEntryPoint"],
|
||||
Regex: result["RedirectRegex"],
|
||||
Replacement: result["RedirectReplacement"],
|
||||
}
|
||||
}
|
||||
|
||||
compress := false
|
||||
if len(result["Compress"]) > 0 {
|
||||
compress = strings.EqualFold(result["Compress"], "enable") || strings.EqualFold(result["Compress"], "on")
|
||||
}
|
||||
|
||||
(*ep)[result["Name"]] = &EntryPoint{
|
||||
Address: result["Address"],
|
||||
TLS: tls,
|
||||
Redirect: redirect,
|
||||
Compress: compress,
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Get returns the EntryPoints map
|
||||
func (ep *EntryPoints) Get() interface{} {
|
||||
return EntryPoints(*ep)
|
||||
}
|
||||
|
||||
// SetValue sets the EntryPoints map with val
|
||||
func (ep *EntryPoints) SetValue(val interface{}) {
|
||||
*ep = EntryPoints(val.(EntryPoints))
|
||||
}
|
||||
|
||||
// Type is type of the struct
|
||||
func (ep *EntryPoints) Type() string {
|
||||
return fmt.Sprint("entrypoints")
|
||||
}
|
||||
|
||||
// EntryPoint holds an entry point configuration of the reverse proxy (ip, port, TLS...)
|
||||
type EntryPoint struct {
|
||||
Network string
|
||||
Address string
|
||||
TLS *TLS
|
||||
Redirect *Redirect
|
||||
Auth *types.Auth
|
||||
Compress bool
|
||||
}
|
||||
|
||||
// Redirect configures a redirection of an entry point to another, or to a URL
|
||||
type Redirect struct {
|
||||
EntryPoint string
|
||||
Regex string
|
||||
Replacement string
|
||||
}
|
||||
|
||||
// TLS configures TLS for an entry point
|
||||
type TLS struct {
|
||||
MinVersion string
|
||||
CipherSuites []string
|
||||
Certificates Certificates
|
||||
ClientCAFiles []string
|
||||
}
|
||||
|
||||
// Map of allowed TLS minimum versions
|
||||
var minVersion = map[string]uint16{
|
||||
`VersionTLS10`: tls.VersionTLS10,
|
||||
`VersionTLS11`: tls.VersionTLS11,
|
||||
`VersionTLS12`: tls.VersionTLS12,
|
||||
}
|
||||
|
||||
// Map of TLS CipherSuites from crypto/tls
|
||||
var cipherSuites = map[string]uint16{
|
||||
`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
|
||||
`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
|
||||
`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
|
||||
`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
|
||||
`TLS_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
|
||||
`TLS_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
|
||||
`TLS_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_RSA_WITH_AES_128_CBC_SHA,
|
||||
`TLS_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_RSA_WITH_AES_256_CBC_SHA,
|
||||
`TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
|
||||
`TLS_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
|
||||
}
|
||||
|
||||
// Certificates defines traefik certificates type
|
||||
// Certs and Keys could be either a file path, or the file content itself
|
||||
type Certificates []Certificate
|
||||
|
||||
//CreateTLSConfig creates a TLS config from Certificate structures
|
||||
func (certs *Certificates) CreateTLSConfig() (*tls.Config, error) {
|
||||
config := &tls.Config{}
|
||||
config.Certificates = []tls.Certificate{}
|
||||
certsSlice := []Certificate(*certs)
|
||||
for _, v := range certsSlice {
|
||||
isAPath := false
|
||||
_, errCert := os.Stat(v.CertFile)
|
||||
_, errKey := os.Stat(v.KeyFile)
|
||||
if errCert == nil {
|
||||
if errKey == nil {
|
||||
isAPath = true
|
||||
} else {
|
||||
return nil, fmt.Errorf("bad TLS Certificate KeyFile format, expected a path")
|
||||
}
|
||||
} else if errKey == nil {
|
||||
return nil, fmt.Errorf("bad TLS Certificate KeyFile format, expected a path")
|
||||
}
|
||||
|
||||
cert := tls.Certificate{}
|
||||
var err error
|
||||
if isAPath {
|
||||
cert, err = tls.LoadX509KeyPair(v.CertFile, v.KeyFile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else {
|
||||
cert, err = tls.X509KeyPair([]byte(v.CertFile), []byte(v.KeyFile))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
config.Certificates = append(config.Certificates, cert)
|
||||
}
|
||||
return config, nil
|
||||
}
|
||||
|
||||
// String is the method to format the flag's value, part of the flag.Value interface.
|
||||
// The String method's output will be used in diagnostics.
|
||||
func (certs *Certificates) String() string {
|
||||
if len(*certs) == 0 {
|
||||
return ""
|
||||
}
|
||||
var result []string
|
||||
for _, certificate := range *certs {
|
||||
result = append(result, certificate.CertFile+","+certificate.KeyFile)
|
||||
}
|
||||
return strings.Join(result, ";")
|
||||
}
|
||||
|
||||
// Set is the method to set the flag value, part of the flag.Value interface.
|
||||
// Set's argument is a string to be parsed to set the flag.
|
||||
// It's a comma-separated list, so we split it.
|
||||
func (certs *Certificates) Set(value string) error {
|
||||
certificates := strings.Split(value, ";")
|
||||
for _, certificate := range certificates {
|
||||
files := strings.Split(certificate, ",")
|
||||
if len(files) != 2 {
|
||||
return errors.New("Bad certificates format: " + value)
|
||||
}
|
||||
*certs = append(*certs, Certificate{
|
||||
CertFile: files[0],
|
||||
KeyFile: files[1],
|
||||
})
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Type is type of the struct
|
||||
func (certs *Certificates) Type() string {
|
||||
return fmt.Sprint("certificates")
|
||||
}
|
||||
|
||||
// Certificate holds a SSL cert/key pair
|
||||
// Certs and Key could be either a file path, or the file content itself
|
||||
type Certificate struct {
|
||||
CertFile string
|
||||
KeyFile string
|
||||
}
|
||||
|
||||
// Retry contains request retry config
|
||||
type Retry struct {
|
||||
Attempts int `description:"Number of attempts"`
|
||||
}
|
||||
|
||||
// NewTraefikDefaultPointersConfiguration creates a TraefikConfiguration with default pointer values
|
||||
func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
|
||||
//default Docker
|
||||
var defaultDocker provider.Docker
|
||||
defaultDocker.Watch = true
|
||||
defaultDocker.ExposedByDefault = true
|
||||
defaultDocker.Endpoint = "unix:///var/run/docker.sock"
|
||||
defaultDocker.SwarmMode = false
|
||||
|
||||
// default File
|
||||
var defaultFile provider.File
|
||||
defaultFile.Watch = true
|
||||
defaultFile.Filename = "" //needs equivalent to viper.ConfigFileUsed()
|
||||
|
||||
// default Web
|
||||
var defaultWeb WebProvider
|
||||
defaultWeb.Address = ":8080"
|
||||
defaultWeb.Statistics = &types.Statistics{
|
||||
RecentErrors: 10,
|
||||
}
|
||||
|
||||
// default Metrics
|
||||
defaultWeb.Metrics = &types.Metrics{
|
||||
Prometheus: &types.Prometheus{
|
||||
Buckets: types.Buckets{0.1, 0.3, 1.2, 5},
|
||||
},
|
||||
}
|
||||
|
||||
// default Marathon
|
||||
var defaultMarathon provider.Marathon
|
||||
defaultMarathon.Watch = true
|
||||
defaultMarathon.Endpoint = "http://127.0.0.1:8080"
|
||||
defaultMarathon.ExposedByDefault = true
|
||||
defaultMarathon.Constraints = types.Constraints{}
|
||||
defaultMarathon.DialerTimeout = 60
|
||||
defaultMarathon.KeepAlive = 10
|
||||
|
||||
// default Consul
|
||||
var defaultConsul provider.Consul
|
||||
defaultConsul.Watch = true
|
||||
defaultConsul.Endpoint = "127.0.0.1:8500"
|
||||
defaultConsul.Prefix = "traefik"
|
||||
defaultConsul.Constraints = types.Constraints{}
|
||||
|
||||
// default ConsulCatalog
|
||||
var defaultConsulCatalog provider.ConsulCatalog
|
||||
defaultConsulCatalog.Endpoint = "127.0.0.1:8500"
|
||||
defaultConsulCatalog.Constraints = types.Constraints{}
|
||||
|
||||
// default Etcd
|
||||
var defaultEtcd provider.Etcd
|
||||
defaultEtcd.Watch = true
|
||||
defaultEtcd.Endpoint = "127.0.0.1:2379"
|
||||
defaultEtcd.Prefix = "/traefik"
|
||||
defaultEtcd.Constraints = types.Constraints{}
|
||||
|
||||
//default Zookeeper
|
||||
var defaultZookeeper provider.Zookepper
|
||||
defaultZookeeper.Watch = true
|
||||
defaultZookeeper.Endpoint = "127.0.0.1:2181"
|
||||
defaultZookeeper.Prefix = "/traefik"
|
||||
defaultZookeeper.Constraints = types.Constraints{}
|
||||
|
||||
//default Boltdb
|
||||
var defaultBoltDb provider.BoltDb
|
||||
defaultBoltDb.Watch = true
|
||||
defaultBoltDb.Endpoint = "127.0.0.1:4001"
|
||||
defaultBoltDb.Prefix = "/traefik"
|
||||
defaultBoltDb.Constraints = types.Constraints{}
|
||||
|
||||
//default Kubernetes
|
||||
var defaultKubernetes provider.Kubernetes
|
||||
defaultKubernetes.Watch = true
|
||||
defaultKubernetes.Endpoint = ""
|
||||
defaultKubernetes.LabelSelector = ""
|
||||
defaultKubernetes.Constraints = types.Constraints{}
|
||||
|
||||
// default Mesos
|
||||
var defaultMesos provider.Mesos
|
||||
defaultMesos.Watch = true
|
||||
defaultMesos.Endpoint = "http://127.0.0.1:5050"
|
||||
defaultMesos.ExposedByDefault = true
|
||||
defaultMesos.Constraints = types.Constraints{}
|
||||
defaultMesos.RefreshSeconds = 30
|
||||
defaultMesos.ZkDetectionTimeout = 30
|
||||
defaultMesos.StateTimeoutSecond = 30
|
||||
|
||||
//default ECS
|
||||
var defaultECS provider.ECS
|
||||
defaultECS.Watch = true
|
||||
defaultECS.ExposedByDefault = true
|
||||
defaultECS.RefreshSeconds = 15
|
||||
defaultECS.Cluster = "default"
|
||||
defaultECS.Constraints = types.Constraints{}
|
||||
|
||||
//default Rancher
|
||||
var defaultRancher provider.Rancher
|
||||
defaultRancher.Watch = true
|
||||
defaultRancher.ExposedByDefault = true
|
||||
|
||||
defaultConfiguration := GlobalConfiguration{
|
||||
Docker: &defaultDocker,
|
||||
File: &defaultFile,
|
||||
Web: &defaultWeb,
|
||||
Marathon: &defaultMarathon,
|
||||
Consul: &defaultConsul,
|
||||
ConsulCatalog: &defaultConsulCatalog,
|
||||
Etcd: &defaultEtcd,
|
||||
Zookeeper: &defaultZookeeper,
|
||||
Boltdb: &defaultBoltDb,
|
||||
Kubernetes: &defaultKubernetes,
|
||||
Mesos: &defaultMesos,
|
||||
ECS: &defaultECS,
|
||||
Rancher: &defaultRancher,
|
||||
Retry: &Retry{},
|
||||
}
|
||||
|
||||
//default Rancher
|
||||
//@TODO: ADD
|
||||
|
||||
return &TraefikConfiguration{
|
||||
GlobalConfiguration: defaultConfiguration,
|
||||
}
|
||||
}
|
||||
|
||||
// NewTraefikConfiguration creates a TraefikConfiguration with default values
|
||||
func NewTraefikConfiguration() *TraefikConfiguration {
|
||||
return &TraefikConfiguration{
|
||||
GlobalConfiguration: GlobalConfiguration{
|
||||
GraceTimeOut: 10,
|
||||
AccessLogsFile: "",
|
||||
TraefikLogsFile: "",
|
||||
LogLevel: "ERROR",
|
||||
EntryPoints: map[string]*EntryPoint{},
|
||||
Constraints: types.Constraints{},
|
||||
DefaultEntryPoints: []string{},
|
||||
ProvidersThrottleDuration: time.Duration(2 * time.Second),
|
||||
MaxIdleConnsPerHost: 200,
|
||||
CheckNewVersion: true,
|
||||
},
|
||||
ConfigFile: "",
|
||||
}
|
||||
}
|
||||
|
||||
type configs map[string]*types.Configuration
|
@@ -1,41 +1,11 @@
|
||||
[Unit]
|
||||
Description=Traefik
|
||||
Documentation=https://docs.traefik.io
|
||||
#After=network-online.target
|
||||
#AssertFileIsExecutable=/usr/bin/traefik
|
||||
#AssertPathExists=/etc/traefik/traefik.toml
|
||||
|
||||
[Service]
|
||||
# Run traefik as its own user (create new user with: useradd -r -s /bin/false -U -M traefik)
|
||||
#User=traefik
|
||||
#AmbientCapabilities=CAP_NET_BIND_SERVICE
|
||||
|
||||
# configure service behavior
|
||||
Type=notify
|
||||
#ExecStart=/usr/bin/traefik --configFile=/etc/traefik/traefik.toml
|
||||
ExecStart=/usr/bin/traefik --configFile=/etc/traefik.toml
|
||||
Restart=always
|
||||
WatchdogSec=1s
|
||||
|
||||
# lock down system access
|
||||
# prohibit any operating system and configuration modification
|
||||
#ProtectSystem=strict
|
||||
# create separate, new (and empty) /tmp and /var/tmp filesystems
|
||||
#PrivateTmp=true
|
||||
# make /home directories inaccessible
|
||||
#ProtectHome=true
|
||||
# turns off access to physical devices (/dev/...)
|
||||
#PrivateDevices=true
|
||||
# make kernel settings (procfs and sysfs) read-only
|
||||
#ProtectKernelTunables=true
|
||||
# make cgroups /sys/fs/cgroup read-only
|
||||
#ProtectControlGroups=true
|
||||
|
||||
# allow writing of acme.json
|
||||
#ReadWritePaths=/etc/traefik/acme.json
|
||||
# depending on log and entrypoint configuration, you may need to allow writing to other paths, too
|
||||
|
||||
# limit number of processes in this unit
|
||||
#LimitNPROC=1
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
|
@@ -1 +0,0 @@
|
||||
site/
|
@@ -1,11 +0,0 @@
|
||||
{
|
||||
"no-hard-tabs": false,
|
||||
"MD007": { "indent": 4 },
|
||||
"MD009": false,
|
||||
"MD013": false,
|
||||
"MD024": false,
|
||||
"MD026": false,
|
||||
"MD033": false,
|
||||
"MD034": false,
|
||||
"MD036": false
|
||||
}
|
@@ -1,52 +0,0 @@
|
||||
|
||||
#######
|
||||
# This Makefile contains all targets related to the documentation
|
||||
#######
|
||||
|
||||
DOCS_VERIFY_SKIP ?= false
|
||||
DOCS_LINT_SKIP ?= false
|
||||
|
||||
TRAEFIK_DOCS_BUILD_IMAGE ?= traefik-docs
|
||||
TRAEFIK_DOCS_CHECK_IMAGE ?= $(TRAEFIK_DOCS_BUILD_IMAGE)-check
|
||||
|
||||
SITE_DIR := $(CURDIR)/site
|
||||
|
||||
DOCKER_RUN_DOC_PORT := 8000
|
||||
DOCKER_RUN_DOC_MOUNTS := -v $(CURDIR):/mkdocs
|
||||
DOCKER_RUN_DOC_OPTS := --rm $(DOCKER_RUN_DOC_MOUNTS) -p $(DOCKER_RUN_DOC_PORT):8000
|
||||
|
||||
# Default: generates the documentation into $(SITE_DIR)
|
||||
docs: docs-clean docs-image docs-lint docs-build docs-verify
|
||||
|
||||
# Writer Mode: build and serve docs on http://localhost:8000 with livereload
|
||||
docs-serve: docs-image
|
||||
docker run $(DOCKER_RUN_DOC_OPTS) $(TRAEFIK_DOCS_BUILD_IMAGE) mkdocs serve
|
||||
|
||||
# Utilities Targets for each step
|
||||
docs-image:
|
||||
docker build -t $(TRAEFIK_DOCS_BUILD_IMAGE) -f docs.Dockerfile ./
|
||||
|
||||
docs-build: docs-image
|
||||
docker run $(DOCKER_RUN_DOC_OPTS) $(TRAEFIK_DOCS_BUILD_IMAGE) sh -c "mkdocs build \
|
||||
&& chown -R $(shell id -u):$(shell id -g) ./site"
|
||||
|
||||
docs-verify: docs-build
|
||||
@if [ "$(DOCS_VERIFY_SKIP)" != "true" ]; then \
|
||||
docker build -t $(TRAEFIK_DOCS_CHECK_IMAGE) -f check.Dockerfile ./; \
|
||||
docker run --rm -v $(CURDIR):/app $(TRAEFIK_DOCS_CHECK_IMAGE) /verify.sh; \
|
||||
else \
|
||||
echo "DOCS_VERIFY_SKIP is true: no verification done."; \
|
||||
fi
|
||||
|
||||
docs-lint:
|
||||
@if [ "$(DOCS_LINT_SKIP)" != "true" ]; then \
|
||||
docker build -t $(TRAEFIK_DOCS_CHECK_IMAGE) -f check.Dockerfile ./ && \
|
||||
docker run --rm -v $(CURDIR):/app $(TRAEFIK_DOCS_CHECK_IMAGE) /lint.sh; \
|
||||
else \
|
||||
echo "DOCS_LINT_SKIP is true: no linting done."; \
|
||||
fi
|
||||
|
||||
docs-clean:
|
||||
rm -rf $(SITE_DIR)
|
||||
|
||||
.PHONY: all docs-verify docs docs-clean docs-build docs-lint
|
386
docs/basics.md
Normal file
@@ -0,0 +1,386 @@
|
||||
|
||||
# Concepts
|
||||
|
||||
Let's take our example from the [overview](https://docs.traefik.io/#overview) again:
|
||||
|
||||
|
||||
> Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
|
||||
> If you want your users to access some of your microservices from the Internet, you will have to use a reverse proxy and configure it using virtual hosts or prefix paths:
|
||||
|
||||
> - domain `api.domain.com` will point to the microservice `api` in your private network
|
||||
> - path `domain.com/web` will point to the microservice `web` in your private network
|
||||
> - domain `backoffice.domain.com` will point to the microservice `backoffice` in your private network, load-balancing across your multiple instances
|
||||
|
||||
> 
|
||||
|
||||
Let's zoom in on Træfɪk and have an overview of its internal architecture:
|
||||
|
||||
|
||||

|
||||
|
||||
- Incoming requests end on [entrypoints](#entrypoints) which, as the name suggests, are the network entry points into Træfɪk (listening port, SSL, traffic redirection...).
|
||||
- Traffic is then forwarded to a matching [frontend](#frontends). A frontend defines routes from [entrypoints](#entrypoints) to [backends](#backends).
|
||||
Routes are created using request fields (`Host`, `Path`, `Headers`...) and may or may not match a request.
|
||||
- The [frontend](#frontends) will then send the request to a [backend](#backends). A backend is composed of one or more [servers](#servers) and a load-balancing strategy.
|
||||
- Finally, the [server](#servers) will forward the request to the corresponding microservice in the private network.
|
||||
|
||||
## Entrypoints
|
||||
|
||||
Entrypoints are the network entry points into Træfɪk.
|
||||
They can be defined using:
|
||||
|
||||
- a port (80, 443...)
|
||||
- SSL (Certificates, Keys, authentication with a client certificate signed by a trusted CA...)
|
||||
- redirection to another entrypoint (redirect `HTTP` to `HTTPS`)
|
||||
|
||||
Here is an example of entrypoints definition:
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
[entryPoints.http.redirect]
|
||||
entryPoint = "https"
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "tests/traefik.crt"
|
||||
keyFile = "tests/traefik.key"
|
||||
```
|
||||
|
||||
- Two entrypoints are defined: `http` and `https`.
|
||||
- `http` listens on port `80` and `https` on port `443`.
|
||||
- We enable SSL on `https` by giving a certificate and a key.
|
||||
- We also redirect all the traffic from entrypoint `http` to `https`.
|
||||
|
||||
And here is another example with client certificate authentication:
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
clientCAFiles = ["tests/clientca1.crt", "tests/clientca2.crt"]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "tests/traefik.crt"
|
||||
keyFile = "tests/traefik.key"
|
||||
```
|
||||
|
||||
- We enable SSL on `https` by giving a certificate and a key.
|
||||
- One or several files containing Certificate Authorities in PEM format are added.
|
||||
- It is possible to have multiple CAs in the same file or to keep them in separate files.
|
||||
|
||||
## Frontends
|
||||
|
||||
A frontend is a set of rules that forwards the incoming traffic from an entrypoint to a backend.
|
||||
Frontends can be defined using the following rules:
|
||||
|
||||
- `Headers: Content-Type, application/json`: Headers adds a matcher for request header values. It accepts a sequence of key/value pairs to be matched.
|
||||
- `HeadersRegexp: Content-Type, application/(text|json)`: Regular expressions can be used with headers as well. It accepts a sequence of key/value pairs, where the value has regex support.
|
||||
- `Host: traefik.io, www.traefik.io`: Match request host with given host list.
|
||||
- `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io`: Adds a matcher for the URL hosts. It accepts templates with zero or more URL variables enclosed by `{}`. Variables can define an optional regexp pattern to be matched.
|
||||
- `Method: GET, POST, PUT`: Method adds a matcher for HTTP methods. It accepts a sequence of one or more methods to be matched.
|
||||
- `Path: /products/, /articles/{category}/{id:[0-9]+}`: Path adds a matcher for the URL paths. It accepts templates with zero or more URL variables enclosed by `{}`.
|
||||
- `PathStrip`: Same as `Path` but strips the given prefix from the request URL's Path.
|
||||
- `PathPrefix`: PathPrefix adds a matcher for the URL path prefixes. This matches if the given template is a prefix of the full URL path.
|
||||
- `PathPrefixStrip`: Same as `PathPrefix` but strips the given prefix from the request URL's Path.
|
||||
- `AddPrefix`: Adds a prefix to the request URL's Path.
|
||||
|
||||
You can use multiple values for a rule by separating them with `,`.
|
||||
You can use multiple rules by separating them with `;`.
|
||||
|
||||
You can optionally enable `passHostHeader` to forward the client `Host` header to the backend.
|
||||
|
||||
Here is an example of frontends definition:
|
||||
|
||||
```toml
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
backend = "backend2"
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "Host:test.localhost,test2.localhost"
|
||||
[frontends.frontend2]
|
||||
backend = "backend1"
|
||||
passHostHeader = true
|
||||
priority = 10
|
||||
entrypoints = ["https"] # overrides defaultEntryPoints
|
||||
[frontends.frontend2.routes.test_1]
|
||||
rule = "HostRegexp:localhost,{subdomain:[a-z]+}.localhost"
|
||||
[frontends.frontend3]
|
||||
backend = "backend2"
|
||||
[frontends.frontend3.routes.test_1]
|
||||
rule = "Host:test3.localhost;Path:/test"
|
||||
```
|
||||
|
||||
- Three frontends are defined: `frontend1`, `frontend2` and `frontend3`
|
||||
- `frontend1` will forward the traffic to the `backend2` if the rule `Host:test.localhost,test2.localhost` is matched
|
||||
- `frontend2` will forward the traffic to the `backend1` if the rule `Host:localhost,{subdomain:[a-z]+}.localhost` is matched (forwarding client `Host` header to the backend)
|
||||
- `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched
|
||||
|
||||
### Combining multiple rules
|
||||
|
||||
As seen in the previous example, you can combine multiple rules.
|
||||
In the TOML file, you can use multiple routes:
|
||||
|
||||
```toml
|
||||
[frontends.frontend3]
|
||||
backend = "backend2"
|
||||
[frontends.frontend3.routes.test_1]
|
||||
rule = "Host:test3.localhost"
|
||||
[frontends.frontend3.routes.test_2]
|
||||
rule = "Path:/test"
|
||||
```
|
||||
|
||||
Here `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched.
|
||||
You can also use the `;` separator notation, which gives the same result:
|
||||
|
||||
```toml
|
||||
[frontends.frontend3]
|
||||
backend = "backend2"
|
||||
[frontends.frontend3.routes.test_1]
|
||||
rule = "Host:test3.localhost;Path:/test"
|
||||
```
|
||||
|
||||
Finally, you can create a rule to bind multiple domains or paths to a frontend, using the `,` separator:
|
||||
|
||||
```toml
|
||||
[frontends.frontend2]
|
||||
[frontends.frontend2.routes.test_1]
|
||||
rule = "Host:test1.localhost,test2.localhost"
|
||||
[frontends.frontend3]
|
||||
backend = "backend2"
|
||||
[frontends.frontend3.routes.test_1]
|
||||
rule = "Path:/test1,/test2"
|
||||
```
|
||||
|
||||
### Priorities
|
||||
|
||||
By default, routes will be sorted (in descending order) using rule length (to avoid path overlap):
|
||||
`PathPrefix:/12345` will be matched before `PathPrefix:/1234`, which will in turn be matched before `PathPrefix:/1`.
|
||||
|
||||
You can customize priority by frontend:
|
||||
|
||||
```toml
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
backend = "backend1"
|
||||
priority = 10
|
||||
passHostHeader = true
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "PathPrefix:/to"
|
||||
[frontends.frontend2]
|
||||
priority = 5
|
||||
backend = "backend2"
|
||||
passHostHeader = true
|
||||
[frontends.frontend2.routes.test_1]
|
||||
rule = "PathPrefix:/toto"
|
||||
```
|
||||
|
||||
Here, `frontend1` will be matched before `frontend2` (`10 > 5`).
|
||||
|
||||
## Backends
|
||||
|
||||
A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of HTTP servers.
|
||||
Various methods of load-balancing are supported (a short configuration sketch follows this list):
|
||||
|
||||
- `wrr`: Weighted Round Robin
|
||||
- `drr`: Dynamic Round Robin: increases weights on servers that perform better than others. It also rolls back to original weights if the servers have changed.
|
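For instance, the method is selected per backend (a minimal sketch reusing the TOML layout of the examples on this page; the full servers example further down also selects `drr`):

```toml
[backends]
  [backends.backend1]
    # Select the load-balancing strategy for this backend (the default is "wrr")
    [backends.backend1.loadbalancer]
      method = "drr"
```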
||||
|
||||
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
|
||||
The initial state is Standby: the circuit breaker (CB) observes the statistics and does not modify the request.
|
||||
If the condition matches, the CB enters the Tripped state, where it responds with a predefined code or redirects to another frontend.
|
||||
Once the Tripped timer expires, the CB enters the Recovering state and resets all stats.
|
||||
If the condition does not match and the recovery timer expires, the CB returns to the Standby state.
|
||||
|
||||
It can be configured using:
|
||||
|
||||
- Methods: `LatencyAtQuantileMS`, `NetworkErrorRatio`, `ResponseCodeRatio`
|
||||
- Operators: `AND`, `OR`, `EQ`, `NEQ`, `LT`, `LE`, `GT`, `GE`
|
||||
|
||||
For example:
|
||||
|
||||
- `NetworkErrorRatio() > 0.5`: watch the error ratio over a 10-second sliding window for a frontend
|
||||
- `LatencyAtQuantileMS(50.0) > 50`: watch the latency at the given quantile (here the 50th percentile), in milliseconds.
|
||||
- `ResponseCodeRatio(500, 600, 0, 600) > 0.5`: ratio of response codes in range [500-600) to [0-600)
|
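Expressed in TOML, attaching such an expression to a backend looks like this (a minimal sketch; the same expression is used in the full backend example further down):

```toml
[backends]
  [backends.backend1]
    # Trip the circuit breaker when more than half of the requests hit network errors
    [backends.backend1.circuitbreaker]
      expression = "NetworkErrorRatio() > 0.5"
```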
||||
|
||||
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can
|
||||
also be applied to each backend.
|
||||
|
||||
Maximum connections can be configured by specifying an integer value for `maxconn.amount` and
|
||||
`maxconn.extractorfunc`, which is a strategy used to determine how to categorize requests in order to
|
||||
evaluate the maximum connections.
|
||||
|
||||
For example:
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.maxconn]
|
||||
amount = 10
|
||||
extractorfunc = "request.host"
|
||||
```
|
||||
|
||||
- `backend1` will return `HTTP code 429 Too Many Requests` if there are already 10 requests in progress for the same Host header.
|
||||
- Another possible value for `extractorfunc` is `client.ip`, which will categorize requests based on the client source IP (see the sketch after this list).
|
||||
- Lastly, `extractorfunc` can take the value of `request.header.ANY_HEADER`, which will categorize requests based on the `ANY_HEADER` value that you provide.
|
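Here is the same limit keyed on the client source IP instead of the `Host` header (a minimal variation of the example above, not an additional official sample):

```toml
[backends]
  [backends.backend1]
    [backends.backend1.maxconn]
      # At most 10 simultaneous requests per client IP
      amount = 10
      extractorfunc = "client.ip"
```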
||||
|
||||
Sticky sessions are supported with both load balancers. When sticky sessions are enabled, a cookie called `_TRAEFIK_BACKEND` is set on the initial
|
||||
request. On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy. If not, a new backend
|
||||
will be assigned.
|
||||
|
||||
For example:
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.loadbalancer]
|
||||
sticky = true
|
||||
```
|
||||
|
||||
A health check can be configured in order to remove a backend from LB rotation
|
||||
as long as it keeps returning HTTP status codes other than 200 OK to HTTP GET
|
||||
requests periodically carried out by Traefik. The check is defined by a path
|
||||
appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how
|
||||
often the health check should be executed (the default being 30 seconds). Each
|
||||
backend must respond to the health check within 5 seconds.
|
||||
|
||||
A recovering backend returning 200 OK responses again is returned to the
|
||||
LB rotation pool.
|
||||
|
||||
For example:
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.healthcheck]
|
||||
path = "/health"
|
||||
interval = "10s"
|
||||
```
|
||||
|
||||
## Servers
|
||||
|
||||
Servers are simply defined using a `URL`. You can also apply a custom `weight` to each server (this will be used for load-balancing).
|
||||
|
||||
Here is an example of backends and servers definition:
|
||||
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.circuitbreaker]
|
||||
expression = "NetworkErrorRatio() > 0.5"
|
||||
[backends.backend1.servers.server1]
|
||||
url = "http://172.17.0.2:80"
|
||||
weight = 10
|
||||
[backends.backend1.servers.server2]
|
||||
url = "http://172.17.0.3:80"
|
||||
weight = 1
|
||||
[backends.backend2]
|
||||
[backends.backend2.LoadBalancer]
|
||||
method = "drr"
|
||||
[backends.backend2.servers.server1]
|
||||
url = "http://172.17.0.4:80"
|
||||
weight = 1
|
||||
[backends.backend2.servers.server2]
|
||||
url = "http://172.17.0.5:80"
|
||||
weight = 2
|
||||
```
|
||||
|
||||
- Two backends are defined: `backend1` and `backend2`
|
||||
- `backend1` will forward the traffic to two servers: `http://172.17.0.2:80` with weight `10` and `http://172.17.0.3:80` with weight `1` using the default `wrr` load-balancing strategy.
|
||||
- `backend2` will forward the traffic to two servers: `http://172.17.0.4:80` with weight `1` and `http://172.17.0.5:80` with weight `2` using the `drr` load-balancing strategy.
|
||||
- a circuit breaker is added on `backend1` using the expression `NetworkErrorRatio() > 0.5`: watch the error ratio over a 10-second sliding window
|
||||
|
||||
# Configuration
|
||||
|
||||
Træfɪk's configuration has two parts:
|
||||
|
||||
- The [static Træfɪk configuration](/basics#static-trfk-configuration) which is loaded only at the beginning.
|
||||
- The [dynamic Træfɪk configuration](/basics#dynamic-trfk-configuration) which can be hot-reloaded (no need to restart the process).
|
||||
|
||||
|
||||
## Static Træfɪk configuration
|
||||
|
||||
The static configuration is the global configuration which sets up connections to configuration backends and entrypoints.
|
||||
|
||||
Træfɪk can be configured using many configuration sources with the following precedence order.
|
||||
Each item takes precedence over the item below it:
|
||||
|
||||
- [Key-value Store](/basics/#key-value-stores)
|
||||
- [Arguments](/basics/#arguments)
|
||||
- [Configuration file](/basics/#configuration-file)
|
||||
- Default
|
||||
|
||||
This means that arguments override the configuration file, and the Key-value store overrides arguments.
|
||||
|
||||
### Configuration file
|
||||
|
||||
By default, Træfɪk will try to find a `traefik.toml` in the following places:
|
||||
|
||||
- `/etc/traefik/`
|
||||
- `$HOME/.traefik/`
|
||||
- `.` *the working directory*
|
||||
|
||||
You can override this by setting a `configFile` argument:
|
||||
|
||||
```bash
|
||||
$ traefik --configFile=foo/bar/myconfigfile.toml
|
||||
```
|
||||
|
||||
Please refer to the [global configuration](/toml/#global-configuration) section to get documentation on it.
|
||||
|
||||
### Arguments
|
||||
|
||||
Each argument (and command) is described in the help section:
|
||||
|
||||
```bash
|
||||
$ traefik --help
|
||||
```
|
||||
|
||||
Note that all default values will be displayed as well.
|
||||
|
||||
### Key-value stores
|
||||
|
||||
Træfɪk supports several Key-value stores (a short configuration sketch follows this list):
|
||||
|
||||
- [Consul](https://consul.io)
|
||||
- [etcd](https://coreos.com/etcd/)
|
||||
- [ZooKeeper](https://zookeeper.apache.org/)
|
||||
- [boltdb](https://github.com/boltdb/bolt)
|
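As a rough illustration, enabling one of these stores as a configuration source looks like the following (a minimal sketch that assumes a local Consul agent reachable on `127.0.0.1:8500` and the default `traefik` key prefix; the user guide linked below remains the authoritative reference):

```toml
# Enable the Consul Key-value store as a configuration backend
[consul]
  endpoint = "127.0.0.1:8500"
  watch = true
  prefix = "traefik"
```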
||||
|
||||
Please refer to the [User Guide Key-value store configuration](/user-guide/kv-config/) section to get documentation on it.
|
||||
|
||||
## Dynamic Træfɪk configuration
|
||||
|
||||
The dynamic configuration concerns:
|
||||
|
||||
- [Frontends](/basics/#frontends)
|
||||
- [Backends](/basics/#backends)
|
||||
- [Servers](/basics/#servers)
|
||||
|
||||
Træfɪk can hot-reload those rules, which can be provided by [multiple configuration backends](/toml/#configuration-backends).
|
||||
|
||||
You only need to enable the `watch` option to make Træfɪk watch configuration backend changes and generate its configuration automatically.
|
||||
Routes to services will be created and updated instantly whenever changes occur.
|
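For example, with the Docker backend (a minimal sketch; the endpoint is assumed to be the local Docker socket):

```toml
[docker]
  # Watch Docker events and regenerate the routing configuration on the fly
  endpoint = "unix:///var/run/docker.sock"
  watch = true
```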
||||
|
||||
Please refer to the [configuration backends](/toml/#configuration-backends) section to get documentation on it.
|
||||
|
||||
# Commands
|
||||
|
||||
Usage: `traefik [command] [--flag=flag_argument]`
|
||||
|
||||
List of available Træfɪk commands with descriptions:
|
||||
|
||||
- `version`: Prints the version
|
||||
- `storeconfig`: Stores the static Traefik configuration into a Key-value store. Please refer to the [Store Træfɪk configuration](/user-guide/kv-config/#store-trfk-configuration) section to get documentation on it.
|
||||
|
||||
Each command may have related flags.
|
||||
All those related flags will be displayed with:
|
||||
|
||||
```bash
|
||||
$ traefik [command] --help
|
||||
```
|
||||
|
||||
Note that each command is described at the beginning of the help section:
|
||||
|
||||
```bash
|
||||
$ traefik --help
|
||||
```
|
||||
|
213
docs/benchmarks.md
Normal file
@@ -0,0 +1,213 @@
|
||||
# Benchmarks
|
||||
|
||||
## Configuration
|
||||
|
||||
I would like to thank [vincentbernat](https://github.com/vincentbernat) from [exoscale.ch](https://www.exoscale.ch), who kindly provided the infrastructure needed for the benchmarks.
|
||||
|
||||
I used 4 VMs for the tests with the following configuration:
|
||||
|
||||
- 32 GB RAM
|
||||
- 8 CPU Cores
|
||||
- 10 GB SSD
|
||||
- Ubuntu 14.04 LTS 64-bit
|
||||
|
||||
## Setup
|
||||
|
||||
1. One VM used to launch the benchmarking tool [wrk](https://github.com/wg/wrk)
|
||||
2. One VM for traefik (v1.0.0-beta.416) / nginx (v1.4.6)
|
||||
3. Two VMs for 2 backend servers in go [whoami](https://github.com/emilevauge/whoamI/)
|
||||
|
||||
Each VM has been tuned using the following limits:
|
||||
|
||||
```bash
|
||||
sysctl -w fs.file-max="9999999"
|
||||
sysctl -w fs.nr_open="9999999"
|
||||
sysctl -w net.core.netdev_max_backlog="4096"
|
||||
sysctl -w net.core.rmem_max="16777216"
|
||||
sysctl -w net.core.somaxconn="65535"
|
||||
sysctl -w net.core.wmem_max="16777216"
|
||||
sysctl -w net.ipv4.ip_local_port_range="1025 65535"
|
||||
sysctl -w net.ipv4.tcp_fin_timeout="30"
|
||||
sysctl -w net.ipv4.tcp_keepalive_time="30"
|
||||
sysctl -w net.ipv4.tcp_max_syn_backlog="20480"
|
||||
sysctl -w net.ipv4.tcp_max_tw_buckets="400000"
|
||||
sysctl -w net.ipv4.tcp_no_metrics_save="1"
|
||||
sysctl -w net.ipv4.tcp_syn_retries="2"
|
||||
sysctl -w net.ipv4.tcp_synack_retries="2"
|
||||
sysctl -w net.ipv4.tcp_tw_recycle="1"
|
||||
sysctl -w net.ipv4.tcp_tw_reuse="1"
|
||||
sysctl -w vm.min_free_kbytes="65536"
|
||||
sysctl -w vm.overcommit_memory="1"
|
||||
ulimit -n 9999999
|
||||
```
|
||||
|
||||
### Nginx
|
||||
|
||||
Here is the Nginx configuration file used, `/etc/nginx/nginx.conf`:
|
||||
|
||||
```
|
||||
user www-data;
|
||||
worker_processes auto;
|
||||
worker_rlimit_nofile 200000;
|
||||
pid /var/run/nginx.pid;
|
||||
|
||||
events {
|
||||
worker_connections 10000;
|
||||
use epoll;
|
||||
multi_accept on;
|
||||
}
|
||||
|
||||
http {
|
||||
sendfile on;
|
||||
tcp_nopush on;
|
||||
tcp_nodelay on;
|
||||
keepalive_timeout 300;
|
||||
keepalive_requests 10000;
|
||||
types_hash_max_size 2048;
|
||||
|
||||
open_file_cache max=200000 inactive=300s;
|
||||
open_file_cache_valid 300s;
|
||||
open_file_cache_min_uses 2;
|
||||
open_file_cache_errors on;
|
||||
|
||||
server_tokens off;
|
||||
dav_methods off;
|
||||
|
||||
include /etc/nginx/mime.types;
|
||||
default_type application/octet-stream;
|
||||
|
||||
access_log /var/log/nginx/access.log combined;
|
||||
error_log /var/log/nginx/error.log warn;
|
||||
|
||||
gzip off;
|
||||
gzip_vary off;
|
||||
|
||||
include /etc/nginx/conf.d/*.conf;
|
||||
include /etc/nginx/sites-enabled/*.conf;
|
||||
}
|
||||
```
|
||||
|
||||
Here is the Nginx vhost file used:
|
||||
|
||||
```
|
||||
upstream whoami {
|
||||
server IP-whoami1:80;
|
||||
server IP-whoami2:80;
|
||||
keepalive 300;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 8001;
|
||||
server_name test.traefik;
|
||||
access_log off;
|
||||
error_log /dev/null crit;
|
||||
if ($host != "test.traefik") {
|
||||
return 404;
|
||||
}
|
||||
location / {
|
||||
proxy_pass http://whoami;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Connection "";
|
||||
proxy_set_header X-Forwarded-Host $host;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Traefik
|
||||
|
||||
Here is the `traefik.toml` file used:
|
||||
|
||||
```
|
||||
MaxIdleConnsPerHost = 100000
|
||||
defaultEntryPoints = ["http"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":8000"
|
||||
|
||||
[file]
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.servers.server1]
|
||||
url = "http://IP-whoami1:80"
|
||||
weight = 1
|
||||
[backends.backend1.servers.server2]
|
||||
url = "http://IP-whoami2:80"
|
||||
weight = 1
|
||||
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
backend = "backend1"
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "Host: test.traefik"
|
||||
```
|
||||
|
||||
## Results
|
||||
|
||||
### whoami:
|
||||
```
|
||||
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-whoami:80/bench
|
||||
Running 1m test @ http://IP-whoami:80/bench
|
||||
20 threads and 1000 connections
|
||||
Thread Stats Avg Stdev Max +/- Stdev
|
||||
Latency 70.28ms 134.72ms 1.91s 89.94%
|
||||
Req/Sec 2.92k 742.42 8.78k 68.80%
|
||||
Latency Distribution
|
||||
50% 10.63ms
|
||||
75% 75.64ms
|
||||
90% 205.65ms
|
||||
99% 668.28ms
|
||||
3476705 requests in 1.00m, 384.61MB read
|
||||
Socket errors: connect 0, read 0, write 0, timeout 103
|
||||
Requests/sec: 57894.35
|
||||
Transfer/sec: 6.40MB
|
||||
```
|
||||
|
||||
### nginx:
|
||||
```
|
||||
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-nginx:8001/bench
|
||||
Running 1m test @ http://IP-nginx:8001/bench
|
||||
20 threads and 1000 connections
|
||||
Thread Stats Avg Stdev Max +/- Stdev
|
||||
Latency 101.25ms 180.09ms 1.99s 89.34%
|
||||
Req/Sec 1.69k 567.69 9.39k 72.62%
|
||||
Latency Distribution
|
||||
50% 15.46ms
|
||||
75% 129.11ms
|
||||
90% 302.44ms
|
||||
99% 846.59ms
|
||||
2018427 requests in 1.00m, 298.36MB read
|
||||
Socket errors: connect 0, read 0, write 0, timeout 90
|
||||
Requests/sec: 33591.67
|
||||
Transfer/sec: 4.97MB
|
||||
```
|
||||
|
||||
### traefik:
|
||||
```
|
||||
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-traefik:8000/bench
|
||||
Running 1m test @ http://IP-traefik:8000/bench
|
||||
20 threads and 1000 connections
|
||||
Thread Stats Avg Stdev Max +/- Stdev
|
||||
Latency 91.72ms 150.43ms 2.00s 90.50%
|
||||
Req/Sec 1.43k 266.37 2.97k 69.77%
|
||||
Latency Distribution
|
||||
50% 19.74ms
|
||||
75% 121.98ms
|
||||
90% 237.39ms
|
||||
99% 687.49ms
|
||||
1705073 requests in 1.00m, 188.63MB read
|
||||
Socket errors: connect 0, read 0, write 0, timeout 7
|
||||
Requests/sec: 28392.44
|
||||
Transfer/sec: 3.14MB
|
||||
```
|
||||
|
||||
## Conclusion
|
||||
|
||||
Traefik is obviously slower than Nginx, but not by much: Traefik can serve 28392 requests/sec and Nginx 33591 requests/sec, which gives a ratio of about 85%.
|
||||
Not bad for a young project :)!
|
||||
|
||||
Some areas of possible improvements:
|
||||
|
||||
- Use [GO_REUSEPORT](https://github.com/kavu/go_reuseport) listener
|
||||
- Run a separate server instance per CPU core with `GOMAXPROCS=1` (it appears during benchmarks that there is a lot more context switches with traefik than with nginx)
|
||||
|
@@ -1,43 +0,0 @@
|
||||
|
||||
FROM alpine:3.9 as alpine
|
||||
|
||||
# The "build-dependencies" virtual package provides build tools for html-proofer installation.
|
||||
# It compile ruby-nokogiri, because alpine native version is always out of date
|
||||
# This virtual package is cleaned at the end.
|
||||
RUN apk --no-cache --no-progress add \
|
||||
libcurl \
|
||||
ruby \
|
||||
ruby-bigdecimal \
|
||||
ruby-etc \
|
||||
ruby-ffi \
|
||||
ruby-json \
|
||||
&& apk add --no-cache --virtual build-dependencies \
|
||||
build-base \
|
||||
libcurl \
|
||||
libxml2-dev \
|
||||
libxslt-dev \
|
||||
ruby-dev \
|
||||
&& gem install --no-document html-proofer -v 3.10.2 \
|
||||
&& apk del build-dependencies
|
||||
|
||||
# After Ruby, some NodeJS YAY!
|
||||
RUN apk --no-cache --no-progress add \
|
||||
git \
|
||||
nodejs \
|
||||
npm \
|
||||
&& npm install markdownlint@0.12.0 markdownlint-cli@0.13.0 --global
|
||||
|
||||
# Finally the shell tools we need for later
|
||||
# tini helps to terminate properly all the parallelized tasks when sending CTRL-C
|
||||
RUN apk --no-cache --no-progress add \
|
||||
ca-certificates \
|
||||
curl \
|
||||
tini
|
||||
|
||||
COPY ./scripts/verify.sh /verify.sh
|
||||
COPY ./scripts/lint.sh /lint.sh
|
||||
|
||||
WORKDIR /app
|
||||
VOLUME ["/tmp","/app"]
|
||||
|
||||
ENTRYPOINT ["/sbin/tini","-g","sh"]
|
(27 image files removed; binary diffs not shown)
@@ -1,4 +0,0 @@
|
||||
/* Highlight */
|
||||
(function(hljs) {
|
||||
hljs.initHighlightingOnLoad();
|
||||
})(hljs);
|
@@ -1,24 +0,0 @@
|
||||
Copyright (c) 2006, Ivan Sagalaev
|
||||
All rights reserved.
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above copyright
|
||||
notice, this list of conditions and the following disclaimer in the
|
||||
documentation and/or other materials provided with the distribution.
|
||||
* Neither the name of highlight.js nor the names of its contributors
|
||||
may be used to endorse or promote products derived from this software
|
||||
without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY
|
||||
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
|
||||
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
DISCLAIMED. IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY
|
||||
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
|
||||
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
|
||||
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
|
||||
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
|
||||
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
@@ -1,96 +0,0 @@
|
||||
/*
|
||||
|
||||
Atom One Light by Daniel Gamage
|
||||
Original One Light Syntax theme from https://github.com/atom/one-light-syntax
|
||||
|
||||
base: #fafafa
|
||||
mono-1: #383a42
|
||||
mono-2: #686b77
|
||||
mono-3: #a0a1a7
|
||||
hue-1: #0184bb
|
||||
hue-2: #4078f2
|
||||
hue-3: #a626a4
|
||||
hue-4: #50a14f
|
||||
hue-5: #e45649
|
||||
hue-5-2: #c91243
|
||||
hue-6: #986801
|
||||
hue-6-2: #c18401
|
||||
|
||||
*/
|
||||
|
||||
.hljs {
|
||||
display: block;
|
||||
overflow-x: auto;
|
||||
padding: 0.5em;
|
||||
color: #383a42;
|
||||
background: #fafafa;
|
||||
}
|
||||
|
||||
.hljs-comment,
|
||||
.hljs-quote {
|
||||
color: #a0a1a7;
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
.hljs-doctag,
|
||||
.hljs-keyword,
|
||||
.hljs-formula {
|
||||
color: #a626a4;
|
||||
}
|
||||
|
||||
.hljs-section,
|
||||
.hljs-name,
|
||||
.hljs-selector-tag,
|
||||
.hljs-deletion,
|
||||
.hljs-subst {
|
||||
color: #e45649;
|
||||
}
|
||||
|
||||
.hljs-literal {
|
||||
color: #0184bb;
|
||||
}
|
||||
|
||||
.hljs-string,
|
||||
.hljs-regexp,
|
||||
.hljs-addition,
|
||||
.hljs-attribute,
|
||||
.hljs-meta-string {
|
||||
color: #50a14f;
|
||||
}
|
||||
|
||||
.hljs-built_in,
|
||||
.hljs-class .hljs-title {
|
||||
color: #c18401;
|
||||
}
|
||||
|
||||
.hljs-attr,
|
||||
.hljs-variable,
|
||||
.hljs-template-variable,
|
||||
.hljs-type,
|
||||
.hljs-selector-class,
|
||||
.hljs-selector-attr,
|
||||
.hljs-selector-pseudo,
|
||||
.hljs-number {
|
||||
color: #986801;
|
||||
}
|
||||
|
||||
.hljs-symbol,
|
||||
.hljs-bullet,
|
||||
.hljs-link,
|
||||
.hljs-meta,
|
||||
.hljs-selector-id,
|
||||
.hljs-title {
|
||||
color: #4078f2;
|
||||
}
|
||||
|
||||
.hljs-emphasis {
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
.hljs-strong {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.hljs-link {
|
||||
text-decoration: underline;
|
||||
}
|
@@ -1,63 +0,0 @@
|
||||
@import url('https://fonts.googleapis.com/css?family=Noto+Sans|Noto+Serif');
|
||||
|
||||
.md-logo img {
|
||||
background-color: white;
|
||||
border-radius: 50%;
|
||||
width: 30px;
|
||||
height: 30px;
|
||||
}
|
||||
|
||||
/* Fix for Chrome */
|
||||
.md-typeset__table td code {
|
||||
word-break: unset;
|
||||
}
|
||||
|
||||
.md-typeset__table tr :nth-child(1) {
|
||||
word-wrap: break-word;
|
||||
max-width: 30em;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: 'Noto Sans', sans-serif;
|
||||
}
|
||||
|
||||
h1 {
|
||||
font-weight: bold !important;
|
||||
color: rgba(0,0,0,.9) !important;
|
||||
}
|
||||
|
||||
h2 {
|
||||
font-weight: bold !important;
|
||||
}
|
||||
|
||||
h3 {
|
||||
font-weight: bold !important;
|
||||
}
|
||||
|
||||
.md-typeset h5 {
|
||||
text-transform: none;
|
||||
}
|
||||
|
||||
figcaption {
|
||||
text-align: center;
|
||||
font-size: 0.8em;
|
||||
font-style: italic;
|
||||
color: #8D909F;
|
||||
}
|
||||
|
||||
p.subtitle {
|
||||
color: rgba(0,0,0,.54);
|
||||
padding-top: 0;
|
||||
margin-top: -2em;
|
||||
font-weight: bold;
|
||||
font-size: 1.25em;
|
||||
}
|
||||
|
||||
.markdown-body .task-list-item {
|
||||
list-style-type: none !important;
|
||||
}
|
||||
|
||||
.markdown-body .task-list-item input[type="checkbox"] {
|
||||
margin: 0 4px 0.25em -20px;
|
||||
vertical-align: middle;
|
||||
}
|
@@ -1,10 +0,0 @@
|
||||
# Advocating
|
||||
|
||||
Spread the Love & Tell Us about It
|
||||
{: .subtitle }
|
||||
|
||||
There are many ways to contribute to the project, and there is one that always sparks joy: when we see or read about users talking about how Traefik helps them solve their problems.
|
||||
|
||||
If you're talking about Traefik, [let us know](https://blog.containo.us/spread-the-love-ba5a40aa72e7) and we'll promote your enthusiasm!
|
||||
|
||||
Also, if you've written about Traefik or shared useful information you'd like to promote, feel free to add links in the [dedicated wiki page on Github](https://github.com/containous/traefik/wiki/Awesome-Traefik).
|
@@ -1,173 +0,0 @@
|
||||
# Building and Testing
|
||||
|
||||
Compile and Test Your Own Traefik!
|
||||
{: .subtitle }
|
||||
|
||||
So you want to build your own Traefik binary from the sources?
|
||||
Let's see how.
|
||||
|
||||
## Building
|
||||
|
||||
You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` (Method 2) in order to build Traefik.
|
||||
For changes to its dependencies, the `dep` dependency management tool is required.
|
||||
|
||||
### Method 1: Using `Docker` and `Makefile`
|
||||
|
||||
Run make with the `binary` target.
|
||||
This will create binaries for the Linux platform in the `dist` folder.
|
||||
|
||||
```bash
|
||||
$ make binary
|
||||
docker build -t traefik-webui -f webui/Dockerfile webui
|
||||
Sending build context to Docker daemon 2.686MB
|
||||
Step 1/11 : FROM node:8.15.0
|
||||
---> 1f6c34f7921c
|
||||
[...]
|
||||
Successfully built ce4ff439c06a
|
||||
Successfully tagged traefik-webui:latest
|
||||
[...]
|
||||
docker build -t "traefik-dev:4475--feature-documentation" -f build.Dockerfile .
|
||||
Sending build context to Docker daemon 279MB
|
||||
Step 1/10 : FROM golang:1.13-alpine
|
||||
---> f4bfb3d22bda
|
||||
[...]
|
||||
Successfully built 5c3c1a911277
|
||||
Successfully tagged traefik-dev:4475--feature-documentation
|
||||
docker run -e "TEST_CONTAINER=1" -v "/var/run/docker.sock:/var/run/docker.sock" -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -e VERBOSE -e VERSION -e CODENAME -e TESTDIRS -e CI -e CONTAINER=DOCKER -v "/home/ldez/sources/go/src/github.com/containous/traefik/"dist":/go/src/github.com/containous/traefik/"dist"" "traefik-dev:4475--feature-documentation" ./script/make.sh generate binary
|
||||
---> Making bundle: generate (in .)
|
||||
removed 'autogen/genstatic/gen.go'
|
||||
|
||||
---> Making bundle: binary (in .)
|
||||
|
||||
$ ls dist/
|
||||
traefik*
|
||||
```
|
||||
|
||||
The following targets can be executed outside Docker by setting the variable `PRE_TARGET` to an empty string (we don't recommend that):
|
||||
|
||||
- `test-unit`
|
||||
- `test-integration`
|
||||
- `validate`
|
||||
- `binary` (the webUI is still generated by using Docker)
|
||||
|
||||
ex:
|
||||
|
||||
```bash
|
||||
PRE_TARGET= make test-unit
|
||||
```
|
||||
|
||||
### Method 2: Using `go`
|
||||
|
||||
Requirements:
|
||||
|
||||
- `go` v1.13+
|
||||
- environment variable `GO111MODULE=on`
|
||||
|
||||
!!! tip "Source Directory"
|
||||
|
||||
It is recommended that you clone Traefik into the `~/go/src/github.com/containous/traefik` directory.
|
||||
This is the official golang workspace hierarchy that will allow dependencies to be properly resolved.
|
||||
|
||||
!!! note "Environment"
|
||||
|
||||
Set your `GOPATH` and `PATH` variables via:
|
||||
|
||||
```bash
|
||||
export GOPATH=~/go
|
||||
export PATH=$PATH:$GOPATH/bin
|
||||
```
|
||||
|
||||
For convenience, add `GOPATH` and `PATH` to your `.bashrc` or `.bash_profile`
|
||||
|
||||
Verify that your environment is set up properly by running `$ go env`.
|
||||
Depending on your OS and environment, you should see an output similar to:
|
||||
|
||||
```bash
|
||||
GOARCH="amd64"
|
||||
GOBIN=""
|
||||
GOEXE=""
|
||||
GOHOSTARCH="amd64"
|
||||
GOHOSTOS="linux"
|
||||
GOOS="linux"
|
||||
GOPATH="/home/<yourusername>/go"
|
||||
GORACE=""
|
||||
## ... and the list goes on
|
||||
```
|
||||
|
||||
#### Build Traefik
|
||||
|
||||
Once you've set up your go environment and cloned the source repository, you can build Traefik.
|
||||
Beforehand, you need to get `go-bindata` (the first time) in order to be able to use the `go generate` command (which is part of the build process).
|
||||
|
||||
```bash
|
||||
cd ~/go/src/github.com/containous/traefik
|
||||
|
||||
# Get go-bindata. (Important: the ellipses are required.)
|
||||
GO111MODULE=off go get github.com/containous/go-bindata/...
|
||||
|
||||
# Let's build
|
||||
|
||||
# generate
|
||||
# (required to merge non-code components into the final binary, such as the web dashboard and the provider's templates)
|
||||
go generate
|
||||
|
||||
# Standard go build
|
||||
go build ./cmd/traefik
|
||||
```
|
||||
|
||||
You will find the Traefik executable (`traefik`) in the `~/go/src/github.com/containous/traefik` directory.
|
||||
|
||||
### Updating the templates
|
||||
|
||||
If you happen to update the provider's templates (located in `/templates`), you must run `go generate` to update the `autogen` package.
|
||||
|
||||
## Testing
|
||||
|
||||
### Method 1: `Docker` and `make`
|
||||
|
||||
Run unit tests using the `test-unit` target.
|
||||
Run integration tests using the `test-integration` target.
|
||||
Run all tests (unit and integration) using the `test` target.
|
||||
|
||||
```bash
|
||||
$ make test-unit
|
||||
docker build -t "traefik-dev:your-feature-branch" -f build.Dockerfile .
|
||||
# […]
|
||||
docker run --rm -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/user/go/src/github/containous/traefik/dist:/go/src/github.com/containous/traefik/dist" "traefik-dev:your-feature-branch" ./script/make.sh generate test-unit
|
||||
---> Making bundle: generate (in .)
|
||||
removed 'gen.go'
|
||||
|
||||
---> Making bundle: test-unit (in .)
|
||||
+ go test -cover -coverprofile=cover.out .
|
||||
ok github.com/containous/traefik 0.005s coverage: 4.1% of statements
|
||||
|
||||
Test success
|
||||
```
|
||||
|
||||
For development purposes, you can specify which tests to run by using `TESTFLAGS` (this only works with the `test-integration` target):
|
||||
|
||||
```bash
|
||||
# Run every test in the MyTestSuite suite
|
||||
TESTFLAGS="-check.f MyTestSuite" make test-integration
|
||||
|
||||
# Run the test "MyTest" in the MyTest suite
|
||||
TESTFLAGS="-check.f MyTestSuite.MyTest" make test-integration
|
||||
|
||||
# Run every test starting with "My" in the MyTestSuite suite
|
||||
TESTFLAGS="-check.f MyTestSuite.My" make test-integration
|
||||
|
||||
# Run every test ending with "Test" in the MyTestSuite suite
|
||||
TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
|
||||
```
|
||||
|
||||
More: https://labix.org/gocheck
|
||||
|
||||
### Method 2: `go`
|
||||
|
||||
Unit tests can be run from the cloned directory using `$ go test ./...` which should return `ok`, similar to:
|
||||
|
||||
```test
|
||||
ok _/home/user/go/src/github/containous/traefik 0.004s
|
||||
```
|
||||
|
||||
Integration tests must be run from the `integration/` directory and require the `-integration` switch: `$ cd integration && go test -integration ./...`.
|