mirror of https://github.com/containous/traefik.git synced 2025-09-27 05:44:22 +03:00

Compare commits


61 Commits

Author SHA1 Message Date
Emile Vauge
fba3db5291 Merge pull request #1349 from containous/prepare-release-v1.2.1
Prepare release v1.2.1
2017-03-27 17:08:32 +02:00
Emile Vauge
023cda1398 Prepare release v1.2.1
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-27 15:45:00 +02:00
Emile Vauge
3cf6d7e9e5 Merge pull request #1347 from containous/bump-lego-0e2937900
bump lego 0e2937900
2017-03-27 15:41:36 +02:00
Emile Vauge
0d657a09b0 bump lego 0e2937900
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-27 14:39:14 +02:00
Emile Vauge
07176396e2 Merge pull request #1331 from containous/k8s-fix-logging-when-objects-are-missing
k8s: Do not log service fields when GetService is failing.
2017-03-24 09:32:30 +01:00
Timo Reimann
5183c98fb7 k8s: Do not log service fields when GetService is failing.
Update tests too.
2017-03-22 18:59:39 +01:00
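As a rough Go illustration of the behaviour described above (not the actual Traefik provider code; `getService` and the log messages are hypothetical), service fields are only logged once the lookup has succeeded:

```go
// Illustrative only: on a failed lookup, log the error and stop; touch the
// service fields only when the lookup succeeded.
package main

import (
	"errors"
	"log"
)

type service struct {
	Name      string
	Namespace string
}

// getService stands in for a Kubernetes client call that may fail.
func getService(namespace, name string) (*service, error) {
	return nil, errors.New("service not found")
}

func main() {
	svc, err := getService("default", "web")
	if err != nil {
		// Do not dereference svc here; it may be nil when the call fails.
		log.Printf("error getting service default/web: %v", err)
		return
	}
	log.Printf("found service %s/%s", svc.Namespace, svc.Name)
}
```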
Emile Vauge
5a57515c6b Merge pull request #1318 from containous/prepare-release-v1.2.0
Prepare release v1.2.0
2017-03-21 10:37:24 +01:00
Emile Vauge
3490b9d35d Prepare release v1.2.0
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-20 23:26:52 +01:00
Emile Vauge
dd0a20a668 Merge pull request #1304 from Yshayy/filter-non-running-tasks
Add filter on task status in addition to desired status (Docker Provider - swarm)
2017-03-20 19:28:10 +01:00
Emile Vauge
d13cef6ff6 sub-tests + Fatalf/Errorf
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-20 17:54:09 +01:00
Emile Vauge
9f149977d6 Add Docker task list test
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-20 12:36:49 +01:00
yshay
e65544f414 Add check on task status in addition to desired status 2017-03-20 12:12:53 +01:00
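A minimal Go sketch of the filtering idea in this commit, using a simplified task struct as a stand-in for the Docker swarm API types (assumed names, not Traefik's actual code): a task is kept only when both its desired state and its reported status are running.

```go
// Keep a swarm task only when its actual status is running, not merely when
// its desired state is running. The struct is a simplified stand-in.
package main

import "fmt"

type task struct {
	ID           string
	DesiredState string
	Status       string // actual state reported by the engine
}

func runningTasks(tasks []task) []task {
	var out []task
	for _, t := range tasks {
		if t.DesiredState == "running" && t.Status == "running" {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	tasks := []task{
		{ID: "a", DesiredState: "running", Status: "running"},
		{ID: "b", DesiredState: "running", Status: "failed"}, // filtered out
	}
	fmt.Println(len(runningTasks(tasks))) // 1
}
```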
Sebastian
8010758b29 Docker: Added warning if network could not be found (#1310)
* Added warning if network could not be found

* Removed regex import from master

* Corrected wrong function call
2017-03-19 18:40:09 +01:00
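A hedged sketch of the warning described above, with a hypothetical networks map and log call standing in for the Docker provider internals: if the referenced network cannot be found, emit a warning instead of failing silently.

```go
// Illustrative only: warn when the network referenced by a container is not
// present in the list of known networks.
package main

import "log"

func backendAddress(networks map[string]string, networkName, containerID string) string {
	addr, ok := networks[networkName]
	if !ok {
		log.Printf("WARN: network %q not found for container %s", networkName, containerID)
		return ""
	}
	return addr
}

func main() {
	nets := map[string]string{"frontend": "10.0.0.2"}
	_ = backendAddress(nets, "backend", "abc123") // triggers the warning
}
```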
Regner Blok-Andersen
75d92c6967 Abort Kubernetes Ingress update if Kubernetes API call fails (#1295)
* Abort Kubernetes Ingress update if Kubernetes API call fails

Currently, if a Kubernetes API call fails, we potentially remove a working service from Traefik. This change aborts the ingress update when a Kubernetes API call fails and uses the current working config instead. GitHub issue: #1240

Also added a test covering the case where requested resources (services and endpoints) specified by the user don't exist.

* Specifically capturing the tc range as documented here: https://blog.golang.org/subtests

* Updating service names in the mock data to be more clear

* Updated expected data to match what currently happens in the loadIngress

* Adding a blank Servers to the expected output so we compare against that instead of nil.

* Replacing the JSON test output with spew for the TestMissingResources test to help ensure we have useful output in case of failures

* Adding a temporary fix to the GetEndpoints mocked function so we can override the return value indicating whether the endpoints exist.

After the 1.2 release the use of properExists should be removed and the GetEndpoints function should return false for the second value indicating the endpoint doesn’t exist. However at this time that would break a lot of the tests.

* Adding quick TODO line about removing the properExists property

* Link to issue 1307 re: properExists flag.
2017-03-17 16:34:34 +01:00
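A minimal Go sketch of the abort-on-failure behaviour this PR describes, assuming hypothetical `getService`/`buildConfig` helpers rather than the real Kubernetes provider: when an API call fails, the update stops and the last known-good configuration is kept.

```go
// Illustrative only: abort the ingress update on an API error and keep the
// current working configuration instead of emitting a partial one.
package main

import (
	"errors"
	"fmt"
)

type config struct{ frontends []string }

var errAPIDown = errors.New("kubernetes API unavailable")

// getService stands in for a Kubernetes API call that can fail.
func getService(name string) (string, error) { return "", errAPIDown }

func buildConfig(services []string, current *config) (*config, error) {
	next := &config{}
	for _, name := range services {
		svc, err := getService(name)
		if err != nil {
			// Abort and hand back the existing config untouched.
			return current, fmt.Errorf("skipping ingress update: %w", err)
		}
		next.frontends = append(next.frontends, svc)
	}
	return next, nil
}

func main() {
	current := &config{frontends: []string{"api.example.com"}}
	cfg, err := buildConfig([]string{"api"}, current)
	fmt.Println(err, len(cfg.frontends)) // still serving the previous config
}
```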
Emile Vauge
0f67cc7818 Merge pull request #1291 from containous/small-fixes
Small fixes
2017-03-15 18:04:41 +01:00
Emile Vauge
f428a752a5 Refactor k8s client config
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-15 15:24:01 +01:00
Emile Vauge
3fe3784b6c Removed unused log
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-15 15:24:01 +01:00
Emile Vauge
e4d63331cf Fix default config in generic Mesos provider
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-15 15:24:01 +01:00
Vincent Demeester
8ae521db64 Merge pull request #1278 from akanto/update-oxy
Update Oxy, fix for #1199
2017-03-15 15:22:35 +01:00
Timo Reimann
24fde36b20 Revert "Pass context to ListReleases when checking for new versions."
This reverts commit 07db6a2df1.
2017-03-15 11:01:47 +01:00
Timo Reimann
7c55a4fd0c Update github.com/containous/oxy only. 2017-03-15 11:01:43 +01:00
Timo Reimann
7b1c0a97f7 Reset glide files to versions from upstream/v1.2. 2017-03-15 10:41:10 +01:00
Attila Kanto
8392846bd4 Update vulcand and pin deps in glide.yaml 2017-03-15 06:59:34 +01:00
Timo Reimann
07db6a2df1 Pass context to ListReleases when checking for new versions.
Required by go-github update.
2017-03-15 06:59:34 +01:00
Emile Vauge
cc9bb4b1f8 Merge pull request #1285 from timoreimann/rename-healthcheck-url-to-path
Rename health check URL parameter to path.
2017-03-14 23:39:24 +01:00
Timo Reimann
de91b99639 Rename health check URL parameter to path.
Also improve documentation.
2017-03-14 01:53:24 +01:00
Timo Reimann
c582ea5ff0 Merge pull request #1258 from matevzmihalic/fix/metrics
Fix metrics registering
2017-03-11 07:37:24 +01:00
Matevz Mihalic
b5de37e722 Fix metrics registering 2017-03-10 21:26:34 +01:00
Emile Vauge
ee9032f0bf Merge pull request #1209 from owen/ecs-chunk-taskarns
Chunk taskArns into groups of 100
2017-03-09 15:55:42 +01:00
Owen Marshall
11a68ce7f9 Chunk taskArns into groups of 100
If the ECS cluster has > 100 tasks, passing them to
ecs.DescribeTasksRequest() will result in the AWS API returning
errors.

This patch breaks them into chunks of at most 100, and calls
DescribeTasks for each chunk.

We also return early in case ListTasks returns no values; this
prevents DescribeTasks from throwing HTTP errors.
2017-03-07 20:52:33 -05:00
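A small Go sketch of the chunking described above (assumed helper names, not the actual ECS provider code): split the task ARNs into batches of at most 100 before each DescribeTasks call, and return early when ListTasks produced nothing.

```go
// Illustrative only: batch task ARNs into groups of at most 100.
package main

import "fmt"

const maxTasksPerCall = 100

func chunkTaskArns(arns []string) [][]string {
	var chunks [][]string
	for start := 0; start < len(arns); start += maxTasksPerCall {
		end := start + maxTasksPerCall
		if end > len(arns) {
			end = len(arns)
		}
		chunks = append(chunks, arns[start:end])
	}
	return chunks
}

func main() {
	// Pretend ListTasks returned 250 ARNs; return early if it returned none.
	arns := make([]string, 250)
	if len(arns) == 0 {
		return
	}
	for _, chunk := range chunkTaskArns(arns) {
		fmt.Printf("would call DescribeTasks with %d ARNs\n", len(chunk)) // 100, 100, 50
	}
}
```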
Emile Vauge
0dbac0af0d Merge pull request #1239 from timoreimann/update-maxidleconnsperhost-default-in-docs
Update DefaultMaxIdleConnsPerHost default in docs.
2017-03-07 15:20:07 +01:00
Timo Reimann
9541ee4cf6 Docs: Update default value for DefaultMaxIdleConnsPerHost. 2017-03-06 21:26:33 +01:00
Emile Vauge
2958a67ce5 Merge pull request #1225 from dtomcej/fix-670
Update WSS/WS Proto [Fixes #670]
2017-03-02 23:51:48 +01:00
dtomcej
eebbf6ebbb update oxy hash 2017-03-02 14:59:19 -07:00
Vincent Demeester
0133178b84 Merge pull request #1219 from SantoDE/feature-bump-gorancher
Bump go-rancher version
2017-03-02 13:13:44 +01:00
Manuel Laufenberg
9af5ba34ae Bump go-rancher version 2017-03-02 09:39:43 +01:00
Emile Vauge
9b24e13418 Merge pull request #1204 from containous/prepare-release-v1.2.0-rc2
Prepare release v1.2.0 rc2
2017-03-01 13:54:55 +01:00
Emile Vauge
8fd880a3f1 Prepare release v1.2.0-rc2
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-01 13:19:08 +01:00
Emile Vauge
89700666b9 Merge pull request #1198 from jangie/revert-1080
Revert "Ensure that we don't add balancees with no health check runs …
2017-02-27 21:59:47 +01:00
Bruce Lee
343e0547df Revert "Ensure that we don't add balancees with no health check runs if there is a health check defined on it"
This reverts commit ad12a7264e.
2017-02-27 13:56:53 -05:00
Emile Vauge
a7d5e6ce4f Merge pull request #1167 from christopherobin/bugfix-docker
Fix docker issues with global and dead tasks
2017-02-23 10:30:20 +01:00
Christophe Robin
d342ae68d8 Add task parser unit test for docker provider 2017-02-22 23:16:33 +01:00
Christophe Robin
a87cd3fc2c Fix docker issues with global and dead tasks 2017-02-22 23:16:33 +01:00
Emile Vauge
28a4d65b38 Merge pull request #1173 from SantoDE/feature-rancher-improvements-1125
Small fixes and improvements
2017-02-22 23:15:23 +01:00
Manuel Laufenberg
253dbfab94 Small fixes and improvements 2017-02-22 16:30:54 +01:00
Emile Vauge
727f79f432 Merge pull request #1141 from containous/fix-stats-race
Fix stats race condition
2017-02-22 16:29:48 +01:00
Emile Vauge
6a56eb480b Fix stats race condition
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-02-22 15:02:26 +01:00
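The commit title only names the fix; as a generic illustration (not Traefik's actual change), a race on shared stats is typically removed by guarding the shared state with a mutex:

```go
// Generic sketch of removing a stats race: guard shared counters with a
// mutex so concurrent request handlers can update them safely.
package main

import (
	"fmt"
	"sync"
)

type stats struct {
	mu     sync.Mutex
	counts map[int]int // status code -> number of responses
}

func (s *stats) record(status int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.counts[status]++
}

func main() {
	s := &stats{counts: make(map[int]int)}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.record(200) // safe under concurrent access
		}()
	}
	wg.Wait()
	fmt.Println(s.counts[200]) // 100
}
```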
Emile Vauge
155d0900bb Merge pull request #1143 from lpetre/ecs_nil_checks
Better ECS error checking
2017-02-22 15:01:42 +01:00
Luke Petre
a59a165cd7 Try harder to query all the possible ec2 instances, and filter on instance state / lack of IP address 2017-02-22 13:38:11 +01:00
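A hedged Go sketch of the filtering this commit describes, using a simplified instance struct instead of the AWS SDK types: keep only EC2 instances that are running and actually have a private IP address.

```go
// Illustrative only: drop instances that are not running or have no IP.
package main

import "fmt"

type instance struct {
	ID        string
	State     string
	PrivateIP string
}

func usableInstances(all []instance) []instance {
	var out []instance
	for _, i := range all {
		if i.State == "running" && i.PrivateIP != "" {
			out = append(out, i)
		}
	}
	return out
}

func main() {
	all := []instance{
		{ID: "i-1", State: "running", PrivateIP: "172.31.0.5"},
		{ID: "i-2", State: "terminated", PrivateIP: ""},
	}
	fmt.Println(len(usableInstances(all))) // 1
}
```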
Emile Vauge
5d540be81b Merge pull request #1132 from Juliens/healthcheck
Healthcheck tests and doc
2017-02-21 14:03:13 +01:00
Julien Salleyron
05f2449d84 Wrong tests docker images 2017-02-21 11:09:19 +01:00
Julien Salleyron
44fa364cdd Add doc 2017-02-21 11:09:19 +01:00
Julien Salleyron
04a25b841f Add some integration test 2017-02-21 11:09:19 +01:00
Julien Salleyron
efbbff671c Add healthcheck interval 2017-02-21 11:09:19 +01:00
Emile Vauge
27c2c721ed Merge pull request #1137 from rickard-von-essen/ecs-docs
ECS: Docs - info about cred. resolution and required access policies
2017-02-09 12:08:26 +01:00
Rickard von Essen
abe16d4480 ECS: Docs - info about cred. resolution and required access policies
Added information about how AWS credentials are resolved and which
access rights are needed by the Traefik ECS provider.
2017-02-08 20:19:39 +01:00
Emile Vauge
445bb8189b Merge pull request #1128 from containous/fix-travis-deploy
Fix travis deploy
2017-02-06 21:36:30 +01:00
Emile Vauge
ff7dfdcd43 Fix travis deploy
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-02-06 21:32:57 +01:00
Emile Vauge
5e662c9dbd Merge pull request #1126 from containous/prepare-release-v1.2.0-rc1
Prepare release v1.2.0 rc1
2017-02-06 20:52:30 +01:00
Emile Vauge
78d60b3651 Changelog for v1.2.0-rc1
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-02-06 19:54:28 +01:00
Emile Vauge
7a7992a639 Add v1.2 codename
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-02-06 18:51:56 +01:00
6295 changed files with 37414 additions and 1463191 deletions


@@ -1,3 +1,5 @@
dist/
vendor/
!dist/traefik
site/
**/*.test

.gitattributes vendored (2 changed lines)

@@ -1 +1 @@
# vendor/github.com/go-acme/lego/providers/dns/cloudxns/cloudxns.go eol=crlf
glide.lock binary

.github/CODEOWNERS vendored, deleted (24 changed lines)

@@ -1,24 +0,0 @@
provider/kubernetes/** @containous/kubernetes
provider/rancher/** @containous/rancher
provider/marathon/** @containous/marathon
provider/docker/** @containous/docker
docs/user-guide/kubernetes.md @containous/kubernetes
docs/user-guide/marathon.md @containous/marathon
docs/user-guide/swarm.md @containous/docker
docs/user-guide/swarm-mode.md @containous/docker
docs/configuration/backends/docker.md @containous/docker
docs/configuration/backends/kubernetes.md @containous/kubernetes
docs/configuration/backends/marathon.md @containous/marathon
docs/configuration/backends/rancher.md @containous/rancher
examples/k8s/ @containous/kubernetes
examples/compose-k8s.yaml @containous/kubernetes
examples/k8s.namespace.yaml @containous/kubernetes
examples/compose-rancher.yml @containous/rancher
examples/compose-marathon.yml @containous/marathon
vendor/github.com/gambol99/go-marathon @containous/marathon
vendor/github.com/rancher @containous/rancher
vendor/k8s.io/ @containous/kubernetes

.github/CONTRIBUTING.md vendored, new file (146 changed lines)

@@ -0,0 +1,146 @@
# Contributing
### Building
You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` and `glide` (Method 2) in order to build traefik.
#### Method 1: Using `Docker` and `Makefile`
You need to run the `binary` target. This will create binaries for the Linux platform in the `dist` folder.
```bash
$ make binary
docker build -t "traefik-dev:no-more-godep-ever" -f build.Dockerfile .
Sending build context to Docker daemon 295.3 MB
Step 0 : FROM golang:1.7
---> 8c6473912976
Step 1 : RUN go get github.com/Masterminds/glide
[...]
docker run --rm -v "/var/run/docker.sock:/var/run/docker.sock" -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/emile/dev/go/src/github.com/containous/traefik/"dist":/go/src/github.com/containous/traefik/"dist"" "traefik-dev:no-more-godep-ever" ./script/make.sh generate binary
---> Making bundle: generate (in .)
removed 'gen.go'
---> Making bundle: binary (in .)
$ ls dist/
traefik*
```
#### Method 2: Using `go` and `glide`
###### Setting up your `go` environment
- You need `go` v1.7+
- It is recommended that you clone Træfɪk into a directory like `~/go/src/github.com/containous/traefik` (this is the official Go workspace hierarchy, and it allows dependencies to resolve properly)
- Set your `GOPATH` to `~/go` and add `$GOPATH/bin` to your `PATH` via:
```
$ export GOPATH=~/go
$ export PATH=$PATH:$GOPATH/bin
```
This can be verified via `$ go env`
- You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`
- You need `go-bindata` to be able to use the `go generate` command (needed to build): `$ go get github.com/jteeuwen/go-bindata/...` (please note, the ellipsis is required)
###### Setting up your `glide` environment
- Glide can be installed either via homebrew: `$ brew install glide` or via the official glide script: `$ curl https://glide.sh/get | sh`
The idea behind `glide` is the following:
- when checking out a project, run `$ glide install -v` from the cloned directory to install
(`go get …`) the dependencies in your `GOPATH`.
- if you need another dependency, import and use it in
the source, and run `$ glide get github.com/Masterminds/cookoo` to save it in
`vendor` and add it to your `glide.yaml`.
```bash
$ glide install --strip-vendor
# generate (Only required to integrate other components such as web dashboard)
$ go generate
# Standard go build
$ go build
# Use gox to build for multiple platforms
$ gox "linux darwin" "386 amd64 arm" \
-output="dist/traefik_{{.OS}}-{{.Arch}}"
# run other commands like tests
```
### Tests
##### Method 1: `Docker` and `make`
You can run unit tests using the `test-unit` target and the
integration tests using the `test-integration` target.
```bash
$ make test-unit
docker build -t "traefik-dev:your-feature-branch" -f build.Dockerfile .
# […]
docker run --rm -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/vincent/src/github/vdemeester/traefik/dist:/go/src/github.com/containous/traefik/dist" "traefik-dev:your-feature-branch" ./script/make.sh generate test-unit
---> Making bundle: generate (in .)
removed 'gen.go'
---> Making bundle: test-unit (in .)
+ go test -cover -coverprofile=cover.out .
ok github.com/containous/traefik 0.005s coverage: 4.1% of statements
Test success
```
For development purposes, you can specify which tests to run by using:
```
# Run every test in MyTestSuite
TESTFLAGS="-check.f MyTestSuite" make test-integration
# Run the test "MyTest" in MyTestSuite
TESTFLAGS="-check.f MyTestSuite.MyTest" make test-integration
# Run every test starting with "My" in MyTestSuite
TESTFLAGS="-check.f MyTestSuite.My" make test-integration
# Run every test ending with "Test" in MyTestSuite
TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
```
More: https://labix.org/gocheck
##### Method 2: `go` and `glide`
- Tests can be run from the cloned directory by running `$ go test ./...`, which should return `ok`, similar to:
```
ok _/home/vincent/src/github/vdemeester/traefik 0.004s
```
- Note that `$ go test ./...` will run all tests (including the ones in the vendor directory for the dependencies that glide has fetched). If you only want to run the tests for Traefik, use `$ go test $(glide novendor)` instead.
### Documentation
The [documentation site](http://docs.traefik.io/) is built with [mkdocs](http://mkdocs.org/).
First make sure you have Python and pip installed:
```
$ python --version
Python 2.7.2
$ pip --version
pip 1.5.2
```
Then install mkdocs with pip
```
$ pip install mkdocs
```
To test the documentation locally, run `mkdocs serve` in the root directory; this should start a local server to preview your changes.
```
$ mkdocs serve
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
[I 160505 22:31:24 server:281] Serving on http://127.0.0.1:8000
[I 160505 22:31:24 handlers:59] Start watching changes
[I 160505 22:31:24 handlers:61] Start detecting changes
```

.github/ISSUE_TEMPLATE vendored, new file (13 changed lines)

@@ -0,0 +1,13 @@
### What version of Traefik are you using (`traefik version`)?
### What is your environment & configuration (arguments, toml...)?
### What did you do?
### What did you expect to see?
### What did you see instead?


@@ -1,72 +0,0 @@
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, please refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://slack.traefik.io
-->
### Do you want to request a *feature* or report a *bug*?
<!--
If you intend to ask a support question: DO NOT FILE AN ISSUE.
-->
### What did you do?
<!--
HOW TO WRITE A GOOD ISSUE?
- Respect the issue template as much as possible.
- The title should be short and descriptive.
- Explain the conditions which led you to report this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->
### What did you expect to see?
### What did you see instead?
### Output of `traefik version`: (_What version of Traefik are you using?_)
<!--
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
For the alpine Traefik Docker image:
docker run [IMAGE] traefik version
ex: docker run traefik traefik version
-->
```
(paste your output here)
```
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
```toml
# (paste your configuration here)
```
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output at DEBUG level (`--logLevel=DEBUG` switch)
```
(paste your output here)
```


@@ -1,77 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
---
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, please refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://slack.traefik.io
-->
### Do you want to request a *feature* or report a *bug*?
Bug
### What did you do?
<!--
HOW TO WRITE A GOOD BUG REPORT?
- Respect the issue template as much as possible.
- The title should be short and descriptive.
- Explain the conditions which led you to report this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->
### What did you expect to see?
### What did you see instead?
### Output of `traefik version`: (_What version of Traefik are you using?_)
<!--
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
For the alpine Traefik Docker image:
docker run [IMAGE] traefik version
ex: docker run traefik traefik version
-->
```
(paste your output here)
```
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
```toml
# (paste your configuration here)
```
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output in DEBUG level (`--logLevel=DEBUG` switch)
```
(paste your output here)
```


@@ -1,36 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
---
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, please refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://slack.traefik.io
-->
### Do you want to request a *feature* or report a *bug*?
Feature
### What did you expect to see?
<!--
HOW TO WRITE A GOOD ISSUE?
- Respect the issue template as much as possible.
- The title should be short and descriptive.
- Explain the conditions which led you to report this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->


@@ -1,36 +0,0 @@
<!--
PLEASE READ THIS MESSAGE.
HOW TO WRITE A GOOD PULL REQUEST?
- Make it small.
- Do only one thing.
- Avoid re-formatting.
- Make sure the code builds.
- Make sure all tests pass.
- Add tests.
- Write useful descriptions and titles.
- Address review comments in terms of additional commits.
- Do not amend/squash existing ones unless the PR is trivial.
- Read the contributing guide: https://github.com/containous/traefik/blob/master/CONTRIBUTING.md.
-->
### What does this PR do?
<!-- A brief description of the change being made with this pull request. -->
### Motivation
<!-- What inspired you to submit this pull request? -->
### More
- [ ] Added/updated tests
- [ ] Added/updated documentation
### Additional Notes
<!-- Anything else we should know when reviewing? -->


@@ -1,7 +0,0 @@
### What does this PR do?
Merge v{{.Version}} into master
### Motivation
Be sync.


@@ -1,7 +0,0 @@
### What does this PR do?
Prepare release v{{.Version}}.
### Motivation
Create a new release.

.github/cpr.sh vendored, new executable file (26 changed lines)

@@ -0,0 +1,26 @@
#!/bin/sh
#
# git config --global alias.cpr '!sh .github/cpr.sh'
set -e # stop on error
usage="$(basename "$0") pr -- Checkout a Pull Request locally"
if [ "$#" -ne 1 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
git remote add $remote git@github.com:$remote/traefik.git
git fetch $remote $branch
git checkout -t $remote/$branch

.github/rmpr.sh vendored, new executable file (27 changed lines)

@@ -0,0 +1,27 @@
#!/bin/sh
#
# git config --global alias.rmpr '!sh .github/rmpr.sh'
set -e # stop on error
usage="$(basename "$0") pr -- remove a Pull Request local branch & remote"
if [ "$#" -ne 1 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
# clean
git checkout $initial
git branch -D $branch
git remote remove $remote

.github/rpr.sh vendored, new executable file (36 changed lines)

@@ -0,0 +1,36 @@
#!/bin/sh
#
# git config --global alias.rpr '!sh .github/rpr.sh'
set -e # stop on error
usage="$(basename "$0") pr remote/branch -- rebase a Pull Request against a remote branch"
if [ "$#" -ne 2 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
base=$2
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
clean ()
{
git checkout $initial
.github/rmpr.sh $pr
}
trap clean EXIT
.github/cpr.sh $pr
git rebase $base
git push -f $remote $branch

.gitignore vendored (25 changed lines)

@@ -1,16 +1,15 @@
.idea/
.intellij/
*.iml
.vscode/
.DS_Store
/dist
/webui/.tmp/
/site/
/docs/site/
/static/
/autogen/
/traefik
/traefik.toml
gen.go
.idea
.intellij
*.iml
traefik
traefik.toml
*.test
vendor/
static/
.vscode/
site/
*.log
*.exe
cover.out
.DS_Store


@@ -1,90 +0,0 @@
[run]
deadline = "10m"
skip-files = [
"^old/.*",
]
[linters-settings]
[linters-settings.govet]
check-shadowing = false
[linters-settings.golint]
min-confidence = 0.0
[linters-settings.gocyclo]
min-complexity = 14.0
[linters-settings.maligned]
suggest-new = true
[linters-settings.goconst]
min-len = 3.0
min-occurrences = 4.0
[linters-settings.misspell]
locale = "US"
[linters]
enable-all = true
disable = [
"gocyclo", # FIXME must be fixed
"gosec",
"dupl",
"maligned",
"lll",
"unparam",
"prealloc",
"scopelint",
"gochecknoinits",
"gochecknoglobals",
]
[issues]
exclude-use-default = false
max-per-linter = 0
max-same-issues = 0
exclude = [
"SA1019: http.CloseNotifier is deprecated: the CloseNotifier interface predates Go's context package. New code should use Request.Context instead.", # FIXME must be fixed
"Error return value of .((os\\.)?std(out|err)\\..*|.*Close|.*Flush|os\\.Remove(All)?|.*printf?|os\\.(Un)?Setenv). is not checked",
"should have a package comment, unless it's in another file for this package",
]
[[issues.exclude-rules]]
path = ".+_test.go"
linters = ["goconst"]
[[issues.exclude-rules]]
path = "provider/label/internal/.+_test.go"
text = "U1000: field `(foo|fuu)` is unused"
[[issues.exclude-rules]]
path = "middlewares/recovery/recovery.go"
text = "`logger` can be `github.com/containous/traefik/vendor/github.com/stretchr/testify/assert.TestingT`"
[[issues.exclude-rules]]
path = "integration/.+_test.go"
text = "Error return value of `cmd\\.Process\\.Kill` is not checked"
[[issues.exclude-rules]]
path = "integration/(consul_catalog_test|constraint_test).go"
text = "Error return value of `(s.deregisterService|s.deregisterAgentService)` is not checked"
[[issues.exclude-rules]]
path = "integration/grpc_test.go"
text = "Error return value of `closer` is not checked"
[[issues.exclude-rules]]
path = "provider/kubernetes/builder_(endpoint|service)_test.go"
text = "(U1000: func )?`(.+)` is unused"
[[issues.exclude-rules]]
path = "provider/docker/builder_test.go"
text = "(U1000: func )?`(.+)` is unused"
[[issues.exclude-rules]]
path = "cmd/configuration.go"
text = "string `traefik` has (\\d) occurrences, make it a constant"
[[issues.exclude-rules]]
path = "h2c/h2c.go"
text = "Error return value of `rw.Write` is not checked"
[[issues.exclude-rules]]
path = "server/service/bufferpool.go"
text = "SA6002: argument should be pointer-like to avoid allocations"
[[issues.exclude-rules]] # FIXME must be fixed
path = "acme/.+.go"
text = "(assignment copies lock value to domainsCerts|literal copies lock value from)"
[[issues.exclude-rules]] # FIXME must be fixed
path = "cmd/context.go"
text = "S1000: should use a simple channel send/receive instead of `select` with a single case"


@@ -1,55 +0,0 @@
project_name: traefik
before:
hooks:
- go generate
builds:
- binary: traefik
main: ./cmd/traefik/traefik.go
env:
- CGO_ENABLED=0
ldflags:
- -s -w -X github.com/containous/traefik/pkg/version.Version={{.Version}} -X github.com/containous/traefik/pkg/version.Codename={{.Env.CODENAME}} -X github.com/containous/traefik/pkg/version.BuildDate={{.Date}}
goos:
- linux
- darwin
- windows
- freebsd
- openbsd
goarch:
- amd64
- 386
- arm
- arm64
- ppc64le
goarm:
- 7
- 6
- 5
ignore:
- goos: darwin
goarch: 386
- goos: openbsd
goarch: arm
- goos: freebsd
goarch: arm
changelog:
skip: true
archive:
name_template: '{{ .ProjectName }}_v{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ if .Arm
}}v{{ .Arm }}{{ end }}'
format: tar.gz
format_overrides:
- goos: windows
format: zip
files:
- LICENSE.md
- CHANGELOG.md
release:
disable: true

.pre-commit-config.yaml, new file (10 changed lines)

@@ -0,0 +1,10 @@
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: 44e1753f98b0da305332abe26856c3e621c5c439
hooks:
- id: detect-private-key
- repo: git://github.com/containous/pre-commit-hooks
sha: 35e641b5107671e94102b0ce909648559e568d61
hooks:
- id: goFmt
- id: goLint
- id: goErrcheck


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
set -e
sudo rm -rf static


@@ -1,20 +0,0 @@
#!/usr/bin/env bash
set -e
curl -O https://dl.google.com/go/go1.12.linux-amd64.tar.gz
tar -xvf go1.12.linux-amd64.tar.gz
rm -rf go1.12.linux-amd64.tar.gz
sudo mkdir -p /usr/local/golang/1.12/go
sudo mv go /usr/local/golang/1.12/
sudo rm /usr/local/bin/go
sudo chmod +x /usr/local/golang/1.12/go/bin/go
sudo ln -s /usr/local/golang/1.12/go/bin/go /usr/local/bin/go
export GOROOT="/usr/local/golang/1.12/go"
export GOTOOLDIR="/usr/local/golang/1.12/go/pkg/tool/linux_amd64"
go version


@@ -1,6 +0,0 @@
#!/usr/bin/env bash
set -e
if [ -n "$SHOULD_TEST" ]; then ci_retry make pull-images; fi
if [ -n "$SHOULD_TEST" ]; then ci_retry make test-integration; fi


@@ -1,8 +0,0 @@
#!/usr/bin/env bash
set -e
ci_retry make validate
if [ -n "$SHOULD_TEST" ]; then ci_retry make test-unit; fi
if [ -n "$SHOULD_TEST" ]; then make -j${N_MAKE_JOBS} crossbinary-default-parallel; fi


@@ -1,16 +0,0 @@
#!/usr/bin/env bash
set -e
export DOCKER_VERSION=17.03.1
source .semaphoreci/vars
if [ -z "${PULL_REQUEST_NUMBER}" ]; then SHOULD_TEST="-*-"; else TEMP_STORAGE=$(curl --silent https://patch-diff.githubusercontent.com/raw/containous/traefik/pull/${PULL_REQUEST_NUMBER}.diff | patch --dry-run -p1 -R); fi
if [ -n "$TEMP_STORAGE" ]; then SHOULD_TEST=$(echo "$TEMP_STORAGE" | grep -Ev '(.md|.yaml|.yml)' || :); fi
if [ -n "$SHOULD_TEST" ]; then sudo -E apt-get -yq update; fi
if [ -n "$SHOULD_TEST" ]; then sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*; fi
if [ -n "$SHOULD_TEST" ]; then docker version; fi


@@ -1,37 +0,0 @@
#!/usr/bin/env bash
set -e
export REPO='containous/traefik'
if VERSION=$(git describe --exact-match --abbrev=0 --tags);
then
export VERSION
else
export VERSION=''
fi
export CODENAME=faisselle
export N_MAKE_JOBS=2
function ci_retry {
local NRETRY=3
local NSLEEP=5
local n=0
until [ $n -ge $NRETRY ]
do
"$@" && break
n=$[$n+1]
echo "$@ failed, attempt ${n}/${NRETRY}"
sleep $NSLEEP
done
[ $n -lt $NRETRY ]
}
export -f ci_retry


@@ -1,38 +1,51 @@
sudo: required
dist: trusty
git:
depth: false
services:
- docker
env:
global:
- secure: btt4r13t09gQlHb6gYrvGC2yGCMMHfnp1Mz1RQedc4Mpf/FfT8aE6xmK2a2i9CCvskjrP0t/BFaS4yxIURjnFRn+ugQIEa0pLspB9UJArW/vgOSpIWM9/OQ/fg8z5XuMxN6Md4DL1/iLypMNSageA1x0TRdt89+D1N1dALpg5XRCXLFbC84TLi0gjlFuib9ibPKzEhLT+anCRJ6iZMzeupDSoaCVbAtJMoDvXw4+4AcRZ1+k4MybBLyCib5boaEOt4pTT88mz4Kk0YaMwPVJyg9Qv36VqyUcPS09Yd95LuyVQ4+tZt8Y1ccbIzULsK+sLM3hLCzxlmlpN3dQBlZJiiRtQde0mgGAKyC0P0A1XjuDTywcsa5edB+fTk1Dsewz9xZ9V0NmMz8t+UNZnaSsAPga9i86jULbXUUwMVSzVRc+Xgx02liB/8qI1xYC9FM6ilStt7rn7mF0k3KbiWhcptgeXjO6Lah9FjEKd5w4MXsdUSTi/86rQaLo+kj+XdaTrXCTulKHyRyQEUj+8V1w0oVz7pcGjePHd7y5oU9ByifVQy6sytuFBfRZvugM5bKHo+i0pcWvixrZS42DrzwxZJsspANOvqSe5ifVbvOkfUppQdCBIwptxV5N1b49XPKU3W/w34QJ8xGmKp3TFA7WwVCztriFHjPgiRpB3EG99Bg=
- REPO: $TRAVIS_REPO_SLUG
- VERSION: $TRAVIS_TAG
- CODENAME: faisselle
- CODENAME: morbier
matrix:
fast_finish: true
include:
- env: DOCKER_VERSION=1.10.3
- env: DOCKER_VERSION=1.12.6
before_install:
- sudo -E apt-get -yq update
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-engine=${DOCKER_VERSION}*
install:
- docker version
- pip install --user -r requirements.txt
before_script:
- make validate
- make binary
script:
- echo "Skipping tests... (Tests are executed on SemaphoreCI)"
- if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then make docs; fi
- travis_retry make test-unit
- travis_retry make test-integration
after_failure:
- docker ps
after_success:
- make crossbinary
- make image
before_deploy:
- >
if ! [ "$BEFORE_DEPLOY_RUN" ]; then
export BEFORE_DEPLOY_RUN=1;
sudo -E apt-get -yq update;
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*;
docker version;
make image;
if [ "$TRAVIS_TAG" ]; then
make release-packages;
fi;
curl -sfL https://raw.githubusercontent.com/containous/structor/master/godownloader.sh | bash -s -- -b "${GOPATH}/bin" v1.7.0
structor -o containous -r traefik --dockerfile-url="https://raw.githubusercontent.com/containous/traefik/v1.7/docs.Dockerfile" --menu.js-url="https://raw.githubusercontent.com/containous/structor/master/traefik-menu.js.gotmpl" --rqts-url="https://raw.githubusercontent.com/containous/structor/master/requirements-override.txt" --force-edit-url --exp-branch=master --debug;
fi
- mkdocs build --clean
- tar cfz dist/traefik-${VERSION}.src.tar.gz --exclude-vcs --exclude dist .
deploy:
- provider: pages
edge: true
github_token: ${GITHUB_TOKEN}
local_dir: site
skip_cleanup: true
on:
repo: containous/traefik
tags: true
- provider: releases
api_key: ${GITHUB_TOKEN}
file: dist/traefik*
@@ -52,11 +65,3 @@ deploy:
skip_cleanup: true
on:
repo: containous/traefik
- provider: pages
edge: false
github_token: ${GITHUB_TOKEN}
local_dir: site
skip_cleanup: true
on:
repo: containous/traefik
all_branches: true

.travis/traefik.id_rsa.enc, new binary file (not shown)

(another binary file not shown)

(a file diff suppressed because it is too large)


@@ -2,11 +2,17 @@
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
@@ -16,36 +22,53 @@ Examples of behavior that contributes to creating a positive environment include
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community.
Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Representation of a project may be further defined and clarified by project maintainers.
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at contact@containo.us
All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances.
The project team is obligated to maintain confidentiality with regard to the reporter of an incident.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at contact@containo.us
All complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/
[version]: http://contributor-covenant.org/version/1/4/


@@ -1,3 +0,0 @@
# Contributing
See https://docs.traefik.io.


@@ -2,5 +2,4 @@ FROM scratch
COPY script/ca-certificates.crt /etc/ssl/certs/
COPY dist/traefik /
EXPOSE 80
VOLUME ["/tmp"]
ENTRYPOINT ["/traefik"]

Gopkg.lock generated (2510 changed lines); diff suppressed because it is too large


@@ -1,282 +0,0 @@
# Gopkg.toml example
#
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
required = [
"k8s.io/code-generator/cmd/client-gen",
"k8s.io/code-generator/cmd/deepcopy-gen",
"k8s.io/code-generator/cmd/defaulter-gen",
"k8s.io/code-generator/cmd/lister-gen",
"k8s.io/code-generator/cmd/informer-gen",
]
[prune]
non-go = true
go-tests = true
unused-packages = true
[[prune.project]]
name = "k8s.io/code-generator"
non-go = false
unused-packages = false
[[constraint]]
branch = "master"
name = "github.com/ArthurHlt/go-eureka-client"
[[constraint]]
branch = "master"
name = "github.com/BurntSushi/toml"
[[constraint]]
branch = "master"
name = "github.com/BurntSushi/ty"
[[constraint]]
branch = "master"
name = "github.com/NYTimes/gziphandler"
[[constraint]]
branch = "containous-fork"
name = "github.com/abbot/go-http-auth"
source = "github.com/containous/go-http-auth"
[[constraint]]
branch = "master"
name = "github.com/armon/go-proxyproto"
[[constraint]]
name = "github.com/aws/aws-sdk-go"
version = "1.13.11"
[[constraint]]
name = "github.com/cenkalti/backoff"
version = "2.1.1"
[[constraint]]
name = "github.com/containous/flaeg"
version = "1.4.1"
[[constraint]]
branch = "master"
name = "github.com/containous/mux"
[[constraint]]
branch = "containous-fork"
name = "github.com/containous/alice"
[[constraint]]
name = "github.com/containous/staert"
version = "3.1.2"
#[[constraint]]
# name = "github.com/containous/traefik-extra-service-fabric"
# version = "1.3.0"
[[constraint]]
name = "github.com/coreos/go-systemd"
version = "14.0.0"
[[constraint]]
branch = "master"
name = "github.com/docker/leadership"
source = "github.com/containous/leadership"
[[constraint]]
name = "github.com/eapache/channels"
version = "1.1.0"
[[constraint]]
branch = "master"
name = "github.com/elazarl/go-bindata-assetfs"
[[constraint]]
branch = "fork-containous"
name = "github.com/go-check/check"
source = "github.com/containous/check"
[[override]]
branch = "fork-containous"
name = "github.com/go-check/check"
source = "github.com/containous/check"
[[constraint]]
name = "github.com/go-kit/kit"
version = "0.7.0"
[[constraint]]
branch = "master"
name = "github.com/gorilla/websocket"
[[constraint]]
name = "github.com/hashicorp/consul"
version = "1.0.6"
[[constraint]]
name = "github.com/influxdata/influxdb"
version = "1.3.7"
#[[constraint]]
# branch = "master"
# name = "github.com/jjcollinge/servicefabric"
[[constraint]]
branch = "master"
name = "github.com/abronan/valkeyrie"
[[constraint]]
name = "github.com/mesosphere/mesos-dns"
source = "https://github.com/containous/mesos-dns.git"
[[constraint]]
name = "github.com/opentracing/opentracing-go"
version = "1.0.2"
[[constraint]]
branch = "containous-fork"
name = "github.com/rancher/go-rancher-metadata"
source = "github.com/containous/go-rancher-metadata"
[[constraint]]
branch = "master"
name = "github.com/ryanuber/go-glob"
[[constraint]]
name = "github.com/satori/go.uuid"
version = "1.1.0"
[[constraint]]
branch = "master"
name = "github.com/stvp/go-udp-testing"
[[constraint]]
name = "github.com/stretchr/testify"
version = "1.2.1"
[[constraint]]
name = "github.com/uber/jaeger-client-go"
version = "2.15.0"
[[constraint]]
name = "github.com/uber/jaeger-lib"
version = "1.3.0"
[[constraint]]
branch = "v1"
name = "github.com/unrolled/secure"
[[constraint]]
name = "github.com/vdemeester/shakers"
version = "0.1.0"
[[constraint]]
branch = "master"
name = "github.com/vulcand/oxy"
[[constraint]]
branch = "master"
name = "github.com/go-acme/lego"
# version = "2.4.0"
[[constraint]]
name = "google.golang.org/grpc"
version = "1.5.2"
[[constraint]]
name = "gopkg.in/fsnotify.v1"
source = "github.com/fsnotify/fsnotify"
version = "1.4.2"
[[constraint]]
name = "k8s.io/client-go"
version = "8.0.0" # 8.0.0
[[constraint]]
name = "k8s.io/code-generator"
version = "kubernetes-1.11.7"
[[constraint]]
name = "k8s.io/api"
version = "kubernetes-1.11.7" # "kubernetes-1.11.7"
[[constraint]]
name = "k8s.io/apimachinery"
version = "kubernetes-1.11.7" # "kubernetes-1.11.7"
[[override]]
name = "github.com/json-iterator/go"
version = "1.1.6"
[[constraint]]
branch = "master"
name = "github.com/libkermit/docker"
[[constraint]]
branch = "master"
name = "github.com/libkermit/docker-check"
[[constraint]]
branch = "master"
name = "github.com/libkermit/compose"
[[constraint]]
name = "github.com/docker/docker"
revision = "7848b8beb9d38a98a78b75f78e05f8d2255f9dfe"
[[override]]
name = "github.com/docker/docker"
revision = "7848b8beb9d38a98a78b75f78e05f8d2255f9dfe"
[[override]]
name = "github.com/docker/cli"
revision = "6b63d7b96a41055baddc3fa71f381c7f60bd5d8e"
[[override]]
name = "github.com/docker/distribution"
revision = "edc3ab29cdff8694dd6feb85cfeb4b5f1b38ed9c"
[[override]]
branch = "master"
name = "github.com/docker/libcompose"
[[override]]
name = "github.com/Nvveen/Gotty"
revision = "a8b993ba6abdb0e0c12b0125c603323a71c7790c"
source = "github.com/ijc25/Gotty"
[[override]]
# ALWAYS keep this override
name = "github.com/mailgun/timetools"
revision = "7e6055773c5137efbeb3bd2410d705fe10ab6bfd"
[[override]]
version = "v1.1.1"
name = "github.com/miekg/dns"
[[constraint]]
name = "github.com/patrickmn/go-cache"
version = "2.1.0"
[[constraint]]
name = "gopkg.in/DataDog/dd-trace-go.v1"
version = "1.7.0"
[[constraint]]
name = "github.com/instana/go-sensor"
version = "1.4.12"


@@ -1,6 +1,6 @@
The MIT License (MIT)
Copyright (c) 2016-2018 Containous SAS
Copyright (c) 2016 Containous SAS, Emile Vauge, emile@vauge.com
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -1,4 +1,4 @@
.PHONY: all docs docs-serve clear-static
.PHONY: all
TRAEFIK_ENVS := \
-e OS_ARCH_ARG \
@@ -6,31 +6,21 @@ TRAEFIK_ENVS := \
-e TESTFLAGS \
-e VERBOSE \
-e VERSION \
-e CODENAME \
-e TESTDIRS \
-e CI \
-e CONTAINER=DOCKER # Indicator for integration tests that we are running inside a container.
-e CODENAME
SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/')
TAG_NAME := $(shell git tag -l --contains HEAD)
SHA := $(shell git rev-parse HEAD)
VERSION_GIT := $(if $(TAG_NAME),$(TAG_NAME),$(SHA))
VERSION := $(if $(VERSION),$(VERSION),$(VERSION_GIT))
SRCS = $(shell git ls-files '*.go' | grep -v '^external/')
BIND_DIR := "dist"
TRAEFIK_MOUNT := -v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/containous/traefik/$(BIND_DIR)"
GIT_BRANCH := $(subst heads/,,$(shell git rev-parse --abbrev-ref HEAD 2>/dev/null))
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(subst /,-,$(GIT_BRANCH)))
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
REPONAME := $(shell echo $(REPO) | tr '[:upper:]' '[:lower:]')
TRAEFIK_IMAGE := $(if $(REPONAME),$(REPONAME),"containous/traefik")
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -e "TEST_CONTAINER=1" -v "/var/run/docker.sock:/var/run/docker.sock")
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -v "/var/run/docker.sock:/var/run/docker.sock")
DOCKER_BUILD_ARGS := $(if $(DOCKER_VERSION), "--build-arg=DOCKER_VERSION=$(DOCKER_VERSION)",)
DOCKER_RUN_OPTS := $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(DOCKER_RUN_OPTS)
DOCKER_RUN_TRAEFIK_NOTTY := docker run $(INTEGRATION_OPTS) -i $(DOCKER_RUN_OPTS)
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
print-%: ; @echo $*=$($*)
@@ -42,12 +32,8 @@ all: generate-webui build ## validate all checks, build linux binary, run all te
binary: generate-webui build ## build the linux binary
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary
crossbinary-default: generate-webui build
$(DOCKER_RUN_TRAEFIK_NOTTY) ./script/make.sh generate crossbinary-default
crossbinary-default-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-default
crossbinary: generate-webui build ## cross build the non-linux binaries
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate crossbinary
test: build ## run the unit and integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit binary test-integration
@@ -56,85 +42,45 @@ test-unit: build ## run the unit tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit
test-integration: build ## run the integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary test-integration
TEST_HOST=1 ./script/make.sh test-integration
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-integration
validate: build ## validate code, vendor
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate validate-lint validate-misspell validate-vendor
validate: build ## validate gofmt, golint and go vet
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-glide validate-gofmt validate-govet validate-golint validate-misspell
build: dist
docker build $(DOCKER_BUILD_ARGS) -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
build-webui:
docker build -t traefik-webui -f webui/Dockerfile webui
build-no-cache: dist
docker build --no-cache -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
shell: build ## start a shell inside the build env
$(DOCKER_RUN_TRAEFIK) /bin/bash
image-dirty: binary ## build a docker traefik image
image: build ## build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .
image: clear-static binary ## clean up static directory and build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .
docs:
make -C ./docs docs
docs-serve:
make -C ./docs docs-serve
dist:
mkdir dist
run-dev:
go generate
go build ./cmd/traefik
go build
./traefik
clear-static:
rm -rf static
build-webui:
docker build -t traefik-webui -f webui/Dockerfile webui
generate-webui: build-webui
if [ ! -d "static" ]; then \
mkdir -p static; \
docker run --rm -v "$$PWD/static":'/src/static' traefik-webui npm run build; \
docker run --rm -v "$$PWD/static":'/src/static' traefik-webui chown -R $(shell id -u):$(shell id -g) ../static; \
echo 'For more informations show `webui/readme.md`' > $$PWD/static/DONT-EDIT-FILES-IN-THIS-DIRECTORY.md; \
fi
generate-crd:
./script/update-generated-crd-code.sh
lint:
script/validate-lint
script/validate-golint
fmt:
gofmt -s -l -w $(SRCS)
pull-images:
grep --no-filename -E '^\s+image:' ./integration/resources/compose/*.yml | awk '{print $$2}' | sort | uniq | xargs -P 6 -n 1 docker pull
dep-ensure:
dep ensure -v
./script/prune-dep.sh
dep-prune:
./script/prune-dep.sh
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
release-packages: generate-webui build
rm -rf dist
$(DOCKER_RUN_TRAEFIK_NOTTY) goreleaser release --skip-publish
$(DOCKER_RUN_TRAEFIK_NOTTY) tar cfz dist/traefik-${VERSION}.src.tar.gz \
--exclude-vcs \
--exclude .idea \
--exclude .travis \
--exclude .semaphoreci \
--exclude .github \
--exclude dist .
$(DOCKER_RUN_TRAEFIK_NOTTY) chown -R $(shell id -u):$(shell id -g) dist/

README.md (221 changed lines)

@@ -1,178 +1,169 @@
<p align="center">
<img src="docs/content/assets/img/traefik.logo.png" alt="Traefik" title="Traefik" />
<img src="docs/img/traefik.logo.png" alt="Træfɪk" title="Træfɪk" />
</p>
[![Build Status SemaphoreCI](https://semaphoreci.com/api/v1/containous/traefik/branches/master/shields_badge.svg)](https://semaphoreci.com/containous/traefik)
[![Build Status](https://travis-ci.org/containous/traefik.svg?branch=master)](https://travis-ci.org/containous/traefik)
[![Docs](https://img.shields.io/badge/docs-current-brightgreen.svg)](https://docs.traefik.io)
[![Go Report Card](https://goreportcard.com/badge/containous/traefik)](http://goreportcard.com/report/containous/traefik)
[![](https://images.microbadger.com/badges/image/traefik.svg)](https://microbadger.com/images/traefik)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/containous/traefik/blob/master/LICENSE.md)
[![Join the chat at https://slack.traefik.io](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://slack.traefik.io)
[![Twitter](https://img.shields.io/twitter/follow/traefik.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefik)
[![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
[![Twitter](https://img.shields.io/twitter/follow/traefikproxy.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefikproxy)
Traefik is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy.
Traefik integrates with your existing infrastructure components ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), ...) and configures itself automatically and dynamically.
Pointing Traefik at your orchestrator should be the _only_ configuration step you need.
---
. **[Overview](#overview)** .
**[Features](#features)** .
**[Supported backends](#supported-backends)** .
**[Quickstart](#quickstart)** .
**[Web UI](#web-ui)** .
**[Documentation](#documentation)** .
. **[Support](#support)** .
**[Release cycle](#release-cycle)** .
**[Contributing](#contributing)** .
**[Maintainers](#maintainers)** .
**[Credits](#credits)** .
---
:construction: As stated in the [1.7 release note](https://blog.containo.us/traefik-1-7-yet-another-slice-of-awesomeness-2a9c99737889#782d), a significant update is in progress on the [master](https://github.com/containous/traefik/tree/master) branch. This branch will remain in constant evolution and prone to change with little notice, so use it for test purposes only.
Træfɪk (pronounced like [traffic](https://speak-ipa.bearbin.net/speak.cgi?speak=%CB%88tr%C3%A6f%C9%AAk)) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Kubernetes](http://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Mesos](https://github.com/apache/mesos), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), [Eureka](https://github.com/Netflix/eureka), Rest API, file...) to manage its configuration automatically and dynamically.
## Overview
Imagine that you have deployed a bunch of microservices with the help of an orchestrator (like Swarm or Kubernetes) or a service registry (like etcd or consul).
Now you want users to access these microservices, and you need a reverse proxy.
Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
If you want your users to access some of your microservices from the Internet, you will have to use a reverse proxy and configure it using virtual hosts or prefix paths:
Traditional reverse-proxies require that you configure _each_ route that will connect paths and subdomains to _each_ microservice.
In an environment where you add, remove, kill, upgrade, or scale your services _many_ times a day, the task of keeping the routes up to date becomes tedious.
- domain `api.domain.com` will point the microservice `api` in your private network
- path `domain.com/web` will point the microservice `web` in your private network
- domain `backoffice.domain.com` will point the microservices `backoffice` in your private network, load-balancing between your multiple instances
**This is when Traefik can help you!**
But a microservices architecture is dynamic... Services are added, removed, killed or upgraded often, eventually several times a day.
Traefik listens to your service registry/orchestrator API and instantly generates the routes so your microservices are connected to the outside world -- without further intervention on your part.
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.
**Run Traefik and let it do the work for you!**
_(But if you'd rather configure some of your routes manually, Traefik supports that too!)_
Here enters Træfɪk.
![Architecture](docs/img/architecture.png)
Træfɪk can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
Routes to your services will be created instantly.
Run it and forget it!
![Architecture](docs/content/assets/img/traefik-architecture.png)
## Features
- Continuously updates its configuration (No restarts!)
- Supports multiple load balancing algorithms
- Provides HTTPS to your microservices by leveraging [Let's Encrypt](https://letsencrypt.org) (wildcard certificates support)
- Circuit breakers, retry
- See the magic through its clean web UI
- Websocket, HTTP/2, GRPC ready
- Provides metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB)
- Keeps access logs (JSON, CLF)
- Fast
- Exposes a Rest API
- Packaged as a single binary file (made with :heart: with go) and available as a [tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image
## Supported Backends
- [Docker](https://docs.traefik.io/configuration/backends/docker) / [Swarm mode](https://docs.traefik.io/configuration/backends/docker#docker-swarm-mode)
- [Kubernetes](https://docs.traefik.io/configuration/backends/kubernetes)
- [Mesos](https://docs.traefik.io/configuration/backends/mesos) / [Marathon](https://docs.traefik.io/configuration/backends/marathon)
- [Rancher](https://docs.traefik.io/configuration/backends/rancher) (API, Metadata)
- [Azure Service Fabric](https://docs.traefik.io/configuration/backends/servicefabric)
- [Consul Catalog](https://docs.traefik.io/configuration/backends/consulcatalog)
- [Consul](https://docs.traefik.io/configuration/backends/consul) / [Etcd](https://docs.traefik.io/configuration/backends/etcd) / [Zookeeper](https://docs.traefik.io/configuration/backends/zookeeper) / [BoltDB](https://docs.traefik.io/configuration/backends/boltdb)
- [Eureka](https://docs.traefik.io/configuration/backends/eureka)
- [Amazon ECS](https://docs.traefik.io/configuration/backends/ecs)
- [Amazon DynamoDB](https://docs.traefik.io/configuration/backends/dynamodb)
- [File](https://docs.traefik.io/configuration/backends/file)
- [Rest](https://docs.traefik.io/configuration/backends/rest)
- [It's fast](http://docs.traefik.io/benchmarks)
- No dependency hell, single binary made with go
- Rest API
- Multiple backends supported: Docker, Swarm, Kubernetes, Marathon, Mesos, Consul, Etcd, and more to come
- Watchers for backends, can listen for changes in backends to apply a new configuration automatically
- Hot-reloading of configuration. No need to restart the process
- Graceful shutdown http connections
- Circuit breakers on backends
- Round Robin, rebalancer load-balancers
- Rest Metrics
- [Tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image included
- SSL backends support
- SSL frontend support (with SNI)
- Clean AngularJS Web UI
- Websocket support
- HTTP/2 support
- Retry request if network error
- [Let's Encrypt](https://letsencrypt.org) support (Automatic HTTPS with renewal)
- High Availability with cluster mode
## Quickstart
To get your hands on Traefik, you can use the [5-Minute Quickstart](http://docs.traefik.io/#the-traefik-quickstart-using-docker) in our documentation (you will need Docker).
You can have a quick look at Træfɪk in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.
Alternatively, if you don't want to install anything on your computer, you can try Traefik online in this great [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.
Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfɪk features and see some demos with Kubernetes.
If you are looking for a more comprehensive and real use-case example, you can also check [Play-With-Docker](http://training.play-with-docker.com/traefik-load-balancing/) to see how to load balance between multiple nodes.
[![Traefik ContainerCamp UK](http://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
Here is a talk (in French) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
You will learn fundamental Træfɪk features and see some demos with Docker, Mesos/Marathon and Let's Encrypt.
[![Traefik Devoxx France](http://img.youtube.com/vi/QvAz9mVx5TI/0.jpg)](http://www.youtube.com/watch?v=QvAz9mVx5TI)
## Web UI
You can access the simple HTML frontend of Traefik.
You can access to a simple HTML frontend of Træfik.
![Web UI Providers](docs/content/assets/img/dashboard-main.png)
![Web UI Health](docs/content/assets/img/dashboard-health.png)
![Web UI Providers](docs/img/web.frontend.png)
![Web UI Health](docs/img/traefik-health.png)
## Documentation
## Plumbing
You can find the complete documentation at [https://docs.traefik.io](https://docs.traefik.io).
A collection of contributions around Traefik can be found at [https://awesome.traefik.io](https://awesome.traefik.io).
- [Oxy](https://github.com/vulcand/oxy): an awesome proxy library made by Mailgun guys
- [Gorilla mux](https://github.com/gorilla/mux): famous request router
- [Negroni](https://github.com/codegangsta/negroni): web middlewares made simple
- [Manners](https://github.com/mailgun/manners): graceful shutdown of http.Handler servers
- [Lego](https://github.com/xenolf/lego): the best [Let's Encrypt](https://letsencrypt.org) library in go
## Support
## Test it
To get community support, you can:
- join the Traefik community Slack channel: [![Join the chat at https://slack.traefik.io](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://slack.traefik.io)
- use [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)
If you need commercial support, please contact [Containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
## Download
- Grab the latest binary from the [releases](https://github.com/containous/traefik/releases) page and run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
- The simple way: grab the latest binary from the [releases](https://github.com/containous/traefik/releases) page and just run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
```shell
./traefik --configFile=traefik.toml
```
- Or use the official tiny Docker image and run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
- Use the tiny Docker image:
```shell
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
```
- Or get the sources:
- From sources:
```shell
git clone https://github.com/containous/traefik
```
## Introductory Videos
## Documentation
Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at GopherCon 2017.
You will learn Traefik basics in less than 10 minutes.
[![Traefik GopherCon 2017](https://img.youtube.com/vi/RgudiksfL-k/0.jpg)](https://www.youtube.com/watch?v=RgudiksfL-k)
Here is a talk given by [Ed Robinson](https://github.com/errm) at [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Traefik features and see some demos with Kubernetes.
[![Traefik ContainerCamp UK](https://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
## Maintainers
[Information about process and maintainers](MAINTAINER.md)
You can find the complete documentation [here](https://docs.traefik.io).
## Contributing
If you'd like to contribute to the project, refer to the [contributing documentation](CONTRIBUTING.md).
Please refer to [this section](.github/CONTRIBUTING.md).
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md).
By participating in this project, you agree to abide by its terms.
## Code Of Conduct
## Release Cycle
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.
- We release a new version (e.g. 1.1.0, 1.2.0, 1.3.0) every other month.
- Release Candidates are available before the release (e.g. 1.1.0-rc1, 1.1.0-rc2, 1.1.0-rc3, 1.1.0-rc4, before 1.1.0)
- Bug-fixes (e.g. 1.1.1, 1.1.2, 1.2.1, 1.2.3) are released as needed (no additional features are delivered in those versions, bug-fixes only)
## Support
Each version is supported until the next one is released (e.g. 1.1.x will be supported until 1.2.0 is out)
You can join [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com) to get basic support.
If you prefer commercial support, please contact [containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
We use [Semantic Versioning](http://semver.org/)
## Træfɪk here and there
## Mailing lists
These projects use Træfɪk internally. If your company uses Træfɪk, we would be glad to get your feedback :) Contact us on [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
- General announcements, new releases: mail at news+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/news)
- Security announcements: mail at security+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/security).
- Project [Mantl](https://mantl.io/) from Cisco
![Web UI Providers](docs/img/mantl-logo.png)
> Mantl is a modern platform for rapidly deploying globally distributed services. A container orchestrator, docker, a network stack, something to pool your logs, something to monitor health, a sprinkle of service discovery and some automation.
- Project [Apollo](http://capgemini.github.io/devops/apollo/) from Cap Gemini
![Web UI Providers](docs/img/apollo-logo.png)
> Apollo is an open source project to aid with building and deploying IAAS and PAAS services. It is particularly geared towards managing containerized applications across multiple hosts, and big data type workloads. Apollo leverages other open source components to provide basic mechanisms for deployment, maintenance, and scaling of infrastructure and applications.
## Partners
[![Zenika](docs/img/zenika.logo.png)](https://zenika.com)
Zenika is one of the leading providers of professional Open Source services and agile methodologies in
Europe. We provide consulting, development, training and support for the worlds leading Open Source
software products.
[![Asteris](docs/img/asteris.logo.png)](https://aster.is)
Founded in 2014, Asteris creates next-generation infrastructure software for the modern datacenter. Asteris writes software that makes it easy for companies to implement continuous delivery and realtime data pipelines. We support the HashiCorp stack, along with Kubernetes, Apache Mesos, Spark and Kafka. We're core committers on mantl.io, consul-cli and mesos-consul.
## Maintainers
- Emile Vauge [@emilevauge](https://github.com/emilevauge)
- Vincent Demeester [@vdemeester](https://github.com/vdemeester)
- Russell Clare [@Russell-IO](https://github.com/Russell-IO)
- Ed Robinson [@errm](https://github.com/errm)
- Daniel Tomcej [@dtomcej](https://github.com/dtomcej)
- Manuel Laufenberg [@SantoDE](https://github.com/SantoDE)
## Credits
Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the logo ![logo](docs/content/assets/img/traefik.icon.png).
Traefik's logo is licensed under the Creative Commons 3.0 Attributions license.
Traefik's logo was inspired by the gopher stickers made by Takuya Ueda (https://twitter.com/tenntenn).
The original Go gopher was designed by Renee French (http://reneefrench.blogspot.com/).
Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the logo ![logo](docs/img/traefik.icon.png)

245
acme/account.go Normal file
View File

@@ -0,0 +1,245 @@
package acme
import (
"crypto"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"errors"
"reflect"
"sort"
"strings"
"sync"
"time"
"github.com/containous/traefik/log"
"github.com/xenolf/lego/acme"
)
// Account is used to store lets encrypt registration info
type Account struct {
Email string
Registration *acme.RegistrationResource
PrivateKey []byte
DomainsCertificate DomainsCertificates
ChallengeCerts map[string]*ChallengeCert
}
// ChallengeCert stores a challenge certificate
type ChallengeCert struct {
Certificate []byte
PrivateKey []byte
certificate *tls.Certificate
}
// Init inits acccount struct
func (a *Account) Init() error {
err := a.DomainsCertificate.Init()
if err != nil {
return err
}
for _, cert := range a.ChallengeCerts {
if cert.certificate == nil {
certificate, err := tls.X509KeyPair(cert.Certificate, cert.PrivateKey)
if err != nil {
return err
}
cert.certificate = &certificate
}
if cert.certificate.Leaf == nil {
leaf, err := x509.ParseCertificate(cert.certificate.Certificate[0])
if err != nil {
return err
}
cert.certificate.Leaf = leaf
}
}
return nil
}
// NewAccount creates an account
func NewAccount(email string) (*Account, error) {
// Create a user. New accounts need an email and private key to start
privateKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, err
}
domainsCerts := DomainsCertificates{Certs: []*DomainsCertificate{}}
domainsCerts.Init()
return &Account{
Email: email,
PrivateKey: x509.MarshalPKCS1PrivateKey(privateKey),
DomainsCertificate: DomainsCertificates{Certs: domainsCerts.Certs},
ChallengeCerts: map[string]*ChallengeCert{}}, nil
}
// GetEmail returns email
func (a *Account) GetEmail() string {
return a.Email
}
// GetRegistration returns lets encrypt registration resource
func (a *Account) GetRegistration() *acme.RegistrationResource {
return a.Registration
}
// GetPrivateKey returns private key
func (a *Account) GetPrivateKey() crypto.PrivateKey {
if privateKey, err := x509.ParsePKCS1PrivateKey(a.PrivateKey); err == nil {
return privateKey
}
log.Errorf("Cannot unmarshall private key %+v", a.PrivateKey)
return nil
}
// Certificate is used to store certificate info
type Certificate struct {
Domain string
CertURL string
CertStableURL string
PrivateKey []byte
Certificate []byte
}
// DomainsCertificates stores a certificate for multiple domains
type DomainsCertificates struct {
Certs []*DomainsCertificate
lock sync.RWMutex
}
func (dc *DomainsCertificates) Len() int {
return len(dc.Certs)
}
func (dc *DomainsCertificates) Swap(i, j int) {
dc.Certs[i], dc.Certs[j] = dc.Certs[j], dc.Certs[i]
}
func (dc *DomainsCertificates) Less(i, j int) bool {
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[j].Domains) {
return dc.Certs[i].tlsCert.Leaf.NotAfter.After(dc.Certs[j].tlsCert.Leaf.NotAfter)
}
if dc.Certs[i].Domains.Main == dc.Certs[j].Domains.Main {
return strings.Join(dc.Certs[i].Domains.SANs, ",") < strings.Join(dc.Certs[j].Domains.SANs, ",")
}
return dc.Certs[i].Domains.Main < dc.Certs[j].Domains.Main
}
func (dc *DomainsCertificates) removeDuplicates() {
sort.Sort(dc)
for i := 0; i < len(dc.Certs); i++ {
for i2 := i + 1; i2 < len(dc.Certs); i2++ {
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[i2].Domains) {
// delete
log.Warnf("Remove duplicate cert: %+v, expiration :%s", dc.Certs[i2].Domains, dc.Certs[i2].tlsCert.Leaf.NotAfter.String())
dc.Certs = append(dc.Certs[:i2], dc.Certs[i2+1:]...)
i2--
}
}
}
}
// Init inits DomainsCertificates
func (dc *DomainsCertificates) Init() error {
dc.lock.Lock()
defer dc.lock.Unlock()
for _, domainsCertificate := range dc.Certs {
tlsCert, err := tls.X509KeyPair(domainsCertificate.Certificate.Certificate, domainsCertificate.Certificate.PrivateKey)
if err != nil {
return err
}
domainsCertificate.tlsCert = &tlsCert
if domainsCertificate.tlsCert.Leaf == nil {
leaf, err := x509.ParseCertificate(domainsCertificate.tlsCert.Certificate[0])
if err != nil {
return err
}
domainsCertificate.tlsCert.Leaf = leaf
}
}
dc.removeDuplicates()
return nil
}
func (dc *DomainsCertificates) renewCertificates(acmeCert *Certificate, domain Domain) error {
dc.lock.Lock()
defer dc.lock.Unlock()
for _, domainsCertificate := range dc.Certs {
if reflect.DeepEqual(domain, domainsCertificate.Domains) {
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
if err != nil {
return err
}
domainsCertificate.Certificate = acmeCert
domainsCertificate.tlsCert = &tlsCert
return nil
}
}
return errors.New("Certificate to renew not found for domain " + domain.Main)
}
func (dc *DomainsCertificates) addCertificateForDomains(acmeCert *Certificate, domain Domain) (*DomainsCertificate, error) {
dc.lock.Lock()
defer dc.lock.Unlock()
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
if err != nil {
return nil, err
}
cert := DomainsCertificate{Domains: domain, Certificate: acmeCert, tlsCert: &tlsCert}
dc.Certs = append(dc.Certs, &cert)
return &cert, nil
}
func (dc *DomainsCertificates) getCertificateForDomain(domainToFind string) (*DomainsCertificate, bool) {
dc.lock.RLock()
defer dc.lock.RUnlock()
for _, domainsCertificate := range dc.Certs {
domains := []string{}
domains = append(domains, domainsCertificate.Domains.Main)
domains = append(domains, domainsCertificate.Domains.SANs...)
for _, domain := range domains {
if domain == domainToFind {
return domainsCertificate, true
}
}
}
return nil, false
}
func (dc *DomainsCertificates) exists(domainToFind Domain) (*DomainsCertificate, bool) {
dc.lock.RLock()
defer dc.lock.RUnlock()
for _, domainsCertificate := range dc.Certs {
if reflect.DeepEqual(domainToFind, domainsCertificate.Domains) {
return domainsCertificate, true
}
}
return nil, false
}
// DomainsCertificate contains a certificate for multiple domains
type DomainsCertificate struct {
Domains Domain
Certificate *Certificate
tlsCert *tls.Certificate
}
func (dc *DomainsCertificate) needRenew() bool {
for _, c := range dc.tlsCert.Certificate {
crt, err := x509.ParseCertificate(c)
if err != nil {
// If there's an error, we assume the cert is broken, and needs update
return true
}
// <= 30 days left, renew certificate
if crt.NotAfter.Before(time.Now().Add(time.Duration(24 * 30 * time.Hour))) {
return true
}
}
return false
}

607
acme/acme.go Normal file
View File

@@ -0,0 +1,607 @@
package acme
import (
"context"
"crypto/tls"
"errors"
"fmt"
"io/ioutil"
fmtlog "log"
"os"
"regexp"
"strings"
"time"
"github.com/BurntSushi/ty/fun"
"github.com/cenk/backoff"
"github.com/containous/staert"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/eapache/channels"
"github.com/xenolf/lego/acme"
"github.com/xenolf/lego/providers/dns"
)
var (
// OSCPMustStaple enables OSCP stapling as from https://github.com/xenolf/lego/issues/270
OSCPMustStaple = false
)
// ACME allows to connect to lets encrypt and retrieve certs
type ACME struct {
Email string `description:"Email address used for registration"`
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
Storage string `description:"File or key used for certificates storage."`
StorageFile string // deprecated
OnDemand bool `description:"Enable on demand certificate. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."`
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
CAServer string `description:"CA server to use."`
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
DNSProvider string `description:"Use a DNS based challenge provider rather than HTTPS."`
DelayDontCheckDNS int `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."`
ACMELogging bool `description:"Enable debug logging of ACME actions."`
client *acme.Client
defaultCertificate *tls.Certificate
store cluster.Store
challengeProvider *challengeProvider
checkOnDemandDomain func(domain string) bool
jobs *channels.InfiniteChannel
TLSConfig *tls.Config `description:"TLS config in case wildcard certs are used"`
}
//Domains parse []Domain
type Domains []Domain
//Set []Domain
func (ds *Domains) Set(str string) error {
fargs := func(c rune) bool {
return c == ',' || c == ';'
}
// get function
slice := strings.FieldsFunc(str, fargs)
if len(slice) < 1 {
return fmt.Errorf("Parse error ACME.Domain. Imposible to parse %s", str)
}
d := Domain{
Main: slice[0],
SANs: []string{},
}
if len(slice) > 1 {
d.SANs = slice[1:]
}
*ds = append(*ds, d)
return nil
}
//Get []Domain
func (ds *Domains) Get() interface{} { return []Domain(*ds) }
//String returns []Domain in string
func (ds *Domains) String() string { return fmt.Sprintf("%+v", *ds) }
//SetValue sets []Domain into the parser
func (ds *Domains) SetValue(val interface{}) {
*ds = Domains(val.([]Domain))
}
// Domain holds a domain name with SANs
type Domain struct {
Main string
SANs []string
}
func (a *ACME) init() error {
if a.ACMELogging {
acme.Logger = fmtlog.New(os.Stderr, "legolog: ", fmtlog.LstdFlags)
} else {
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
}
// no certificates in TLS config, so we add a default one
cert, err := generateDefaultCertificate()
if err != nil {
return err
}
a.defaultCertificate = cert
// TODO: to remove in the futurs
if len(a.StorageFile) > 0 && len(a.Storage) == 0 {
log.Warnf("ACME.StorageFile is deprecated, use ACME.Storage instead")
a.Storage = a.StorageFile
}
a.jobs = channels.NewInfiniteChannel()
return nil
}
// CreateClusterConfig creates a tls.config using ACME configuration in cluster mode
func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
err := a.init()
if err != nil {
return err
}
if len(a.Storage) == 0 {
return errors.New("Empty Store, please provide a key for certs storage")
}
a.checkOnDemandDomain = checkOnDemandDomain
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
tlsConfig.GetCertificate = a.getCertificate
a.TLSConfig = tlsConfig
listener := func(object cluster.Object) error {
account := object.(*Account)
account.Init()
if !leadership.IsLeader() {
a.client, err = a.buildACMEClient(account)
if err != nil {
log.Errorf("Error building ACME client %+v: %s", object, err.Error())
}
}
return nil
}
datastore, err := cluster.NewDataStore(
leadership.Pool.Ctx(),
staert.KvSource{
Store: leadership.Store,
Prefix: a.Storage,
},
&Account{},
listener)
if err != nil {
return err
}
a.store = datastore
a.challengeProvider = &challengeProvider{store: a.store}
ticker := time.NewTicker(24 * time.Hour)
leadership.Pool.AddGoCtx(func(ctx context.Context) {
log.Infof("Starting ACME renew job...")
defer log.Infof("Stopped ACME renew job...")
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
a.renewCertificates()
}
}
})
leadership.AddListener(func(elected bool) error {
if elected {
object, err := a.store.Load()
if err != nil {
return err
}
transaction, object, err := a.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
account.Init()
var needRegister bool
if account == nil || len(account.Email) == 0 {
account, err = NewAccount(a.Email)
if err != nil {
return err
}
needRegister = true
}
if err != nil {
return err
}
a.client, err = a.buildACMEClient(account)
if err != nil {
return err
}
if needRegister {
// New users will need to register; be sure to save it
log.Debugf("Register...")
reg, err := a.client.Register()
if err != nil {
return err
}
account.Registration = reg
}
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debugf("AgreeToTOS...")
err = a.client.AgreeToTOS()
if err != nil {
// Let's Encrypt Subscriber Agreement renew ?
reg, err := a.client.QueryRegistration()
if err != nil {
return err
}
account.Registration = reg
err = a.client.AgreeToTOS()
if err != nil {
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
}
}
err = transaction.Commit(account)
if err != nil {
return err
}
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
}
return nil
})
return nil
}
// CreateLocalConfig creates a tls.config using local ACME configuration
func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
err := a.init()
if err != nil {
return err
}
if len(a.Storage) == 0 {
return errors.New("Empty Store, please provide a filename for certs storage")
}
a.checkOnDemandDomain = checkOnDemandDomain
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
tlsConfig.GetCertificate = a.getCertificate
a.TLSConfig = tlsConfig
localStore := NewLocalStore(a.Storage)
a.store = localStore
a.challengeProvider = &challengeProvider{store: a.store}
var needRegister bool
var account *Account
if fileInfo, fileErr := os.Stat(a.Storage); fileErr == nil && fileInfo.Size() != 0 {
log.Infof("Loading ACME Account...")
// load account
object, err := localStore.Load()
if err != nil {
return err
}
account = object.(*Account)
} else {
log.Infof("Generating ACME Account...")
account, err = NewAccount(a.Email)
if err != nil {
return err
}
needRegister = true
}
a.client, err = a.buildACMEClient(account)
if err != nil {
return err
}
if needRegister {
// New users will need to register; be sure to save it
log.Infof("Register...")
reg, err := a.client.Register()
if err != nil {
return err
}
account.Registration = reg
}
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debugf("AgreeToTOS...")
err = a.client.AgreeToTOS()
if err != nil {
// Let's Encrypt Subscriber Agreement renew ?
reg, err := a.client.QueryRegistration()
if err != nil {
return err
}
account.Registration = reg
err = a.client.AgreeToTOS()
if err != nil {
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
}
}
// save account
transaction, _, err := a.store.Begin()
if err != nil {
return err
}
err = transaction.Commit(account)
if err != nil {
return err
}
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
ticker := time.NewTicker(24 * time.Hour)
safe.Go(func() {
for range ticker.C {
a.renewCertificates()
}
})
return nil
}
func (a *ACME) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
domain := types.CanonicalDomain(clientHello.ServerName)
account := a.store.Get().(*Account)
//use regex to test for wildcard certs that might have been added into TLSConfig
for k := range a.TLSConfig.NameToCertificate {
selector := "^" + strings.Replace(k, "*.", ".*\\.?", -1) + "$"
match, _ := regexp.MatchString(selector, domain)
if match {
return a.TLSConfig.NameToCertificate[k], nil
}
}
if challengeCert, ok := a.challengeProvider.getCertificate(domain); ok {
log.Debugf("ACME got challenge %s", domain)
return challengeCert, nil
}
if domainCert, ok := account.DomainsCertificate.getCertificateForDomain(domain); ok {
log.Debugf("ACME got domain cert %s", domain)
return domainCert.tlsCert, nil
}
if a.OnDemand {
if a.checkOnDemandDomain != nil && !a.checkOnDemandDomain(domain) {
return nil, nil
}
return a.loadCertificateOnDemand(clientHello)
}
log.Debugf("ACME got nothing %s", domain)
return nil, nil
}
func (a *ACME) retrieveCertificates() {
a.jobs.In() <- func() {
log.Infof("Retrieving ACME certificates...")
for _, domain := range a.Domains {
// check if cert isn't already loaded
account := a.store.Get().(*Account)
if _, exists := account.DomainsCertificate.exists(domain); !exists {
domains := []string{}
domains = append(domains, domain.Main)
domains = append(domains, domain.SANs...)
certificateResource, err := a.getDomainsCertificates(domains)
if err != nil {
log.Errorf("Error getting ACME certificate for domain %s: %s", domains, err.Error())
continue
}
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error creating ACME store transaction from domain %s: %s", domain, err.Error())
continue
}
account = object.(*Account)
_, err = account.DomainsCertificate.addCertificateForDomains(certificateResource, domain)
if err != nil {
log.Errorf("Error adding ACME certificate for domain %s: %s", domains, err.Error())
continue
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
continue
}
}
}
log.Infof("Retrieved ACME certificates")
}
}
func (a *ACME) renewCertificates() {
a.jobs.In() <- func() {
log.Debugf("Testing certificate renew...")
account := a.store.Get().(*Account)
for _, certificateResource := range account.DomainsCertificate.Certs {
if certificateResource.needRenew() {
log.Debugf("Renewing certificate %+v", certificateResource.Domains)
renewedCert, err := a.client.RenewCertificate(acme.CertificateResource{
Domain: certificateResource.Certificate.Domain,
CertURL: certificateResource.Certificate.CertURL,
CertStableURL: certificateResource.Certificate.CertStableURL,
PrivateKey: certificateResource.Certificate.PrivateKey,
Certificate: certificateResource.Certificate.Certificate,
}, true, OSCPMustStaple)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
log.Debugf("Renewed certificate %+v", certificateResource.Domains)
renewedACMECert := &Certificate{
Domain: renewedCert.Domain,
CertURL: renewedCert.CertURL,
CertStableURL: renewedCert.CertStableURL,
PrivateKey: renewedCert.PrivateKey,
Certificate: renewedCert.Certificate,
}
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
account = object.(*Account)
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
continue
}
}
}
}
}
func dnsOverrideDelay(delay int) error {
var err error
if delay > 0 {
log.Debugf("Delaying %d seconds rather than validating DNS propagation", delay)
acme.PreCheckDNS = func(_, _ string) (bool, error) {
time.Sleep(time.Duration(delay) * time.Second)
return true, nil
}
} else if delay < 0 {
err = fmt.Errorf("Invalid negative DelayDontCheckDNS: %d", delay)
}
return err
}
func (a *ACME) buildACMEClient(account *Account) (*acme.Client, error) {
log.Debugf("Building ACME client...")
caServer := "https://acme-v01.api.letsencrypt.org/directory"
if len(a.CAServer) > 0 {
caServer = a.CAServer
}
client, err := acme.NewClient(caServer, account, acme.RSA4096)
if err != nil {
return nil, err
}
if len(a.DNSProvider) > 0 {
log.Debugf("Using DNS Challenge provider: %s", a.DNSProvider)
err = dnsOverrideDelay(a.DelayDontCheckDNS)
if err != nil {
return nil, err
}
var provider acme.ChallengeProvider
provider, err = dns.NewDNSChallengeProviderByName(a.DNSProvider)
if err != nil {
return nil, err
}
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.TLSSNI01})
err = client.SetChallengeProvider(acme.DNS01, provider)
} else {
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.DNS01})
err = client.SetChallengeProvider(acme.TLSSNI01, a.challengeProvider)
}
if err != nil {
return nil, err
}
return client, nil
}
func (a *ACME) loadCertificateOnDemand(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
domain := types.CanonicalDomain(clientHello.ServerName)
account := a.store.Get().(*Account)
if certificateResource, ok := account.DomainsCertificate.getCertificateForDomain(domain); ok {
return certificateResource.tlsCert, nil
}
certificate, err := a.getDomainsCertificates([]string{domain})
if err != nil {
return nil, err
}
log.Debugf("Got certificate on demand for domain %s", domain)
transaction, object, err := a.store.Begin()
if err != nil {
return nil, err
}
account = object.(*Account)
cert, err := account.DomainsCertificate.addCertificateForDomains(certificate, Domain{Main: domain})
if err != nil {
return nil, err
}
if err = transaction.Commit(account); err != nil {
return nil, err
}
return cert.tlsCert, nil
}
// LoadCertificateForDomains loads certificates from ACME for given domains
func (a *ACME) LoadCertificateForDomains(domains []string) {
a.jobs.In() <- func() {
log.Debugf("LoadCertificateForDomains %s...", domains)
domains = fun.Map(types.CanonicalDomain, domains).([]string)
operation := func() error {
if a.client == nil {
return fmt.Errorf("ACME client still not built")
}
return nil
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting ACME client: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 30 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting ACME client: %v", err)
return
}
account := a.store.Get().(*Account)
var domain Domain
if len(domains) == 0 {
// no domain
return
} else if len(domains) > 1 {
domain = Domain{Main: domains[0], SANs: domains[1:]}
} else {
domain = Domain{Main: domains[0]}
}
if _, exists := account.DomainsCertificate.exists(domain); exists {
// domain already exists
return
}
certificate, err := a.getDomainsCertificates(domains)
if err != nil {
log.Errorf("Error getting ACME certificates %+v : %v", domains, err)
return
}
log.Debugf("Got certificate for domains %+v", domains)
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error creating transaction %+v : %v", domains, err)
return
}
account = object.(*Account)
_, err = account.DomainsCertificate.addCertificateForDomains(certificate, domain)
if err != nil {
log.Errorf("Error adding ACME certificates %+v : %v", domains, err)
return
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %v", account, err)
return
}
}
}
func (a *ACME) getDomainsCertificates(domains []string) (*Certificate, error) {
domains = fun.Map(types.CanonicalDomain, domains).([]string)
log.Debugf("Loading ACME certificates %s...", domains)
bundle := true
certificate, failures := a.client.ObtainCertificate(domains, bundle, nil, OSCPMustStaple)
if len(failures) > 0 {
log.Error(failures)
return nil, fmt.Errorf("Cannot obtain certificates %s+v", failures)
}
log.Debugf("Loaded ACME certificates %s", domains)
return &Certificate{
Domain: certificate.Domain,
CertURL: certificate.CertURL,
CertStableURL: certificate.CertStableURL,
PrivateKey: certificate.PrivateKey,
Certificate: certificate.Certificate,
}, nil
}
func (a *ACME) runJobs() {
safe.Go(func() {
for job := range a.jobs.Out() {
function := job.(func())
function()
}
})
}

279
acme/acme_test.go Normal file
View File

@@ -0,0 +1,279 @@
package acme
import (
"encoding/base64"
"net/http"
"net/http/httptest"
"reflect"
"sync"
"testing"
"time"
"github.com/xenolf/lego/acme"
)
func TestDomainsSet(t *testing.T) {
checkMap := map[string]Domains{
"": {},
"foo.com": {Domain{Main: "foo.com", SANs: []string{}}},
"foo.com,bar.net": {Domain{Main: "foo.com", SANs: []string{"bar.net"}}},
"foo.com,bar1.net,bar2.net,bar3.net": {Domain{Main: "foo.com", SANs: []string{"bar1.net", "bar2.net", "bar3.net"}}},
}
for in, check := range checkMap {
ds := Domains{}
ds.Set(in)
if !reflect.DeepEqual(check, ds) {
t.Errorf("Expected %+v\nGot %+v", check, ds)
}
}
}
func TestDomainsSetAppend(t *testing.T) {
inSlice := []string{
"",
"foo1.com",
"foo2.com,bar.net",
"foo3.com,bar1.net,bar2.net,bar3.net",
}
checkSlice := []Domains{
{},
{
Domain{
Main: "foo1.com",
SANs: []string{}}},
{
Domain{
Main: "foo1.com",
SANs: []string{}},
Domain{
Main: "foo2.com",
SANs: []string{"bar.net"}}},
{
Domain{
Main: "foo1.com",
SANs: []string{}},
Domain{
Main: "foo2.com",
SANs: []string{"bar.net"}},
Domain{Main: "foo3.com",
SANs: []string{"bar1.net", "bar2.net", "bar3.net"}}},
}
ds := Domains{}
for i, in := range inSlice {
ds.Set(in)
if !reflect.DeepEqual(checkSlice[i], ds) {
t.Errorf("Expected %s %+v\nGot %+v", in, checkSlice[i], ds)
}
}
}
func TestCertificatesRenew(t *testing.T) {
foo1Cert, foo1Key, _ := generateKeyPair("foo1.com", time.Now())
foo2Cert, foo2Key, _ := generateKeyPair("foo2.com", time.Now())
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
{
Domains: Domain{
Main: "foo1.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo1.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo1Key,
Certificate: foo1Cert,
},
},
{
Domains: Domain{
Main: "foo2.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo2.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo2Key,
Certificate: foo2Cert,
},
},
},
}
foo1Cert, foo1Key, _ = generateKeyPair("foo1.com", time.Now())
newCertificate := &Certificate{
Domain: "foo1.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo1Key,
Certificate: foo1Cert,
}
err := domainsCertificates.renewCertificates(
newCertificate,
Domain{
Main: "foo1.com",
SANs: []string{}})
if err != nil {
t.Errorf("Error in renewCertificates :%v", err)
}
if len(domainsCertificates.Certs) != 2 {
t.Errorf("Expected domainsCertificates length %d %+v\nGot %+v", 2, domainsCertificates.Certs, len(domainsCertificates.Certs))
}
if !reflect.DeepEqual(domainsCertificates.Certs[0].Certificate, newCertificate) {
t.Errorf("Expected new certificate %+v \nGot %+v", newCertificate, domainsCertificates.Certs[0].Certificate)
}
}
func TestRemoveDuplicates(t *testing.T) {
now := time.Now()
fooCert, fooKey, _ := generateKeyPair("foo.com", now)
foo24Cert, foo24Key, _ := generateKeyPair("foo.com", now.Add(24*time.Hour))
foo48Cert, foo48Key, _ := generateKeyPair("foo.com", now.Add(48*time.Hour))
barCert, barKey, _ := generateKeyPair("bar.com", now)
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo24Key,
Certificate: foo24Cert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo48Key,
Certificate: foo48Cert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: fooKey,
Certificate: fooCert,
},
},
{
Domains: Domain{
Main: "bar.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "bar.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: barKey,
Certificate: barCert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo48Key,
Certificate: foo48Cert,
},
},
},
}
domainsCertificates.Init()
if len(domainsCertificates.Certs) != 2 {
t.Errorf("Expected domainsCertificates length %d %+v\nGot %+v", 2, domainsCertificates.Certs, len(domainsCertificates.Certs))
}
for _, cert := range domainsCertificates.Certs {
switch cert.Domains.Main {
case "bar.com":
continue
case "foo.com":
if !cert.tlsCert.Leaf.NotAfter.Equal(now.Add(48 * time.Hour).Truncate(1 * time.Second)) {
t.Errorf("Bad expiration %s date for domain %+v, now %s", cert.tlsCert.Leaf.NotAfter.String(), cert, now.Add(48*time.Hour).Truncate(1*time.Second).String())
}
default:
t.Errorf("Unknown domain %+v", cert)
}
}
}
func TestNoPreCheckOverride(t *testing.T) {
acme.PreCheckDNS = nil // Irreversable - but not expecting real calls into this during testing process
err := dnsOverrideDelay(0)
if err != nil {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS != nil {
t.Errorf("Unexpected change to acme.PreCheckDNS when leaving DNS verification as is.")
}
}
func TestSillyPreCheckOverride(t *testing.T) {
err := dnsOverrideDelay(-5)
if err == nil {
t.Errorf("Missing expected error in dnsOverrideDelay!")
}
}
func TestPreCheckOverride(t *testing.T) {
acme.PreCheckDNS = nil // Irreversable - but not expecting real calls into this during testing process
err := dnsOverrideDelay(5)
if err != nil {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS == nil {
t.Errorf("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}
func TestAcmeClientCreation(t *testing.T) {
acme.PreCheckDNS = nil // Irreversable - but not expecting real calls into this during testing process
// Lengthy setup to avoid external web requests - oh for easier golang testing!
account := &Account{Email: "f@f"}
account.PrivateKey, _ = base64.StdEncoding.DecodeString(`
MIIBPAIBAAJBAMp2Ni92FfEur+CAvFkgC12LT4l9D53ApbBpDaXaJkzzks+KsLw9zyAxvlrfAyTCQ
7tDnEnIltAXyQ0uOFUUdcMCAwEAAQJAK1FbipATZcT9cGVa5x7KD7usytftLW14heQUPXYNV80r/3
lmnpvjL06dffRpwkYeN8DATQF/QOcy3NNNGDw/4QIhAPAKmiZFxA/qmRXsuU8Zhlzf16WrNZ68K64
asn/h3qZrAiEA1+wFR3WXCPIolOvd7AHjfgcTKQNkoMPywU4FYUNQ1AkCIQDv8yk0qPjckD6HVCPJ
llJh9MC0svjevGtNlxJoE3lmEQIhAKXy1wfZ32/XtcrnENPvi6lzxI0T94X7s5pP3aCoPPoJAiEAl
cijFkALeQp/qyeXdFld2v9gUN3eCgljgcl0QweRoIc=---`)
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(`{
"new-authz": "https://foo/acme/new-authz",
"new-cert": "https://foo/acme/new-cert",
"new-reg": "https://foo/acme/new-reg",
"revoke-cert": "https://foo/acme/revoke-cert"
}`))
}))
defer ts.Close()
a := ACME{DNSProvider: "manual", DelayDontCheckDNS: 10, CAServer: ts.URL}
client, err := a.buildACMEClient(account)
if err != nil {
t.Errorf("Error in buildACMEClient: %v", err)
}
if client == nil {
t.Errorf("No client from buildACMEClient!")
}
if acme.PreCheckDNS == nil {
t.Errorf("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}

97
acme/challengeProvider.go Normal file
View File

@@ -0,0 +1,97 @@
package acme
import (
"crypto/tls"
"fmt"
"strings"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/xenolf/lego/acme"
)
var _ acme.ChallengeProviderTimeout = (*challengeProvider)(nil)
type challengeProvider struct {
store cluster.Store
lock sync.RWMutex
}
func (c *challengeProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
log.Debugf("Challenge GetCertificate %s", domain)
if !strings.HasSuffix(domain, ".acme.invalid") {
return nil, false
}
c.lock.RLock()
defer c.lock.RUnlock()
account := c.store.Get().(*Account)
if account.ChallengeCerts == nil {
return nil, false
}
account.Init()
var result *tls.Certificate
operation := func() error {
for _, cert := range account.ChallengeCerts {
for _, dns := range cert.certificate.Leaf.DNSNames {
if domain == dns {
result = cert.certificate
return nil
}
}
}
return fmt.Errorf("Cannot find challenge cert for domain %s", domain)
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting cert: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting cert: %v", err)
return nil, false
}
return result, true
}
func (c *challengeProvider) Present(domain, token, keyAuth string) error {
log.Debugf("Challenge Present %s", domain)
cert, _, err := TLSSNI01ChallengeCert(keyAuth)
if err != nil {
return err
}
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
if account.ChallengeCerts == nil {
account.ChallengeCerts = map[string]*ChallengeCert{}
}
account.ChallengeCerts[domain] = &cert
return transaction.Commit(account)
}
func (c *challengeProvider) CleanUp(domain, token, keyAuth string) error {
log.Debugf("Challenge CleanUp %s", domain)
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
delete(account.ChallengeCerts, domain)
return transaction.Commit(account)
}
func (c *challengeProvider) Timeout() (timeout, interval time.Duration) {
return 60 * time.Second, 5 * time.Second
}

135
acme/crypto.go Normal file
View File

@@ -0,0 +1,135 @@
package acme
import (
"crypto"
"crypto/ecdsa"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/hex"
"encoding/pem"
"fmt"
"math/big"
"time"
)
func generateDefaultCertificate() (*tls.Certificate, error) {
randomBytes := make([]byte, 100)
_, err := rand.Read(randomBytes)
if err != nil {
return nil, err
}
zBytes := sha256.Sum256(randomBytes)
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.traefik.default", z[:32], z[32:])
certPEM, keyPEM, err := generateKeyPair(domain, time.Time{})
if err != nil {
return nil, err
}
certificate, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
return nil, err
}
return &certificate, nil
}
func generateKeyPair(domain string, expiration time.Time) ([]byte, []byte, error) {
rsaPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, nil, err
}
keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(rsaPrivKey)})
certPEM, err := generatePemCert(rsaPrivKey, domain, expiration)
if err != nil {
return nil, nil, err
}
return certPEM, keyPEM, nil
}
func generatePemCert(privKey *rsa.PrivateKey, domain string, expiration time.Time) ([]byte, error) {
derBytes, err := generateDerCert(privKey, expiration, domain)
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derBytes}), nil
}
func generateDerCert(privKey *rsa.PrivateKey, expiration time.Time, domain string) ([]byte, error) {
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
if err != nil {
return nil, err
}
if expiration.IsZero() {
expiration = time.Now().Add(365)
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: "TRAEFIK DEFAULT CERT",
},
NotBefore: time.Now(),
NotAfter: expiration,
KeyUsage: x509.KeyUsageKeyEncipherment,
BasicConstraintsValid: true,
DNSNames: []string{domain},
}
return x509.CreateCertificate(rand.Reader, &template, &template, &privKey.PublicKey, privKey)
}
// TLSSNI01ChallengeCert returns a certificate and target domain for the `tls-sni-01` challenge
func TLSSNI01ChallengeCert(keyAuth string) (ChallengeCert, string, error) {
// generate a new RSA key for the certificates
var tempPrivKey crypto.PrivateKey
tempPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return ChallengeCert{}, "", err
}
rsaPrivKey := tempPrivKey.(*rsa.PrivateKey)
rsaPrivPEM := pemEncode(rsaPrivKey)
zBytes := sha256.Sum256([]byte(keyAuth))
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.acme.invalid", z[:32], z[32:])
tempCertPEM, err := generatePemCert(rsaPrivKey, domain, time.Time{})
if err != nil {
return ChallengeCert{}, "", err
}
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
if err != nil {
return ChallengeCert{}, "", err
}
return ChallengeCert{Certificate: tempCertPEM, PrivateKey: rsaPrivPEM, certificate: &certificate}, domain, nil
}
func pemEncode(data interface{}) []byte {
var pemBlock *pem.Block
switch key := data.(type) {
case *ecdsa.PrivateKey:
keyBytes, _ := x509.MarshalECPrivateKey(key)
pemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}
case *rsa.PrivateKey:
pemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
break
case *x509.CertificateRequest:
pemBlock = &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: key.Raw}
break
case []byte:
pemBlock = &pem.Block{Type: "CERTIFICATE", Bytes: []byte(data.([]byte))}
}
return pem.EncodeToMemory(pemBlock)
}

97
acme/localStore.go Normal file
View File

@@ -0,0 +1,97 @@
package acme
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"sync"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
)
var _ cluster.Store = (*LocalStore)(nil)
// LocalStore is a store using a file as storage
type LocalStore struct {
file string
storageLock sync.RWMutex
account *Account
}
// NewLocalStore create a LocalStore
func NewLocalStore(file string) *LocalStore {
return &LocalStore{
file: file,
}
}
// Get atomically a struct from the file storage
func (s *LocalStore) Get() cluster.Object {
s.storageLock.RLock()
defer s.storageLock.RUnlock()
return s.account
}
// Load loads file into store
func (s *LocalStore) Load() (cluster.Object, error) {
s.storageLock.Lock()
defer s.storageLock.Unlock()
account := &Account{}
err := checkPermissions(s.file)
if err != nil {
return nil, err
}
f, err := os.Open(s.file)
if err != nil {
return nil, err
}
defer f.Close()
file, err := ioutil.ReadAll(f)
if err != nil {
return nil, err
}
if err := json.Unmarshal(file, &account); err != nil {
return nil, err
}
account.Init()
s.account = account
log.Infof("Loaded ACME config from store %s", s.file)
return account, nil
}
// Begin creates a transaction with the KV store.
func (s *LocalStore) Begin() (cluster.Transaction, cluster.Object, error) {
s.storageLock.Lock()
return &localTransaction{LocalStore: s}, s.account, nil
}
var _ cluster.Transaction = (*localTransaction)(nil)
type localTransaction struct {
*LocalStore
dirty bool
}
// Commit allows to set an object in the file storage
func (t *localTransaction) Commit(object cluster.Object) error {
t.LocalStore.account = object.(*Account)
defer t.storageLock.Unlock()
if t.dirty {
return fmt.Errorf("transaction already used, please begin a new one")
}
// write account to file
data, err := json.MarshalIndent(object, "", " ")
if err != nil {
return err
}
err = ioutil.WriteFile(t.file, data, 0600)
if err != nil {
return err
}
t.dirty = true
return nil
}

25
acme/localStore_unix.go Normal file
View File

@@ -0,0 +1,25 @@
// +build !windows
package acme
import (
"fmt"
"os"
)
// Check file permissions
func checkPermissions(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
fi, err := f.Stat()
if err != nil {
return err
}
if fi.Mode().Perm()&0077 != 0 {
return fmt.Errorf("permissions %o for %s are too open, please use 600", fi.Mode().Perm(), name)
}
return nil
}

View File

@@ -0,0 +1,6 @@
package acme
// Do not check file permissions on Windows right now
func checkPermissions(name string) error {
return nil
}

34
adapters.go Normal file
View File

@@ -0,0 +1,34 @@
/*
Copyright
*/
package main
import (
"net/http"
"github.com/containous/traefik/log"
)
// OxyLogger implements oxy Logger interface with logrus.
type OxyLogger struct {
}
// Infof logs specified string as Debug level in logrus.
func (oxylogger *OxyLogger) Infof(format string, args ...interface{}) {
log.Debugf(format, args...)
}
// Warningf logs specified string as Warning level in logrus.
func (oxylogger *OxyLogger) Warningf(format string, args ...interface{}) {
log.Warningf(format, args...)
}
// Errorf logs specified string as Warningf level in logrus.
func (oxylogger *OxyLogger) Errorf(format string, args ...interface{}) {
log.Warningf(format, args...)
}
func notFoundHandler(w http.ResponseWriter, r *http.Request) {
http.NotFound(w, r)
//templatesRenderer.HTML(w, http.StatusNotFound, "notFound", nil)
}

View File

@@ -1,34 +1,36 @@
FROM golang:1.12-alpine
FROM golang:1.7
RUN apk --update upgrade \
&& apk --no-cache --no-progress add git mercurial bash gcc musl-dev curl tar \
&& rm -rf /var/cache/apk/*
# Download golangci-lint and misspell binary to bin folder in $GOPATH
RUN curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | bash -s -- -b $GOPATH/bin v1.15.0 \
&& go get github.com/client9/misspell/cmd/misspell
# Download goreleaser binary to bin folder in $GOPATH
RUN curl -sfL https://install.goreleaser.com/github.com/goreleaser/goreleaser.sh | sh
RUN go get github.com/jteeuwen/go-bindata/... \
&& go get github.com/golang/lint/golint \
&& go get github.com/kisielk/errcheck \
&& go get github.com/client9/misspell/cmd/misspell \
&& go get github.com/mattfarina/glide-hash
# Which docker version to test on
ARG DOCKER_VERSION=17.03.2
ARG DEP_VERSION=0.5.0
ARG DOCKER_VERSION=1.10.3
# Download go-bindata binary to bin folder in $GOPATH
RUN mkdir -p /usr/local/bin \
&& curl -fsSL -o /usr/local/bin/go-bindata https://github.com/containous/go-bindata/releases/download/v1.0.0/go-bindata \
&& chmod +x /usr/local/bin/go-bindata
# Download dep binary to bin folder in $GOPATH
# Which glide version to test on
ARG GLIDE_VERSION=v0.12.3
# Download glide
RUN mkdir -p /usr/local/bin \
&& curl -fsSL -o /usr/local/bin/dep https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 \
&& chmod +x /usr/local/bin/dep
&& curl -fL https://github.com/Masterminds/glide/releases/download/${GLIDE_VERSION}/glide-${GLIDE_VERSION}-linux-amd64.tar.gz \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
# Download docker
RUN mkdir -p /usr/local/bin \
&& curl -fL https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}-ce.tgz \
&& curl -fL https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
WORKDIR /go/src/github.com/containous/traefik
COPY glide.yaml glide.yaml
COPY glide.lock glide.lock
RUN glide install -v
COPY integration/glide.yaml integration/glide.yaml
COPY integration/glide.lock integration/glide.lock
RUN cd integration && glide install
COPY . /go/src/github.com/containous/traefik

36
circle.yml Normal file
View File

@@ -0,0 +1,36 @@
machine:
pre:
- sudo docker -d -e lxc -s btrfs -H tcp://0.0.0.0:2375:
background: true
- curl --retry 15 --retry-delay 3 -v http://172.17.42.1:2375/version
environment:
REPO: $CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME
DOCKER_HOST: tcp://172.17.42.1:2375
MAKE_DOCKER_HOST: $DOCKER_HOST
VERSION: v1.0.alpha.$CIRCLE_BUILD_NUM
dependencies:
pre:
- docker version
- go get github.com/tcnksm/ghr
- make validate
override:
- make binary
test:
override:
- make test-unit
- make test-integration
post:
- make crossbinary
- make image
deployment:
hub:
branch: master
commands:
- ghr -t $GITHUB_TOKEN -u $CIRCLE_PROJECT_USERNAME -r $CIRCLE_PROJECT_REPONAME --prerelease ${VERSION} dist/
- docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
- docker push ${REPO,,}:latest
- docker tag ${REPO,,}:latest ${REPO,,}:${VERSION}
- docker push ${REPO,,}:${VERSION}

255
cluster/datastore.go Normal file
View File

@@ -0,0 +1,255 @@
package cluster
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/staert"
"github.com/containous/traefik/job"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/docker/libkv/store"
"github.com/satori/go.uuid"
)
// Metadata stores Object plus metadata
type Metadata struct {
object Object
Object []byte
Lock string
}
// NewMetadata returns new Metadata
func NewMetadata(object Object) *Metadata {
return &Metadata{object: object}
}
// Marshall marshalls object
func (m *Metadata) Marshall() error {
var err error
m.Object, err = json.Marshal(m.object)
return err
}
func (m *Metadata) unmarshall() error {
if len(m.Object) == 0 {
return nil
}
return json.Unmarshal(m.Object, m.object)
}
// Listener is called when Object has been changed in KV store
type Listener func(Object) error
var _ Store = (*Datastore)(nil)
// Datastore holds a struct synced in a KV store
type Datastore struct {
kv staert.KvSource
ctx context.Context
localLock *sync.RWMutex
meta *Metadata
lockKey string
listener Listener
}
// NewDataStore creates a Datastore
func NewDataStore(ctx context.Context, kvSource staert.KvSource, object Object, listener Listener) (*Datastore, error) {
datastore := Datastore{
kv: kvSource,
ctx: ctx,
meta: &Metadata{object: object},
lockKey: kvSource.Prefix + "/lock",
localLock: &sync.RWMutex{},
listener: listener,
}
err := datastore.watchChanges()
if err != nil {
return nil, err
}
return &datastore, nil
}
func (d *Datastore) watchChanges() error {
stopCh := make(chan struct{})
kvCh, err := d.kv.Watch(d.lockKey, stopCh)
if err != nil {
return err
}
go func() {
ctx, cancel := context.WithCancel(d.ctx)
operation := func() error {
for {
select {
case <-ctx.Done():
stopCh <- struct{}{}
return nil
case _, ok := <-kvCh:
if !ok {
cancel()
return err
}
err = d.reload()
if err != nil {
return err
}
// log.Debugf("Datastore object change received: %+v", d.meta)
if d.listener != nil {
err := d.listener(d.meta.object)
if err != nil {
log.Errorf("Error calling datastore listener: %s", err)
}
}
}
}
}
notify := func(err error, time time.Duration) {
log.Errorf("Error in watch datastore: %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
if err != nil {
log.Errorf("Error in watch datastore: %v", err)
}
}()
return nil
}
func (d *Datastore) reload() error {
log.Debugf("Datastore reload")
d.localLock.Lock()
err := d.kv.LoadConfig(d.meta)
if err != nil {
d.localLock.Unlock()
return err
}
err = d.meta.unmarshall()
if err != nil {
d.localLock.Unlock()
return err
}
d.localLock.Unlock()
return nil
}
// Begin creates a transaction with the KV store.
func (d *Datastore) Begin() (Transaction, Object, error) {
id := uuid.NewV4().String()
log.Debugf("Transaction %s begins", id)
remoteLock, err := d.kv.NewLock(d.lockKey, &store.LockOptions{TTL: 20 * time.Second, Value: []byte(id)})
if err != nil {
return nil, nil, err
}
stopCh := make(chan struct{})
ctx, cancel := context.WithCancel(d.ctx)
var errLock error
go func() {
_, errLock = remoteLock.Lock(stopCh)
cancel()
}()
select {
case <-ctx.Done():
if errLock != nil {
return nil, nil, errLock
}
case <-d.ctx.Done():
stopCh <- struct{}{}
return nil, nil, d.ctx.Err()
}
// we got the lock! Now make sure we are synced with KV store
operation := func() error {
meta := d.get()
if meta.Lock != id {
return fmt.Errorf("Object lock value: expected %s, got %s", id, meta.Lock)
}
return nil
}
notify := func(err error, time time.Duration) {
log.Errorf("Datastore sync error: %v, retrying in %s", err, time)
err = d.reload()
if err != nil {
log.Errorf("Error reloading: %+v", err)
}
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err = backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
return nil, nil, fmt.Errorf("Datastore cannot sync: %v", err)
}
// we synced with KV store, we can now return Setter
return &datastoreTransaction{
Datastore: d,
remoteLock: remoteLock,
id: id,
}, d.meta.object, nil
}
func (d *Datastore) get() *Metadata {
d.localLock.RLock()
defer d.localLock.RUnlock()
return d.meta
}
// Load atomically loads a struct from the KV store
func (d *Datastore) Load() (Object, error) {
d.localLock.Lock()
defer d.localLock.Unlock()
err := d.kv.LoadConfig(d.meta)
if err != nil {
return nil, err
}
err = d.meta.unmarshall()
if err != nil {
return nil, err
}
return d.meta.object, nil
}
// Get atomically gets a struct from the KV store
func (d *Datastore) Get() Object {
d.localLock.RLock()
defer d.localLock.RUnlock()
return d.meta.object
}
var _ Transaction = (*datastoreTransaction)(nil)
type datastoreTransaction struct {
*Datastore
remoteLock store.Locker
dirty bool
id string
}
// Commit sets an object in the KV store
func (s *datastoreTransaction) Commit(object Object) error {
s.localLock.Lock()
defer s.localLock.Unlock()
if s.dirty {
return fmt.Errorf("Transaction already used, please begin a new one")
}
s.Datastore.meta.object = object
err := s.Datastore.meta.Marshall()
if err != nil {
return fmt.Errorf("Marshall error: %s", err)
}
err = s.kv.StoreConfig(s.Datastore.meta)
if err != nil {
return fmt.Errorf("StoreConfig error: %s", err)
}
err = s.remoteLock.Unlock()
if err != nil {
return fmt.Errorf("Unlock error: %s", err)
}
s.dirty = true
log.Debugf("Transaction committed %s", s.id)
return nil
}
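A minimal usage sketch for the Datastore above (not part of this diff; the `Config` type and the already-configured `staert.KvSource` are assumptions for illustration): the object is reloaded whenever the lock key changes, and the listener receives the refreshed value.
```go
package example

import (
	"context"

	"github.com/containous/staert"
	"github.com/containous/traefik/cluster"
	"github.com/containous/traefik/log"
)

// Config is a hypothetical object kept in sync through the KV store.
type Config struct {
	Version int
}

// newSharedConfig wires a Datastore to a KV source and logs remote changes.
func newSharedConfig(ctx context.Context, kv staert.KvSource) (*cluster.Datastore, error) {
	listener := func(obj cluster.Object) error {
		// Called after the datastore reloads the object from the KV store.
		log.Infof("Shared config changed: %+v", obj)
		return nil
	}
	return cluster.NewDataStore(ctx, kv, &Config{}, listener)
}
```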

104
cluster/leadership.go Normal file
View File

@@ -0,0 +1,104 @@
package cluster
import (
"context"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/docker/leadership"
)
// Leadership allows leadership election using a KV store
type Leadership struct {
*safe.Pool
*types.Cluster
candidate *leadership.Candidate
leader *safe.Safe
listeners []LeaderListener
}
// NewLeadership creates a leadership
func NewLeadership(ctx context.Context, cluster *types.Cluster) *Leadership {
return &Leadership{
Pool: safe.NewPool(ctx),
Cluster: cluster,
candidate: leadership.NewCandidate(cluster.Store, cluster.Store.Prefix+"/leader", cluster.Node, 20*time.Second),
listeners: []LeaderListener{},
leader: safe.New(false),
}
}
// LeaderListener is called when leadership has changed
type LeaderListener func(elected bool) error
// Participate tries to be a leader
func (l *Leadership) Participate(pool *safe.Pool) {
pool.GoCtx(func(ctx context.Context) {
log.Debugf("Node %s running for election", l.Cluster.Node)
defer log.Debugf("Node %s no more running for election", l.Cluster.Node)
backOff := backoff.NewExponentialBackOff()
operation := func() error {
return l.run(ctx, l.candidate)
}
notify := func(err error, time time.Duration) {
log.Errorf("Leadership election error %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), backOff, notify)
if err != nil {
log.Errorf("Cannot elect leadership %+v", err)
}
})
}
// AddListener adds a leadership listener
func (l *Leadership) AddListener(listener LeaderListener) {
l.listeners = append(l.listeners, listener)
}
// Resign resigns from being a leader
func (l *Leadership) Resign() {
l.candidate.Resign()
log.Infof("Node %s resigned", l.Cluster.Node)
}
func (l *Leadership) run(ctx context.Context, candidate *leadership.Candidate) error {
electedCh, errCh := candidate.RunForElection()
for {
select {
case elected := <-electedCh:
l.onElection(elected)
case err := <-errCh:
return err
case <-ctx.Done():
l.candidate.Resign()
return nil
}
}
}
func (l *Leadership) onElection(elected bool) {
if elected {
log.Infof("Node %s elected leader ♚", l.Cluster.Node)
l.leader.Set(true)
l.Start()
} else {
log.Infof("Node %s elected slave ♝", l.Cluster.Node)
l.leader.Set(false)
l.Stop()
}
for _, listener := range l.listeners {
err := listener(elected)
if err != nil {
log.Errorf("Error calling Leadership listener: %s", err)
}
}
}
// IsLeader returns true if current node is leader
func (l *Leadership) IsLeader() bool {
return l.leader.Get().(bool)
}
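A hedged sketch of how this election API might be wired up (not part of this diff; the `*types.Cluster` value is assumed to be built from configuration elsewhere):
```go
package example

import (
	"context"

	"github.com/containous/traefik/cluster"
	"github.com/containous/traefik/log"
	"github.com/containous/traefik/safe"
	"github.com/containous/traefik/types"
)

// runElection campaigns for leadership and logs every change of state.
func runElection(ctx context.Context, clusterCfg *types.Cluster) {
	pool := safe.NewPool(ctx)
	leadership := cluster.NewLeadership(ctx, clusterCfg)
	leadership.AddListener(func(elected bool) error {
		// Invoked on every election result for this node.
		log.Infof("Leadership changed, elected: %v", elected)
		return nil
	})
	leadership.Participate(pool)
}
```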

16
cluster/store.go Normal file
View File

@@ -0,0 +1,16 @@
package cluster
// Object is the struct to store
type Object interface{}
// Store is a generic interface to represent a storage
type Store interface {
Load() (Object, error)
Get() Object
Begin() (Transaction, Object, error)
}
// Transaction sets a struct in the KV store
type Transaction interface {
Commit(object Object) error
}
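To make the intent of these small interfaces concrete, here is a hypothetical caller using the Begin/Commit pattern (the `AppConfig` type is an assumption; any `Store` implementation, such as the Datastore above, would do):
```go
package example

import (
	"fmt"

	"github.com/containous/traefik/cluster"
)

// AppConfig is a hypothetical object shared through the KV store.
type AppConfig struct {
	Version int
}

// bumpVersion takes the distributed lock via Begin, mutates the returned
// object, and writes it back with Commit, which also releases the lock.
func bumpVersion(store cluster.Store) error {
	tx, obj, err := store.Begin()
	if err != nil {
		return err
	}
	cfg, ok := obj.(*AppConfig)
	if !ok {
		return fmt.Errorf("unexpected object type %T", obj)
	}
	cfg.Version++
	return tx.Commit(cfg)
}
```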

111
cmd/bug.go Normal file
View File

@@ -0,0 +1,111 @@
package cmd
import (
"bytes"
"encoding/json"
"fmt"
"net/url"
"os/exec"
"regexp"
"runtime"
"text/template"
"github.com/containous/flaeg"
"github.com/mvdan/xurls"
)
var (
bugtracker = "https://github.com/containous/traefik/issues/new"
bugTemplate = `### What version of Traefik are you using?
` + "```" + `
{{.Version}}
` + "```" + `
### What is your environment & configuration (arguments, toml...)?
` + "```" + `
{{.Configuration}}
` + "```" + `
### What did you do?
### What did you expect to see?
### What did you see instead?
`
)
// NewBugCmd builds a new Bug command
func NewBugCmd(traefikConfiguration interface{}, traefikPointersConfiguration interface{}) *flaeg.Command {
//version Command init
return &flaeg.Command{
Name: "bug",
Description: `Report an issue on Traefik bugtracker`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
var version bytes.Buffer
if err := getVersionPrint(&version); err != nil {
return err
}
tmpl, err := template.New("").Parse(bugTemplate)
if err != nil {
return err
}
configJSON, err := json.MarshalIndent(traefikConfiguration, "", " ")
if err != nil {
return err
}
v := struct {
Version string
Configuration string
}{
Version: version.String(),
Configuration: anonymize(string(configJSON)),
}
var bug bytes.Buffer
if err := tmpl.Execute(&bug, v); err != nil {
return err
}
body := bug.String()
url := bugtracker + "?body=" + url.QueryEscape(body)
if err := openBrowser(url); err != nil {
fmt.Print("Please file a new issue at " + bugtracker + " using this template:\n\n")
fmt.Print(body)
}
return nil
},
Metadata: map[string]string{
"parseAllSources": "true",
},
}
}
func openBrowser(url string) error {
var err error
switch runtime.GOOS {
case "linux":
err = exec.Command("xdg-open", url).Start()
case "windows":
err = exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
case "darwin":
err = exec.Command("open", url).Start()
default:
err = fmt.Errorf("unsupported platform")
}
return err
}
func anonymize(input string) string {
replace := "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
mailExp := regexp.MustCompile(`\w[-._\w]*\w@\w[-._\w]*\w\.\w{2,3}"`)
return xurls.Relaxed.ReplaceAllString(mailExp.ReplaceAllString(input, replace), replace)
}

View File

@@ -1,196 +0,0 @@
package cmd
import (
"time"
"github.com/containous/flaeg/parse"
"github.com/containous/traefik/pkg/config/static"
"github.com/containous/traefik/pkg/middlewares/accesslog"
"github.com/containous/traefik/pkg/ping"
"github.com/containous/traefik/pkg/provider/docker"
"github.com/containous/traefik/pkg/provider/file"
"github.com/containous/traefik/pkg/provider/kubernetes/ingress"
"github.com/containous/traefik/pkg/provider/marathon"
"github.com/containous/traefik/pkg/provider/rest"
"github.com/containous/traefik/pkg/tracing/datadog"
"github.com/containous/traefik/pkg/tracing/instana"
"github.com/containous/traefik/pkg/tracing/jaeger"
"github.com/containous/traefik/pkg/tracing/zipkin"
"github.com/containous/traefik/pkg/types"
jaegercli "github.com/uber/jaeger-client-go"
)
// TraefikConfiguration holds GlobalConfiguration and other stuff
type TraefikConfiguration struct {
static.Configuration `mapstructure:",squash" export:"true"`
ConfigFile string `short:"c" description:"Configuration file to use (TOML)." export:"true"`
}
// NewTraefikConfiguration creates a TraefikConfiguration with default values
func NewTraefikConfiguration() *TraefikConfiguration {
return &TraefikConfiguration{
Configuration: static.Configuration{
Global: &static.Global{
CheckNewVersion: true,
},
EntryPoints: make(static.EntryPoints),
Providers: &static.Providers{
ProvidersThrottleDuration: parse.Duration(2 * time.Second),
},
ServersTransport: &static.ServersTransport{
MaxIdleConnsPerHost: 200,
},
},
ConfigFile: "",
}
}
// NewTraefikDefaultPointersConfiguration creates a TraefikConfiguration with pointers default values
func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
// default File
var defaultFile file.Provider
defaultFile.Watch = true
defaultFile.Filename = "" // needs equivalent to viper.ConfigFileUsed()
// default Ping
var defaultPing = ping.Handler{
EntryPoint: "traefik",
}
// default TraefikLog
defaultTraefikLog := types.TraefikLog{
Format: "common",
FilePath: "",
}
// default AccessLog
defaultAccessLog := types.AccessLog{
Format: accesslog.CommonFormat,
FilePath: "",
Filters: &types.AccessLogFilters{},
Fields: &types.AccessLogFields{
DefaultMode: types.AccessLogKeep,
Headers: &types.FieldHeaders{
DefaultMode: types.AccessLogKeep,
},
},
}
// default Tracing
defaultTracing := static.Tracing{
Backend: "jaeger",
ServiceName: "traefik",
SpanNameLimit: 0,
Jaeger: &jaeger.Config{
SamplingServerURL: "http://localhost:5778/sampling",
SamplingType: "const",
SamplingParam: 1.0,
LocalAgentHostPort: "127.0.0.1:6831",
Propagation: "jaeger",
Gen128Bit: false,
TraceContextHeaderName: jaegercli.TraceContextHeaderName,
},
Zipkin: &zipkin.Config{
HTTPEndpoint: "http://localhost:9411/api/v1/spans",
SameSpan: false,
ID128Bit: true,
Debug: false,
SampleRate: 1.0,
},
DataDog: &datadog.Config{
LocalAgentHostPort: "localhost:8126",
GlobalTag: "",
Debug: false,
PrioritySampling: false,
},
Instana: &instana.Config{
LocalAgentHost: "localhost",
LocalAgentPort: 42699,
LogLevel: "info",
},
}
// default ApiConfiguration
defaultAPI := static.API{
EntryPoint: "traefik",
Dashboard: true,
}
defaultAPI.Statistics = &types.Statistics{
RecentErrors: 10,
}
// default Metrics
defaultMetrics := types.Metrics{
Prometheus: &types.Prometheus{
Buckets: types.Buckets{0.1, 0.3, 1.2, 5},
EntryPoint: static.DefaultInternalEntryPointName,
},
Datadog: &types.Datadog{
Address: "localhost:8125",
PushInterval: "10s",
},
StatsD: &types.Statsd{
Address: "localhost:8125",
PushInterval: "10s",
},
InfluxDB: &types.InfluxDB{
Address: "localhost:8089",
Protocol: "udp",
PushInterval: "10s",
},
}
defaultResolver := types.HostResolverConfig{
CnameFlattening: false,
ResolvConfig: "/etc/resolv.conf",
ResolvDepth: 5,
}
var defaultDocker docker.Provider
defaultDocker.Watch = true
defaultDocker.ExposedByDefault = true
defaultDocker.Endpoint = "unix:///var/run/docker.sock"
defaultDocker.SwarmMode = false
defaultDocker.SwarmModeRefreshSeconds = 15
defaultDocker.DefaultRule = docker.DefaultTemplateRule
// default Rest
var defaultRest rest.Provider
defaultRest.EntryPoint = static.DefaultInternalEntryPointName
// default Marathon
var defaultMarathon marathon.Provider
defaultMarathon.Watch = true
defaultMarathon.Endpoint = "http://127.0.0.1:8080"
defaultMarathon.ExposedByDefault = true
defaultMarathon.DialerTimeout = parse.Duration(5 * time.Second)
defaultMarathon.ResponseHeaderTimeout = parse.Duration(60 * time.Second)
defaultMarathon.TLSHandshakeTimeout = parse.Duration(5 * time.Second)
defaultMarathon.KeepAlive = parse.Duration(10 * time.Second)
defaultMarathon.DefaultRule = marathon.DefaultTemplateRule
// default Kubernetes
var defaultKubernetes ingress.Provider
defaultKubernetes.Watch = true
defaultProviders := static.Providers{
File: &defaultFile,
Docker: &defaultDocker,
Rest: &defaultRest,
Marathon: &defaultMarathon,
Kubernetes: &defaultKubernetes,
}
return &TraefikConfiguration{
Configuration: static.Configuration{
Providers: &defaultProviders,
Log: &defaultTraefikLog,
AccessLog: &defaultAccessLog,
Ping: &defaultPing,
API: &defaultAPI,
Metrics: &defaultMetrics,
Tracing: &defaultTracing,
HostResolver: &defaultResolver,
},
}
}

View File

@@ -1,22 +0,0 @@
package cmd
import (
"context"
"os"
"os/signal"
"syscall"
)
// ContextWithSignal creates a context canceled when SIGINT or SIGTERM are notified
func ContextWithSignal(ctx context.Context) context.Context {
newCtx, cancel := context.WithCancel(ctx)
signals := make(chan os.Signal)
signal.Notify(signals, syscall.SIGINT, syscall.SIGTERM)
go func() {
select {
case <-signals:
cancel()
}
}()
return newCtx
}

View File

@@ -1,73 +0,0 @@
package healthcheck
import (
"errors"
"fmt"
"net/http"
"os"
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik/cmd"
"github.com/containous/traefik/pkg/config/static"
)
// NewCmd builds a new HealthCheck command
func NewCmd(traefikConfiguration *cmd.TraefikConfiguration, traefikPointersConfiguration *cmd.TraefikConfiguration) *flaeg.Command {
return &flaeg.Command{
Name: "healthcheck",
Description: `Calls traefik /ping to check health (web provider must be enabled)`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: runCmd(traefikConfiguration),
Metadata: map[string]string{
"parseAllSources": "true",
},
}
}
func runCmd(traefikConfiguration *cmd.TraefikConfiguration) func() error {
return func() error {
traefikConfiguration.Configuration.SetEffectiveConfiguration(traefikConfiguration.ConfigFile)
resp, errPing := Do(traefikConfiguration.Configuration)
if errPing != nil {
fmt.Printf("Error calling healthcheck: %s\n", errPing)
os.Exit(1)
}
if resp.StatusCode != http.StatusOK {
fmt.Printf("Bad healthcheck status: %s\n", resp.Status)
os.Exit(1)
}
fmt.Printf("OK: %s\n", resp.Request.URL)
os.Exit(0)
return nil
}
}
// Do tries to do a healthcheck
func Do(staticConfiguration static.Configuration) (*http.Response, error) {
if staticConfiguration.Ping == nil {
return nil, errors.New("please enable `ping` to use health check")
}
pingEntryPoint, ok := staticConfiguration.EntryPoints[staticConfiguration.Ping.EntryPoint]
if !ok {
return nil, errors.New("missing `ping` entrypoint")
}
client := &http.Client{Timeout: 5 * time.Second}
protocol := "http"
// FIXME Handle TLS on ping etc...
// if pingEntryPoint.TLS != nil {
// protocol = "https"
// tr := &http.Transport{
// TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
// }
// client.Transport = tr
// }
path := "/"
return client.Head(protocol + "://" + pingEntryPoint.Address + path + "ping")
}

View File

@@ -1,150 +0,0 @@
package storeconfig
import (
"encoding/json"
"fmt"
stdlog "log"
"github.com/containous/flaeg"
"github.com/containous/staert"
"github.com/containous/traefik/cmd"
)
// NewCmd builds a new StoreConfig command
func NewCmd(traefikConfiguration *cmd.TraefikConfiguration, traefikPointersConfiguration *cmd.TraefikConfiguration) *flaeg.Command {
return &flaeg.Command{
Name: "storeconfig",
Description: `Stores the static traefik configuration into a Key-value store. Traefik will not start.`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
HideHelp: true, // TODO storeconfig
Metadata: map[string]string{
"parseAllSources": "true",
},
}
}
// Run stores the configuration in the KV store
func Run(kv *staert.KvSource, traefikConfiguration *cmd.TraefikConfiguration) func() error {
return func() error {
if kv == nil {
return fmt.Errorf("error using command storeconfig, no Key-value store defined")
}
fileConfig := traefikConfiguration.Providers.File
if fileConfig != nil {
traefikConfiguration.Providers.File = nil
if len(fileConfig.Filename) == 0 && len(fileConfig.Directory) == 0 {
fileConfig.Filename = traefikConfiguration.ConfigFile
}
}
jsonConf, err := json.Marshal(traefikConfiguration.Configuration)
if err != nil {
return err
}
stdlog.Printf("Storing configuration: %s\n", jsonConf)
err = kv.StoreConfig(traefikConfiguration.Configuration)
if err != nil {
return err
}
if fileConfig != nil {
jsonConf, err = json.Marshal(fileConfig)
if err != nil {
return err
}
stdlog.Printf("Storing file configuration: %s\n", jsonConf)
config, err := fileConfig.BuildConfiguration()
if err != nil {
return err
}
stdlog.Print("Writing config to KV")
err = kv.StoreConfig(config)
if err != nil {
return err
}
}
// if traefikConfiguration.Configuration.ACME != nil {
// account := &acme.Account{}
//
// accountInitialized, err := keyExists(kv, traefikConfiguration.Configuration.ACME.Storage)
// if err != nil && err != store.ErrKeyNotFound {
// return err
// }
//
// // Check to see if ACME account object is already in kv store
// if traefikConfiguration.Configuration.ACME.OverrideCertificates || !accountInitialized {
//
// // Stores the ACME Account into the KV Store
// // Certificates in KV Stores will be overridden
// meta := cluster.NewMetadata(account)
// err = meta.Marshall()
// if err != nil {
// return err
// }
//
// source := staert.KvSource{
// Store: kv,
// Prefix: traefikConfiguration.Configuration.ACME.Storage,
// }
//
// err = source.StoreConfig(meta)
// if err != nil {
// return err
// }
// }
// }
return nil
}
}
// func keyExists(source *staert.KvSource, key string) (bool, error) {
// list, err := source.List(key, nil)
// if err != nil {
// return false, err
// }
//
// return len(list) > 0, nil
// }
// CreateKvSource creates KvSource
// TLS support is enabled for Consul and Etcd backends
func CreateKvSource(traefikConfiguration *cmd.TraefikConfiguration) (*staert.KvSource, error) {
var kv *staert.KvSource
// var kvStore store.Store
var err error
// TODO kv store
// switch {
// case traefikConfiguration.Providers.Consul != nil:
// kvStore, err = traefikConfiguration.Providers.Consul.CreateStore()
// kv = &staert.KvSource{
// Store: kvStore,
// Prefix: traefikConfiguration.Providers.Consul.Prefix,
// }
// case traefikConfiguration.Providers.Etcd != nil:
// kvStore, err = traefikConfiguration.Providers.Etcd.CreateStore()
// kv = &staert.KvSource{
// Store: kvStore,
// Prefix: traefikConfiguration.Providers.Etcd.Prefix,
// }
// case traefikConfiguration.Providers.Zookeeper != nil:
// kvStore, err = traefikConfiguration.Providers.Zookeeper.CreateStore()
// kv = &staert.KvSource{
// Store: kvStore,
// Prefix: traefikConfiguration.Providers.Zookeeper.Prefix,
// }
// case traefikConfiguration.Providers.Boltdb != nil:
// kvStore, err = traefikConfiguration.Providers.Boltdb.CreateStore()
// kv = &staert.KvSource{
// Store: kvStore,
// Prefix: traefikConfiguration.Providers.Boltdb.Prefix,
// }
// }
return kv, err
}

View File

@@ -1,412 +0,0 @@
package main
import (
"context"
"encoding/json"
"fmt"
fmtlog "log"
"net/http"
"os"
"path/filepath"
"reflect"
"strings"
"time"
"github.com/cenkalti/backoff"
"github.com/containous/flaeg"
"github.com/containous/staert"
"github.com/containous/traefik/autogen/genstatic"
"github.com/containous/traefik/cmd"
"github.com/containous/traefik/cmd/healthcheck"
"github.com/containous/traefik/cmd/storeconfig"
cmdVersion "github.com/containous/traefik/cmd/version"
"github.com/containous/traefik/pkg/collector"
"github.com/containous/traefik/pkg/config"
"github.com/containous/traefik/pkg/config/static"
"github.com/containous/traefik/pkg/job"
"github.com/containous/traefik/pkg/log"
"github.com/containous/traefik/pkg/provider/aggregator"
"github.com/containous/traefik/pkg/provider/kubernetes/k8s"
"github.com/containous/traefik/pkg/safe"
"github.com/containous/traefik/pkg/server"
"github.com/containous/traefik/pkg/server/router"
traefiktls "github.com/containous/traefik/pkg/tls"
"github.com/containous/traefik/pkg/types"
"github.com/containous/traefik/pkg/version"
"github.com/coreos/go-systemd/daemon"
assetfs "github.com/elazarl/go-bindata-assetfs"
"github.com/ogier/pflag"
"github.com/sirupsen/logrus"
"github.com/vulcand/oxy/roundrobin"
)
func init() {
goDebug := os.Getenv("GODEBUG")
if len(goDebug) > 0 {
goDebug += ","
}
os.Setenv("GODEBUG", goDebug+"tls13=1")
}
// sliceOfStrings is the parser for []string
type sliceOfStrings []string
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (s *sliceOfStrings) String() string {
return strings.Join(*s, ",")
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (s *sliceOfStrings) Set(value string) error {
parts := strings.Split(value, ",")
if len(parts) == 0 {
return fmt.Errorf("bad []string format: %s", value)
}
for _, entrypoint := range parts {
*s = append(*s, entrypoint)
}
return nil
}
// Get returns the []string
func (s *sliceOfStrings) Get() interface{} {
return *s
}
// SetValue sets the []string with val
func (s *sliceOfStrings) SetValue(val interface{}) {
*s = val.([]string)
}
// Type is the type of the struct
func (s *sliceOfStrings) Type() string {
return "sliceOfStrings"
}
func main() {
// traefik config inits
traefikConfiguration := cmd.NewTraefikConfiguration()
traefikPointersConfiguration := cmd.NewTraefikDefaultPointersConfiguration()
// traefik Command init
traefikCmd := &flaeg.Command{
Name: "traefik",
Description: `traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
Complete documentation is available at https://traefik.io`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
return runCmd(&traefikConfiguration.Configuration, traefikConfiguration.ConfigFile)
},
}
// storeconfig Command init
storeConfigCmd := storeconfig.NewCmd(traefikConfiguration, traefikPointersConfiguration)
// init flaeg source
f := flaeg.New(traefikCmd, os.Args[1:])
// add custom parsers
f.AddParser(reflect.TypeOf(static.EntryPoints{}), &static.EntryPoints{})
f.AddParser(reflect.SliceOf(reflect.TypeOf("")), &sliceOfStrings{})
f.AddParser(reflect.TypeOf(traefiktls.FilesOrContents{}), &traefiktls.FilesOrContents{})
f.AddParser(reflect.TypeOf(types.Constraints{}), &types.Constraints{})
f.AddParser(reflect.TypeOf(k8s.Namespaces{}), &k8s.Namespaces{})
f.AddParser(reflect.TypeOf([]types.Domain{}), &types.Domains{})
f.AddParser(reflect.TypeOf(types.DNSResolvers{}), &types.DNSResolvers{})
f.AddParser(reflect.TypeOf(types.Buckets{}), &types.Buckets{})
f.AddParser(reflect.TypeOf(types.StatusCodes{}), &types.StatusCodes{})
f.AddParser(reflect.TypeOf(types.FieldNames{}), &types.FieldNames{})
f.AddParser(reflect.TypeOf(types.FieldHeaderNames{}), &types.FieldHeaderNames{})
// add commands
f.AddCommand(cmdVersion.NewCmd())
f.AddCommand(storeConfigCmd)
f.AddCommand(healthcheck.NewCmd(traefikConfiguration, traefikPointersConfiguration))
usedCmd, err := f.GetCommand()
if err != nil {
fmtlog.Println(err)
os.Exit(1)
}
if _, err := f.Parse(usedCmd); err != nil {
if err == pflag.ErrHelp {
os.Exit(0)
}
fmtlog.Printf("Error parsing command: %s\n", err)
os.Exit(1)
}
// staert init
s := staert.NewStaert(traefikCmd)
// init TOML source
toml := staert.NewTomlSource("traefik", []string{traefikConfiguration.ConfigFile, "/etc/traefik/", "$HOME/.traefik/", "."})
// add sources to staert
s.AddSource(toml)
s.AddSource(f)
if _, err := s.LoadConfig(); err != nil {
fmtlog.Printf("Error reading TOML config file %s : %s\n", toml.ConfigFileUsed(), err)
os.Exit(1)
}
traefikConfiguration.ConfigFile = toml.ConfigFileUsed()
kv, err := storeconfig.CreateKvSource(traefikConfiguration)
if err != nil {
fmtlog.Printf("Error creating kv store: %s\n", err)
os.Exit(1)
}
storeConfigCmd.Run = storeconfig.Run(kv, traefikConfiguration)
// if a KV store is enabled and no sub-command is called in args
if kv != nil && usedCmd == traefikCmd {
s.AddSource(kv)
operation := func() error {
_, err := s.LoadConfig()
return err
}
notify := func(err error, time time.Duration) {
log.WithoutContext().Errorf("Load config error: %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
if err != nil {
fmtlog.Printf("Error loading configuration: %s\n", err)
os.Exit(1)
}
}
if err := s.Run(); err != nil {
fmtlog.Printf("Error running traefik: %s\n", err)
os.Exit(1)
}
os.Exit(0)
}
func runCmd(staticConfiguration *static.Configuration, configFile string) error {
configureLogging(staticConfiguration)
if len(configFile) > 0 {
log.WithoutContext().Infof("Using TOML configuration file %s", configFile)
}
http.DefaultTransport.(*http.Transport).Proxy = http.ProxyFromEnvironment
if err := roundrobin.SetDefaultWeight(0); err != nil {
log.WithoutContext().Errorf("Could not set roundrobin default weight: %v", err)
}
staticConfiguration.SetEffectiveConfiguration(configFile)
staticConfiguration.ValidateConfiguration()
log.WithoutContext().Infof("Traefik version %s built on %s", version.Version, version.BuildDate)
jsonConf, err := json.Marshal(staticConfiguration)
if err != nil {
log.WithoutContext().Errorf("Could not marshal static configuration: %v", err)
log.WithoutContext().Debugf("Static configuration loaded [struct] %#v", staticConfiguration)
} else {
log.WithoutContext().Debugf("Static configuration loaded %s", string(jsonConf))
}
if staticConfiguration.API != nil && staticConfiguration.API.Dashboard {
staticConfiguration.API.DashboardAssets = &assetfs.AssetFS{Asset: genstatic.Asset, AssetInfo: genstatic.AssetInfo, AssetDir: genstatic.AssetDir, Prefix: "static"}
}
if staticConfiguration.Global.CheckNewVersion {
checkNewVersion()
}
stats(staticConfiguration)
providerAggregator := aggregator.NewProviderAggregator(*staticConfiguration.Providers)
acmeProvider, err := staticConfiguration.InitACMEProvider()
if err != nil {
log.WithoutContext().Errorf("Unable to initialize ACME provider: %v", err)
} else if acmeProvider != nil {
if err := providerAggregator.AddProvider(acmeProvider); err != nil {
log.WithoutContext().Errorf("Unable to add ACME provider to the providers list: %v", err)
acmeProvider = nil
}
}
serverEntryPointsTCP := make(server.TCPEntryPoints)
for entryPointName, config := range staticConfiguration.EntryPoints {
ctx := log.With(context.Background(), log.Str(log.EntryPointName, entryPointName))
serverEntryPointsTCP[entryPointName], err = server.NewTCPEntryPoint(ctx, config)
if err != nil {
return fmt.Errorf("error while building entryPoint %s: %v", entryPointName, err)
}
serverEntryPointsTCP[entryPointName].RouteAppenderFactory = router.NewRouteAppenderFactory(*staticConfiguration, entryPointName, acmeProvider)
}
tlsManager := traefiktls.NewManager()
if acmeProvider != nil {
acmeProvider.SetTLSManager(tlsManager)
if acmeProvider.TLSChallenge != nil &&
acmeProvider.HTTPChallenge == nil &&
acmeProvider.DNSChallenge == nil {
tlsManager.TLSAlpnGetter = acmeProvider.GetTLSALPNCertificate
}
}
svr := server.NewServer(*staticConfiguration, providerAggregator, serverEntryPointsTCP, tlsManager)
if acmeProvider != nil && acmeProvider.OnHostRule {
acmeProvider.SetConfigListenerChan(make(chan config.Configuration))
svr.AddListener(acmeProvider.ListenConfiguration)
}
ctx := cmd.ContextWithSignal(context.Background())
if staticConfiguration.Ping != nil {
staticConfiguration.Ping.WithContext(ctx)
}
svr.Start(ctx)
defer svr.Close()
sent, err := daemon.SdNotify(false, "READY=1")
if !sent && err != nil {
log.WithoutContext().Errorf("Failed to notify: %v", err)
}
t, err := daemon.SdWatchdogEnabled(false)
if err != nil {
log.WithoutContext().Errorf("Could not enable Watchdog: %v", err)
} else if t != 0 {
// Send a ping at half the given interval
t /= 2
log.WithoutContext().Infof("Watchdog activated with timer duration %s", t)
safe.Go(func() {
tick := time.Tick(t)
for range tick {
_, errHealthCheck := healthcheck.Do(*staticConfiguration)
if staticConfiguration.Ping == nil || errHealthCheck == nil {
if ok, _ := daemon.SdNotify(false, "WATCHDOG=1"); !ok {
log.WithoutContext().Error("Fail to tick watchdog")
}
} else {
log.WithoutContext().Error(errHealthCheck)
}
}
})
}
svr.Wait()
log.WithoutContext().Info("Shutting down")
logrus.Exit(0)
return nil
}
func configureLogging(staticConfiguration *static.Configuration) {
// configure default log flags
fmtlog.SetFlags(fmtlog.Lshortfile | fmtlog.LstdFlags)
// configure log level
// an explicitly defined log level always has precedence. if none is
// given and debug mode is disabled, the default is ERROR, and DEBUG
// otherwise.
var levelStr string
if staticConfiguration.Log != nil {
levelStr = strings.ToLower(staticConfiguration.Log.LogLevel)
}
if levelStr == "" {
levelStr = "error"
if staticConfiguration.Global.Debug {
levelStr = "debug"
}
}
level, err := logrus.ParseLevel(levelStr)
if err != nil {
log.WithoutContext().Errorf("Error getting level: %v", err)
}
log.SetLevel(level)
var logFile string
if staticConfiguration.Log != nil && len(staticConfiguration.Log.FilePath) > 0 {
logFile = staticConfiguration.Log.FilePath
}
// configure log format
var formatter logrus.Formatter
if staticConfiguration.Log != nil && staticConfiguration.Log.Format == "json" {
formatter = &logrus.JSONFormatter{}
} else {
disableColors := len(logFile) > 0
formatter = &logrus.TextFormatter{DisableColors: disableColors, FullTimestamp: true, DisableSorting: true}
}
log.SetFormatter(formatter)
if len(logFile) > 0 {
dir := filepath.Dir(logFile)
if err := os.MkdirAll(dir, 0755); err != nil {
log.WithoutContext().Errorf("Failed to create log path %s: %s", dir, err)
}
err = log.OpenFile(logFile)
logrus.RegisterExitHandler(func() {
if err := log.CloseFile(); err != nil {
log.WithoutContext().Errorf("Error while closing log: %v", err)
}
})
if err != nil {
log.WithoutContext().Errorf("Error while opening log file %s: %v", logFile, err)
}
}
}
func checkNewVersion() {
ticker := time.Tick(24 * time.Hour)
safe.Go(func() {
for time.Sleep(10 * time.Minute); ; <-ticker {
version.CheckNewVersion()
}
})
}
func stats(staticConfiguration *static.Configuration) {
if staticConfiguration.Global.SendAnonymousUsage == nil {
log.WithoutContext().Error(`
You haven't specified the sendAnonymousUsage option; it will be enabled by default.
`)
sendAnonymousUsage := true
staticConfiguration.Global.SendAnonymousUsage = &sendAnonymousUsage
}
if *staticConfiguration.Global.SendAnonymousUsage {
log.WithoutContext().Info(`
Stats collection is enabled.
Many thanks for contributing to Traefik's improvement by allowing us to receive anonymous information from your configuration.
Help us improve Traefik by leaving this feature on :)
More details on: https://docs.traefik.io/basics/#collected-data
`)
collect(staticConfiguration)
} else {
log.WithoutContext().Info(`
Stats collection is disabled.
Help us improve Traefik by turning this feature on :)
More details on: https://docs.traefik.io/basics/#collected-data
`)
}
}
func collect(staticConfiguration *static.Configuration) {
ticker := time.Tick(24 * time.Hour)
safe.Go(func() {
for time.Sleep(10 * time.Minute); ; <-ticker {
if err := collector.Collect(staticConfiguration); err != nil {
log.WithoutContext().Debug(err)
}
}
})
}

63
cmd/version.go Normal file
View File

@@ -0,0 +1,63 @@
package cmd
import (
"fmt"
"io"
"os"
"runtime"
"text/template"
"github.com/containous/flaeg"
"github.com/containous/traefik/version"
)
var versionTemplate = `Version: {{.Version}}
Codename: {{.Codename}}
Go version: {{.GoVersion}}
Built: {{.BuildTime}}
OS/Arch: {{.Os}}/{{.Arch}}`
// NewVersionCmd builds a new Version command
func NewVersionCmd() *flaeg.Command {
//version Command init
return &flaeg.Command{
Name: "version",
Description: `Print version`,
Config: struct{}{},
DefaultPointersConfig: struct{}{},
Run: func() error {
if err := getVersionPrint(os.Stdout); err != nil {
return err
}
fmt.Printf("\n")
return nil
},
}
}
func getVersionPrint(wr io.Writer) error {
tmpl, err := template.New("").Parse(versionTemplate)
if err != nil {
return err
}
v := struct {
Version string
Codename string
GoVersion string
BuildTime string
Os string
Arch string
}{
Version: version.Version,
Codename: version.Codename,
GoVersion: runtime.Version(),
BuildTime: version.BuildDate,
Os: runtime.GOOS,
Arch: runtime.GOARCH,
}
return tmpl.Execute(wr, v)
}

View File

@@ -1,62 +0,0 @@
package version
import (
"fmt"
"io"
"os"
"runtime"
"text/template"
"github.com/containous/flaeg"
"github.com/containous/traefik/pkg/version"
)
var versionTemplate = `Version: {{.Version}}
Codename: {{.Codename}}
Go version: {{.GoVersion}}
Built: {{.BuildTime}}
OS/Arch: {{.Os}}/{{.Arch}}`
// NewCmd builds a new Version command
func NewCmd() *flaeg.Command {
return &flaeg.Command{
Name: "version",
Description: `Print version`,
Config: struct{}{},
DefaultPointersConfig: struct{}{},
Run: func() error {
if err := GetPrint(os.Stdout); err != nil {
return err
}
fmt.Print("\n")
return nil
},
}
}
// GetPrint writes the printable version
func GetPrint(wr io.Writer) error {
tmpl, err := template.New("").Parse(versionTemplate)
if err != nil {
return err
}
v := struct {
Version string
Codename string
GoVersion string
BuildTime string
Os string
Arch string
}{
Version: version.Version,
Codename: version.Codename,
GoVersion: runtime.Version(),
BuildTime: version.BuildDate,
Os: runtime.GOOS,
Arch: runtime.GOARCH,
}
return tmpl.Execute(wr, v)
}

456
configuration.go Normal file
View File

@@ -0,0 +1,456 @@
package main
import (
"crypto/tls"
"errors"
"fmt"
"os"
"regexp"
"strings"
"time"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/provider"
"github.com/containous/traefik/types"
)
// TraefikConfiguration holds GlobalConfiguration and other stuff
type TraefikConfiguration struct {
GlobalConfiguration `mapstructure:",squash"`
ConfigFile string `short:"c" description:"Configuration file to use (TOML)."`
}
// GlobalConfiguration holds global configuration (with providers, etc.).
// It's populated from the traefik configuration file passed as an argument to the binary.
type GlobalConfiguration struct {
GraceTimeOut int64 `short:"g" description:"Duration to give active requests a chance to finish during hot-reload"`
Debug bool `short:"d" description:"Enable debug mode"`
CheckNewVersion bool `description:"Periodically check if a new version has been released"`
AccessLogsFile string `description:"Access logs file"`
TraefikLogsFile string `description:"Traefik logs file"`
LogLevel string `short:"l" description:"Log level"`
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key;prod/traefik.crt,prod/traefik.key'"`
Cluster *types.Cluster `description:"Enable clustering"`
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags"`
ACME *acme.ACME `description:"Enable ACME (Let's Encrypt): automatic SSL"`
DefaultEntryPoints DefaultEntryPoints `description:"Entrypoints to be used by frontends that do not specify any entrypoint"`
ProvidersThrottleDuration time.Duration `description:"Backends throttle duration: minimum duration between 2 events from providers before applying a new configuration. It avoids unnecessary reloads if multiple events are sent in a short amount of time."`
MaxIdleConnsPerHost int `description:"If non-zero, controls the maximum idle (keep-alive) connections to keep per host. If zero, DefaultMaxIdleConnsPerHost is used"`
InsecureSkipVerify bool `description:"Disable SSL certificate verification"`
Retry *Retry `description:"Enable retry sending request if network error"`
Docker *provider.Docker `description:"Enable Docker backend"`
File *provider.File `description:"Enable File backend"`
Web *WebProvider `description:"Enable Web backend"`
Marathon *provider.Marathon `description:"Enable Marathon backend"`
Consul *provider.Consul `description:"Enable Consul backend"`
ConsulCatalog *provider.ConsulCatalog `description:"Enable Consul catalog backend"`
Etcd *provider.Etcd `description:"Enable Etcd backend"`
Zookeeper *provider.Zookepper `description:"Enable Zookeeper backend"`
Boltdb *provider.BoltDb `description:"Enable Boltdb backend"`
Kubernetes *provider.Kubernetes `description:"Enable Kubernetes backend"`
Mesos *provider.Mesos `description:"Enable Mesos backend"`
Eureka *provider.Eureka `description:"Enable Eureka backend"`
ECS *provider.ECS `description:"Enable ECS backend"`
Rancher *provider.Rancher `description:"Enable Rancher backend"`
}
// DefaultEntryPoints holds default entry points
type DefaultEntryPoints []string
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (dep *DefaultEntryPoints) String() string {
return strings.Join(*dep, ",")
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (dep *DefaultEntryPoints) Set(value string) error {
entrypoints := strings.Split(value, ",")
if len(entrypoints) == 0 {
return errors.New("Bad DefaultEntryPoints format: " + value)
}
for _, entrypoint := range entrypoints {
*dep = append(*dep, entrypoint)
}
return nil
}
// Get returns the DefaultEntryPoints
func (dep *DefaultEntryPoints) Get() interface{} {
return DefaultEntryPoints(*dep)
}
// SetValue sets the EntryPoints map with val
func (dep *DefaultEntryPoints) SetValue(val interface{}) {
*dep = DefaultEntryPoints(val.(DefaultEntryPoints))
}
// Type is the type of the struct
func (dep *DefaultEntryPoints) Type() string {
return fmt.Sprint("defaultentrypoints")
}
// EntryPoints holds entry points configuration of the reverse proxy (ip, port, TLS...)
type EntryPoints map[string]*EntryPoint
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (ep *EntryPoints) String() string {
return fmt.Sprintf("%+v", *ep)
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (ep *EntryPoints) Set(value string) error {
regex := regexp.MustCompile("(?:Name:(?P<Name>\\S*))\\s*(?:Address:(?P<Address>\\S*))?\\s*(?:TLS:(?P<TLS>\\S*))?\\s*((?P<TLSACME>TLS))?\\s*(?:CA:(?P<CA>\\S*))?\\s*(?:Redirect.EntryPoint:(?P<RedirectEntryPoint>\\S*))?\\s*(?:Redirect.Regex:(?P<RedirectRegex>\\S*))?\\s*(?:Redirect.Replacement:(?P<RedirectReplacement>\\S*))?\\s*(?:Compress:(?P<Compress>\\S*))?")
match := regex.FindAllStringSubmatch(value, -1)
if match == nil {
return errors.New("Bad EntryPoints format: " + value)
}
matchResult := match[0]
result := make(map[string]string)
for i, name := range regex.SubexpNames() {
if i != 0 {
result[name] = matchResult[i]
}
}
var tls *TLS
if len(result["TLS"]) > 0 {
certs := Certificates{}
if err := certs.Set(result["TLS"]); err != nil {
return err
}
tls = &TLS{
Certificates: certs,
}
} else if len(result["TLSACME"]) > 0 {
tls = &TLS{
Certificates: Certificates{},
}
}
if len(result["CA"]) > 0 {
files := strings.Split(result["CA"], ",")
tls.ClientCAFiles = files
}
var redirect *Redirect
if len(result["RedirectEntryPoint"]) > 0 || len(result["RedirectRegex"]) > 0 || len(result["RedirectReplacement"]) > 0 {
redirect = &Redirect{
EntryPoint: result["RedirectEntryPoint"],
Regex: result["RedirectRegex"],
Replacement: result["RedirectReplacement"],
}
}
compress := false
if len(result["Compress"]) > 0 {
compress = strings.EqualFold(result["Compress"], "enable") || strings.EqualFold(result["Compress"], "on")
}
(*ep)[result["Name"]] = &EntryPoint{
Address: result["Address"],
TLS: tls,
Redirect: redirect,
Compress: compress,
}
return nil
}
// Get returns the EntryPoints map
func (ep *EntryPoints) Get() interface{} {
return EntryPoints(*ep)
}
// SetValue sets the EntryPoints map with val
func (ep *EntryPoints) SetValue(val interface{}) {
*ep = EntryPoints(val.(EntryPoints))
}
// Type is the type of the struct
func (ep *EntryPoints) Type() string {
return fmt.Sprint("entrypoints")
}
// EntryPoint holds an entry point configuration of the reverse proxy (ip, port, TLS...)
type EntryPoint struct {
Network string
Address string
TLS *TLS
Redirect *Redirect
Auth *types.Auth
Compress bool
}
// Redirect configures a redirection of an entry point to another, or to a URL
type Redirect struct {
EntryPoint string
Regex string
Replacement string
}
// TLS configures TLS for an entry point
type TLS struct {
MinVersion string
CipherSuites []string
Certificates Certificates
ClientCAFiles []string
}
// Map of allowed TLS minimum versions
var minVersion = map[string]uint16{
`VersionTLS10`: tls.VersionTLS10,
`VersionTLS11`: tls.VersionTLS11,
`VersionTLS12`: tls.VersionTLS12,
}
// Map of TLS CipherSuites from crypto/tls
var cipherSuites = map[string]uint16{
`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
`TLS_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
`TLS_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
`TLS_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_RSA_WITH_AES_128_CBC_SHA,
`TLS_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_RSA_WITH_AES_256_CBC_SHA,
`TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
`TLS_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
}
// Certificates defines traefik certificates type
// Certs and Keys could be either a file path, or the file content itself
type Certificates []Certificate
//CreateTLSConfig creates a TLS config from Certificate structures
func (certs *Certificates) CreateTLSConfig() (*tls.Config, error) {
config := &tls.Config{}
config.Certificates = []tls.Certificate{}
certsSlice := []Certificate(*certs)
for _, v := range certsSlice {
isAPath := false
_, errCert := os.Stat(v.CertFile)
_, errKey := os.Stat(v.KeyFile)
if errCert == nil {
if errKey == nil {
isAPath = true
} else {
return nil, fmt.Errorf("bad TLS Certificate KeyFile format, expected a path")
}
} else if errKey == nil {
return nil, fmt.Errorf("bad TLS Certificate KeyFile format, expected a path")
}
cert := tls.Certificate{}
var err error
if isAPath {
cert, err = tls.LoadX509KeyPair(v.CertFile, v.KeyFile)
if err != nil {
return nil, err
}
} else {
cert, err = tls.X509KeyPair([]byte(v.CertFile), []byte(v.KeyFile))
if err != nil {
return nil, err
}
}
config.Certificates = append(config.Certificates, cert)
}
return config, nil
}
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (certs *Certificates) String() string {
if len(*certs) == 0 {
return ""
}
var result []string
for _, certificate := range *certs {
result = append(result, certificate.CertFile+","+certificate.KeyFile)
}
return strings.Join(result, ";")
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (certs *Certificates) Set(value string) error {
certificates := strings.Split(value, ";")
for _, certificate := range certificates {
files := strings.Split(certificate, ",")
if len(files) != 2 {
return errors.New("Bad certificates format: " + value)
}
*certs = append(*certs, Certificate{
CertFile: files[0],
KeyFile: files[1],
})
}
return nil
}
// Type is the type of the struct
func (certs *Certificates) Type() string {
return fmt.Sprint("certificates")
}
// Certificate holds an SSL cert/key pair
// Cert and Key could be either a file path, or the file content itself
type Certificate struct {
CertFile string
KeyFile string
}
// Retry contains request retry config
type Retry struct {
Attempts int `description:"Number of attempts"`
}
// NewTraefikDefaultPointersConfiguration creates a TraefikConfiguration with pointers default values
func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
//default Docker
var defaultDocker provider.Docker
defaultDocker.Watch = true
defaultDocker.ExposedByDefault = true
defaultDocker.Endpoint = "unix:///var/run/docker.sock"
defaultDocker.SwarmMode = false
// default File
var defaultFile provider.File
defaultFile.Watch = true
defaultFile.Filename = "" //needs equivalent to viper.ConfigFileUsed()
// default Web
var defaultWeb WebProvider
defaultWeb.Address = ":8080"
defaultWeb.Statistics = &types.Statistics{
RecentErrors: 10,
}
// default Metrics
defaultWeb.Metrics = &types.Metrics{
Prometheus: &types.Prometheus{
Buckets: types.Buckets{0.1, 0.3, 1.2, 5},
},
}
// default Marathon
var defaultMarathon provider.Marathon
defaultMarathon.Watch = true
defaultMarathon.Endpoint = "http://127.0.0.1:8080"
defaultMarathon.ExposedByDefault = true
defaultMarathon.Constraints = types.Constraints{}
defaultMarathon.DialerTimeout = 60
defaultMarathon.KeepAlive = 10
// default Consul
var defaultConsul provider.Consul
defaultConsul.Watch = true
defaultConsul.Endpoint = "127.0.0.1:8500"
defaultConsul.Prefix = "traefik"
defaultConsul.Constraints = types.Constraints{}
// default ConsulCatalog
var defaultConsulCatalog provider.ConsulCatalog
defaultConsulCatalog.Endpoint = "127.0.0.1:8500"
defaultConsulCatalog.Constraints = types.Constraints{}
// default Etcd
var defaultEtcd provider.Etcd
defaultEtcd.Watch = true
defaultEtcd.Endpoint = "127.0.0.1:2379"
defaultEtcd.Prefix = "/traefik"
defaultEtcd.Constraints = types.Constraints{}
//default Zookeeper
var defaultZookeeper provider.Zookepper
defaultZookeeper.Watch = true
defaultZookeeper.Endpoint = "127.0.0.1:2181"
defaultZookeeper.Prefix = "/traefik"
defaultZookeeper.Constraints = types.Constraints{}
//default Boltdb
var defaultBoltDb provider.BoltDb
defaultBoltDb.Watch = true
defaultBoltDb.Endpoint = "127.0.0.1:4001"
defaultBoltDb.Prefix = "/traefik"
defaultBoltDb.Constraints = types.Constraints{}
//default Kubernetes
var defaultKubernetes provider.Kubernetes
defaultKubernetes.Watch = true
defaultKubernetes.Endpoint = ""
defaultKubernetes.LabelSelector = ""
defaultKubernetes.Constraints = types.Constraints{}
// default Mesos
var defaultMesos provider.Mesos
defaultMesos.Watch = true
defaultMesos.Endpoint = "http://127.0.0.1:5050"
defaultMesos.ExposedByDefault = true
defaultMesos.Constraints = types.Constraints{}
defaultMesos.RefreshSeconds = 30
defaultMesos.ZkDetectionTimeout = 30
defaultMesos.StateTimeoutSecond = 30
//default ECS
var defaultECS provider.ECS
defaultECS.Watch = true
defaultECS.ExposedByDefault = true
defaultECS.RefreshSeconds = 15
defaultECS.Cluster = "default"
defaultECS.Constraints = types.Constraints{}
//default Rancher
var defaultRancher provider.Rancher
defaultRancher.Watch = true
defaultRancher.ExposedByDefault = true
defaultConfiguration := GlobalConfiguration{
Docker: &defaultDocker,
File: &defaultFile,
Web: &defaultWeb,
Marathon: &defaultMarathon,
Consul: &defaultConsul,
ConsulCatalog: &defaultConsulCatalog,
Etcd: &defaultEtcd,
Zookeeper: &defaultZookeeper,
Boltdb: &defaultBoltDb,
Kubernetes: &defaultKubernetes,
Mesos: &defaultMesos,
ECS: &defaultECS,
Rancher: &defaultRancher,
Retry: &Retry{},
}
//default Rancher
//@TODO: ADD
return &TraefikConfiguration{
GlobalConfiguration: defaultConfiguration,
}
}
// NewTraefikConfiguration creates a TraefikConfiguration with default values
func NewTraefikConfiguration() *TraefikConfiguration {
return &TraefikConfiguration{
GlobalConfiguration: GlobalConfiguration{
GraceTimeOut: 10,
AccessLogsFile: "",
TraefikLogsFile: "",
LogLevel: "ERROR",
EntryPoints: map[string]*EntryPoint{},
Constraints: types.Constraints{},
DefaultEntryPoints: []string{},
ProvidersThrottleDuration: time.Duration(2 * time.Second),
MaxIdleConnsPerHost: 200,
CheckNewVersion: true,
},
ConfigFile: "",
}
}
type configs map[string]*types.Configuration

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,171 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2017 Brian 'redbeard' Harrington <redbeard@dead-city.org>
#
# dumpcerts.sh - A simple utility to explode a Traefik acme.json file into a
# directory of certificates and a private key
#
# Usage - dumpcerts.sh /etc/traefik/acme.json /etc/ssl/
#
# Dependencies -
# util-linux
# openssl
# jq
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# Exit codes:
# 1 - A component is missing or could not be read
# 2 - There was a problem reading acme.json
# 4 - The destination certificate directory does not exist
# 8 - Missing private key
set -o errexit
set -o pipefail
set -o nounset
USAGE="$(basename "$0") <path to acme> <destination cert directory>"
# Platform variations
case "$(uname)" in
'Linux')
# On Linux, -d should always work. --decode does not work with Alpine's busybox-binary
CMD_DECODE_BASE64="base64 -d"
;;
*)
# Mac OS X supports --decode and -D, but --decode may be supported by other platforms as well.
CMD_DECODE_BASE64="base64 --decode"
;;
esac
# Allow us to exit on a missing jq binary
exit_jq() {
echo "
You must have the binary 'jq' to use this.
jq is available at: https://stedolan.github.io/jq/download/
${USAGE}" >&2
exit 1
}
bad_acme() {
echo "
There was a problem parsing your acme.json file.
${USAGE}" >&2
exit 2
}
if [ $# -ne 2 ]; then
echo "
Insufficient number of parameters.
${USAGE}" >&2
exit 1
fi
readonly acmefile="${1}"
readonly certdir="${2%/}"
if [ ! -r "${acmefile}" ]; then
echo "
There was a problem reading from '${acmefile}'
We need to read this file to explode the JSON bundle... exiting.
${USAGE}" >&2
exit 2
fi
if [ ! -d "${certdir}" ]; then
echo "
Path ${certdir} does not seem to be a directory
We need a directory in which to explode the JSON bundle... exiting.
${USAGE}" >&2
exit 4
fi
jq=$(command -v jq) || exit_jq
priv=$(${jq} -e -r '.Account.PrivateKey' "${acmefile}") || bad_acme
if [ ! -n "${priv}" ]; then
echo "
There didn't seem to be a private key in ${acmefile}.
Please ensure that there is a key in this file and try again." >&2
exit 8
fi
# If they do not exist, create the needed subdirectories for our assets
# and place each in a variable for later use, normalizing the path
mkdir -p "${certdir}"/{certs,private}
pdir="${certdir}/private/"
cdir="${certdir}/certs/"
# Save the existing umask, change the default mode to 600, then
# after writing the private key switch it back to the default
oldumask=$(umask)
umask 177
trap 'umask ${oldumask}' EXIT
# traefik stores the private key in stripped base64 format, but the certificates
# are bundled as a base64 object without stripping headers. This normalizes the
# headers and formatting.
#
# In testing this out it was a balance between the following mechanisms:
# gawk:
# echo ${priv} | awk 'BEGIN {print "-----BEGIN RSA PRIVATE KEY-----"}
# {gsub(/.{64}/,"&\n")}1
# END {print "-----END RSA PRIVATE KEY-----"}' > "${pdir}/letsencrypt.key"
#
# openssl:
# echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
# | openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
#
# and sed:
# echo "-----BEGIN RSA PRIVATE KEY-----" > "${pdir}/letsencrypt.key"
# echo ${priv} | sed -E 's/(.{64})/\1\n/g' >> "${pdir}/letsencrypt.key"
# sed -i '$ d' "${pdir}/letsencrypt.key"
# echo "-----END RSA PRIVATE KEY-----" >> "${pdir}/letsencrypt.key"
# openssl rsa -noout -in "${pdir}/letsencrypt.key" -check # To check if the key is valid
# In the end, openssl was chosen because most users will need this script
# *because* of openssl combined with the fact that it will refuse to write the
# key if it does not parse out correctly. The other mechanisms were left as
# comments so that the user can choose the mechanism most appropriate to them.
echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
| openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
# Process the certificates for each of the domains in acme.json
domains=$(jq -r '.Certificates[].Domain.Main' ${acmefile}) || bad_acme
for domain in $domains; do
# Traefik stores a cert bundle for each domain. Within this cert
# bundle there is both the proper certificate and the Let's Encrypt CA
echo "Extracting cert bundle for ${domain}"
cert=$(jq -e -r --arg domain "$domain" '.Certificates[] |
select (.Domain.Main == $domain )| .Certificate' ${acmefile}) || bad_acme
echo "${cert}" | ${CMD_DECODE_BASE64} > "${cdir}/${domain}.crt"
echo "Extracting private key for ${domain}"
key=$(jq -e -r --arg domain "$domain" '.Certificates[] |
select (.Domain.Main == $domain )| .Key' ${acmefile}) || bad_acme
echo "${key}" | ${CMD_DECODE_BASE64} > "${pdir}/${domain}.key"
done

View File

@@ -1,41 +1,11 @@
[Unit]
Description=Traefik
Documentation=https://docs.traefik.io
#After=network-online.target
#AssertFileIsExecutable=/usr/bin/traefik
#AssertPathExists=/etc/traefik/traefik.toml
[Service]
# Run traefik as its own user (create new user with: useradd -r -s /bin/false -U -M traefik)
#User=traefik
#AmbientCapabilities=CAP_NET_BIND_SERVICE
# configure service behavior
Type=notify
#ExecStart=/usr/bin/traefik --configFile=/etc/traefik/traefik.toml
ExecStart=/usr/bin/traefik --configFile=/etc/traefik.toml
Restart=always
WatchdogSec=1s
# lock down system access
# prohibit any operating system and configuration modification
#ProtectSystem=strict
# create separate, new (and empty) /tmp and /var/tmp filesystems
#PrivateTmp=true
# make /home directories inaccessible
#ProtectHome=true
# turns off access to physical devices (/dev/...)
#PrivateDevices=true
# make kernel settings (procfs and sysfs) read-only
#ProtectKernelTunables=true
# make cgroups /sys/fs/cgroup read-only
#ProtectControlGroups=true
# allow writing of acme.json
#ReadWritePaths=/etc/traefik/acme.json
# depending on log and entrypoint configuration, you may need to allow writing to other paths, too
# limit number of processes in this unit
#LimitNPROC=1
[Install]
WantedBy=multi-user.target

View File

@@ -1 +0,0 @@
site/

View File

@@ -1,9 +0,0 @@
{
"no-hard-tabs": false,
"MD007": { "indent": 4 },
"MD009": false,
"MD013": false,
"MD026": false,
"MD033": false,
"MD034": false
}

View File

@@ -1,52 +0,0 @@
#######
# This Makefile contains all targets related to the documentation
#######
DOCS_VERIFY_SKIP ?= false
DOCS_LINT_SKIP ?= false
TRAEFIK_DOCS_BUILD_IMAGE ?= traefik-docs
TRAEFIK_DOCS_CHECK_IMAGE ?= $(TRAEFIK_DOCS_BUILD_IMAGE)-check
SITE_DIR := $(CURDIR)/site
DOCKER_RUN_DOC_PORT := 8000
DOCKER_RUN_DOC_MOUNTS := -v $(CURDIR):/mkdocs
DOCKER_RUN_DOC_OPTS := --rm $(DOCKER_RUN_DOC_MOUNTS) -p $(DOCKER_RUN_DOC_PORT):8000
# Default: generates the documentation into $(SITE_DIR)
docs: docs-clean docs-image docs-lint docs-build docs-verify
# Writer Mode: build and serve docs on http://localhost:8000 with livereload
docs-serve: docs-image
docker run $(DOCKER_RUN_DOC_OPTS) $(TRAEFIK_DOCS_BUILD_IMAGE) mkdocs serve
# Utilities Targets for each step
docs-image:
docker build -t $(TRAEFIK_DOCS_BUILD_IMAGE) -f docs.Dockerfile ./
docs-build: docs-image
docker run $(DOCKER_RUN_DOC_OPTS) $(TRAEFIK_DOCS_BUILD_IMAGE) sh -c "mkdocs build \
&& chown -R $(shell id -u):$(shell id -g) ./site"
docs-verify: docs-build
@if [ "$(DOCS_VERIFY_SKIP)" != "true" ]; then \
docker build -t $(TRAEFIK_DOCS_CHECK_IMAGE) -f check.Dockerfile ./; \
docker run --rm -v $(CURDIR):/app $(TRAEFIK_DOCS_CHECK_IMAGE) /verify.sh; \
else \
echo "DOCS_VERIFY_SKIP is true: no verification done."; \
fi
docs-lint:
@if [ "$(DOCS_LINT_SKIP)" != "true" ]; then \
docker build -t $(TRAEFIK_DOCS_CHECK_IMAGE) -f check.Dockerfile ./ && \
docker run --rm -v $(CURDIR):/app $(TRAEFIK_DOCS_CHECK_IMAGE) /lint.sh; \
else \
echo "DOCS_LINT_SKIP is true: no linting done."; \
fi
docs-clean:
rm -rf $(SITE_DIR)
.PHONY: all docs-verify docs docs-clean docs-build docs-lint

docs/basics.md

@@ -0,0 +1,386 @@
# Concepts
Let's take our example from the [overview](https://docs.traefik.io/#overview) again:
> Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
> If you want your users to access some of your microservices from the Internet, you will have to use a reverse proxy and configure it using virtual hosts or prefix paths:
> - domain `api.domain.com` will point to the microservice `api` in your private network
> - path `domain.com/web` will point to the microservice `web` in your private network
> - domain `backoffice.domain.com` will point to the microservice `backoffice` in your private network, load-balancing between your multiple instances
> ![Architecture](img/architecture.png)
Let's zoom in on Træfɪk and take an overview of its internal architecture:
![Architecture](img/internal.png)
- Incoming requests end on [entrypoints](#entrypoints); as the name suggests, they are the network entry points into Træfɪk (listening port, SSL, traffic redirection...).
- Traffic is then forwarded to a matching [frontend](#frontends). A frontend defines routes from [entrypoints](#entrypoints) to [backends](#backends).
Routes are created using request fields (`Host`, `Path`, `Headers`...) and may or may not match a request.
- The [frontend](#frontends) will then send the request to a [backend](#backends). A backend can be composed of one or more [servers](#servers) and a load-balancing strategy.
- Finally, the [server](#servers) will forward the request to the corresponding microservice in the private network.
## Entrypoints
Entrypoints are the network entry points into Træfɪk.
They can be defined using:
- a port (80, 443...)
- SSL (Certificates, Keys, authentication with a client certificate signed by a trusted CA...)
- redirection to another entrypoint (redirect `HTTP` to `HTTPS`)
Here is an example of entrypoints definition:
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
```
- Two entrypoints are defined: `http` and `https`.
- `http` listens on port `80` and `https` on port `443`.
- We enable SSL on `https` by giving a certificate and a key.
- We also redirect all the traffic from entrypoint `http` to `https`.
And here is another example with client certificate authentication:
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
clientCAFiles = ["tests/clientca1.crt", "tests/clientca2.crt"]
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
```
- We enable SSL on `https` by giving a certificate and a key.
- One or several files containing Certificate Authorities in PEM format are added.
- It is possible to have multiple CAs in the same file or to keep them in separate files.
## Frontends
A frontend is a set of rules that forwards the incoming traffic from an entrypoint to a backend.
Frontends can be defined using the following rules:
- `Headers: Content-Type, application/json`: Headers adds a matcher for request header values. It accepts a sequence of key/value pairs to be matched.
- `HeadersRegexp: Content-Type, application/(text|json)`: Regular expressions can be used with headers as well. It accepts a sequence of key/value pairs, where the value has regex support.
- `Host: traefik.io, www.traefik.io`: Match request host with given host list.
- `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io`: Adds a matcher for the URL hosts. It accepts templates with zero or more URL variables enclosed by `{}`. Variables can define an optional regexp pattern to be matched.
- `Method: GET, POST, PUT`: Method adds a matcher for HTTP methods. It accepts a sequence of one or more methods to be matched.
- `Path: /products/, /articles/{category}/{id:[0-9]+}`: Path adds a matcher for the URL paths. It accepts templates with zero or more URL variables enclosed by `{}`.
- `PathStrip`: Same as `Path` but strips the given prefix from the request URL's Path.
- `PathPrefix`: PathPrefix adds a matcher for the URL path prefixes. This matches if the given template is a prefix of the full URL path.
- `PathPrefixStrip`: Same as `PathPrefix` but strips the given prefix from the request URL's Path.
- `AddPrefix`: Adds the given prefix to the request URL's Path.
You can use multiple values for a rule by separating them with `,`.
You can use multiple rules by separating them by `;`.
You can optionally enable `passHostHeader` to forward client `Host` header to the backend.
Here is an example of frontends definition:
```toml
[frontends]
[frontends.frontend1]
backend = "backend2"
[frontends.frontend1.routes.test_1]
rule = "Host:test.localhost,test2.localhost"
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
priority = 10
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "HostRegexp:localhost,{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost;Path:/test"
```
- Three frontends are defined: `frontend1`, `frontend2` and `frontend3`
- `frontend1` will forward the traffic to `backend2` if the rule `Host:test.localhost,test2.localhost` is matched
- `frontend2` will forward the traffic to `backend1` if the rule `HostRegexp:localhost,{subdomain:[a-z]+}.localhost` is matched (forwarding the client `Host` header to the backend)
- `frontend3` will forward the traffic to `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched
### Combining multiple rules
As seen in the previous example, you can combine multiple rules.
In TOML file, you can use multiple routes:
```toml
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost"
[frontends.frontend3.routes.test_2]
rule = "Path:/test"
```
Here `frontend3` will forward the traffic to `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched.
You can also write this with the `;` separator notation, with the same result:
```toml
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost;Path:/test"
```
Finally, you can create a rule to bind multiple domains or paths to a frontend, using the `,` separator:
```toml
[frontends.frontend2]
[frontends.frontend2.routes.test_1]
rule = "Host:test1.localhost,test2.localhost"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Path:/test1,/test2"
```
### Priorities
By default, routes will be sorted (in descending order) by rule length (to avoid path overlap):
`PathPrefix:/12345` will be matched before `PathPrefix:/1234` that will be matched before `PathPrefix:/1`.
You can customize priority by frontend:
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
priority = 10
passHostHeader = true
[frontends.frontend1.routes.test_1]
rule = "PathPrefix:/to"
[frontends.frontend2]
priority = 5
backend = "backend2"
passHostHeader = true
[frontends.frontend2.routes.test_1]
rule = "PathPrefix:/toto"
```
Here, `frontend1` will be matched before `frontend2` (`10 > 5`).
## Backends
A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of HTTP servers.
Various methods of load-balancing are supported:
- `wrr`: Weighted Round Robin
- `drr`: Dynamic Round Robin: increases weights on servers that perform better than others. It also rolls back to original weights if the servers have changed.
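The strategy is selected per backend through its `loadbalancer` section, as in this minimal sketch (the backend name is illustrative; `wrr` is used when no method is set):
```toml
[backends]
  [backends.backend2]
    [backends.backend2.loadbalancer]
      # "wrr" (default) or "drr"
      method = "drr"
```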
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
The initial state is Standby: the circuit breaker observes the statistics and does not modify requests.
When the condition matches, the circuit breaker enters the Tripped state, where it responds with a predefined code or redirects to another frontend.
Once the Tripped timer expires, it enters the Recovering state and resets all statistics.
If the condition no longer matches and the recovery timer expires, it returns to the Standby state.
It can be configured using:
- Methods: `LatencyAtQuantileMS`, `NetworkErrorRatio`, `ResponseCodeRatio`
- Operators: `AND`, `OR`, `EQ`, `NEQ`, `LT`, `LE`, `GT`, `GE`
For example:
- `NetworkErrorRatio() > 0.5`: watch error ratio over 10 second sliding window for a frontend
- `LatencyAtQuantileMS(50.0) > 50`: watch latency at quantile in milliseconds.
- `ResponseCodeRatio(500, 600, 0, 600) > 0.5`: ratio of response codes in range [500-600) to [0-600)
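In the TOML configuration, such an expression is attached to a backend's `circuitbreaker` section; a minimal sketch (backend name illustrative), reusing the first expression above:
```toml
[backends]
  [backends.backend1]
    [backends.backend1.circuitbreaker]
      # trip the breaker when more than half of the requests fail with network errors
      expression = "NetworkErrorRatio() > 0.5"
```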
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can
also be applied to each backend.
Maximum connections can be configured by specifying an integer value for `maxconn.amount` and a strategy for
`maxconn.extractorfunc`, which determines how requests are categorized when evaluating the maximum connections.
For example:
```toml
[backends]
[backends.backend1]
[backends.backend1.maxconn]
amount = 10
extractorfunc = "request.host"
```
- `backend1` will return `HTTP code 429 Too Many Requests` if there are already 10 requests in progress for the same Host header.
- Another possible value for `extractorfunc` is `client.ip` which will categorize requests based on client source ip.
- Lastly `extractorfunc` can take the value of `request.header.ANY_HEADER` which will categorize requests based on `ANY_HEADER` that you provide.
Sticky sessions are supported with both load balancers. When sticky sessions are enabled, a cookie called `_TRAEFIK_BACKEND` is set on the initial
request. On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy. If not, a new backend
will be assigned.
For example:
```toml
[backends]
[backends.backend1]
[backends.backend1.loadbalancer]
sticky = true
```
A health check can be configured in order to remove a backend from LB rotation
as long as it keeps returning HTTP status codes other than 200 OK to HTTP GET
requests periodically carried out by Traefik. The check is defined by a path
appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how
often the health check should be executed (the default being 30 seconds). Each
backend must respond to the health check within 5 seconds.
A recovering backend that returns 200 OK responses again is put back into the LB rotation pool.
For example:
```toml
[backends]
[backends.backend1]
[backends.backend1.healthcheck]
path = "/health"
interval = "10s"
```
## Servers
Servers are simply defined using a `URL`. You can also apply a custom `weight` to each server (this will be used by the load-balancing strategy).
Here is an example of backends and servers definition:
```toml
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
[backends.backend2.servers.server2]
url = "http://172.17.0.5:80"
weight = 2
```
- Two backends are defined: `backend1` and `backend2`
- `backend1` will forward the traffic to two servers: `http://172.17.0.2:80` with weight `10` and `http://172.17.0.3:80` with weight `1` using the default `wrr` load-balancing strategy.
- `backend2` will forward the traffic to two servers: `http://172.17.0.4:80` with weight `1` and `http://172.17.0.5:80` with weight `2` using the `drr` load-balancing strategy.
- a circuit breaker is added on `backend1` using the expression `NetworkErrorRatio() > 0.5`: watch error ratio over 10 second sliding window
# Configuration
Træfɪk's configuration has two parts:
- The [static Træfɪk configuration](/basics#static-trfk-configuration) which is loaded only at the beginning.
- The [dynamic Træfɪk configuration](/basics#dynamic-trfk-configuration) which can be hot-reloaded (no need to restart the process).
## Static Træfɪk configuration
The static configuration is the global configuration that sets up connections to configuration backends and entrypoints.
Træfɪk can be configured using many configuration sources with the following precedence order.
Each item takes precedence over the item below it:
- [Key-value Store](/basics/#key-value-stores)
- [Arguments](/basics/#arguments)
- [Configuration file](/basics/#configuration-file)
- Default
This means that arguments override the configuration file, and the Key-value store overrides arguments.
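As a minimal sketch of this precedence (`logLevel` is used purely as an illustration):
```toml
# traefik.toml
# lowest precedence after the built-in defaults; starting Traefik with
# --logLevel=INFO would override this file value, and the matching key in a
# configured Key-value store would in turn override the argument.
logLevel = "ERROR"
```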
### Configuration file
By default, Træfɪk will try to find a `traefik.toml` in the following places:
- `/etc/traefik/`
- `$HOME/.traefik/`
- `.` *the working directory*
You can override this by setting a `configFile` argument:
```bash
$ traefik --configFile=foo/bar/myconfigfile.toml
```
Please refer to the [global configuration](/toml/#global-configuration) section to get documentation on it.
### Arguments
Each argument (and command) is described in the help section:
```bash
$ traefik --help
```
Note that all default values will be displayed as well.
### Key-value stores
Træfɪk supports several Key-value stores:
- [Consul](https://consul.io)
- [etcd](https://coreos.com/etcd/)
- [ZooKeeper](https://zookeeper.apache.org/)
- [boltdb](https://github.com/boltdb/bolt)
Please refer to the [User Guide Key-value store configuration](/user-guide/kv-config/) section to get documentation on it.
## Dynamic Træfɪk configuration
The dynamic configuration concerns:
- [Frontends](/basics/#frontends)
- [Backends](/basics/#backends)
- [Servers](/basics/#servers)
Træfɪk can hot-reload those rules, which can be provided by [multiple configuration backends](/toml/#configuration-backends).
We only need to enable the `watch` option to make Træfɪk watch for configuration backend changes and regenerate its configuration automatically.
Routes to services will be created and updated instantly upon any change.
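For example, a minimal sketch using the file backend (chosen purely as an illustration; see the configuration backends documentation for the full list) to enable `watch`:
```toml
# in traefik.toml: reload frontends/backends whenever rules.toml changes
[file]
  filename = "rules.toml"
  watch = true
```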
Please refer to the [configuration backends](/toml/#configuration-backends) section to get documentation on it.
# Commands
Usage: `traefik [command] [--flag=flag_argument]`
List of available Træfɪk commands with descriptions:
- `version`: Print version
- `storeconfig`: Store the static Træfɪk configuration into a Key-value store. Please refer to the [Store Træfɪk configuration](/user-guide/kv-config/#store-trfk-configuration) section to get documentation on it.
Each command may have related flags.
All those related flags will be displayed with:
```bash
$ traefik [command] --help
```
Note that each command is described at the beginning of the help section:
```bash
$ traefik --help
```


@@ -14,8 +14,8 @@ I used 4 VMs for the tests with the following configuration:
## Setup
1. One VM used to launch the benchmarking tool [wrk](https://github.com/wg/wrk)
2. One VM for Traefik (v1.0.0-beta.416) / nginx (v1.4.6)
3. Two VMs for 2 backend servers in go [whoami](https://github.com/containous/whoami/)
2. One VM for traefik (v1.0.0-beta.416) / nginx (v1.4.6)
3. Two VMs for 2 backend servers in go [whoami](https://github.com/emilevauge/whoamI/)
Each VM has been tuned using the following limits:
@@ -65,8 +65,8 @@ http {
keepalive_requests 10000;
types_hash_max_size 2048;
open_file_cache max=200000 inactive=300s;
open_file_cache_valid 300s;
open_file_cache max=200000 inactive=300s;
open_file_cache_valid 300s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
@@ -117,8 +117,8 @@ server {
Here is the `traefik.toml` file used:
```toml
maxIdleConnsPerHost = 100000
```
MaxIdleConnsPerHost = 100000
defaultEntryPoints = ["http"]
[entryPoints]
@@ -145,7 +145,7 @@ defaultEntryPoints = ["http"]
## Results
### whoami:
```shell
```
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-whoami:80/bench
Running 1m test @ http://IP-whoami:80/bench
20 threads and 1000 connections
@@ -164,7 +164,7 @@ Transfer/sec: 6.40MB
```
### nginx:
```shell
```
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-nginx:8001/bench
Running 1m test @ http://IP-nginx:8001/bench
20 threads and 1000 connections
@@ -182,9 +182,8 @@ Requests/sec: 33591.67
Transfer/sec: 4.97MB
```
### Traefik:
```shell
### traefik:
```
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-traefik:8000/bench
Running 1m test @ http://IP-traefik:8000/bench
20 threads and 1000 connections
@@ -210,5 +209,5 @@ Not bad for young project :) !
Some areas of possible improvements:
- Use [GO_REUSEPORT](https://github.com/kavu/go_reuseport) listener
- Run a separate server instance per CPU core with `GOMAXPROCS=1` (it appears during benchmarks that there is a lot more context switches with Traefik than with nginx)
- Run a separate server instance per CPU core with `GOMAXPROCS=1` (it appears during benchmarks that there is a lot more context switches with traefik than with nginx)


@@ -1,43 +0,0 @@
FROM alpine:3.9 as alpine
# The "build-dependencies" virtual package provides build tools for html-proofer installation.
# It compile ruby-nokogiri, because alpine native version is always out of date
# This virtual package is cleaned at the end.
RUN apk --no-cache --no-progress add \
libcurl \
ruby \
ruby-bigdecimal \
ruby-etc \
ruby-ffi \
ruby-json \
&& apk add --no-cache --virtual build-dependencies \
build-base \
libcurl \
libxml2-dev \
libxslt-dev \
ruby-dev \
&& gem install --no-document html-proofer -v 3.10.2 \
&& apk del build-dependencies
# After Ruby, some NodeJS YAY!
RUN apk --no-cache --no-progress add \
git \
nodejs \
npm \
&& npm install markdownlint@0.12.0 markdownlint-cli@0.13.0 --global
# Finally the shell tools we need for later
# tini helps to terminate properly all the parallelized tasks when sending CTRL-C
RUN apk --no-cache --no-progress add \
ca-certificates \
curl \
tini
COPY ./scripts/verify.sh /verify.sh
COPY ./scripts/lint.sh /lint.sh
WORKDIR /app
VOLUME ["/tmp","/app"]
ENTRYPOINT ["/sbin/tini","-g","sh"]

(27 binary image files removed in this diff; contents not shown.)

@@ -1,4 +0,0 @@
/* Highlight */
(function(hljs) {
hljs.initHighlightingOnLoad();
})(hljs);


@@ -1,24 +0,0 @@
Copyright (c) 2006, Ivan Sagalaev
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of highlight.js nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.