mirror of https://github.com/containous/traefik.git synced 2025-09-17 21:44:29 +03:00

Compare commits


14 Commits
v1.3.4 ... v1.0

Author SHA1 Message Date
mmatur
49a107422c fix: docs build trusted host 2024-05-16 21:10:36 +02:00
mmatur
1a6384d7e9 fix: docs build alpine version 2024-05-16 16:28:27 +02:00
mmatur
0b4fb0de33 feat: add dockerfile for documentation 2021-10-05 10:31:52 +02:00
Michael
a3ab3043f5 [doc] add requirements.txt
To be able to generate versionned documentation
2018-01-19 15:33:33 +01:00
Emile Vauge
51f2433ba5 Merge pull request #695 from containous/prepare-release-v1.0.3
Prepare release v1.0.3
2016-09-22 15:00:43 +02:00
Emile Vauge
df710fc89e Prepare release v1.0.3
Signed-off-by: Emile Vauge <emile@vauge.com>
2016-09-22 14:15:53 +02:00
Emile Vauge
cb8a8b5cbb Merge pull request #693 from containous/fix-health-race
Fix health race
2016-09-22 14:09:20 +02:00
Emile Vauge
7dfcdcec10 Fix health race
Signed-off-by: Emile Vauge <emile@vauge.com>
2016-09-22 13:47:06 +02:00
Emile Vauge
35c74ba56c Merge pull request #585 from containous/prepare-release-1.0.2
Prepare release v1.0.2
2016-08-02 19:20:43 +02:00
Emile Vauge
89fc0d2d0e Prepare release v1.0.2
Signed-off-by: Emile Vauge <emile@vauge.com>
2016-08-02 13:47:03 +02:00
Emile Vauge
5306981923 Merge pull request #582 from containous/fix-acme-tos
Fix ACME TOS
2016-08-02 13:41:05 +02:00
Emile Vauge
b466413b11 Fix ACME TOS
Signed-off-by: Emile Vauge <emile@vauge.com>
2016-08-02 12:30:05 +02:00
Emile Vauge
2c411767de Merge pull request #584 from containous/bump-oxy-version
Bump oxy version, fix streaming
2016-08-02 12:23:37 +02:00
Emile Vauge
54e80492bd Bump oxy version, fix streaming
Signed-off-by: Emile Vauge <emile@vauge.com>
2016-08-02 10:17:18 +02:00
5339 changed files with 6709 additions and 1857560 deletions

View File

@@ -1,3 +1,5 @@
dist/
vendor/
!dist/traefik
site/
**/*.test

View File

@@ -2,9 +2,16 @@
### Building
You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` (Method 2) in order to build traefik. For changes to its dependencies, the `glide` dependency management tool and `glide-vc` plugin are required.
You need either [Docker](https://github.com/docker/docker) and `make`, or `go` and `glide` in order to build traefik.
#### Method 1: Using `Docker` and `Makefile`
#### Setting up your `go` environment
- You need `go` v1.5
- You need to set `export GO15VENDOREXPERIMENT=1` environment variable
- You need `go-bindata` to be able to use `go generate` command (needed to build) : `go get github.com/jteeuwen/go-bindata/...`.
- If you clone Træfɪk into something like `~/go/src/github.com/traefik`, your `GOPATH` variable will have to be set to `~/go`: export `GOPATH=~/go`.
#### Using `Docker` and `Makefile`
You need to run the `binary` target. This will create binaries for Linux platform in the `dist` folder.
@@ -12,7 +19,7 @@ You need to run the `binary` target. This will create binaries for Linux platfor
$ make binary
docker build -t "traefik-dev:no-more-godep-ever" -f build.Dockerfile .
Sending build context to Docker daemon 295.3 MB
Step 0 : FROM golang:1.7
Step 0 : FROM golang:1.5
---> 8c6473912976
Step 1 : RUN go get github.com/Masterminds/glide
[...]
@@ -26,56 +33,32 @@ $ ls dist/
traefik*
```
#### Method 2: Using `go`
#### Using `glide`
###### Setting up your `go` environment
The idea behind `glide` is the following :
- You need `go` v1.7+
- It is recommended you clone Træfik into a directory like `~/go/src/github.com/containous/traefik` (This is the official golang workspace hierarchy, and will allow dependencies to resolve properly)
- This will allow your `GOPATH` and `PATH` variable to be set to `~/go` via:
```bash
$ export GOPATH=~/go
$ export PATH=$PATH:$GOPATH/bin
```
This can be verified via `$ go env`
- You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`
- You need `go-bindata` to be able to use `go generate` command (needed to build) : `$ go get github.com/jteeuwen/go-bindata/...` (Please note, the ellipses are required)
#### Setting up `glide` and `glide-vc` for dependency management
- Glide is not required for building; however, it is necessary to modify dependencies (i.e., add, update, or remove third-party packages)
- Glide can be installed either via homebrew: `$ brew install glide` or via the official glide script: `$ curl https://glide.sh/get | sh`
- The glide plugin `glide-vc` must be installed from source: `go get github.com/sgotti/glide-vc`
If you want to add a dependency, use `$ glide get` to have glide put it into the vendor folder and update the glide manifest/lock files (`glide.yaml` and `glide.lock`, respectively). A following `glide-vc` run should be triggered to trim down the size of the vendor folder. The final result must be committed into VCS.
Dependencies for the integration tests in the `integration` folder are managed in a separate `integration/glide.yaml` file using the same toolset.
Care must be taken to choose the right arguments to `glide` when dealing with either main or integration test dependencies, or otherwise risk ending up with a broken build. For that reason, the helper script `script/glide.sh` encapsulates the gory details and conveniently calls `glide-vc` as well. Call it without parameters for basic usage instructions.
Here's a full example:
- when checkout(ing) a project, **run `glide install`** to install
(`go get …`) the dependencies in the `GOPATH`.
- if you need another dependency, import and use it in
the source, and **run `glide get github.com/Masterminds/cookoo`** to save it in
`vendor` and add it to your `glide.yaml`.
```bash
# install the new main dependency github.com/foo/bar and minimize vendor size
$ ./script/glide.sh get github.com/foo/bar
# install another dependency, this time for the integration tests
$ ( cd integration && ../script/glide.sh get github.com/baz/quuz )
# generate (Only required to integrate other components such as web dashboard)
$ glide install
# generate
$ go generate
# Standard go build
# Simple go build
$ go build
# Using gox to build multiple platform
$ gox "linux darwin" "386 amd64 arm" \
-output="dist/traefik_{{.OS}}-{{.Arch}}" \
./cmd/traefik
-output="dist/traefik_{{.OS}}-{{.Arch}}"
# run other commands like tests
$ go test ./...
ok _/home/vincent/src/github/vdemeester/traefik 0.004s
```
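Note that the `gox` invocation above assumes `gox` is already installed and on your `PATH`; nothing in the steps so far installs it. A minimal sketch of fetching it, assuming the usual upstream import path:
```bash
# install gox into $GOPATH/bin, which the PATH export above already covers
$ go get github.com/mitchellh/gox
```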
### Tests
##### Method 1: `Docker` and `make`
You can run unit tests using the `test-unit` target and the
integration test using the `test-integration` target.
@@ -94,8 +77,8 @@ ok github.com/containous/traefik 0.005s coverage: 4.1% of statements
Test success
```
For development purposes, you can specify which tests to run by using:
```bash
For development purpose, you can specifiy which tests to run by using:
```
# Run every tests in the MyTest suite
TESTFLAGS="-check.f MyTestSuite" make test-integration
@@ -111,20 +94,13 @@ TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
More: https://labix.org/gocheck
##### Method 2: `go`
- Tests can be run from the cloned directory, by `$ go test ./...` which should return `ok` similar to:
```
ok _/home/vincent/src/github/vdemeester/traefik 0.004s
```
### Documentation
The [documentation site](http://docs.traefik.io/) is built with [mkdocs](http://mkdocs.org/)
First make sure you have python and pip installed
```shell
```
$ python --version
Python 2.7.2
$ pip --version
@@ -133,17 +109,17 @@ pip 1.5.2
Then install mkdocs with pip
```shell
```
$ pip install mkdocs
```
To test documentation locally run `mkdocs serve` in the root directory, this should start a server locally to preview your changes.
To test documentaion localy run `mkdocs serve` in the root directory, this should start a server localy to preview your changes.
```shell
```
$ mkdocs serve
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
[I 160505 22:31:24 server:281] Serving on http://127.0.0.1:8000
[I 160505 22:31:24 handlers:59] Start watching changes
[I 160505 22:31:24 handlers:61] Start detecting changes

View File

@@ -1,58 +0,0 @@
<!--
PLEASE READ THIS MESSAGE.
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
For other type of questions, consider using one of:
- the Traefik community Slack channel: https://traefik.herokuapp.com
- StackOverflow: https://stackoverflow.com/questions/tagged/traefik
HOW TO WRITE A GOOD ISSUE?
- if it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->
### Do you want to request a *feature* or report a *bug*?
### What did you do?
### What did you expect to see?
### What did you see instead?
### Output of `traefik version`: (_What version of Traefik are you using?_)
```
(paste your output here)
```
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
```toml
# (paste your configuration here)
```
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output in debug mode (`--debug` switch)
```
(paste your output here)
```

View File

@@ -1,23 +0,0 @@
<!--
PLEASE READ THIS MESSAGE.
HOW TO WRITE A GOOD PULL REQUEST?
- Make it small.
- Do only one thing.
- Avoid re-formatting.
- Make sure the code builds.
- Make sure all tests pass.
- Add tests.
- Write useful descriptions and titles.
- Address review comments in terms of additional commits.
- Do not amend/squash existing ones unless the PR is trivial.
- Read the contributing guide: https://github.com/containous/traefik/blob/master/.github/CONTRIBUTING.md.
-->
### Description
<!--
Briefly describe the pull request in a few paragraphs.
-->

26
.github/cpr.sh vendored
View File

@@ -1,26 +0,0 @@
#!/bin/sh
#
# git config --global alias.cpr '!sh .github/cpr.sh'
set -e # stop on error
usage="$(basename "$0") pr -- Checkout a Pull Request locally"
if [ "$#" -ne 1 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
git remote add $remote git@github.com:$remote/traefik.git
git fetch $remote $branch
git checkout -t -b "$pr--$branch" $remote/$branch

27
.github/rmpr.sh vendored
View File

@@ -1,27 +0,0 @@
#!/bin/sh
#
# git config --global alias.rmpr '!sh .github/rmpr.sh'
set -e # stop on error
usage="$(basename "$0") pr -- remove a Pull Request local branch & remote"
if [ "$#" -ne 1 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
# clean
git checkout $initial
git branch -D "$pr--$branch"
git remote remove $remote

36
.github/rpr.sh vendored
View File

@@ -1,36 +0,0 @@
#!/bin/sh
#
# git config --global alias.rpr '!sh .github/rpr.sh'
set -e # stop on error
usage="$(basename "$0") pr remote/branch -- rebase a Pull Request against a remote branch"
if [ "$#" -ne 2 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
base=$2
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
clean ()
{
git checkout $initial
.github/rmpr.sh $pr
}
trap clean EXIT
.github/cpr.sh $pr
git rebase $base
git push --force-with-lease $remote "$pr--$branch"
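Taken together, `.github/cpr.sh`, `.github/rmpr.sh` and `.github/rpr.sh` are meant to be wired up as git aliases, as their header comments suggest; all three need `curl` and `jq`. A usage sketch, with the pull-request number and base branch purely illustrative:
```bash
# one-time setup, taken from the script headers
git config --global alias.cpr '!sh .github/cpr.sh'
git config --global alias.rmpr '!sh .github/rmpr.sh'
git config --global alias.rpr '!sh .github/rpr.sh'

# check out pull request 1234 into a local branch named "1234--<branch>"
git cpr 1234

# ...review, build, run tests...

# switch back to where you started, then drop the PR branch and its remote
git checkout master
git rmpr 1234

# or, in one step: rebase PR 1234 onto upstream/master, force-push it back
# to the contributor's fork, and clean everything up on exit
git rpr 1234 upstream/master
```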

14
.gitignore vendored
View File

@@ -1,13 +1,15 @@
/dist
/autogen/gen.go
gen.go
.idea
.intellij
log
*.iml
/traefik
/traefik.toml
/static/
traefik
traefik.toml
*.test
vendor/
static/
.vscode/
/site/
site/
*.log
*.exe
.DS_Store

View File

@@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -e
sudo -E apt-get -yq update
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*
docker version
pip install --user -r requirements.txt
make pull-images
ci_retry make validate

View File

@@ -1,6 +0,0 @@
#!/usr/bin/env bash
set -e
make test-unit
ci_retry make test-integration
make -j${N_MAKE_JOBS} crossbinary-default-parallel

View File

@@ -1,39 +0,0 @@
#!/usr/bin/env bash
set -e
export secure='btt4r13t09gQlHb6gYrvGC2yGCMMHfnp1Mz1RQedc4Mpf/FfT8aE6xmK2a2i9CCvskjrP0t/BFaS4yxIURjnFRn+ugQIEa0pLspB9UJArW/vgOSpIWM9/OQ/fg8z5XuMxN6Md4DL1/iLypMNSageA1x0TRdt89+D1N1dALpg5XRCXLFbC84TLi0gjlFuib9ibPKzEhLT+anCRJ6iZMzeupDSoaCVbAtJMoDvXw4+4AcRZ1+k4MybBLyCib5boaEOt4pTT88mz4Kk0YaMwPVJyg9Qv36VqyUcPS09Yd95LuyVQ4+tZt8Y1ccbIzULsK+sLM3hLCzxlmlpN3dQBlZJiiRtQde0mgGAKyC0P0A1XjuDTywcsa5edB+fTk1Dsewz9xZ9V0NmMz8t+UNZnaSsAPga9i86jULbXUUwMVSzVRc+Xgx02liB/8qI1xYC9FM6ilStt7rn7mF0k3KbiWhcptgeXjO6Lah9FjEKd5w4MXsdUSTi/86rQaLo+kj+XdaTrXCTulKHyRyQEUj+8V1w0oVz7pcGjePHd7y5oU9ByifVQy6sytuFBfRZvugM5bKHo+i0pcWvixrZS42DrzwxZJsspANOvqSe5ifVbvOkfUppQdCBIwptxV5N1b49XPKU3W/w34QJ8xGmKp3TFA7WwVCztriFHjPgiRpB3EG99Bg='
export REPO='containous/traefik'
if VERSION=$(git describe --exact-match --abbrev=0 --tags);
then
export VERSION
else
export VERSION=''
fi
export CODENAME=raclette
export N_MAKE_JOBS=2
function ci_retry {
local NRETRY=3
local NSLEEP=5
local n=0
until [ $n -ge $NRETRY ]
do
"$@" && break
n=$[$n+1]
echo "$@ failed, attempt ${n}/${NRETRY}"
sleep $NSLEEP
done
[ $n -lt $NRETRY ]
}
export -f ci_retry

View File

@@ -1,58 +1,33 @@
sudo: required
dist: trusty
services:
- docker
branches:
env:
global:
- secure: btt4r13t09gQlHb6gYrvGC2yGCMMHfnp1Mz1RQedc4Mpf/FfT8aE6xmK2a2i9CCvskjrP0t/BFaS4yxIURjnFRn+ugQIEa0pLspB9UJArW/vgOSpIWM9/OQ/fg8z5XuMxN6Md4DL1/iLypMNSageA1x0TRdt89+D1N1dALpg5XRCXLFbC84TLi0gjlFuib9ibPKzEhLT+anCRJ6iZMzeupDSoaCVbAtJMoDvXw4+4AcRZ1+k4MybBLyCib5boaEOt4pTT88mz4Kk0YaMwPVJyg9Qv36VqyUcPS09Yd95LuyVQ4+tZt8Y1ccbIzULsK+sLM3hLCzxlmlpN3dQBlZJiiRtQde0mgGAKyC0P0A1XjuDTywcsa5edB+fTk1Dsewz9xZ9V0NmMz8t+UNZnaSsAPga9i86jULbXUUwMVSzVRc+Xgx02liB/8qI1xYC9FM6ilStt7rn7mF0k3KbiWhcptgeXjO6Lah9FjEKd5w4MXsdUSTi/86rQaLo+kj+XdaTrXCTulKHyRyQEUj+8V1w0oVz7pcGjePHd7y5oU9ByifVQy6sytuFBfRZvugM5bKHo+i0pcWvixrZS42DrzwxZJsspANOvqSe5ifVbvOkfUppQdCBIwptxV5N1b49XPKU3W/w34QJ8xGmKp3TFA7WwVCztriFHjPgiRpB3EG99Bg=
- REPO: $TRAVIS_REPO_SLUG
- VERSION: $TRAVIS_TAG
- CODENAME: raclette
- N_MAKE_JOBS: 2
- CODENAME: reblochon
matrix:
- DOCKER_VERSION=1.9.1
- DOCKER_VERSION=1.10.1
sudo: required
services:
- docker
install:
- sudo service docker stop
- sudo curl https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION} -o /usr/bin/docker
- sudo chmod +x /usr/bin/docker
- sudo service docker start
- sleep 5
- docker version
- pip install --user mkdocs
- pip install --user pymdown-extensions
before_script:
- make validate
- make binary
script:
- echo "Skipping tests... (Tests are executed on SemaphoreCI)"
before_deploy:
- >
if ! [ "$BEFORE_DEPLOY_RUN" ]; then
export BEFORE_DEPLOY_RUN=1;
sudo -E apt-get -yq update;
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*;
docker version;
pip install --user -r requirements.txt;
make -j${N_MAKE_JOBS} crossbinary-parallel;
make image;
mkdocs build --clean;
tar cfz dist/traefik-${VERSION}.src.tar.gz --exclude-vcs --exclude dist .;
fi
deploy:
- provider: pages
edge: true
github_token: ${GITHUB_TOKEN}
local_dir: site
skip_cleanup: true
on:
repo: containous/traefik
tags: true
- provider: releases
api_key: ${GITHUB_TOKEN}
file: dist/traefik*
skip_cleanup: true
file_glob: true
on:
repo: containous/traefik
tags: true
- provider: script
script: sh script/deploy.sh
skip_cleanup: true
on:
repo: containous/traefik
tags: true
- provider: script
script: sh script/deploy-docker.sh
skip_cleanup: true
on:
repo: containous/traefik
- make test-unit
- make test-integration
- make crossbinary
- make image
after_success:
- make deploy
- make deploy-pr

File diff suppressed because it is too large.

View File

@@ -1,74 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at contact@containo.us
All complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@@ -1,6 +1,6 @@
The MIT License (MIT)
Copyright (c) 2016-2017 Containous SAS
Copyright (c) 2016 Containous SAS, Emile Vauge, emile@vauge.com
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -6,25 +6,21 @@ TRAEFIK_ENVS := \
-e TESTFLAGS \
-e VERBOSE \
-e VERSION \
-e CODENAME \
-e TESTDIRS
-e CODENAME
SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/' | grep -v '^integration/vendor/')
SRCS = $(shell git ls-files '*.go' | grep -v '^external/')
BIND_DIR := "dist"
TRAEFIK_MOUNT := -v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/containous/traefik/$(BIND_DIR)"
GIT_BRANCH := $(subst heads/,,$(shell git rev-parse --abbrev-ref HEAD 2>/dev/null))
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
REPONAME := $(shell echo $(REPO) | tr '[:upper:]' '[:lower:]')
TRAEFIK_IMAGE := $(if $(REPONAME),$(REPONAME),"containous/traefik")
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -v "/var/run/docker.sock:/var/run/docker.sock")
DOCKER_BUILD_ARGS := $(if $(DOCKER_VERSION), "--build-arg=DOCKER_VERSION=$(DOCKER_VERSION)",)
DOCKER_RUN_OPTS := $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(DOCKER_RUN_OPTS)
DOCKER_RUN_TRAEFIK_NOTTY := docker run $(INTEGRATION_OPTS) -i $(DOCKER_RUN_OPTS)
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
print-%: ; @echo $*=$($*)
@@ -39,24 +35,6 @@ binary: generate-webui build ## build the linux binary
crossbinary: generate-webui build ## cross build the non-linux binaries
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate crossbinary
crossbinary-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-default crossbinary-others
crossbinary-default: generate-webui build
$(DOCKER_RUN_TRAEFIK_NOTTY) ./script/make.sh generate crossbinary-default
crossbinary-default-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-default
crossbinary-others: generate-webui build
$(DOCKER_RUN_TRAEFIK_NOTTY) ./script/make.sh generate crossbinary-others
crossbinary-others-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-others
test: build ## run the unit and integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit binary test-integration
@@ -64,10 +42,10 @@ test-unit: build ## run the unit tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit
test-integration: build ## run the integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary test-integration
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-integration
validate: build ## validate gofmt, golint and go vet
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-glide validate-gofmt validate-govet validate-golint validate-misspell validate-vendor
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-gofmt validate-govet validate-golint
build: dist
docker build $(DOCKER_BUILD_ARGS) -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
@@ -81,7 +59,7 @@ build-no-cache: dist
shell: build ## start a shell inside the build env
$(DOCKER_RUN_TRAEFIK) /bin/bash
image: binary ## build a docker traefik image
image: build ## build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .
dist:
@@ -95,7 +73,7 @@ run-dev:
generate-webui: build-webui
if [ ! -d "static" ]; then \
mkdir -p static; \
docker run --rm -v "$$PWD/static":'/src/static' traefik-webui npm run build; \
docker run --rm -v "$$PWD/static":'/src/static' traefik-webui gulp; \
echo 'For more informations show `webui/readme.md`' > $$PWD/static/DONT-EDIT-FILES-IN-THIS-DIRECTORY.md; \
fi
@@ -105,10 +83,11 @@ lint:
fmt:
gofmt -s -l -w $(SRCS)
pull-images:
for f in $(shell find ./integration/resources/compose/ -type f); do \
docker-compose -f $$f pull; \
done
deploy:
./script/deploy.sh
deploy-pr:
./script/deploy-pr.sh
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
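One detail of this Makefile worth noting: the `print-%` pattern rule echoes the computed value of any Make variable, which is handy when checking the image and repository names derived above. A usage sketch (the variable names are just examples):
```bash
# show how the dev image tag and the target image name were computed
$ make print-TRAEFIK_DEV_IMAGE
$ make print-TRAEFIK_IMAGE
```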

View File

@@ -1,19 +1,19 @@
<p align="center">
<img src="docs/img/traefik.logo.png" alt="Træfik" title="Træfik" />
<img src="docs/img/traefik.logo.png" alt="Træfɪk" title="Træfɪk" />
</p>
[![Build Status](https://travis-ci.org/containous/traefik.svg?branch=master)](https://travis-ci.org/containous/traefik)
[![Docs](https://img.shields.io/badge/docs-current-brightgreen.svg)](https://docs.traefik.io)
[![Go Report Card](https://goreportcard.com/badge/containous/traefik)](http://goreportcard.com/report/containous/traefik)
[![](https://images.microbadger.com/badges/image/traefik.svg)](https://microbadger.com/images/traefik)
[![Go Report Card](https://goreportcard.com/badge/kubernetes/helm)](http://goreportcard.com/report/containous/traefik)
[![Image Layer](https://badge.imagelayers.io/traefik:latest.svg)](https://imagelayers.io/?images=traefik)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/containous/traefik/blob/master/LICENSE.md)
[![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
[![Twitter](https://img.shields.io/twitter/follow/traefikproxy.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefikproxy)
Træfik (pronounced like [traffic](https://speak-ipa.bearbin.net/speak.cgi?speak=%CB%88tr%C3%A6f%C9%AAk)) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Kubernetes](http://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Mesos](https://github.com/apache/mesos), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), [Eureka](https://github.com/Netflix/eureka), [Amazon DynamoDB](https://aws.amazon.com/dynamodb/), Rest API, file...) to manage its configuration automatically and dynamically.
Træfɪk is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Mesos/Marathon](https://mesosphere.github.io/marathon/), [Kubernetes](http://kubernetes.io/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), Rest API, file...) to manage its configuration automatically and dynamically.
## Overview
@@ -28,11 +28,11 @@ But a microservices architecture is dynamic... Services are added, removed, kill
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.
Here enters Træfik.
Here enters Træfɪk.
![Architecture](docs/img/architecture.png)
Træfik can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
Træfɪk can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
Routes to your services will be created instantly.
Run it and forget it!
@@ -45,14 +45,14 @@ Run it and forget it!
- [It's fast](http://docs.traefik.io/benchmarks)
- No dependency hell, single binary made with go
- Rest API
- Multiple backends supported: Docker, Swarm, Kubernetes, Marathon, Mesos, Consul, Etcd, and more to come
- Watchers for backends, can listen for changes in backends to apply a new configuration automatically
- Multiple backends supported: Docker, Mesos/Marathon, Consul, Etcd, and more to come
- Watchers for backends, can listen change in backends to apply a new configuration automatically
- Hot-reloading of configuration. No need to restart the process
- Graceful shutdown http connections
- Circuit breakers on backends
- Round Robin, rebalancer load-balancers
- Rest Metrics
- [Tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image included
- [Tiny](https://imagelayers.io/?images=traefik) [official](https://hub.docker.com/r/_/traefik/) docker image included
- SSL backends support
- SSL frontend support (with SNI)
- Clean AngularJS Web UI
@@ -60,25 +60,18 @@ Run it and forget it!
- HTTP/2 support
- Retry request if network error
- [Let's Encrypt](https://letsencrypt.org) support (Automatic HTTPS with renewal)
- High Availability with cluster mode
## Quickstart
## Demo
You can have a quick look at Træfik in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.
Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfik features and see some demos with Kubernetes.
[![Traefik ContainerCamp UK](http://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
Here is a talk (in French) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
You will learn fundamental Træfik features and see some demos with Docker, Mesos/Marathon and Let's Encrypt.
Here is a talk (in french) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
You will learn fundamental Træfɪk features and see some demos with Docker, Mesos/Marathon and Lets'Encrypt.
[![Traefik Devoxx France](http://img.youtube.com/vi/QvAz9mVx5TI/0.jpg)](http://www.youtube.com/watch?v=QvAz9mVx5TI)
## Web UI
You can access the simple HTML frontend of Træfik.
You can access to a simple HTML frontend of Træfik.
![Web UI Providers](docs/img/web.frontend.png)
![Web UI Health](docs/img/traefik-health.png)
@@ -88,9 +81,10 @@ You can access the simple HTML frontend of Træfik.
- [Oxy](https://github.com/vulcand/oxy): an awesome proxy library made by Mailgun guys
- [Gorilla mux](https://github.com/gorilla/mux): famous request router
- [Negroni](https://github.com/codegangsta/negroni): web middlewares made simple
- [Manners](https://github.com/mailgun/manners): graceful shutdown of http.Handler servers
- [Lego](https://github.com/xenolf/lego): the best [Let's Encrypt](https://letsencrypt.org) library in go
## Test it
## Quick start
- The simple way: grab the latest binary from the [releases](https://github.com/containous/traefik/releases) page and just run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
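For a concrete idea of what that looks like, a minimal sketch of the invocation (flag name as used by the Traefik 1.x docs; binary and sample file assumed to be in the current directory):
```bash
# run the downloaded binary against the sample configuration
$ ./traefik --configFile=traefik.sample.toml
```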
@@ -118,30 +112,46 @@ You can find the complete documentation [here](https://docs.traefik.io).
Please refer to [this section](.github/CONTRIBUTING.md).
## Code Of Conduct
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.
## Support
You can join [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com) to get basic support.
If you prefer commercial support, please contact [containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
If you prefer a commercial support, please contact [containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
## Træfɪk here and there
These projects use Træfɪk internally. If your company uses Træfɪk, we would be glad to get your feedback :) Contact us on [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
- Project [Mantl](https://mantl.io/) from Cisco
![Web UI Providers](docs/img/mantl-logo.png)
> Mantl is a modern platform for rapidly deploying globally distributed services. A container orchestrator, docker, a network stack, something to pool your logs, something to monitor health, a sprinkle of service discovery and some automation.
- Project [Apollo](http://capgemini.github.io/devops/apollo/) from Cap Gemini
![Web UI Providers](docs/img/apollo-logo.png)
> Apollo is an open source project to aid with building and deploying IAAS and PAAS services. It is particularly geared towards managing containerized applications across multiple hosts, and big data type workloads. Apollo leverages other open source components to provide basic mechanisms for deployment, maintenance, and scaling of infrastructure and applications.
## Partners
[![Zenika](docs/img/zenika.logo.png)](https://zenika.com)
Zenika is one of the leading providers of professional Open Source services and agile methodologies in
Europe. We provide consulting, development, training and support for the world's leading Open Source
software products.
[![Asteris](docs/img/asteris.logo.png)](https://aster.is)
Founded in 2014, Asteris creates next-generation infrastructure software for the modern datacenter. Asteris writes software that makes it easy for companies to implement continuous delivery and realtime data pipelines. We support the HashiCorp stack, along with Kubernetes, Apache Mesos, Spark and Kafka. We're core committers on mantl.io, consul-cli and mesos-consul.
## Maintainers
- Emile Vauge [@emilevauge](https://github.com/emilevauge)
- Vincent Demeester [@vdemeester](https://github.com/vdemeester)
- Samuel Berthe [@samber](https://github.com/samber)
- Russell Clare [@Russell-IO](https://github.com/Russell-IO)
- Ed Robinson [@errm](https://github.com/errm)
- Daniel Tomcej [@dtomcej](https://github.com/dtomcej)
- Manuel Laufenberg [@SantoDE](https://github.com/SantoDE)
- Thomas Recloux [@trecloux](https://github.com/trecloux)
- Timo Reimann [@timoreimann](https://github.com/timoreimann)
## Credits
Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the logo ![logo](docs/img/traefik.icon.png).
Traefik's logo licensed under the Creative Commons 3.0 Attributions license.
Traefik's logo was inspired by the gopher stickers made by Takuya Ueda (https://twitter.com/tenntenn).
The original Go gopher was designed by Renee French (http://reneefrench.blogspot.com/).
Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the logo ![logo](docs/img/traefik.icon.png)

View File

@@ -1,245 +0,0 @@
package acme
import (
"crypto"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"errors"
"reflect"
"sort"
"strings"
"sync"
"time"
"github.com/containous/traefik/log"
"github.com/xenolf/lego/acme"
)
// Account is used to store lets encrypt registration info
type Account struct {
Email string
Registration *acme.RegistrationResource
PrivateKey []byte
DomainsCertificate DomainsCertificates
ChallengeCerts map[string]*ChallengeCert
}
// ChallengeCert stores a challenge certificate
type ChallengeCert struct {
Certificate []byte
PrivateKey []byte
certificate *tls.Certificate
}
// Init inits acccount struct
func (a *Account) Init() error {
err := a.DomainsCertificate.Init()
if err != nil {
return err
}
for _, cert := range a.ChallengeCerts {
if cert.certificate == nil {
certificate, err := tls.X509KeyPair(cert.Certificate, cert.PrivateKey)
if err != nil {
return err
}
cert.certificate = &certificate
}
if cert.certificate.Leaf == nil {
leaf, err := x509.ParseCertificate(cert.certificate.Certificate[0])
if err != nil {
return err
}
cert.certificate.Leaf = leaf
}
}
return nil
}
// NewAccount creates an account
func NewAccount(email string) (*Account, error) {
// Create a user. New accounts need an email and private key to start
privateKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, err
}
domainsCerts := DomainsCertificates{Certs: []*DomainsCertificate{}}
domainsCerts.Init()
return &Account{
Email: email,
PrivateKey: x509.MarshalPKCS1PrivateKey(privateKey),
DomainsCertificate: DomainsCertificates{Certs: domainsCerts.Certs},
ChallengeCerts: map[string]*ChallengeCert{}}, nil
}
// GetEmail returns email
func (a *Account) GetEmail() string {
return a.Email
}
// GetRegistration returns lets encrypt registration resource
func (a *Account) GetRegistration() *acme.RegistrationResource {
return a.Registration
}
// GetPrivateKey returns private key
func (a *Account) GetPrivateKey() crypto.PrivateKey {
if privateKey, err := x509.ParsePKCS1PrivateKey(a.PrivateKey); err == nil {
return privateKey
}
log.Errorf("Cannot unmarshall private key %+v", a.PrivateKey)
return nil
}
// Certificate is used to store certificate info
type Certificate struct {
Domain string
CertURL string
CertStableURL string
PrivateKey []byte
Certificate []byte
}
// DomainsCertificates stores a certificate for multiple domains
type DomainsCertificates struct {
Certs []*DomainsCertificate
lock sync.RWMutex
}
func (dc *DomainsCertificates) Len() int {
return len(dc.Certs)
}
func (dc *DomainsCertificates) Swap(i, j int) {
dc.Certs[i], dc.Certs[j] = dc.Certs[j], dc.Certs[i]
}
func (dc *DomainsCertificates) Less(i, j int) bool {
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[j].Domains) {
return dc.Certs[i].tlsCert.Leaf.NotAfter.After(dc.Certs[j].tlsCert.Leaf.NotAfter)
}
if dc.Certs[i].Domains.Main == dc.Certs[j].Domains.Main {
return strings.Join(dc.Certs[i].Domains.SANs, ",") < strings.Join(dc.Certs[j].Domains.SANs, ",")
}
return dc.Certs[i].Domains.Main < dc.Certs[j].Domains.Main
}
func (dc *DomainsCertificates) removeDuplicates() {
sort.Sort(dc)
for i := 0; i < len(dc.Certs); i++ {
for i2 := i + 1; i2 < len(dc.Certs); i2++ {
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[i2].Domains) {
// delete
log.Warnf("Remove duplicate cert: %+v, expiration :%s", dc.Certs[i2].Domains, dc.Certs[i2].tlsCert.Leaf.NotAfter.String())
dc.Certs = append(dc.Certs[:i2], dc.Certs[i2+1:]...)
i2--
}
}
}
}
// Init inits DomainsCertificates
func (dc *DomainsCertificates) Init() error {
dc.lock.Lock()
defer dc.lock.Unlock()
for _, domainsCertificate := range dc.Certs {
tlsCert, err := tls.X509KeyPair(domainsCertificate.Certificate.Certificate, domainsCertificate.Certificate.PrivateKey)
if err != nil {
return err
}
domainsCertificate.tlsCert = &tlsCert
if domainsCertificate.tlsCert.Leaf == nil {
leaf, err := x509.ParseCertificate(domainsCertificate.tlsCert.Certificate[0])
if err != nil {
return err
}
domainsCertificate.tlsCert.Leaf = leaf
}
}
dc.removeDuplicates()
return nil
}
func (dc *DomainsCertificates) renewCertificates(acmeCert *Certificate, domain Domain) error {
dc.lock.Lock()
defer dc.lock.Unlock()
for _, domainsCertificate := range dc.Certs {
if reflect.DeepEqual(domain, domainsCertificate.Domains) {
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
if err != nil {
return err
}
domainsCertificate.Certificate = acmeCert
domainsCertificate.tlsCert = &tlsCert
return nil
}
}
return errors.New("Certificate to renew not found for domain " + domain.Main)
}
func (dc *DomainsCertificates) addCertificateForDomains(acmeCert *Certificate, domain Domain) (*DomainsCertificate, error) {
dc.lock.Lock()
defer dc.lock.Unlock()
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
if err != nil {
return nil, err
}
cert := DomainsCertificate{Domains: domain, Certificate: acmeCert, tlsCert: &tlsCert}
dc.Certs = append(dc.Certs, &cert)
return &cert, nil
}
func (dc *DomainsCertificates) getCertificateForDomain(domainToFind string) (*DomainsCertificate, bool) {
dc.lock.RLock()
defer dc.lock.RUnlock()
for _, domainsCertificate := range dc.Certs {
domains := []string{}
domains = append(domains, domainsCertificate.Domains.Main)
domains = append(domains, domainsCertificate.Domains.SANs...)
for _, domain := range domains {
if domain == domainToFind {
return domainsCertificate, true
}
}
}
return nil, false
}
func (dc *DomainsCertificates) exists(domainToFind Domain) (*DomainsCertificate, bool) {
dc.lock.RLock()
defer dc.lock.RUnlock()
for _, domainsCertificate := range dc.Certs {
if reflect.DeepEqual(domainToFind, domainsCertificate.Domains) {
return domainsCertificate, true
}
}
return nil, false
}
// DomainsCertificate contains a certificate for multiple domains
type DomainsCertificate struct {
Domains Domain
Certificate *Certificate
tlsCert *tls.Certificate
}
func (dc *DomainsCertificate) needRenew() bool {
for _, c := range dc.tlsCert.Certificate {
crt, err := x509.ParseCertificate(c)
if err != nil {
// If there's an error, we assume the cert is broken, and needs update
return true
}
// <= 30 days left, renew certificate
if crt.NotAfter.Before(time.Now().Add(time.Duration(24 * 30 * time.Hour))) {
return true
}
}
return false
}

View File

@@ -1,54 +1,174 @@
package acme
import (
"context"
"crypto"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"fmt"
log "github.com/Sirupsen/logrus"
"github.com/containous/traefik/safe"
"github.com/xenolf/lego/acme"
"io/ioutil"
fmtlog "log"
"os"
"regexp"
"reflect"
"strings"
"sync"
"time"
"github.com/BurntSushi/ty/fun"
"github.com/cenk/backoff"
"github.com/containous/staert"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/eapache/channels"
"github.com/xenolf/lego/acme"
"github.com/xenolf/lego/providers/dns"
)
var (
// OSCPMustStaple enables OSCP stapling as from https://github.com/xenolf/lego/issues/270
OSCPMustStaple = false
)
// Account is used to store lets encrypt registration info
type Account struct {
Email string
Registration *acme.RegistrationResource
PrivateKey []byte
DomainsCertificate DomainsCertificates
}
// GetEmail returns email
func (a Account) GetEmail() string {
return a.Email
}
// GetRegistration returns lets encrypt registration resource
func (a Account) GetRegistration() *acme.RegistrationResource {
return a.Registration
}
// GetPrivateKey returns private key
func (a Account) GetPrivateKey() crypto.PrivateKey {
if privateKey, err := x509.ParsePKCS1PrivateKey(a.PrivateKey); err == nil {
return privateKey
}
log.Errorf("Cannot unmarshall private key %+v", a.PrivateKey)
return nil
}
// Certificate is used to store certificate info
type Certificate struct {
Domain string
CertURL string
CertStableURL string
PrivateKey []byte
Certificate []byte
}
// DomainsCertificates stores a certificate for multiple domains
type DomainsCertificates struct {
Certs []*DomainsCertificate
lock *sync.RWMutex
}
func (dc *DomainsCertificates) init() error {
if dc.lock == nil {
dc.lock = &sync.RWMutex{}
}
dc.lock.Lock()
defer dc.lock.Unlock()
for _, domainsCertificate := range dc.Certs {
tlsCert, err := tls.X509KeyPair(domainsCertificate.Certificate.Certificate, domainsCertificate.Certificate.PrivateKey)
if err != nil {
return err
}
domainsCertificate.tlsCert = &tlsCert
}
return nil
}
func (dc *DomainsCertificates) renewCertificates(acmeCert *Certificate, domain Domain) error {
dc.lock.Lock()
defer dc.lock.Unlock()
for _, domainsCertificate := range dc.Certs {
if reflect.DeepEqual(domain, domainsCertificate.Domains) {
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
if err != nil {
return err
}
domainsCertificate.Certificate = acmeCert
domainsCertificate.tlsCert = &tlsCert
return nil
}
}
return errors.New("Certificate to renew not found for domain " + domain.Main)
}
func (dc *DomainsCertificates) addCertificateForDomains(acmeCert *Certificate, domain Domain) (*DomainsCertificate, error) {
dc.lock.Lock()
defer dc.lock.Unlock()
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
if err != nil {
return nil, err
}
cert := DomainsCertificate{Domains: domain, Certificate: acmeCert, tlsCert: &tlsCert}
dc.Certs = append(dc.Certs, &cert)
return &cert, nil
}
func (dc *DomainsCertificates) getCertificateForDomain(domainToFind string) (*DomainsCertificate, bool) {
dc.lock.RLock()
defer dc.lock.RUnlock()
for _, domainsCertificate := range dc.Certs {
domains := []string{}
domains = append(domains, domainsCertificate.Domains.Main)
domains = append(domains, domainsCertificate.Domains.SANs...)
for _, domain := range domains {
if domain == domainToFind {
return domainsCertificate, true
}
}
}
return nil, false
}
func (dc *DomainsCertificates) exists(domainToFind Domain) (*DomainsCertificate, bool) {
dc.lock.RLock()
defer dc.lock.RUnlock()
for _, domainsCertificate := range dc.Certs {
if reflect.DeepEqual(domainToFind, domainsCertificate.Domains) {
return domainsCertificate, true
}
}
return nil, false
}
// DomainsCertificate contains a certificate for multiple domains
type DomainsCertificate struct {
Domains Domain
Certificate *Certificate
tlsCert *tls.Certificate
}
func (dc *DomainsCertificate) needRenew() bool {
for _, c := range dc.tlsCert.Certificate {
crt, err := x509.ParseCertificate(c)
if err != nil {
// If there's an error, we assume the cert is broken, and needs update
return true
}
// <= 7 days left, renew certificate
if crt.NotAfter.Before(time.Now().Add(time.Duration(24 * 7 * time.Hour))) {
return true
}
}
return false
}
// ACME allows to connect to lets encrypt and retrieve certs
type ACME struct {
Email string `description:"Email address used for registration"`
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
Storage string `description:"File or key used for certificates storage."`
StorageFile string // deprecated
OnDemand bool `description:"Enable on demand certificate. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."`
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
CAServer string `description:"CA server to use."`
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
DNSProvider string `description:"Use a DNS based challenge provider rather than HTTPS."`
DelayDontCheckDNS int `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."`
ACMELogging bool `description:"Enable debug logging of ACME actions."`
client *acme.Client
defaultCertificate *tls.Certificate
store cluster.Store
challengeProvider *challengeProvider
checkOnDemandDomain func(domain string) bool
jobs *channels.InfiniteChannel
TLSConfig *tls.Config `description:"TLS config in case wildcard certs are used"`
Email string `description:"Email address used for registration"`
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
StorageFile string `description:"File used for certificates storage."`
OnDemand bool `description:"Enable on demand certificate. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."`
CAServer string `description:"CA server to use."`
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
storageLock sync.RWMutex
}
//Domains parse []Domain
@@ -92,193 +212,60 @@ type Domain struct {
SANs []string
}
func (a *ACME) init() error {
if a.ACMELogging {
acme.Logger = fmtlog.New(os.Stderr, "legolog: ", fmtlog.LstdFlags)
} else {
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
}
// no certificates in TLS config, so we add a default one
cert, err := generateDefaultCertificate()
if err != nil {
return err
}
a.defaultCertificate = cert
// TODO: to remove in the futurs
if len(a.StorageFile) > 0 && len(a.Storage) == 0 {
log.Warnf("ACME.StorageFile is deprecated, use ACME.Storage instead")
a.Storage = a.StorageFile
}
a.jobs = channels.NewInfiniteChannel()
return nil
}
// CreateConfig creates a tls.config from using ACME configuration
func (a *ACME) CreateConfig(tlsConfig *tls.Config, CheckOnDemandDomain func(domain string) bool) error {
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
// CreateClusterConfig creates a tls.config using ACME configuration in cluster mode
func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
err := a.init()
if err != nil {
return err
}
if len(a.Storage) == 0 {
return errors.New("Empty Store, please provide a key for certs storage")
}
a.checkOnDemandDomain = checkOnDemandDomain
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
tlsConfig.GetCertificate = a.getCertificate
a.TLSConfig = tlsConfig
listener := func(object cluster.Object) error {
account := object.(*Account)
account.Init()
if !leadership.IsLeader() {
a.client, err = a.buildACMEClient(account)
if err != nil {
log.Errorf("Error building ACME client %+v: %s", object, err.Error())
}
}
return nil
if len(a.StorageFile) == 0 {
return errors.New("Empty StorageFile, please provide a filename for certs storage")
}
datastore, err := cluster.NewDataStore(
leadership.Pool.Ctx(),
staert.KvSource{
Store: leadership.Store,
Prefix: a.Storage,
},
&Account{},
listener)
if err != nil {
return err
}
a.store = datastore
a.challengeProvider = &challengeProvider{store: a.store}
ticker := time.NewTicker(24 * time.Hour)
leadership.Pool.AddGoCtx(func(ctx context.Context) {
log.Infof("Starting ACME renew job...")
defer log.Infof("Stopped ACME renew job...")
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
a.renewCertificates()
}
}
})
leadership.AddListener(func(elected bool) error {
if elected {
object, err := a.store.Load()
if err != nil {
return err
}
transaction, object, err := a.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
account.Init()
var needRegister bool
if account == nil || len(account.Email) == 0 {
account, err = NewAccount(a.Email)
if err != nil {
return err
}
needRegister = true
}
if err != nil {
return err
}
a.client, err = a.buildACMEClient(account)
if err != nil {
return err
}
if needRegister {
// New users will need to register; be sure to save it
log.Debugf("Register...")
reg, err := a.client.Register()
if err != nil {
return err
}
account.Registration = reg
}
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debugf("AgreeToTOS...")
err = a.client.AgreeToTOS()
if err != nil {
// Let's Encrypt Subscriber Agreement renew ?
reg, err := a.client.QueryRegistration()
if err != nil {
return err
}
account.Registration = reg
err = a.client.AgreeToTOS()
if err != nil {
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
}
}
err = transaction.Commit(account)
if err != nil {
return err
}
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
}
return nil
})
return nil
}
// CreateLocalConfig creates a tls.config using local ACME configuration
func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
err := a.init()
if err != nil {
return err
}
if len(a.Storage) == 0 {
return errors.New("Empty Store, please provide a filename for certs storage")
}
a.checkOnDemandDomain = checkOnDemandDomain
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
tlsConfig.GetCertificate = a.getCertificate
a.TLSConfig = tlsConfig
localStore := NewLocalStore(a.Storage)
a.store = localStore
a.challengeProvider = &challengeProvider{store: a.store}
var needRegister bool
var account *Account
if fileInfo, fileErr := os.Stat(a.Storage); fileErr == nil && fileInfo.Size() != 0 {
log.Infof("Loading ACME Account...")
// load account
object, err := localStore.Load()
log.Debugf("Generating default certificate...")
if len(tlsConfig.Certificates) == 0 {
// no certificates in TLS config, so we add a default one
cert, err := generateDefaultCertificate()
if err != nil {
return err
}
tlsConfig.Certificates = append(tlsConfig.Certificates, *cert)
}
var account *Account
var needRegister bool
// if certificates in storage, load them
if fileInfo, err := os.Stat(a.StorageFile); err == nil && fileInfo.Size() != 0 {
log.Infof("Loading ACME certificates...")
// load account
account, err = a.loadAccount(a)
if err != nil {
return err
}
account = object.(*Account)
} else {
log.Infof("Generating ACME Account...")
account, err = NewAccount(a.Email)
// Create a user. New accounts need an email and private key to start
privateKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return err
}
account = &Account{
Email: a.Email,
PrivateKey: x509.MarshalPKCS1PrivateKey(privateKey),
}
account.DomainsCertificate = DomainsCertificates{Certs: []*DomainsCertificate{}, lock: &sync.RWMutex{}}
needRegister = true
}
a.client, err = a.buildACMEClient(account)
client, err := a.buildACMEClient(account)
if err != nil {
return err
}
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.DNS01})
wrapperChallengeProvider := newWrapperChallengeProvider()
client.SetChallengeProvider(acme.TLSSNI01, wrapperChallengeProvider)
if needRegister {
// New users will need to register; be sure to save it
log.Infof("Register...")
reg, err := a.client.Register()
reg, err := client.Register()
if err != nil {
return err
}
@@ -287,330 +274,196 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debugf("AgreeToTOS...")
err = a.client.AgreeToTOS()
err = client.AgreeToTOS()
if err != nil {
// Let's Encrypt Subscriber Agreement renew ?
reg, err := a.client.QueryRegistration()
reg, err := client.QueryRegistration()
if err != nil {
return err
}
account.Registration = reg
err = a.client.AgreeToTOS()
err = client.AgreeToTOS()
if err != nil {
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
}
}
// save account
transaction, _, err := a.store.Begin()
if err != nil {
return err
}
err = transaction.Commit(account)
err = a.saveAccount(account)
if err != nil {
return err
}
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
safe.Go(func() {
a.retrieveCertificates(client, account)
if err := a.renewCertificates(client, account); err != nil {
log.Errorf("Error renewing ACME certificate %+v: %s", account, err.Error())
}
})
tlsConfig.GetCertificate = func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
if challengeCert, ok := wrapperChallengeProvider.getCertificate(clientHello.ServerName); ok {
return challengeCert, nil
}
if domainCert, ok := account.DomainsCertificate.getCertificateForDomain(clientHello.ServerName); ok {
return domainCert.tlsCert, nil
}
if a.OnDemand {
if CheckOnDemandDomain != nil && !CheckOnDemandDomain(clientHello.ServerName) {
return nil, nil
}
return a.loadCertificateOnDemand(client, account, clientHello)
}
return nil, nil
}
ticker := time.NewTicker(24 * time.Hour)
safe.Go(func() {
for range ticker.C {
a.renewCertificates()
for {
select {
case <-ticker.C:
if err := a.renewCertificates(client, account); err != nil {
log.Errorf("Error renewing ACME certificate %+v: %s", account, err.Error())
}
}
}
})
return nil
}
func (a *ACME) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
domain := types.CanonicalDomain(clientHello.ServerName)
account := a.store.Get().(*Account)
if providedCertificate := a.getProvidedCertificate([]string{domain}); providedCertificate != nil {
return providedCertificate, nil
}
if challengeCert, ok := a.challengeProvider.getCertificate(domain); ok {
log.Debugf("ACME got challenge %s", domain)
return challengeCert, nil
}
if domainCert, ok := account.DomainsCertificate.getCertificateForDomain(domain); ok {
log.Debugf("ACME got domain cert %s", domain)
return domainCert.tlsCert, nil
}
if a.OnDemand {
if a.checkOnDemandDomain != nil && !a.checkOnDemandDomain(domain) {
return nil, nil
}
return a.loadCertificateOnDemand(clientHello)
}
log.Debugf("ACME got nothing %s", domain)
return nil, nil
}
func (a *ACME) retrieveCertificates() {
a.jobs.In() <- func() {
log.Infof("Retrieving ACME certificates...")
for _, domain := range a.Domains {
// check if cert isn't already loaded
account := a.store.Get().(*Account)
if _, exists := account.DomainsCertificate.exists(domain); !exists {
domains := []string{}
domains = append(domains, domain.Main)
domains = append(domains, domain.SANs...)
certificateResource, err := a.getDomainsCertificates(domains)
if err != nil {
log.Errorf("Error getting ACME certificate for domain %s: %s", domains, err.Error())
continue
}
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error creating ACME store transaction from domain %s: %s", domain, err.Error())
continue
}
account = object.(*Account)
_, err = account.DomainsCertificate.addCertificateForDomains(certificateResource, domain)
if err != nil {
log.Errorf("Error adding ACME certificate for domain %s: %s", domains, err.Error())
continue
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
continue
}
func (a *ACME) retrieveCertificates(client *acme.Client, account *Account) {
log.Infof("Retrieving ACME certificates...")
for _, domain := range a.Domains {
// check if cert isn't already loaded
if _, exists := account.DomainsCertificate.exists(domain); !exists {
domains := []string{}
domains = append(domains, domain.Main)
domains = append(domains, domain.SANs...)
certificateResource, err := a.getDomainsCertificates(client, domains)
if err != nil {
log.Errorf("Error getting ACME certificate for domain %s: %s", domains, err.Error())
continue
}
}
log.Infof("Retrieved ACME certificates")
}
}
func (a *ACME) renewCertificates() {
a.jobs.In() <- func() {
log.Debugf("Testing certificate renew...")
account := a.store.Get().(*Account)
for _, certificateResource := range account.DomainsCertificate.Certs {
if certificateResource.needRenew() {
log.Debugf("Renewing certificate %+v", certificateResource.Domains)
renewedCert, err := a.client.RenewCertificate(acme.CertificateResource{
Domain: certificateResource.Certificate.Domain,
CertURL: certificateResource.Certificate.CertURL,
CertStableURL: certificateResource.Certificate.CertStableURL,
PrivateKey: certificateResource.Certificate.PrivateKey,
Certificate: certificateResource.Certificate.Certificate,
}, true, OSCPMustStaple)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
log.Debugf("Renewed certificate %+v", certificateResource.Domains)
renewedACMECert := &Certificate{
Domain: renewedCert.Domain,
CertURL: renewedCert.CertURL,
CertStableURL: renewedCert.CertStableURL,
PrivateKey: renewedCert.PrivateKey,
Certificate: renewedCert.Certificate,
}
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
account = object.(*Account)
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
continue
}
_, err = account.DomainsCertificate.addCertificateForDomains(certificateResource, domain)
if err != nil {
log.Errorf("Error adding ACME certificate for domain %s: %s", domains, err.Error())
continue
}
if err = a.saveAccount(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
continue
}
}
}
log.Infof("Retrieved ACME certificates")
}
func dnsOverrideDelay(delay int) error {
var err error
if delay > 0 {
log.Debugf("Delaying %d seconds rather than validating DNS propagation", delay)
acme.PreCheckDNS = func(_, _ string) (bool, error) {
time.Sleep(time.Duration(delay) * time.Second)
return true, nil
func (a *ACME) renewCertificates(client *acme.Client, account *Account) error {
log.Debugf("Testing certificate renew...")
for _, certificateResource := range account.DomainsCertificate.Certs {
if certificateResource.needRenew() {
log.Debugf("Renewing certificate %+v", certificateResource.Domains)
renewedCert, err := client.RenewCertificate(acme.CertificateResource{
Domain: certificateResource.Certificate.Domain,
CertURL: certificateResource.Certificate.CertURL,
CertStableURL: certificateResource.Certificate.CertStableURL,
PrivateKey: certificateResource.Certificate.PrivateKey,
Certificate: certificateResource.Certificate.Certificate,
}, true)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
log.Debugf("Renewed certificate %+v", certificateResource.Domains)
renewedACMECert := &Certificate{
Domain: renewedCert.Domain,
CertURL: renewedCert.CertURL,
CertStableURL: renewedCert.CertStableURL,
PrivateKey: renewedCert.PrivateKey,
Certificate: renewedCert.Certificate,
}
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
if err = a.saveAccount(account); err != nil {
log.Errorf("Error saving ACME account: %v", err)
continue
}
}
} else if delay < 0 {
err = fmt.Errorf("Invalid negative DelayDontCheckDNS: %d", delay)
}
return err
return nil
}
func (a *ACME) buildACMEClient(account *Account) (*acme.Client, error) {
log.Debugf("Building ACME client...")
func (a *ACME) buildACMEClient(Account *Account) (*acme.Client, error) {
caServer := "https://acme-v01.api.letsencrypt.org/directory"
if len(a.CAServer) > 0 {
caServer = a.CAServer
}
client, err := acme.NewClient(caServer, account, acme.RSA4096)
client, err := acme.NewClient(caServer, Account, acme.RSA4096)
if err != nil {
return nil, err
}
if len(a.DNSProvider) > 0 {
log.Debugf("Using DNS Challenge provider: %s", a.DNSProvider)
err = dnsOverrideDelay(a.DelayDontCheckDNS)
if err != nil {
return nil, err
}
var provider acme.ChallengeProvider
provider, err = dns.NewDNSChallengeProviderByName(a.DNSProvider)
if err != nil {
return nil, err
}
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.TLSSNI01})
err = client.SetChallengeProvider(acme.DNS01, provider)
} else {
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.DNS01})
err = client.SetChallengeProvider(acme.TLSSNI01, a.challengeProvider)
}
if err != nil {
return nil, err
}
return client, nil
}
func (a *ACME) loadCertificateOnDemand(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
domain := types.CanonicalDomain(clientHello.ServerName)
account := a.store.Get().(*Account)
if certificateResource, ok := account.DomainsCertificate.getCertificateForDomain(domain); ok {
func (a *ACME) loadCertificateOnDemand(client *acme.Client, Account *Account, clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
if certificateResource, ok := Account.DomainsCertificate.getCertificateForDomain(clientHello.ServerName); ok {
return certificateResource.tlsCert, nil
}
certificate, err := a.getDomainsCertificates([]string{domain})
Certificate, err := a.getDomainsCertificates(client, []string{clientHello.ServerName})
if err != nil {
return nil, err
}
log.Debugf("Got certificate on demand for domain %s", domain)
transaction, object, err := a.store.Begin()
log.Debugf("Got certificate on demand for domain %s", clientHello.ServerName)
cert, err := Account.DomainsCertificate.addCertificateForDomains(Certificate, Domain{Main: clientHello.ServerName})
if err != nil {
return nil, err
}
account = object.(*Account)
cert, err := account.DomainsCertificate.addCertificateForDomains(certificate, Domain{Main: domain})
if err != nil {
return nil, err
}
if err = transaction.Commit(account); err != nil {
if err = a.saveAccount(Account); err != nil {
return nil, err
}
return cert.tlsCert, nil
}
// LoadCertificateForDomains loads certificates from ACME for given domains
func (a *ACME) LoadCertificateForDomains(domains []string) {
a.jobs.In() <- func() {
log.Debugf("LoadCertificateForDomains %v...", domains)
if len(domains) == 0 {
// no domain
return
}
domains = fun.Map(types.CanonicalDomain, domains).([]string)
// Check provided certificates
if a.getProvidedCertificate(domains) != nil {
return
}
operation := func() error {
if a.client == nil {
return fmt.Errorf("ACME client still not built")
}
return nil
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting ACME client: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 30 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting ACME client: %v", err)
return
}
account := a.store.Get().(*Account)
var domain Domain
if len(domains) > 1 {
domain = Domain{Main: domains[0], SANs: domains[1:]}
} else {
domain = Domain{Main: domains[0]}
}
if _, exists := account.DomainsCertificate.exists(domain); exists {
// domain already exists
return
}
certificate, err := a.getDomainsCertificates(domains)
if err != nil {
log.Errorf("Error getting ACME certificates %+v : %v", domains, err)
return
}
log.Debugf("Got certificate for domains %+v", domains)
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error creating transaction %+v : %v", domains, err)
return
}
account = object.(*Account)
_, err = account.DomainsCertificate.addCertificateForDomains(certificate, domain)
if err != nil {
log.Errorf("Error adding ACME certificates %+v : %v", domains, err)
return
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %v", account, err)
return
}
func (a *ACME) loadAccount(acmeConfig *ACME) (*Account, error) {
a.storageLock.RLock()
defer a.storageLock.RUnlock()
Account := Account{
DomainsCertificate: DomainsCertificates{},
}
file, err := ioutil.ReadFile(acmeConfig.StorageFile)
if err != nil {
return nil, err
}
if err := json.Unmarshal(file, &Account); err != nil {
return nil, err
}
err = Account.DomainsCertificate.init()
if err != nil {
return nil, err
}
log.Infof("Loaded ACME config from storage %s", acmeConfig.StorageFile)
return &Account, nil
}
// Get the provided certificate that matches a domain list (Main and SANs)
func (a *ACME) getProvidedCertificate(domains []string) *tls.Certificate {
// Use regex to test for provided certs that might have been added into TLSConfig
providedCertMatch := false
log.Debugf("Look for provided certificate to validate %s...", domains)
for k := range a.TLSConfig.NameToCertificate {
selector := "^" + strings.Replace(k, "*.", "[^\\.]*\\.?", -1) + "$"
for _, domainToCheck := range domains {
providedCertMatch, _ = regexp.MatchString(selector, domainToCheck)
if !providedCertMatch {
break
}
}
if providedCertMatch {
log.Debugf("Got provided certificate for domains %s", domains)
return a.TLSConfig.NameToCertificate[k]
}
func (a *ACME) saveAccount(Account *Account) error {
a.storageLock.Lock()
defer a.storageLock.Unlock()
// write account to file
data, err := json.MarshalIndent(Account, "", " ")
if err != nil {
return err
}
log.Debugf("No provided certificate found for domains %s, get ACME certificate.", domains)
return nil
return ioutil.WriteFile(a.StorageFile, data, 0644)
}
func (a *ACME) getDomainsCertificates(domains []string) (*Certificate, error) {
domains = fun.Map(types.CanonicalDomain, domains).([]string)
func (a *ACME) getDomainsCertificates(client *acme.Client, domains []string) (*Certificate, error) {
log.Debugf("Loading ACME certificates %s...", domains)
bundle := true
certificate, failures := a.client.ObtainCertificate(domains, bundle, nil, OSCPMustStaple)
certificate, failures := client.ObtainCertificate(domains, bundle, nil)
if len(failures) > 0 {
log.Error(failures)
return nil, fmt.Errorf("Cannot obtain certificates %s+v", failures)
@@ -624,12 +477,3 @@ func (a *ACME) getDomainsCertificates(domains []string) (*Certificate, error) {
Certificate: certificate.Certificate,
}, nil
}
func (a *ACME) runJobs() {
safe.Go(func() {
for job := range a.jobs.Out() {
function := job.(func())
function()
}
})
}


@@ -1,17 +1,9 @@
package acme
import (
"crypto/tls"
"encoding/base64"
"net/http"
"net/http/httptest"
"reflect"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/xenolf/lego/acme"
)
func TestDomainsSet(t *testing.T) {
@@ -70,10 +62,8 @@ func TestDomainsSetAppend(t *testing.T) {
}
func TestCertificatesRenew(t *testing.T) {
foo1Cert, foo1Key, _ := generateKeyPair("foo1.com", time.Now())
foo2Cert, foo2Key, _ := generateKeyPair("foo2.com", time.Now())
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
lock: &sync.RWMutex{},
Certs: []*DomainsCertificate{
{
Domains: Domain{
@@ -83,8 +73,55 @@ func TestCertificatesRenew(t *testing.T) {
Domain: "foo1.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo1Key,
Certificate: foo1Cert,
PrivateKey: []byte(`
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA6OqHGdwGy20+3Jcz9IgfN4IR322X2Hhwk6n8Hss/Ws7FeTZo
PvXW8uHeI1bmQJsy9C6xo3odzO64o7prgMZl5eDw5fk1mmUij3J3nM3gwtc/Cc+8
ADXGldauASdHBFTRvWQge0Pv/Q5U0fyL2VCHoR9mGv4CQ7nRNKPus0vYJMbXoTbO
8z4sIbNz3Ov9o/HGMRb8D0rNPTMdC62tHSbiO1UoxLXr9dcBOGt786AsiRTJ8bq9
GCVQgzd0Wftb8z6ddW2YuWrmExlkHdfC4oG0D5SU1QB4ldPyl7fhVWlfHwC1NX+c
RnDSEeYkAcdvvIekdM/yH+z62XhwToM0E9TCzwIDAQABAoIBACq3EC3S50AZeeTU
qgeXizoP1Z1HKQjfFa5PB1jSZ30M3LRdIQMi7NfASo/qmPGSROb5RUS42YxC34PP
ZXXJbNiaxzM13/m/wHXURVFxhF3XQc1X1p+nPRMvutulS2Xk9E4qdbaFgBbFsRKN
oUwqc6U97+jVWq72/gIManNhXnNn1n1SRLBEkn+WStMPn6ZvWRlpRMjhy0c1mpwg
u6em92HvMvfKPQ60naUhdKp+q0rsLp2YKWjiytos9ENSYI5gAGLIDhKeqiD8f92E
4FGPmNRipwxCE2SSvZFlM26tRloWVcBPktRN79hUejE8iopiqVS0+4h/phZ2wG0D
18cqVpECgYEA+qmagnhm0LLvwVkUN0B2nRARQEFinZDM4Hgiv823bQvc9I8dVTqJ
aIQm5y4Y5UA3xmyDsRoO7GUdd0oVeh9GwTONzMRCOny/mOuOC51wXPhKHhI0O22u
sfbOHszl+bxl6ZQMUJa2/I8YIWBLU5P+fTgrfNwBEgZ3YPwUV5tyHNcCgYEA7eAv
pjQkbJNRq/fv/67sojN7N9QoH84egN5cZFh5d8PJomnsvy5JDV4WaG1G6mJpqjdD
YRVdFw5oZ4L8yCVdCeK9op896Uy51jqvfSe3+uKmNqE0qDHgaLubQNI8yYc5sacW
fYJBmDR6rNIeE7Q2240w3CdKfREuXdDnhyTTEskCgYBFeAnFTP8Zqe2+hSSQJ4J4
BwLw7u4Yww+0yja/N5E1XItRD/TOMRnx6GYrvd/ScVjD2kEpLRKju2ZOMC8BmHdw
hgwvitjcAsTK6cWFPI3uhjVsXhkxuzUmR0Naz+iQrQEFmi1LjGmMV1AVt+1IbYSj
SZTr1sFJMJeXPmWY3hDjIwKBgQC4H9fCJoorIL0PB5NVreishHzT8fw84ibqSTPq
2DDtazcf6C3AresN1c4ydqN1uUdg4fXdp9OujRBzTwirQ4CIrmFrBye89g7CrBo6
Hgxivh06G/3OUw0JBG5f9lvnAiy+Pj9CVxi+36A1NU7ioZP0zY0MW71koW/qXlFY
YkCfQQKBgBqwND/c3mPg7iY4RMQ9XjrKfV9o6FMzA51lAinjujHlNgsBmqiR951P
NA3kWZQ73D3IxeLEMaGHpvS7andPN3Z2qPhe+FbJKcF6ZZNTrFQkh/Fpz3wmYPo1
GIL4+09kNgMRWapaROqI+/3+qJQ+GVJZIPfYC0poJOO6vYqifWe8
-----END RSA PRIVATE KEY-----
`),
Certificate: []byte(`
-----BEGIN CERTIFICATE-----
MIIC+TCCAeGgAwIBAgIJAK78ukR/Qu4rMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCGZvbzEuY29tMB4XDTE2MDYxOTIyMDMyM1oXDTI2MDYxNzIyMDMyM1owEzER
MA8GA1UEAwwIZm9vMS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDo6ocZ3AbLbT7clzP0iB83ghHfbZfYeHCTqfweyz9azsV5Nmg+9dby4d4jVuZA
mzL0LrGjeh3M7rijumuAxmXl4PDl+TWaZSKPcneczeDC1z8Jz7wANcaV1q4BJ0cE
VNG9ZCB7Q+/9DlTR/IvZUIehH2Ya/gJDudE0o+6zS9gkxtehNs7zPiwhs3Pc6/2j
8cYxFvwPSs09Mx0Lra0dJuI7VSjEtev11wE4a3vzoCyJFMnxur0YJVCDN3RZ+1vz
Pp11bZi5auYTGWQd18LigbQPlJTVAHiV0/KXt+FVaV8fALU1f5xGcNIR5iQBx2+8
h6R0z/If7PrZeHBOgzQT1MLPAgMBAAGjUDBOMB0GA1UdDgQWBBRFLH1wF6BT51uq
yWNqBnCrPFIglzAfBgNVHSMEGDAWgBRFLH1wF6BT51uqyWNqBnCrPFIglzAMBgNV
HRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQAr7aH3Db6TeAZkg4Zd7SoF2q11
erzv552PgQUyezMZcRBo2q1ekmUYyy2600CBiYg51G+8oUqjJKiKnBuaqbMX7pFa
FsL7uToZCGA57cBaVejeB+p24P5bxoJGKCMeZcEBe5N93Tqu5WBxNEX7lQUo6TSs
gSN2Olf3/grNKt5V4BduSIQZ+YHlPUWLTaz5B1MXKSUqjmabARP9lhjO14u9USvi
dMBDFskJySQ6SUfz3fyoXELoDOVbRZETuSodpw+aFCbEtbcQCLT3A0FG+BEPayZH
tt19zKUlr6e+YFpyjQPGZ7ZkY7iMgHEkhKrXx2DiZ1+cif3X1xfXWQr0S5+E
-----END CERTIFICATE-----
`),
},
},
{
@@ -95,19 +132,113 @@ func TestCertificatesRenew(t *testing.T) {
Domain: "foo2.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo2Key,
Certificate: foo2Cert,
PrivateKey: []byte(`
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA7rIVuSrZ3FfYXhR3qaWwfVcgiqKS//yXFzNqkJS6mz9nRCNT
lPawvrCFIRKdR7UO7xD7A5VTcbrGOAaTvrEaH7mB/4FGL+gN4AiTbVFpKXngAYEW
A3//zeBZ7XUSWaQ+CNC+l796JeoDvQD++KwCke4rVD1pGN1hpVEeGhwzyKOYPKLo
4+AGVe1LFWw4U/v8Iil1/gBBehZBILuhASpXy4W132LJPl76/EbGqh0nVz2UlFqU
HRxO+2U2ba4YIpI+0/VOQ9Cq/TzHSUdTTLfBHE/Qb+aDBfptMWTRvAngLqUglOcZ
Fi6SAljxEkJO6z6btmoVUWsoKBpbIHDC5++dZwIDAQABAoIBAAD8rYhRfAskNdnV
vdTuwXcTOCg6md8DHWDULpmgc9EWhwfKGZthFcQEGNjVKd9VCVXFvTP7lxe+TPmI
VW4Rb2k4LChxUWf7TqthfbKTBptMTLfU39Ft4xHn3pdTx5qlSjhhHJimCwxDFnbe
nS9MDsqpsHYtttSKfc/gMP6spS4sNPZ/r9zseT3eWkBEhn+FQABxJiuPcQ7q7S+Q
uOghmr7f3FeYvizQOhBtULsLrK/hsmQIIB4amS1QlpNWKbIoiUPNPjCA5PVQyAER
waYjuc7imBbeD98L/z8bRTlEskSKjtPSEXGVHa9OYdBU+02Ci6TjKztUp6Ho7JE9
tcHj+eECgYEA+9Ntv6RqIdpT/4/52JYiR+pOem3U8tweCOmUqm/p/AWyfAJTykqt
cJ8RcK1MfM+uoa5Sjm8hIcA2XPVEqH2J50PC4w04Q3xtfsz3xs7KJWXQCoha8D0D
ZIFNroEPnld0qOuJzpIIteXTrCLhSu17ZhN+Wk+5gJ7Ewu/QMM5OPjECgYEA8qbw
zfwSjE6jkrqO70jzqSxgi2yjo0vMqv+BNBuhxhDTBXnKQI1KsHoiS0FkSLSJ9+DS
CT3WEescD2Lumdm2s9HXvaMmnDSKBY58NqCGsNzZifSgmj1H/yS9FX8RXfSjXcxq
RDvTbD52/HeaCiOxHZx8JjmJEb+ZKJC4MDvjtxcCgYBM516GvgEjYXdxfliAiijh
6W4Z+Vyk5g/ODPc3rYG5U0wUjuljx7Z7xDghPusy2oGsIn5XvRxTIE35yXU0N1Jb
69eiWzEpeuA9bv7kGdal4RfNf6K15wwYL1y3w/YvFuorg/LLwNEkK5Ge6e//X9Ll
c2KM1fgCjXntRitAHGDMoQKBgDnkgodioLpA+N3FDN0iNqAiKlaZcOFA8G/LzfO0
tAAhe3dO+2YzT6KTQSNbUqXWDSTKytHRowVbZrJ1FCA4xVJZunNQPaH/Fv8EY7ZU
zk3cIzq61qZ2AHtrNIGwc2BLQb7bSm9FJsgojxLlJidNJLC/6Q7lo0JMyCnZfVhk
sYu5AoGAZt/MfyFTKm674UddSNgGEt86PyVYbLMnRoAXOaNB38AE12kaYHPil1tL
FnL8OQLpbX5Qo2JGgeZRlpMJ4Jxw2zzvUKr/n+6khaLxHmtX48hMu2QM7ZvnkZCs
Kkgz6v+Wcqm94ugtl3HSm+u9xZzVQxN6gu/jZQv3VpQiAZHjPYc=
-----END RSA PRIVATE KEY-----
`),
Certificate: []byte(`
-----BEGIN CERTIFICATE-----
MIIC+TCCAeGgAwIBAgIJAK25/Z9Jz6IBMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCGZvbzIuY29tMB4XDTE2MDYyMDA5MzUyNloXDTI2MDYxODA5MzUyNlowEzER
MA8GA1UEAwwIZm9vMi5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDushW5KtncV9heFHeppbB9VyCKopL//JcXM2qQlLqbP2dEI1OU9rC+sIUhEp1H
tQ7vEPsDlVNxusY4BpO+sRofuYH/gUYv6A3gCJNtUWkpeeABgRYDf//N4FntdRJZ
pD4I0L6Xv3ol6gO9AP74rAKR7itUPWkY3WGlUR4aHDPIo5g8oujj4AZV7UsVbDhT
+/wiKXX+AEF6FkEgu6EBKlfLhbXfYsk+Xvr8RsaqHSdXPZSUWpQdHE77ZTZtrhgi
kj7T9U5D0Kr9PMdJR1NMt8EcT9Bv5oMF+m0xZNG8CeAupSCU5xkWLpICWPESQk7r
Ppu2ahVRaygoGlsgcMLn751nAgMBAAGjUDBOMB0GA1UdDgQWBBQ6FZWqB9qI4NN+
2jFY6xH8uoUTnTAfBgNVHSMEGDAWgBQ6FZWqB9qI4NN+2jFY6xH8uoUTnTAMBgNV
HRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQCRhuf2dQhIEOmSOGgtRELF2wB6
NWXt0lCty9x4u+zCvITXV8Z0C34VQGencO3H2bgyC3ZxNpPuwZfEc2Pxe8W6bDc/
OyLckk9WLo00Tnr2t7rDOeTjEGuhXFZkhIbJbKdAH8cEXrxKR8UXWtZgTv/b8Hv/
g6tbeH6TzBsdMoFtUCsyWxygYwnLU+quuYvE2s9FiCegf2mdYTCh/R5J5n/51gfB
uC+NakKMfaCvNg3mOAFSYC/0r0YcKM/5ldKGTKTCVJAMhnmBnyRc/70rKkVRFy2g
iIjUFs+9aAgfCiL0WlyyXYAtIev2gw4FHUVlcT/xKks+x8Kgj6e5LTIrRRwW
-----END CERTIFICATE-----
`),
},
},
},
}
foo1Cert, foo1Key, _ = generateKeyPair("foo1.com", time.Now())
newCertificate := &Certificate{
Domain: "foo1.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo1Key,
Certificate: foo1Cert,
PrivateKey: []byte(`
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA1OdSuXK2zeSLf0UqgrI4pjkpaqhra++pnda4Li4jXo151svi
Sn7DSynJOoq1jbfRJAoyDhxsBC4S4RuD54U5elJ4wLPZXmHRsvb+NwiHs9VmDqwu
It21btuqeNMebkab5cnDnC6KKufMhXRcRAlluYXyCkQe/+N+LlUQd6Js34TixMpk
eQOX4/OVrokSyVRnIq4u+o0Ufe7z5+41WVH63tcy7Hwi7244aLUzZCs+QQa2Dw6f
qEwjbonr974fM68UxDjTZEQy9u24yDzajhDBp1OTAAklh7U+li3g9dSyNVBFXqEu
nW2fyBvLqeJOSTihqfcrACB/YYhYOX94vMXELQIDAQABAoIBAFYK3t3fxI1VTiMz
WsjTKh3TgC+AvVkz1ILbojfXoae22YS7hUrCDD82NgMYx+LsZPOBw1T8m5Lc4/hh
3F8W8nHDHtYSWUjRk6QWOgsXwXAmUEahw0uH+qlA0ZZfDC9ZDexCLHHURTat03Qj
4J4GhjwCLB2GBlk4IWisLCmNVR7HokrpfIw4oM1aB5E21Tl7zh/x7ikRijEkUsKw
7YhaMeLJqBnMnAdV63hhF7FaDRjl8P2s/3octz/6pqDIABrDrUW3KAkNYCZIWdhF
Kk0wRMbZ/WrYT9GIGoJe7coQC7ezTrlrEkAFEIPGHCLkgXB/0TyuSy0yY59e4zmi
VvHoWUECgYEA/rOL2KJ/p+TZW7+YbsUzs0+F+M+G6UCr0nWfYN9MKmNtmns3eLDG
+pIpBMc5mjqeJR/sCCdkD8OqHC202Y8e4sr0pKSBeBofh2BmXtpyu3QQ50Pa63RS
SK6mYUrFqPmFFDbNGpFI4sIeI+Vf6hm96FQPnyPtUTGqk39m0RbWM/UCgYEA1f04
Nf3wbqwqIHZjYpPmymfjleyMn3hGUjpi7pmI6inXGMk3nkeG1cbOhnfPxL5BWD12
3RqHI2B4Z4r0BMyjctDNb1TxhMIpm5+PKm5KeeKfoYA85IS0mEeq6VdMm3mL1x/O
3LYvcUvAEVf6pWX/+ZFLMudqhF3jbTrdNOC6ZFkCgYBKpEeJdyW+CD0CvEVpwPUD
yXxTjE3XMZKpHLtWYlop2fWW3iFFh1jouci3k8L3xdHuw0oioZibXhYOJ/7l+yFs
CVpknakrj0xKGiAmEBKriLojbClN80rh7fzoakc+29D6OY0mCgm4GndGwcO4EU8s
NOZXFupHbyy0CRQSloSzuQKBgQC1Z/MtIlefGuijmHlsakGuuR+gS2ZzEj1bHBAe
gZ4mFM46PuqdjblqpR0TtaI3AarXqVOI4SJLBU9NR+jR4MF3Zjeh9/q/NvKa8Usn
B1Svu0TkXphAiZenuKnVIqLY8tNvzZFKXlAd1b+/dDwR10SHR3rebnxINmfEg7Bf
UVvyEQKBgAEjI5O6LSkLNpbVn1l2IO8u8D2RkFqs/Sbx78uFta3f9Gddzb4wMnt3
jVzymghCLp4Qf1ump/zC5bcQ8L97qmnjJ+H8X9HwmkqetuI362JNnz+12YKVDIWi
wI7SJ8BwDqYMrLw6/nE+degn39KedGDH8gz5cZcdlKTZLjbuBOfU
-----END RSA PRIVATE KEY-----
`),
Certificate: []byte(`
-----BEGIN CERTIFICATE-----
MIIC+TCCAeGgAwIBAgIJAPQiOiQcwYaRMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCGZvbzEuY29tMB4XDTE2MDYxOTIyMTE1NFoXDTI2MDYxNzIyMTE1NFowEzER
MA8GA1UEAwwIZm9vMS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDU51K5crbN5It/RSqCsjimOSlqqGtr76md1rguLiNejXnWy+JKfsNLKck6irWN
t9EkCjIOHGwELhLhG4PnhTl6UnjAs9leYdGy9v43CIez1WYOrC4i3bVu26p40x5u
RpvlycOcLooq58yFdFxECWW5hfIKRB7/434uVRB3omzfhOLEymR5A5fj85WuiRLJ
VGciri76jRR97vPn7jVZUfre1zLsfCLvbjhotTNkKz5BBrYPDp+oTCNuiev3vh8z
rxTEONNkRDL27bjIPNqOEMGnU5MACSWHtT6WLeD11LI1UEVeoS6dbZ/IG8up4k5J
OKGp9ysAIH9hiFg5f3i8xcQtAgMBAAGjUDBOMB0GA1UdDgQWBBQPfkS5ehpstmSb
8CGJE7GxSCxl2DAfBgNVHSMEGDAWgBQPfkS5ehpstmSb8CGJE7GxSCxl2DAMBgNV
HRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQA99A+itS9ImdGRGgHZ5fSusiEq
wkK5XxGyagL1S0f3VM8e78VabSvC0o/xdD7DHVg6Az8FWxkkksH6Yd7IKfZZUzvs
kXQhlOwWpxgmguSmAs4uZTymIoMFRVj3nG664BcXkKu4Yd9UXKNOWP59zgvrCJMM
oIsmYiq5u0MFpM31BwfmmW3erqIcfBI9OJrmr1XDzlykPZNWtUSSfVuNQ8d4bim9
XH8RfVLeFbqDydSTCHIFvYthH/ESbpRCiGJHoJ8QLfOkhD1k2fI0oJZn5RVtG2W8
bZME3gHPYCk1QFZUptriMCJ5fMjCgxeOTR+FAkstb/lTRuCc4UyILJguIMar
-----END CERTIFICATE-----
`),
}
err := domainsCertificates.renewCertificates(
@@ -125,172 +256,3 @@ func TestCertificatesRenew(t *testing.T) {
t.Errorf("Expected new certificate %+v \nGot %+v", newCertificate, domainsCertificates.Certs[0].Certificate)
}
}
func TestRemoveDuplicates(t *testing.T) {
now := time.Now()
fooCert, fooKey, _ := generateKeyPair("foo.com", now)
foo24Cert, foo24Key, _ := generateKeyPair("foo.com", now.Add(24*time.Hour))
foo48Cert, foo48Key, _ := generateKeyPair("foo.com", now.Add(48*time.Hour))
barCert, barKey, _ := generateKeyPair("bar.com", now)
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo24Key,
Certificate: foo24Cert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo48Key,
Certificate: foo48Cert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: fooKey,
Certificate: fooCert,
},
},
{
Domains: Domain{
Main: "bar.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "bar.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: barKey,
Certificate: barCert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo48Key,
Certificate: foo48Cert,
},
},
},
}
domainsCertificates.Init()
if len(domainsCertificates.Certs) != 2 {
t.Errorf("Expected domainsCertificates length %d %+v\nGot %+v", 2, domainsCertificates.Certs, len(domainsCertificates.Certs))
}
for _, cert := range domainsCertificates.Certs {
switch cert.Domains.Main {
case "bar.com":
continue
case "foo.com":
if !cert.tlsCert.Leaf.NotAfter.Equal(now.Add(48 * time.Hour).Truncate(1 * time.Second)) {
t.Errorf("Bad expiration %s date for domain %+v, now %s", cert.tlsCert.Leaf.NotAfter.String(), cert, now.Add(48*time.Hour).Truncate(1*time.Second).String())
}
default:
t.Errorf("Unknown domain %+v", cert)
}
}
}
func TestNoPreCheckOverride(t *testing.T) {
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
err := dnsOverrideDelay(0)
if err != nil {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS != nil {
t.Errorf("Unexpected change to acme.PreCheckDNS when leaving DNS verification as is.")
}
}
func TestSillyPreCheckOverride(t *testing.T) {
err := dnsOverrideDelay(-5)
if err == nil {
t.Errorf("Missing expected error in dnsOverrideDelay!")
}
}
func TestPreCheckOverride(t *testing.T) {
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
err := dnsOverrideDelay(5)
if err != nil {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS == nil {
t.Errorf("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}
func TestAcmeClientCreation(t *testing.T) {
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
// Lengthy setup to avoid external web requests - oh for easier golang testing!
account := &Account{Email: "f@f"}
account.PrivateKey, _ = base64.StdEncoding.DecodeString(`
MIIBPAIBAAJBAMp2Ni92FfEur+CAvFkgC12LT4l9D53ApbBpDaXaJkzzks+KsLw9zyAxvlrfAyTCQ
7tDnEnIltAXyQ0uOFUUdcMCAwEAAQJAK1FbipATZcT9cGVa5x7KD7usytftLW14heQUPXYNV80r/3
lmnpvjL06dffRpwkYeN8DATQF/QOcy3NNNGDw/4QIhAPAKmiZFxA/qmRXsuU8Zhlzf16WrNZ68K64
asn/h3qZrAiEA1+wFR3WXCPIolOvd7AHjfgcTKQNkoMPywU4FYUNQ1AkCIQDv8yk0qPjckD6HVCPJ
llJh9MC0svjevGtNlxJoE3lmEQIhAKXy1wfZ32/XtcrnENPvi6lzxI0T94X7s5pP3aCoPPoJAiEAl
cijFkALeQp/qyeXdFld2v9gUN3eCgljgcl0QweRoIc=---`)
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(`{
"new-authz": "https://foo/acme/new-authz",
"new-cert": "https://foo/acme/new-cert",
"new-reg": "https://foo/acme/new-reg",
"revoke-cert": "https://foo/acme/revoke-cert"
}`))
}))
defer ts.Close()
a := ACME{DNSProvider: "manual", DelayDontCheckDNS: 10, CAServer: ts.URL}
client, err := a.buildACMEClient(account)
if err != nil {
t.Errorf("Error in buildACMEClient: %v", err)
}
if client == nil {
t.Errorf("No client from buildACMEClient!")
}
if acme.PreCheckDNS == nil {
t.Errorf("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}
func TestAcme_getProvidedCertificate(t *testing.T) {
mm := make(map[string]*tls.Certificate)
mm["*.containo.us"] = &tls.Certificate{}
mm["traefik.acme.io"] = &tls.Certificate{}
a := ACME{TLSConfig: &tls.Config{NameToCertificate: mm}}
domains := []string{"traefik.containo.us", "trae.containo.us"}
certificate := a.getProvidedCertificate(domains)
assert.NotNil(t, certificate)
domains = []string{"traefik.acme.io", "trae.acme.io"}
certificate = a.getProvidedCertificate(domains)
assert.Nil(t, certificate)
}
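A minimal sketch, not part of this diff, of the wildcard-to-regex conversion that getProvidedCertificate applies and that the test above exercises; the helper name is an assumption for illustration only.
package acme // illustrative placement only, not in the diff
import (
	"regexp"
	"strings"
)
// matchesProvidedCert mirrors the selector built in getProvidedCertificate:
// "*.containo.us" becomes the pattern "^[^\.]*\.?containo.us$", so both
// "traefik.containo.us" and "containo.us" match it, while "trae.acme.io"
// does not match the non-wildcard name "traefik.acme.io".
func matchesProvidedCert(certName, domain string) bool {
	selector := "^" + strings.Replace(certName, "*.", "[^\\.]*\\.?", -1) + "$"
	matched, _ := regexp.MatchString(selector, domain)
	return matched
}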


@@ -2,96 +2,55 @@ package acme
import (
"crypto/tls"
"fmt"
"strings"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"crypto/x509"
"github.com/xenolf/lego/acme"
)
var _ acme.ChallengeProviderTimeout = (*challengeProvider)(nil)
type challengeProvider struct {
store cluster.Store
lock sync.RWMutex
type wrapperChallengeProvider struct {
challengeCerts map[string]*tls.Certificate
lock sync.RWMutex
}
func (c *challengeProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
log.Debugf("Challenge GetCertificate %s", domain)
if !strings.HasSuffix(domain, ".acme.invalid") {
return nil, false
func newWrapperChallengeProvider() *wrapperChallengeProvider {
return &wrapperChallengeProvider{
challengeCerts: map[string]*tls.Certificate{},
}
}
func (c *wrapperChallengeProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
c.lock.RLock()
defer c.lock.RUnlock()
account := c.store.Get().(*Account)
if account.ChallengeCerts == nil {
return nil, false
if cert, ok := c.challengeCerts[domain]; ok {
return cert, true
}
account.Init()
var result *tls.Certificate
operation := func() error {
for _, cert := range account.ChallengeCerts {
for _, dns := range cert.certificate.Leaf.DNSNames {
if domain == dns {
result = cert.certificate
return nil
}
}
}
return fmt.Errorf("Cannot find challenge cert for domain %s", domain)
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting cert: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting cert: %v", err)
return nil, false
}
return result, true
return nil, false
}
func (c *challengeProvider) Present(domain, token, keyAuth string) error {
log.Debugf("Challenge Present %s", domain)
cert, _, err := TLSSNI01ChallengeCert(keyAuth)
func (c *wrapperChallengeProvider) Present(domain, token, keyAuth string) error {
cert, _, err := acme.TLSSNI01ChallengeCert(keyAuth)
if err != nil {
return err
}
cert.Leaf, err = x509.ParseCertificate(cert.Certificate[0])
if err != nil {
return err
}
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
for i := range cert.Leaf.DNSNames {
c.challengeCerts[cert.Leaf.DNSNames[i]] = &cert
}
account := object.(*Account)
if account.ChallengeCerts == nil {
account.ChallengeCerts = map[string]*ChallengeCert{}
}
account.ChallengeCerts[domain] = &cert
return transaction.Commit(account)
return nil
}
func (c *challengeProvider) CleanUp(domain, token, keyAuth string) error {
log.Debugf("Challenge CleanUp %s", domain)
func (c *wrapperChallengeProvider) CleanUp(domain, token, keyAuth string) error {
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
delete(account.ChallengeCerts, domain)
return transaction.Commit(account)
}
func (c *challengeProvider) Timeout() (timeout, interval time.Duration) {
return 60 * time.Second, 5 * time.Second
delete(c.challengeCerts, domain)
return nil
}


@@ -1,8 +1,6 @@
package acme
import (
"crypto"
"crypto/ecdsa"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
@@ -17,44 +15,34 @@ import (
)
func generateDefaultCertificate() (*tls.Certificate, error) {
rsaPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, err
}
rsaPrivPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(rsaPrivKey)})
randomBytes := make([]byte, 100)
_, err := rand.Read(randomBytes)
_, err = rand.Read(randomBytes)
if err != nil {
return nil, err
}
zBytes := sha256.Sum256(randomBytes)
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.traefik.default", z[:32], z[32:])
certPEM, keyPEM, err := generateKeyPair(domain, time.Time{})
tempCertPEM, err := generatePemCert(rsaPrivKey, domain)
if err != nil {
return nil, err
}
certificate, err := tls.X509KeyPair(certPEM, keyPEM)
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
if err != nil {
return nil, err
}
return &certificate, nil
}
func generateKeyPair(domain string, expiration time.Time) ([]byte, []byte, error) {
rsaPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, nil, err
}
keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(rsaPrivKey)})
certPEM, err := generatePemCert(rsaPrivKey, domain, expiration)
if err != nil {
return nil, nil, err
}
return certPEM, keyPEM, nil
}
func generatePemCert(privKey *rsa.PrivateKey, domain string, expiration time.Time) ([]byte, error) {
derBytes, err := generateDerCert(privKey, expiration, domain)
func generatePemCert(privKey *rsa.PrivateKey, domain string) ([]byte, error) {
derBytes, err := generateDerCert(privKey, time.Time{}, domain)
if err != nil {
return nil, err
}
@@ -88,48 +76,3 @@ func generateDerCert(privKey *rsa.PrivateKey, expiration time.Time, domain strin
return x509.CreateCertificate(rand.Reader, &template, &template, &privKey.PublicKey, privKey)
}
// TLSSNI01ChallengeCert returns a certificate and target domain for the `tls-sni-01` challenge
func TLSSNI01ChallengeCert(keyAuth string) (ChallengeCert, string, error) {
// generate a new RSA key for the certificates
var tempPrivKey crypto.PrivateKey
tempPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return ChallengeCert{}, "", err
}
rsaPrivKey := tempPrivKey.(*rsa.PrivateKey)
rsaPrivPEM := pemEncode(rsaPrivKey)
zBytes := sha256.Sum256([]byte(keyAuth))
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.acme.invalid", z[:32], z[32:])
tempCertPEM, err := generatePemCert(rsaPrivKey, domain, time.Time{})
if err != nil {
return ChallengeCert{}, "", err
}
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
if err != nil {
return ChallengeCert{}, "", err
}
return ChallengeCert{Certificate: tempCertPEM, PrivateKey: rsaPrivPEM, certificate: &certificate}, domain, nil
}
func pemEncode(data interface{}) []byte {
var pemBlock *pem.Block
switch key := data.(type) {
case *ecdsa.PrivateKey:
keyBytes, _ := x509.MarshalECPrivateKey(key)
pemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}
case *rsa.PrivateKey:
pemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
break
case *x509.CertificateRequest:
pemBlock = &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: key.Raw}
break
case []byte:
pemBlock = &pem.Block{Type: "CERTIFICATE", Bytes: []byte(data.([]byte))}
}
return pem.EncodeToMemory(pemBlock)
}


@@ -1,97 +0,0 @@
package acme
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"sync"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
)
var _ cluster.Store = (*LocalStore)(nil)
// LocalStore is a store using a file as storage
type LocalStore struct {
file string
storageLock sync.RWMutex
account *Account
}
// NewLocalStore creates a LocalStore
func NewLocalStore(file string) *LocalStore {
return &LocalStore{
file: file,
}
}
// Get atomically retrieves a struct from the file storage
func (s *LocalStore) Get() cluster.Object {
s.storageLock.RLock()
defer s.storageLock.RUnlock()
return s.account
}
// Load loads file into store
func (s *LocalStore) Load() (cluster.Object, error) {
s.storageLock.Lock()
defer s.storageLock.Unlock()
account := &Account{}
err := checkPermissions(s.file)
if err != nil {
return nil, err
}
f, err := os.Open(s.file)
if err != nil {
return nil, err
}
defer f.Close()
file, err := ioutil.ReadAll(f)
if err != nil {
return nil, err
}
if err := json.Unmarshal(file, &account); err != nil {
return nil, err
}
account.Init()
s.account = account
log.Infof("Loaded ACME config from store %s", s.file)
return account, nil
}
// Begin creates a transaction with the file storage.
func (s *LocalStore) Begin() (cluster.Transaction, cluster.Object, error) {
s.storageLock.Lock()
return &localTransaction{LocalStore: s}, s.account, nil
}
var _ cluster.Transaction = (*localTransaction)(nil)
type localTransaction struct {
*LocalStore
dirty bool
}
// Commit sets an object in the file storage
func (t *localTransaction) Commit(object cluster.Object) error {
t.LocalStore.account = object.(*Account)
defer t.storageLock.Unlock()
if t.dirty {
return fmt.Errorf("transaction already used, please begin a new one")
}
// write account to file
data, err := json.MarshalIndent(object, "", " ")
if err != nil {
return err
}
err = ioutil.WriteFile(t.file, data, 0600)
if err != nil {
return err
}
t.dirty = true
return nil
}


@@ -1,25 +0,0 @@
// +build !windows
package acme
import (
"fmt"
"os"
)
// Check file permissions
func checkPermissions(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
fi, err := f.Stat()
if err != nil {
return err
}
if fi.Mode().Perm()&0077 != 0 {
return fmt.Errorf("permissions %o for %s are too open, please use 600", fi.Mode().Perm(), name)
}
return nil
}
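A minimal sketch, not part of this diff: because checkPermissions rejects any storage file that is group- or world-readable, a caller that pre-creates the ACME storage file would write it with owner-only permissions, matching the 0600 mode used by localTransaction.Commit above. The helper name and file content are assumptions for illustration.
package acme // illustrative placement only, not in the diff
import "io/ioutil"
// ensureStorageFile creates an empty ACME storage file that passes checkPermissions.
func ensureStorageFile(name string) error {
	return ioutil.WriteFile(name, []byte("{}"), 0600)
}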


@@ -1,6 +0,0 @@
package acme
// Do not check file permissions on Windows right now
func checkPermissions(name string) error {
return nil
}


@@ -6,7 +6,7 @@ package main
import (
"net/http"
"github.com/containous/traefik/log"
log "github.com/Sirupsen/logrus"
)
// OxyLogger implements oxy Logger interface with logrus.


@@ -1,35 +1,25 @@
FROM golang:1.8
FROM golang:1.6.2
# Install a more recent version of mercurial to avoid mismatching results
# between glide run on a decently updated host system and the build container.
RUN awk '$1 ~ "^deb" { $3 = $3 "-backports"; print; exit }' /etc/apt/sources.list > /etc/apt/sources.list.d/backports.list && \
DEBIAN_FRONTEND=noninteractive apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -t jessie-backports --yes --no-install-recommends mercurial=3.9.1-1~bpo8+1 && \
rm -fr /var/lib/apt/lists/
RUN go get github.com/jteeuwen/go-bindata/... \
RUN go get github.com/Masterminds/glide \
&& go get github.com/jteeuwen/go-bindata/... \
&& go get github.com/golang/lint/golint \
&& go get github.com/kisielk/errcheck \
&& go get github.com/client9/misspell/cmd/misspell \
&& go get github.com/mattfarina/glide-hash \
&& go get github.com/sgotti/glide-vc
&& go get github.com/kisielk/errcheck
# Which docker version to test on
ARG DOCKER_VERSION=17.03.1
# Which glide version to test on
ARG GLIDE_VERSION=v0.12.3
# Download glide
RUN mkdir -p /usr/local/bin \
&& curl -fL https://github.com/Masterminds/glide/releases/download/${GLIDE_VERSION}/glide-${GLIDE_VERSION}-linux-amd64.tar.gz \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
ARG DOCKER_VERSION=1.10.1
# Download docker
RUN mkdir -p /usr/local/bin \
&& curl -fL https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}-ce.tgz \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
RUN set -ex; \
curl https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION} -o /usr/local/bin/docker-${DOCKER_VERSION}; \
chmod +x /usr/local/bin/docker-${DOCKER_VERSION}
# Set the default Docker to be run
RUN ln -s /usr/local/bin/docker-${DOCKER_VERSION} /usr/local/bin/docker
WORKDIR /go/src/github.com/containous/traefik
COPY . /go/src/github.com/containous/traefik
COPY glide.yaml glide.yaml
COPY glide.lock glide.lock
RUN glide install
COPY . /go/src/github.com/containous/traefik

circle.yml Normal file

@@ -0,0 +1,36 @@
machine:
pre:
- sudo docker -d -e lxc -s btrfs -H tcp://0.0.0.0:2375:
background: true
- curl --retry 15 --retry-delay 3 -v http://172.17.42.1:2375/version
environment:
REPO: $CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME
DOCKER_HOST: tcp://172.17.42.1:2375
MAKE_DOCKER_HOST: $DOCKER_HOST
VERSION: v1.0.alpha.$CIRCLE_BUILD_NUM
dependencies:
pre:
- docker version
- go get github.com/tcnksm/ghr
- make validate
override:
- make binary
test:
override:
- make test-unit
- make test-integration
post:
- make crossbinary
- make image
deployment:
hub:
branch: master
commands:
- ghr -t $GITHUB_TOKEN -u $CIRCLE_PROJECT_USERNAME -r $CIRCLE_PROJECT_REPONAME --prerelease ${VERSION} dist/
- docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
- docker push ${REPO,,}:latest
- docker tag ${REPO,,}:latest ${REPO,,}:${VERSION}
- docker push ${REPO,,}:${VERSION}


@@ -1,255 +0,0 @@
package cluster
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/staert"
"github.com/containous/traefik/job"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/docker/libkv/store"
"github.com/satori/go.uuid"
)
// Metadata stores Object plus metadata
type Metadata struct {
object Object
Object []byte
Lock string
}
// NewMetadata returns new Metadata
func NewMetadata(object Object) *Metadata {
return &Metadata{object: object}
}
// Marshall marshalls object
func (m *Metadata) Marshall() error {
var err error
m.Object, err = json.Marshal(m.object)
return err
}
func (m *Metadata) unmarshall() error {
if len(m.Object) == 0 {
return nil
}
return json.Unmarshal(m.Object, m.object)
}
// Listener is called when Object has been changed in KV store
type Listener func(Object) error
var _ Store = (*Datastore)(nil)
// Datastore holds a struct synced in a KV store
type Datastore struct {
kv staert.KvSource
ctx context.Context
localLock *sync.RWMutex
meta *Metadata
lockKey string
listener Listener
}
// NewDataStore creates a Datastore
func NewDataStore(ctx context.Context, kvSource staert.KvSource, object Object, listener Listener) (*Datastore, error) {
datastore := Datastore{
kv: kvSource,
ctx: ctx,
meta: &Metadata{object: object},
lockKey: kvSource.Prefix + "/lock",
localLock: &sync.RWMutex{},
listener: listener,
}
err := datastore.watchChanges()
if err != nil {
return nil, err
}
return &datastore, nil
}
func (d *Datastore) watchChanges() error {
stopCh := make(chan struct{})
kvCh, err := d.kv.Watch(d.lockKey, stopCh)
if err != nil {
return err
}
go func() {
ctx, cancel := context.WithCancel(d.ctx)
operation := func() error {
for {
select {
case <-ctx.Done():
stopCh <- struct{}{}
return nil
case _, ok := <-kvCh:
if !ok {
cancel()
return err
}
err = d.reload()
if err != nil {
return err
}
// log.Debugf("Datastore object change received: %+v", d.meta)
if d.listener != nil {
err := d.listener(d.meta.object)
if err != nil {
log.Errorf("Error calling datastore listener: %s", err)
}
}
}
}
}
notify := func(err error, time time.Duration) {
log.Errorf("Error in watch datastore: %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
if err != nil {
log.Errorf("Error in watch datastore: %v", err)
}
}()
return nil
}
func (d *Datastore) reload() error {
log.Debugf("Datastore reload")
d.localLock.Lock()
err := d.kv.LoadConfig(d.meta)
if err != nil {
d.localLock.Unlock()
return err
}
err = d.meta.unmarshall()
if err != nil {
d.localLock.Unlock()
return err
}
d.localLock.Unlock()
return nil
}
// Begin creates a transaction with the KV store.
func (d *Datastore) Begin() (Transaction, Object, error) {
id := uuid.NewV4().String()
log.Debugf("Transaction %s begins", id)
remoteLock, err := d.kv.NewLock(d.lockKey, &store.LockOptions{TTL: 20 * time.Second, Value: []byte(id)})
if err != nil {
return nil, nil, err
}
stopCh := make(chan struct{})
ctx, cancel := context.WithCancel(d.ctx)
var errLock error
go func() {
_, errLock = remoteLock.Lock(stopCh)
cancel()
}()
select {
case <-ctx.Done():
if errLock != nil {
return nil, nil, errLock
}
case <-d.ctx.Done():
stopCh <- struct{}{}
return nil, nil, d.ctx.Err()
}
// we got the lock! Now make sure we are synced with KV store
operation := func() error {
meta := d.get()
if meta.Lock != id {
return fmt.Errorf("Object lock value: expected %s, got %s", id, meta.Lock)
}
return nil
}
notify := func(err error, time time.Duration) {
log.Errorf("Datastore sync error: %v, retrying in %s", err, time)
err = d.reload()
if err != nil {
log.Errorf("Error reloading: %+v", err)
}
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err = backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
return nil, nil, fmt.Errorf("Datastore cannot sync: %v", err)
}
// we synced with KV store, we can now return Setter
return &datastoreTransaction{
Datastore: d,
remoteLock: remoteLock,
id: id,
}, d.meta.object, nil
}
func (d *Datastore) get() *Metadata {
d.localLock.RLock()
defer d.localLock.RUnlock()
return d.meta
}
// Load atomically loads a struct from the KV store
func (d *Datastore) Load() (Object, error) {
d.localLock.Lock()
defer d.localLock.Unlock()
err := d.kv.LoadConfig(d.meta)
if err != nil {
return nil, err
}
err = d.meta.unmarshall()
if err != nil {
return nil, err
}
return d.meta.object, nil
}
// Get atomically retrieves a struct from the KV store
func (d *Datastore) Get() Object {
d.localLock.RLock()
defer d.localLock.RUnlock()
return d.meta.object
}
var _ Transaction = (*datastoreTransaction)(nil)
type datastoreTransaction struct {
*Datastore
remoteLock store.Locker
dirty bool
id string
}
// Commit sets an object in the KV store
func (s *datastoreTransaction) Commit(object Object) error {
s.localLock.Lock()
defer s.localLock.Unlock()
if s.dirty {
return fmt.Errorf("Transaction already used, please begin a new one")
}
s.Datastore.meta.object = object
err := s.Datastore.meta.Marshall()
if err != nil {
return fmt.Errorf("Marshall error: %s", err)
}
err = s.kv.StoreConfig(s.Datastore.meta)
if err != nil {
return fmt.Errorf("StoreConfig error: %s", err)
}
err = s.remoteLock.Unlock()
if err != nil {
return fmt.Errorf("Unlock error: %s", err)
}
s.dirty = true
log.Debugf("Transaction committed %s", s.id)
return nil
}


@@ -1,104 +0,0 @@
package cluster
import (
"context"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/docker/leadership"
)
// Leadership allows leadership election using a KV store
type Leadership struct {
*safe.Pool
*types.Cluster
candidate *leadership.Candidate
leader *safe.Safe
listeners []LeaderListener
}
// NewLeadership creates a leadership
func NewLeadership(ctx context.Context, cluster *types.Cluster) *Leadership {
return &Leadership{
Pool: safe.NewPool(ctx),
Cluster: cluster,
candidate: leadership.NewCandidate(cluster.Store, cluster.Store.Prefix+"/leader", cluster.Node, 20*time.Second),
listeners: []LeaderListener{},
leader: safe.New(false),
}
}
// LeaderListener is called when leadership has changed
type LeaderListener func(elected bool) error
// Participate tries to be a leader
func (l *Leadership) Participate(pool *safe.Pool) {
pool.GoCtx(func(ctx context.Context) {
log.Debugf("Node %s running for election", l.Cluster.Node)
defer log.Debugf("Node %s no more running for election", l.Cluster.Node)
backOff := backoff.NewExponentialBackOff()
operation := func() error {
return l.run(ctx, l.candidate)
}
notify := func(err error, time time.Duration) {
log.Errorf("Leadership election error %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), backOff, notify)
if err != nil {
log.Errorf("Cannot elect leadership %+v", err)
}
})
}
// AddListener adds a leadership listener
func (l *Leadership) AddListener(listener LeaderListener) {
l.listeners = append(l.listeners, listener)
}
// Resign resigns from being a leader
func (l *Leadership) Resign() {
l.candidate.Resign()
log.Infof("Node %s resigned", l.Cluster.Node)
}
func (l *Leadership) run(ctx context.Context, candidate *leadership.Candidate) error {
electedCh, errCh := candidate.RunForElection()
for {
select {
case elected := <-electedCh:
l.onElection(elected)
case err := <-errCh:
return err
case <-ctx.Done():
l.candidate.Resign()
return nil
}
}
}
func (l *Leadership) onElection(elected bool) {
if elected {
log.Infof("Node %s elected leader ♚", l.Cluster.Node)
l.leader.Set(true)
l.Start()
} else {
log.Infof("Node %s elected slave ♝", l.Cluster.Node)
l.leader.Set(false)
l.Stop()
}
for _, listener := range l.listeners {
err := listener(elected)
if err != nil {
log.Errorf("Error calling Leadership listener: %s", err)
}
}
}
// IsLeader returns true if current node is leader
func (l *Leadership) IsLeader() bool {
return l.leader.Get().(bool)
}
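A minimal sketch, not part of this diff, of how a caller wires the Leadership type above; the function name and the work done inside the listener are assumptions, while NewLeadership, AddListener and Participate are taken from the code as shown.
package cluster // illustrative placement only, not in the diff
import (
	"context"
	"github.com/containous/traefik/safe"
	"github.com/containous/traefik/types"
)
// participate runs for election on the cluster KV store and reacts to changes.
func participate(ctx context.Context, c *types.Cluster, pool *safe.Pool) *Leadership {
	l := NewLeadership(ctx, c)
	l.AddListener(func(elected bool) error {
		// Start or stop leader-only work (certificate renewal, jobs, ...) here.
		return nil
	})
	l.Participate(pool)
	return l
}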


@@ -1,16 +0,0 @@
package cluster
// Object is the struct to store
type Object interface{}
// Store is a generic interface to represents a storage
type Store interface {
Load() (Object, error)
Get() Object
Begin() (Transaction, Object, error)
}
// Transaction allows to set a struct in the KV store
type Transaction interface {
Commit(object Object) error
}
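Both LocalStore (file-backed) and Datastore (KV-backed) implement these interfaces; a minimal sketch, not part of this diff, of the Begin/Commit read-modify-write pattern the ACME code above relies on. The helper and its mutate callback are assumptions for illustration.
package cluster // illustrative placement only, not in the diff
// update performs a locked read-modify-write cycle: Begin takes the store
// lock and returns the current object, Commit persists the new value and
// releases the lock.
func update(s Store, mutate func(Object) Object) error {
	transaction, object, err := s.Begin()
	if err != nil {
		return err
	}
	return transaction.Commit(mutate(object))
}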


@@ -1,152 +0,0 @@
package main
import (
"bytes"
"encoding/json"
"fmt"
"net/url"
"os/exec"
"regexp"
"runtime"
"text/template"
"github.com/containous/flaeg"
"github.com/mvdan/xurls"
)
var (
bugtracker = "https://github.com/containous/traefik/issues/new"
bugTemplate = `<!--
PLEASE READ THIS MESSAGE.
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
For other types of questions, consider using one of:
- the Traefik community Slack channel: https://traefik.herokuapp.com
- StackOverflow: https://stackoverflow.com/questions/tagged/traefik
HOW TO WRITE A GOOD ISSUE?
- If possible, use the command` + "`" + `traefik bug` + "`" + `. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message; use Markdown syntax: https://help.github.com/articles/github-flavored-markdown
-->
### Do you want to request a *feature* or report a *bug*?
### What did you do?
### What did you expect to see?
### What did you see instead?
### Output of ` + "`" + `traefik version` + "`" + `: (_What version of Traefik are you using?_)
` + "```" + `
{{.Version}}
` + "```" + `
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
` + "```" + `toml
{{.Configuration}}
` + "```" + `
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output in debug mode (` + "`" + `--debug` + "`" + ` switch)
` + "```" + `
(paste your output here)
` + "```" + `
`
)
// newBugCmd builds a new Bug command
func newBugCmd(traefikConfiguration interface{}, traefikPointersConfiguration interface{}) *flaeg.Command {
//version Command init
return &flaeg.Command{
Name: "bug",
Description: `Report an issue on Traefik bugtracker`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
var version bytes.Buffer
if err := getVersionPrint(&version); err != nil {
return err
}
tmpl, err := template.New("").Parse(bugTemplate)
if err != nil {
return err
}
configJSON, err := json.MarshalIndent(traefikConfiguration, "", " ")
if err != nil {
return err
}
v := struct {
Version string
Configuration string
}{
Version: version.String(),
Configuration: anonymize(string(configJSON)),
}
var bug bytes.Buffer
if err := tmpl.Execute(&bug, v); err != nil {
return err
}
body := bug.String()
URL := bugtracker + "?body=" + url.QueryEscape(body)
if err := openBrowser(URL); err != nil {
fmt.Print("Please file a new issue at " + bugtracker + " using this template:\n\n")
fmt.Print(body)
}
return nil
},
Metadata: map[string]string{
"parseAllSources": "true",
},
}
}
func openBrowser(URL string) error {
var err error
switch runtime.GOOS {
case "linux":
err = exec.Command("xdg-open", URL).Start()
case "windows":
err = exec.Command("rundll32", "url.dll,FileProtocolHandler", URL).Start()
case "darwin":
err = exec.Command("open", URL).Start()
default:
err = fmt.Errorf("unsupported platform")
}
return err
}
func anonymize(input string) string {
replace := "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
mailExp := regexp.MustCompile(`\w[-._\w]*\w@\w[-._\w]*\w\.\w{2,3}"`)
return xurls.Relaxed.ReplaceAllString(mailExp.ReplaceAllString(input, replace), replace)
}

View File

@@ -1,313 +0,0 @@
package main
import (
"crypto/tls"
"encoding/json"
"fmt"
fmtlog "log"
"net/http"
"os"
"path/filepath"
"reflect"
"runtime"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containous/flaeg"
"github.com/containous/staert"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/middlewares"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/server"
"github.com/containous/traefik/types"
"github.com/containous/traefik/version"
"github.com/coreos/go-systemd/daemon"
"github.com/docker/libkv/store"
"github.com/satori/go.uuid"
)
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
//traefik config inits
traefikConfiguration := server.NewTraefikConfiguration()
traefikPointersConfiguration := server.NewTraefikDefaultPointersConfiguration()
//traefik Command init
traefikCmd := &flaeg.Command{
Name: "traefik",
Description: `traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
Complete documentation is available at https://traefik.io`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
run(traefikConfiguration)
return nil
},
}
//storeconfig Command init
var kv *staert.KvSource
var err error
storeconfigCmd := &flaeg.Command{
Name: "storeconfig",
Description: `Store the static traefik configuration into a key-value store. Traefik will not start.`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
if kv == nil {
return fmt.Errorf("Error using command storeconfig, no Key-value store defined")
}
jsonConf, err := json.Marshal(traefikConfiguration.GlobalConfiguration)
if err != nil {
return err
}
fmtlog.Printf("Storing configuration: %s\n", jsonConf)
err = kv.StoreConfig(traefikConfiguration.GlobalConfiguration)
if err != nil {
return err
}
if traefikConfiguration.GlobalConfiguration.ACME != nil && len(traefikConfiguration.GlobalConfiguration.ACME.StorageFile) > 0 {
// convert ACME json file to KV store
store := acme.NewLocalStore(traefikConfiguration.GlobalConfiguration.ACME.StorageFile)
object, err := store.Load()
if err != nil {
return err
}
meta := cluster.NewMetadata(object)
err = meta.Marshall()
if err != nil {
return err
}
source := staert.KvSource{
Store: kv,
Prefix: traefikConfiguration.GlobalConfiguration.ACME.Storage,
}
err = source.StoreConfig(meta)
if err != nil {
return err
}
}
return nil
},
Metadata: map[string]string{
"parseAllSources": "true",
},
}
//init flaeg source
f := flaeg.New(traefikCmd, os.Args[1:])
//add custom parsers
f.AddParser(reflect.TypeOf(server.EntryPoints{}), &server.EntryPoints{})
f.AddParser(reflect.TypeOf(server.DefaultEntryPoints{}), &server.DefaultEntryPoints{})
f.AddParser(reflect.TypeOf(types.Constraints{}), &types.Constraints{})
f.AddParser(reflect.TypeOf(kubernetes.Namespaces{}), &kubernetes.Namespaces{})
f.AddParser(reflect.TypeOf([]acme.Domain{}), &acme.Domains{})
f.AddParser(reflect.TypeOf(types.Buckets{}), &types.Buckets{})
//add commands
f.AddCommand(newVersionCmd())
f.AddCommand(newBugCmd(traefikConfiguration, traefikPointersConfiguration))
f.AddCommand(storeconfigCmd)
usedCmd, err := f.GetCommand()
if err != nil {
fmtlog.Println(err)
os.Exit(-1)
}
if _, err := f.Parse(usedCmd); err != nil {
fmtlog.Printf("Error parsing command: %s\n", err)
os.Exit(-1)
}
//staert init
s := staert.NewStaert(traefikCmd)
//init toml source
toml := staert.NewTomlSource("traefik", []string{traefikConfiguration.ConfigFile, "/etc/traefik/", "$HOME/.traefik/", "."})
//add sources to staert
s.AddSource(toml)
s.AddSource(f)
if _, err := s.LoadConfig(); err != nil {
fmtlog.Println(fmt.Errorf("Error reading TOML config file %s : %s", toml.ConfigFileUsed(), err))
os.Exit(-1)
}
traefikConfiguration.ConfigFile = toml.ConfigFileUsed()
kv, err = CreateKvSource(traefikConfiguration)
if err != nil {
fmtlog.Printf("Error creating kv store: %s\n", err)
os.Exit(-1)
}
// If a KV store is enabled and no sub-command is called in args
if kv != nil && usedCmd == traefikCmd {
if traefikConfiguration.Cluster == nil {
traefikConfiguration.Cluster = &types.Cluster{Node: uuid.NewV4().String()}
}
if traefikConfiguration.Cluster.Store == nil {
traefikConfiguration.Cluster.Store = &types.Store{Prefix: kv.Prefix, Store: kv.Store}
}
s.AddSource(kv)
if _, err := s.LoadConfig(); err != nil {
fmtlog.Printf("Error loading configuration: %s\n", err)
os.Exit(-1)
}
}
if err := s.Run(); err != nil {
fmtlog.Printf("Error running traefik: %s\n", err)
os.Exit(-1)
}
os.Exit(0)
}
func run(traefikConfiguration *server.TraefikConfiguration) {
fmtlog.SetFlags(fmtlog.Lshortfile | fmtlog.LstdFlags)
// load global configuration
globalConfiguration := traefikConfiguration.GlobalConfiguration
http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = globalConfiguration.MaxIdleConnsPerHost
if globalConfiguration.InsecureSkipVerify {
http.DefaultTransport.(*http.Transport).TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
}
loggerMiddleware := middlewares.NewLogger(globalConfiguration.AccessLogsFile)
defer loggerMiddleware.Close()
if globalConfiguration.File != nil && len(globalConfiguration.File.Filename) == 0 {
// no filename, setting to global config file
if len(traefikConfiguration.ConfigFile) != 0 {
globalConfiguration.File.Filename = traefikConfiguration.ConfigFile
} else {
log.Errorln("Error using file configuration backend, no filename defined")
}
}
if len(globalConfiguration.EntryPoints) == 0 {
globalConfiguration.EntryPoints = map[string]*server.EntryPoint{"http": {Address: ":80"}}
globalConfiguration.DefaultEntryPoints = []string{"http"}
}
if globalConfiguration.Debug {
globalConfiguration.LogLevel = "DEBUG"
}
// logging
level, err := logrus.ParseLevel(strings.ToLower(globalConfiguration.LogLevel))
if err != nil {
log.Error("Error getting level", err)
}
log.SetLevel(level)
if len(globalConfiguration.TraefikLogsFile) > 0 {
dir := filepath.Dir(globalConfiguration.TraefikLogsFile)
err := os.MkdirAll(dir, 0755)
if err != nil {
log.Errorf("Failed to create log path %s: %s", dir, err)
}
fi, err := os.OpenFile(globalConfiguration.TraefikLogsFile, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
defer func() {
if err := fi.Close(); err != nil {
log.Error("Error closing file", err)
}
}()
if err != nil {
log.Error("Error opening file", err)
} else {
log.SetOutput(fi)
log.SetFormatter(&logrus.TextFormatter{DisableColors: true, FullTimestamp: true, DisableSorting: true})
}
} else {
log.SetFormatter(&logrus.TextFormatter{FullTimestamp: true, DisableSorting: true})
}
jsonConf, _ := json.Marshal(globalConfiguration)
log.Infof("Traefik version %s built on %s", version.Version, version.BuildDate)
if globalConfiguration.CheckNewVersion {
ticker := time.NewTicker(24 * time.Hour)
safe.Go(func() {
version.CheckNewVersion()
for {
select {
case <-ticker.C:
version.CheckNewVersion()
}
}
})
}
if len(traefikConfiguration.ConfigFile) != 0 {
log.Infof("Using TOML configuration file %s", traefikConfiguration.ConfigFile)
}
log.Debugf("Global configuration loaded %s", string(jsonConf))
svr := server.NewServer(globalConfiguration)
svr.Start()
defer svr.Close()
sent, err := daemon.SdNotify(false, "READY=1")
if !sent && err != nil {
log.Error("Fail to notify", err)
}
t, err := daemon.SdWatchdogEnabled(false)
if err != nil {
log.Error("Problem with watchdog", err)
} else if t != 0 {
// Send a ping each half time given
t = t / 2
log.Info("Watchdog activated with timer each ", t)
safe.Go(func() {
tick := time.Tick(t)
for range tick {
if ok, _ := daemon.SdNotify(false, "WATCHDOG=1"); !ok {
log.Error("Fail to tick watchdog")
}
}
})
}
svr.Wait()
log.Info("Shutting down")
}
// CreateKvSource creates KvSource
// TLS support is enabled for Consul and Etcd backends
func CreateKvSource(traefikConfiguration *server.TraefikConfiguration) (*staert.KvSource, error) {
var kv *staert.KvSource
var store store.Store
var err error
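// Only one KV backend is expected to be configured: the first non-nil provider in the switch below (Consul, Etcd, Zookeeper, BoltDB) wins.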
switch {
case traefikConfiguration.Consul != nil:
store, err = traefikConfiguration.Consul.CreateStore()
kv = &staert.KvSource{
Store: store,
Prefix: traefikConfiguration.Consul.Prefix,
}
case traefikConfiguration.Etcd != nil:
store, err = traefikConfiguration.Etcd.CreateStore()
kv = &staert.KvSource{
Store: store,
Prefix: traefikConfiguration.Etcd.Prefix,
}
case traefikConfiguration.Zookeeper != nil:
store, err = traefikConfiguration.Zookeeper.CreateStore()
kv = &staert.KvSource{
Store: store,
Prefix: traefikConfiguration.Zookeeper.Prefix,
}
case traefikConfiguration.Boltdb != nil:
store, err = traefikConfiguration.Boltdb.CreateStore()
kv = &staert.KvSource{
Store: store,
Prefix: traefikConfiguration.Boltdb.Prefix,
}
}
return kv, err
}

View File

@@ -1,63 +0,0 @@
package main
import (
"fmt"
"io"
"os"
"runtime"
"text/template"
"github.com/containous/flaeg"
"github.com/containous/traefik/version"
)
var versionTemplate = `Version: {{.Version}}
Codename: {{.Codename}}
Go version: {{.GoVersion}}
Built: {{.BuildTime}}
OS/Arch: {{.Os}}/{{.Arch}}`
// newVersionCmd builds a new Version command
func newVersionCmd() *flaeg.Command {
//version Command init
return &flaeg.Command{
Name: "version",
Description: `Print version`,
Config: struct{}{},
DefaultPointersConfig: struct{}{},
Run: func() error {
if err := getVersionPrint(os.Stdout); err != nil {
return err
}
fmt.Printf("\n")
return nil
},
}
}
func getVersionPrint(wr io.Writer) error {
tmpl, err := template.New("").Parse(versionTemplate)
if err != nil {
return err
}
v := struct {
Version string
Codename string
GoVersion string
BuildTime string
Os string
Arch string
}{
Version: version.Version,
Codename: version.Codename,
GoVersion: runtime.Version(),
BuildTime: version.BuildDate,
Os: runtime.GOOS,
Arch: runtime.GOARCH,
}
return tmpl.Execute(wr, v)
}

View File

@@ -1,23 +1,20 @@
package main
import (
"crypto/tls"
"errors"
"fmt"
"os"
"regexp"
"strings"
"time"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/provider"
"github.com/containous/traefik/types"
"regexp"
"strings"
"time"
)
// TraefikConfiguration holds GlobalConfiguration and other stuff
type TraefikConfiguration struct {
GlobalConfiguration `mapstructure:",squash"`
ConfigFile string `short:"c" description:"Configuration file to use (TOML)."`
GlobalConfiguration
ConfigFile string `short:"c" description:"Configuration file to use (TOML)."`
}
// GlobalConfiguration holds global configuration (with providers, etc.).
@@ -29,13 +26,11 @@ type GlobalConfiguration struct {
TraefikLogsFile string `description:"Traefik logs file"`
LogLevel string `short:"l" description:"Log level"`
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key'"`
Cluster *types.Cluster `description:"Enable clustering"`
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags"`
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags."`
ACME *acme.ACME `description:"Enable ACME (Let's Encrypt): automatic SSL"`
DefaultEntryPoints DefaultEntryPoints `description:"Entrypoints to be used by frontends that do not specify any entrypoint"`
ProvidersThrottleDuration time.Duration `description:"Backends throttle duration: minimum duration between 2 events from providers before applying a new configuration. It avoids unnecessary reloads if multiples events are sent in a short amount of time."`
MaxIdleConnsPerHost int `description:"If non-zero, controls the maximum idle (keep-alive) to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used"`
InsecureSkipVerify bool `description:"Disable SSL certificate verification"`
Retry *Retry `description:"Enable retry sending request if network error"`
Docker *provider.Docker `description:"Enable Docker backend"`
File *provider.File `description:"Enable File backend"`
@@ -47,8 +42,6 @@ type GlobalConfiguration struct {
Zookeeper *provider.Zookepper `description:"Enable Zookeeper backend"`
Boltdb *provider.BoltDb `description:"Enable Boltdb backend"`
Kubernetes *provider.Kubernetes `description:"Enable Kubernetes backend"`
Mesos *provider.Mesos `description:"Enable Mesos backend"`
Eureka *provider.Eureka `description:"Enable Eureka backend"`
}
// DefaultEntryPoints holds default entry points
@@ -75,9 +68,7 @@ func (dep *DefaultEntryPoints) Set(value string) error {
}
// Get return the EntryPoints map
func (dep *DefaultEntryPoints) Get() interface{} {
return DefaultEntryPoints(*dep)
}
func (dep *DefaultEntryPoints) Get() interface{} { return DefaultEntryPoints(*dep) }
// SetValue sets the EntryPoints map with val
func (dep *DefaultEntryPoints) SetValue(val interface{}) {
@@ -102,7 +93,7 @@ func (ep *EntryPoints) String() string {
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (ep *EntryPoints) Set(value string) error {
regex := regexp.MustCompile("(?:Name:(?P<Name>\\S*))\\s*(?:Address:(?P<Address>\\S*))?\\s*(?:TLS:(?P<TLS>\\S*))?\\s*((?P<TLSACME>TLS))?\\s*(?:CA:(?P<CA>\\S*))?\\s*(?:Redirect.EntryPoint:(?P<RedirectEntryPoint>\\S*))?\\s*(?:Redirect.Regex:(?P<RedirectRegex>\\S*))?\\s*(?:Redirect.Replacement:(?P<RedirectReplacement>\\S*))?\\s*(?:Compress:(?P<Compress>\\S*))?")
regex := regexp.MustCompile("(?:Name:(?P<Name>\\S*))\\s*(?:Address:(?P<Address>\\S*))?\\s*(?:TLS:(?P<TLS>\\S*))?\\s*((?P<TLSACME>TLS))?\\s*(?:Redirect.EntryPoint:(?P<RedirectEntryPoint>\\S*))?\\s*(?:Redirect.Regex:(?P<RedirectRegex>\\S*))?\\s*(?:Redirect.Replacement:(?P<RedirectReplacement>\\S*))?")
match := regex.FindAllStringSubmatch(value, -1)
if match == nil {
return errors.New("Bad EntryPoints format: " + value)
@@ -128,10 +119,6 @@ func (ep *EntryPoints) Set(value string) error {
Certificates: Certificates{},
}
}
if len(result["CA"]) > 0 {
files := strings.Split(result["CA"], ",")
tls.ClientCAFiles = files
}
var redirect *Redirect
if len(result["RedirectEntryPoint"]) > 0 || len(result["RedirectRegex"]) > 0 || len(result["RedirectReplacement"]) > 0 {
redirect = &Redirect{
@@ -141,25 +128,17 @@ func (ep *EntryPoints) Set(value string) error {
}
}
compress := false
if len(result["Compress"]) > 0 {
compress = strings.EqualFold(result["Compress"], "enable") || strings.EqualFold(result["Compress"], "on")
}
(*ep)[result["Name"]] = &EntryPoint{
Address: result["Address"],
TLS: tls,
Redirect: redirect,
Compress: compress,
}
return nil
}
// Get return the EntryPoints map
func (ep *EntryPoints) Get() interface{} {
return EntryPoints(*ep)
}
func (ep *EntryPoints) Get() interface{} { return EntryPoints(*ep) }
// SetValue sets the EntryPoints map with val
func (ep *EntryPoints) SetValue(val interface{}) {
@@ -177,8 +156,6 @@ type EntryPoint struct {
Address string
TLS *TLS
Redirect *Redirect
Auth *types.Auth
Compress bool
}
// Redirect configures a redirection of an entry point to another, or to an URL
@@ -190,74 +167,12 @@ type Redirect struct {
// TLS configures TLS for an entry point
type TLS struct {
MinVersion string
CipherSuites []string
Certificates Certificates
ClientCAFiles []string
}
// Map of allowed TLS minimum versions
var minVersion = map[string]uint16{
`VersionTLS10`: tls.VersionTLS10,
`VersionTLS11`: tls.VersionTLS11,
`VersionTLS12`: tls.VersionTLS12,
}
// Map of TLS CipherSuites from crypto/tls
var cipherSuites = map[string]uint16{
`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
`TLS_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
`TLS_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
`TLS_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_RSA_WITH_AES_128_CBC_SHA,
`TLS_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_RSA_WITH_AES_256_CBC_SHA,
`TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
`TLS_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
Certificates Certificates
}
// Certificates defines traefik certificates type
// Certs and Keys could be either a file path, or the file content itself
type Certificates []Certificate
//CreateTLSConfig creates a TLS config from Certificate structures
func (certs *Certificates) CreateTLSConfig() (*tls.Config, error) {
config := &tls.Config{}
config.Certificates = []tls.Certificate{}
certsSlice := []Certificate(*certs)
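// Each certificate entry may reference files on disk or carry the PEM content inline; the cert and key of a pair must both be of the same kind.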
for _, v := range certsSlice {
isAPath := false
_, errCert := os.Stat(v.CertFile)
_, errKey := os.Stat(v.KeyFile)
if errCert == nil {
if errKey == nil {
isAPath = true
} else {
return nil, fmt.Errorf("bad TLS Certificate KeyFile format, expected a path")
}
} else if errKey == nil {
return nil, fmt.Errorf("bad TLS Certificate KeyFile format, expected a path")
}
cert := tls.Certificate{}
var err error
if isAPath {
cert, err = tls.LoadX509KeyPair(v.CertFile, v.KeyFile)
if err != nil {
return nil, err
}
} else {
cert, err = tls.X509KeyPair([]byte(v.CertFile), []byte(v.KeyFile))
if err != nil {
return nil, err
}
}
config.Certificates = append(config.Certificates, cert)
}
return config, nil
}
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (certs *Certificates) String() string {
@@ -288,7 +203,6 @@ func (certs *Certificates) Type() string {
}
// Certificate holds a SSL cert/key pair
// Certs and Key could be either a file path, or the file content itself
type Certificate struct {
CertFile string
KeyFile string
@@ -304,9 +218,7 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
//default Docker
var defaultDocker provider.Docker
defaultDocker.Watch = true
defaultDocker.ExposedByDefault = true
defaultDocker.Endpoint = "unix:///var/run/docker.sock"
defaultDocker.SwarmMode = false
// default File
var defaultFile provider.File
@@ -316,9 +228,6 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
// default Web
var defaultWeb WebProvider
defaultWeb.Address = ":8080"
defaultWeb.Statistics = &types.Statistics{
RecentErrors: 10,
}
// default Marathon
var defaultMarathon provider.Marathon
@@ -326,7 +235,6 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
defaultMarathon.Endpoint = "http://127.0.0.1:8080"
defaultMarathon.ExposedByDefault = true
defaultMarathon.Constraints = []types.Constraint{}
defaultMarathon.DialerTimeout = 60
// default Consul
var defaultConsul provider.Consul
@@ -364,17 +272,9 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
//default Kubernetes
var defaultKubernetes provider.Kubernetes
defaultKubernetes.Watch = true
defaultKubernetes.Endpoint = ""
defaultKubernetes.LabelSelector = ""
defaultKubernetes.Endpoint = "http://127.0.0.1:8080"
defaultKubernetes.Constraints = []types.Constraint{}
// default Mesos
var defaultMesos provider.Mesos
defaultMesos.Watch = true
defaultMesos.Endpoint = "http://127.0.0.1:5050"
defaultMesos.ExposedByDefault = true
defaultMesos.Constraints = []types.Constraint{}
defaultConfiguration := GlobalConfiguration{
Docker: &defaultDocker,
File: &defaultFile,
@@ -386,7 +286,6 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
Zookeeper: &defaultZookeeper,
Boltdb: &defaultBoltDb,
Kubernetes: &defaultKubernetes,
Mesos: &defaultMesos,
Retry: &Retry{},
}
return &TraefikConfiguration{

View File

@@ -2,10 +2,5 @@
Description=Traefik
[Service]
Type=notify
ExecStart=/usr/bin/traefik --configFile=/etc/traefik.toml
Restart=always
WatchdogSec=1s
[Install]
WantedBy=multi-user.target
Restart=on-failure

docs.Dockerfile (new file)
View File

@@ -0,0 +1,10 @@
FROM alpine:3.7
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin
COPY requirements.txt /mkdocs/
WORKDIR /mkdocs
VOLUME /mkdocs
RUN apk --no-cache --no-progress add py-pip \
&& pip install --trusted-host pypi.python.org --user -r requirements.txt

View File

@@ -13,12 +13,12 @@ Let's take our example from the [overview](https://docs.traefik.io/#overview) ag
> ![Architecture](img/architecture.png)
Let's zoom on Træfik and have an overview of its internal architecture:
Let's zoom on Træfɪk and have an overview of its internal architecture:
![Architecture](img/internal.png)
- Incoming requests end on [entrypoints](#entrypoints), as the name suggests, they are the network entry points into Træfik (listening port, SSL, traffic redirection...).
- Incoming requests end on [entrypoints](#entrypoints), as the name suggests, they are the network entry points into Træfɪk (listening port, SSL, traffic redirection...).
- Traffic is then forwarded to a matching [frontend](#frontends). A frontend defines routes from [entrypoints](#entrypoints) to [backends](#backends).
Routes are created using requests fields (`Host`, `Path`, `Headers`...) and can match or not a request.
- The [frontend](#frontends) will then send the request to a [backend](#backends). A backend can be composed by one or more [servers](#servers), and by a load-balancing strategy.
@@ -26,11 +26,11 @@ Routes are created using requests fields (`Host`, `Path`, `Headers`...) and can
## Entrypoints
Entrypoints are the network entry points into Træfik.
Entrypoints are the network entry points into Træfɪk.
They can be defined using:
- a port (80, 443...)
- SSL (Certificates, Keys, authentication with a client certificate signed by a trusted CA...)
- SSL (Certificates. Keys...)
- redirection to another entrypoint (redirect `HTTP` to `HTTPS`)
Here is an example of entrypoints definition:
@@ -54,81 +54,25 @@ Here is an example of entrypoints definition:
- We enable SSL on `https` by giving a certificate and a key.
- We also redirect all the traffic from entrypoint `http` to `https`.
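The TOML for this example is elided in this diff view; as a rough sketch only (ports and certificate paths are assumptions, not the original values), such a definition could look like:
```toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "tests/traefik.crt"
      keyFile = "tests/traefik.key"
```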
And here is another example with client certificate authentication:
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
clientCAFiles = ["tests/clientca1.crt", "tests/clientca2.crt"]
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
```
- We enable SSL on `https` by giving a certificate and a key.
- One or several files containing Certificate Authorities in PEM format are added.
- It is possible to have multiple CA:s in the same file or keep them in separate files.
## Frontends
A frontend consists of a set of rules that determine how incoming requests are forwarded from an entrypoint to a backend.
A frontend is a set of rules that forwards the incoming traffic from an entrypoint to a backend.
Frontends can be defined using the following rules:
Rules may be classified in one of two groups: Modifiers and Matchers.
- `Headers: Content-Type, application/json`: Headers adds a matcher for request header values. It accepts a sequence of key/value pairs to be matched.
- `HeadersRegexp: Content-Type, application/(text|json)`: Regular expressions can be used with headers as well. It accepts a sequence of key/value pairs, where the value has regex support.
- `Host: traefik.io, www.traefik.io`: Match request host with given host list.
- `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io`: Adds a matcher for the URL hosts. It accepts templates with zero or more URL variables enclosed by `{}`. Variables can define an optional regexp pattern to be matched.
- `Method: GET, POST, PUT`: Method adds a matcher for HTTP methods. It accepts a sequence of one or more methods to be matched.
- `Path: /products/, /articles/{category}/{id:[0-9]+}`: Path adds a matcher for the URL paths. It accepts templates with zero or more URL variables enclosed by `{}`.
- `PathStrip`: Same as `Path` but strip the given prefix from the request URL's Path.
- `PathPrefix`: PathPrefix adds a matcher for the URL path prefixes. This matches if the given template is a prefix of the full URL path.
- `PathPrefixStrip`: Same as `PathPrefix` but strip the given prefix from the request URL's Path.
### Modifiers
Modifier rules only modify the request. They do not have any impact on routing decisions being made.
Following is the list of existing modifier rules:
- `AddPrefix: /products`: Add path prefix to the existing request path prior to forwarding the request to the backend.
- `ReplacePath: /serverless-path`: Replaces the path and adds the old path to the `X-Replaced-Path` header. Useful for mapping to AWS Lambda or Google Cloud Functions.
### Matchers
Matcher rules determine if a particular request should be forwarded to a backend.
Separate multiple rule values by `,` (comma) in order to enable ANY semantics (i.e., forward a request if any rule matches). Does not work for `Headers` and `HeadersRegexp`.
Separate multiple rule values by `;` (semicolon) in order to enable ALL semantics (i.e., forward a request if all rules match).
You can use multiple rules by separating them by `;`
You can optionally enable `passHostHeader` to forward client `Host` header to the backend.
Following is the list of existing matcher rules along with examples:
- `Headers: Content-Type, application/json`: Match HTTP header. It accepts a comma-separated key/value pair where both key and value must be literals.
- `HeadersRegexp: Content-Type, application/(text|json)`: Match HTTP header. It accepts a comma-separated key/value pair where the key must be a literal and the value may be a literal or a regular expression.
- `Host: traefik.io, www.traefik.io`: Match request host. It accepts a sequence of literal hosts.
- `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io`: Match request host. It accepts a sequence of literal and regular expression hosts.
- `Method: GET, POST, PUT`: Match request HTTP method. It accepts a sequence of HTTP methods.
- `Path: /products/, /articles/{category}/{id:[0-9]+}`: Match exact request path. It accepts a sequence of literal and regular expression paths.
- `PathStrip: /products/`: Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal paths.
- `PathStripRegex: /articles/{category}/{id:[0-9]+}`: Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression paths.
- `PathPrefix: /products/, /articles/{category}/{id:[0-9]+}`: Match request prefix path. It accepts a sequence of literal and regular expression prefix paths.
- `PathPrefixStrip: /products/`: Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header.
- `PathPrefixStripRegex: /articles/{category}/{id:[0-9]+}`: Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header.
In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces. Any pattern supported by [Go's regexp package](https://golang.org/pkg/regexp/) may be used. Example: `/posts/{id:[0-9]+}`.
(Note that the variable has no special meaning; however, it is required by the gorilla/mux dependency which embeds the regular expression and defines the syntax.)
#### Path Matcher Usage Guidelines
This section explains when to use the various path matchers.
Use `Path` if your backend listens on the exact path only. For instance, `Path: /products` would match `/products` but not `/products/shoes`.
Use a `*Prefix*` matcher if your backend listens on a particular base path but also serves requests on sub-paths. For instance, `PathPrefix: /products` would match `/products` but also `/products/shoes` and `/products/shirts`. Since the path is forwarded as-is, your backend is expected to listen on `/products`.
Use a `*Strip` matcher if your backend listens on the root path (`/`) but should be routeable on a specific prefix. For instance, `PathPrefixStrip: /products` would match `/products` but also `/products/shoes` and `/products/shirts`. Since the path is stripped prior to forwarding, your backend is expected to listen on `/`.
If your backend is serving assets (e.g., images or Javascript files), chances are it must return properly constructed relative URLs. Continuing on the example, the backend should return `/products/shoes/image.png` (and not `/images.png` which Traefik would likely not be able to associate with the same backend). The `X-Forwarded-Prefix` header (available since Traefik 1.3) can be queried to build such URLs dynamically.
Instead of distinguishing your backends by path only, you can add a Host matcher to the mix. That way, namespacing of your backends happens on the basis of hosts in addition to paths.
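As a small illustration of the `*Strip` guideline above (the frontend and backend names are hypothetical), a backend listening on `/` could be exposed under a prefix like this:
```toml
[frontends]
  [frontends.products]
  backend = "backend-products"
    [frontends.products.routes.route1]
    rule = "PathPrefixStrip:/products"
```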
### Examples
Here is an example of frontends definition:
```toml
@@ -143,7 +87,7 @@ Here is an example of frontends definition:
priority = 10
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "HostRegexp:localhost,{subdomain:[a-z]+}.localhost"
rule = "Host:localhost,{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
@@ -161,55 +105,36 @@ As seen in the previous example, you can combine multiple rules.
In TOML file, you can use multiple routes:
```toml
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost"
[frontends.frontend3.routes.test_2]
rule = "Path:/test"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost"
[frontends.frontend3.routes.test_2]
rule = "Host:Path:/test"
```
Here `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched.
You can also use the notation using a `;` separator, same result:
```toml
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost;Path:/test"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost;Path:/test"
```
Finally, you can create a rule to bind multiple domains or Path to a frontend, using the `,` separator:
```toml
[frontends.frontend2]
[frontends.frontend2.routes.test_1]
rule = "Host:test1.localhost,test2.localhost"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Path:/test1,/test2"
[frontends.frontend2]
[frontends.frontend2.routes.test_1]
rule = "Host:test1.localhost,test2.localhost"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Path:/test1,/test2"
```
### Rules Order
When combining `Modifier` rules with `Matcher` rules, it is important to remember that `Modifier` rules **ALWAYS** apply after the `Matcher` rules.
The following rules are both `Matchers` and `Modifiers`, so the `Matcher` portion of the rule will apply first, and the `Modifier` will apply later.
- `PathStrip`
- `PathStripRegex`
- `PathPrefixStrip`
- `PathPrefixStripRegex`
`Modifiers` will be applied in a pre-determined order regardless of their order in the `rule` configuration section.
1. `PathStrip`
2. `PathPrefixStrip`
3. `PathStripRegex`
4. `PathPrefixStripRegex`
5. `AddPrefix`
6. `ReplacePath`
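As a hedged sketch (names are hypothetical), a single rule can mix a matcher with modifiers; regardless of their order in the string, the matcher applies first and the modifiers then follow the order above, so `/api/foo` would be forwarded as `/v1/foo`:
```toml
[frontends]
  [frontends.api]
  backend = "backend-api"
    [frontends.api.routes.route1]
    rule = "PathPrefixStrip:/api;AddPrefix:/v1"
```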
### Priorities
By default, routes will be sorted (in descending order) using rules length (to avoid path overlap):
@@ -217,20 +142,20 @@ By default, routes will be sorted (in descending order) using rules length (to a
You can customize priority by frontend:
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
priority = 10
passHostHeader = true
[frontends.frontend1.routes.test_1]
rule = "PathPrefix:/to"
[frontends.frontend2]
priority = 5
backend = "backend2"
passHostHeader = true
[frontends.frontend2.routes.test_1]
rule = "PathPrefix:/toto"
```
[frontends]
[frontends.frontend1]
backend = "backend1"
priority = 10
passHostHeader = true
[frontends.frontend1.routes.test_1]
rule = "PathPrefix:/to"
[frontends.frontend2]
priority = 5
backend = "backend2"
passHostHeader = true
[frontends.frontend2.routes.test_1]
rule = "PathPrefix:/toto"
```
Here, `frontend1` will be matched before `frontend2` (`10 > 5`).
@@ -238,16 +163,16 @@ Here, `frontend1` will be matched before `frontend2` (`10 > 5`).
## Backends
A backend is responsible to load-balance the traffic coming from one or more frontends to a set of http servers.
Various methods of load-balancing are supported:
Various methods of load-balancing is supported:
- `wrr`: Weighted Round Robin
- `drr`: Dynamic Round Robin: increases weights on servers that perform better than others. It also rolls back to original weights if the servers have changed.
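For illustration (a minimal sketch, the backend name is hypothetical), the strategy is chosen per backend:
```toml
[backends]
  [backends.backend1]
    [backends.backend1.loadbalancer]
      method = "drr"
```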
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
Initial state is Standby. CB observes the statistics and does not modify the request.
In case the condition matches, CB enters Tripped state, where it responds with predefined code or redirects to another frontend.
In case if condition matches, CB enters Tripped state, where it responds with predefines code or redirects to another frontend.
Once Tripped timer expires, CB enters Recovering state and resets all stats.
In case the condition does not match and recovery timer expires, CB enters Standby state.
In case if the condition does not match and recovery timer expires, CB enters Standby state.
It can be configured using:
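The option list itself is elided in this diff view; as a hedged illustration (the backend name and threshold are assumptions, mirroring the circuit breaker expression used later on this page), the configuration could look like:
```toml
[backends]
  [backends.backend1]
    [backends.backend1.circuitbreaker]
      expression = "NetworkErrorRatio() > 0.5"
```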
@@ -280,37 +205,6 @@ For example:
- Another possible value for `extractorfunc` is `client.ip` which will categorize requests based on client source ip.
- Lastly `extractorfunc` can take the value of `request.header.ANY_HEADER` which will categorize requests based on `ANY_HEADER` that you provide.
Sticky sessions are supported with both load balancers. When sticky sessions are enabled, a cookie called `_TRAEFIK_BACKEND` is set on the initial
request. On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy. If not, a new backend
will be assigned.
For example:
```toml
[backends]
[backends.backend1]
[backends.backend1.loadbalancer]
sticky = true
```
A health check can be configured in order to remove a backend from LB rotation
as long as it keeps returning HTTP status codes other than 200 OK to HTTP GET
requests periodically carried out by Traefik. The check is defined by a path
appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how
often the health check should be executed (the default being 30 seconds). Each
backend must respond to the health check within 5 seconds.
A recovering backend that returns 200 OK responses again is returned to the LB rotation pool.
For example:
```toml
[backends]
[backends.backend1]
[backends.backend1.healthcheck]
path = "/health"
interval = "10s"
```
## Servers
Servers are simply defined using a `URL`. You can also apply a custom `weight` to each server (this will be used by load-balancing).
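The full definition is elided in this diff view; a minimal sketch of a single server entry (the URL and weight values are assumptions) could look like:
```toml
[backends]
  [backends.backend1]
    [backends.backend1.servers.server1]
    url = "http://172.17.0.2:80"
    weight = 10
```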
@@ -344,31 +238,10 @@ Here is an example of backends and servers definition:
- `backend2` will forward the traffic to two servers: `http://172.17.0.4:80"` with weight `1` and `http://172.17.0.5:80` with weight `2` using `drr` load-balancing strategy.
- a circuit breaker is added on `backend1` using the expression `NetworkErrorRatio() > 0.5`: watch error ratio over 10 second sliding window
# Configuration
# Launch
Træfik's configuration has two parts:
- The [static Træfik configuration](/basics#static-trfk-configuration) which is loaded only at the beginning.
- The [dynamic Træfik configuration](/basics#dynamic-trfk-configuration) which can be hot-reloaded (no need to restart the process).
## Static Træfik configuration
The static configuration is the global configuration which is setting up connections to configuration backends and entrypoints.
Træfik can be configured using many configuration sources with the following precedence order.
Each item takes precedence over the item below it:
- [Key-value Store](/basics/#key-value-stores)
- [Arguments](/basics/#arguments)
- [Configuration file](/basics/#configuration-file)
- Default
It means that arguments override configuration file, and Key-value Store overrides arguments.
### Configuration file
By default, Træfik will try to find a `traefik.toml` in the following places:
Træfɪk can be configured using a TOML file configuration, arguments, or both.
By default, Træfɪk will try to find a `traefik.toml` in the following places:
- `/etc/traefik/`
- `$HOME/.traefik/`
@@ -380,63 +253,15 @@ You can override this by setting a `configFile` argument:
$ traefik --configFile=foo/bar/myconfigfile.toml
```
Please refer to the [global configuration](/toml/#global-configuration) section to get documentation on it.
Træfɪk uses the following precedence order. Each item takes precedence over the item below it:
### Arguments
- arguments
- configuration file
- default
Each argument (and command) is described in the help section:
It means that arguments overrides configuration file.
Each argument is described in the help section:
```bash
$ traefik --help
```
Note that all default values will be displayed as well.
### Key-value stores
Træfik supports several Key-value stores:
- [Consul](https://consul.io)
- [etcd](https://coreos.com/etcd/)
- [ZooKeeper](https://zookeeper.apache.org/)
- [boltdb](https://github.com/boltdb/bolt)
Please refer to the [User Guide Key-value store configuration](/user-guide/kv-config/) section to get documentation on it.
## Dynamic Træfik configuration
The dynamic configuration concerns :
- [Frontends](/basics/#frontends)
- [Backends](/basics/#backends)
- [Servers](/basics/#servers)
Træfik can hot-reload those rules which could be provided by [multiple configuration backends](/toml/#configuration-backends).
We only need to enable `watch` option to make Træfik watch configuration backend changes and generate its configuration automatically.
Routes to services will be created and updated instantly at any changes.
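As a concrete illustration with the file backend (the filename is an assumption), enabling `watch` is a one-line addition:
```toml
[file]
  filename = "rules.toml"
  watch = true
```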
Please refer to the [configuration backends](/toml/#configuration-backends) section to get documentation on it.
# Commands
Usage: `traefik [command] [--flag=flag_argument]`
List of available Træfik commands with their descriptions:
- `version` : Print version
- `storeconfig` : Store the static Træfik configuration into a Key-value store. Please refer to the [Store Træfik configuration](/user-guide/kv-config/#store-trfk-configuration) section to get documentation on it.
Each command may have related flags.
All of these flags can be displayed with:
```bash
$ traefik [command] --help
```
Note that each command is described at the beginning of the help section:
```bash
$ traefik --help
```

View File

@@ -117,7 +117,7 @@ server {
Here is the `traefik.toml` file used:
```toml
```
MaxIdleConnsPerHost = 100000
defaultEntryPoints = ["http"]
@@ -145,7 +145,7 @@ defaultEntryPoints = ["http"]
## Results
### whoami:
```shell
```
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-whoami:80/bench
Running 1m test @ http://IP-whoami:80/bench
20 threads and 1000 connections
@@ -164,7 +164,7 @@ Transfer/sec: 6.40MB
```
### nginx:
```shell
```
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-nginx:8001/bench
Running 1m test @ http://IP-nginx:8001/bench
20 threads and 1000 connections
@@ -183,7 +183,7 @@ Transfer/sec: 4.97MB
```
### traefik:
```shell
```
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-traefik:8000/bench
Running 1m test @ http://IP-traefik:8000/bench
20 threads and 1000 connections

Binary image file changed (not shown): 52 KiB before, 51 KiB after.

Binary image file changed (not shown): 255 KiB before, 53 KiB after.

View File

@@ -1,5 +1,5 @@
<p align="center">
<img src="img/traefik.logo.png" alt="Træfik" title="Træfik" />
<img src="img/traefik.logo.png" alt="Træfɪk" title="Træfɪk" />
</p>
[![Build Status](https://travis-ci.org/containous/traefik.svg?branch=master)](https://travis-ci.org/containous/traefik)
@@ -10,8 +10,8 @@
[![Twitter](https://img.shields.io/twitter/follow/traefikproxy.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefikproxy)
Træfik (pronounced like [traffic](https://speak-ipa.bearbin.net/speak.cgi?speak=%CB%88tr%C3%A6f%C9%AAk)) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Mesos/Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), [Amazon ECS](https://aws.amazon.com/ecs/), [Amazon DynamoDB](https://aws.amazon.com/dynamodb/), Rest API, file...) to manage its configuration automatically and dynamically.
Træfɪk is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Mesos/Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), Rest API, file...) to manage its configuration automatically and dynamically.
## Overview
@@ -26,29 +26,22 @@ But a microservices architecture is dynamic... Services are added, removed, kill
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.
Here enters Træfik.
Here enters Træfɪk.
![Architecture](img/architecture.png)
Træfik can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
Træfɪk can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
Routes to your services will be created instantly.
Run it and forget it!
## Demo
## Quickstart
Here is a talk (in french) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
You will learn fundamental Træfɪk features and see some demos with Docker, Mesos/Marathon and Lets'Encrypt.
You can have a quick look at Træfik in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.
Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfik features and see some demos with Kubernetes.
[![Traefik ContainerCamp UK](http://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
Here is a talk (in French) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
You will learn fundamental Træfik features and see some demos with Docker, Mesos/Marathon and Let's Encrypt.
[![Traefik Devoxx France](http://img.youtube.com/vi/QvAz9mVx5TI/0.jpg)](http://www.youtube.com/watch?v=QvAz9mVx5TI)
[![Traefik Devoxx France](https://img.youtube.com/vi/QvAz9mVx5TI/0.jpg)](https://www.youtube.com/watch?v=QvAz9mVx5TI)
## Get it
@@ -70,65 +63,41 @@ docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.to
## Test it
You can test Træfik easily using [Docker compose](https://docs.docker.com/compose), with this `docker-compose.yml` file in a folder named `traefik`:
You can test Træfɪk easily using [Docker compose](https://docs.docker.com/compose), with this `docker-compose.yml` file:
```yaml
version: '2'
traefik:
image: traefik
command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /dev/null:/traefik.toml
services:
proxy:
image: traefik
command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
networks:
- webgateway
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /dev/null:/traefik.toml
whoami1:
image: emilevauge/whoami
labels:
- "traefik.backend=whoami"
- "traefik.frontend.rule=Host:whoami.docker.localhost"
networks:
webgateway:
driver: bridge
whoami2:
image: emilevauge/whoami
labels:
- "traefik.backend=whoami"
- "traefik.frontend.rule=Host:whoami.docker.localhost"
```
Start it from within the `traefik` folder:
docker-compose up -d
In a browser you may open `http://localhost:8080` to access Træfik's dashboard and observe the following magic.
Now, create a folder named `test` and create a `docker-compose.yml` in it with this content:
```yaml
version: '2'
services:
whoami:
image: emilevauge/whoami
networks:
- web
labels:
- "traefik.backend=whoami"
- "traefik.frontend.rule=Host:whoami.docker.localhost"
networks:
web:
external:
name: traefik_webgateway
Then, start it:
```
Then, start and scale it in the `test` folder:
```shell
docker-compose up -d
docker-compose scale whoami=2
```
Finally, test load-balancing between the two services `test_whoami_1` and `test_whoami_2`:
Finally, test load-balancing between the two servers `whoami1` and `whoami2`:
```shell
```bash
$ curl -H Host:whoami.docker.localhost http://127.0.0.1
Hostname: ef194d07634a
IP: 127.0.0.1

File diff suppressed because it is too large.

View File

@@ -1,20 +0,0 @@
# Clustering / High Availability (beta)
This guide explains how to use Træfik in high availability mode.
In order to deploy and configure multiple Træfik instances without copying the same configuration file onto each one, we will use a distributed Key-value store.
## Prerequisites
You will need a working KV store cluster.
## File configuration to KV store migration
We created a special Træfik command to help you configure your Key-value store from a Træfik TOML configuration file.
Please refer to [this section](/user-guide/kv-config/#store-configuration-in-key-value-store) to get more details.
## Deploy a Træfik cluster
Once your Træfik configuration is uploaded to your KV store, you can start each Træfik instance.
A Træfik cluster is based on a master/slave model.
When starting, Træfik will elect a master. If this instance fails, another master will be automatically elected.
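As a hedged sketch, every instance in the cluster could simply point at the same KV prefix, for example with Consul (the endpoint value is an assumption):
```toml
[consul]
  endpoint = "consul:8500"
  watch = true
  prefix = "traefik"
```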

View File

@@ -1,11 +1,11 @@
# Examples
You will find here some configuration examples of Træfik.
You will find here some configuration examples of Træfɪk.
## HTTP only
```toml
```
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
@@ -14,7 +14,7 @@ defaultEntryPoints = ["http"]
## HTTP + HTTPS (with SNI)
```toml
```
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
@@ -29,11 +29,10 @@ defaultEntryPoints = ["http", "https"]
CertFile = "integration/fixtures/https/snitest.org.cert"
KeyFile = "integration/fixtures/https/snitest.org.key"
```
Note that we can either give a path to a certificate file or the file content itself directly ([like in this TOML example](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store)).
## HTTP redirect on HTTPS
```toml
```
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
@@ -50,7 +49,7 @@ defaultEntryPoints = ["http", "https"]
## Let's Encrypt support
```toml
```
[entryPoints]
[entryPoints.https]
address = ":443"
@@ -80,7 +79,7 @@ entryPoint = "https"
## Override entrypoints in frontends
```toml
```
[frontends]
[frontends.frontend1]
backend = "backend2"
@@ -97,44 +96,3 @@ entryPoint = "https"
backend = "backend2"
rule = "Path:/test"
```
## Enable Basic authentication in an entrypoint
With two user/password pairs:
- `test`:`test`
- `test2`:`test2`
Passwords are encoded in MD5: you can use `htpasswd` to generate them.
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.auth.basic]
users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"]
```
## Pass Authenticated user to application via headers
Providing an authentication method as described above, it is possible to pass the user to the application
via a configurable header value
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.auth]
headerField = "X-WebAuth-User"
[entryPoints.http.auth.basic]
users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"]
```
## Override the Traefik HTTP server IdleTimeout and/or throttle the configuration from re-loading too quickly
```toml
IdleTimeout = "360s"
ProvidersThrottleDuration = "5s"
```

View File

@@ -1,580 +0,0 @@
# Kubernetes Ingress Controller
This guide explains how to use Træfik as an Ingress controller in a Kubernetes cluster.
If you are not familiar with Ingresses in Kubernetes you might want to read the [Kubernetes user guide](http://kubernetes.io/docs/user-guide/ingress/)
The config files used in this guide can be found in the [examples directory](https://github.com/containous/traefik/tree/master/examples/k8s)
## Prerequisites
1. A working Kubernetes cluster. If you want to follow along with this guide, you should set up [minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) on your machine, as it is the quickest way to get a local Kubernetes cluster set up for experimentation and development.
2. The `kubectl` binary should be [installed on your workstation](http://kubernetes.io/docs/getting-started-guides/minikube/#download-kubectl).
### Role Based Access Control configuration (Kubernetes 1.6+ only)
Kubernetes introduces [Role Based Access Control (RBAC)](https://kubernetes.io/docs/admin/authorization/rbac/) in 1.6+ to allow fine-grained control of Kubernetes resources and the API.
If your cluster is configured with RBAC, you may need to authorize Traefik to use the Kubernetes API using ClusterRole and ClusterRoleBinding resources:
_Note: your cluster may have suitable ClusterRoles already set up, but the following should work everywhere_
```yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- pods
- services
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
```
[examples/k8s/traefik-rbac.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/traefik-rbac.yaml)
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
```
## Deploy Træfik using a Deployment object
We are going to deploy Træfik with a
[Deployment](http://kubernetes.io/docs/user-guide/deployments/), as this will
allow you to easily roll out config changes or update the image.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
resources:
limits:
cpu: 200m
memory: 30Mi
requests:
cpu: 100m
memory: 20Mi
ports:
- containerPort: 80
hostPort: 80
- containerPort: 8080
args:
- --web
- --kubernetes
```
[examples/k8s/traefik.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/traefik.yaml)
> Notice that we are binding port 80 on the Træfik container to port 80 on the host.
> With a multi node cluster we might expose Træfik with a NodePort or LoadBalancer service
> and run more than 1 replica of Træfik for high availability.
To deploy Træfik to your cluster start by submitting the deployment to the cluster with `kubectl`:
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik.yaml
```
### Check the deployment
Now let's check whether our deployment was successful.
Start by listing the pods in the `kube-system` namespace:
```shell
$ kubectl --namespace=kube-system get pods
NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikubevm 1/1 Running 0 4h
kubernetes-dashboard-s8krj 1/1 Running 0 4h
traefik-ingress-controller-678226159-eqseo 1/1 Running 0 7m
```
You should see that, after submitting the Deployment to Kubernetes, it has launched a pod, and it is now running.
_It might take a few moments for Kubernetes to pull the Træfik image and start the container._
> You could also check the deployment with the Kubernetes dashboard, run
> `minikube dashboard` to open it in your browser, then choose the `kube-system`
> namespace from the menu at the top right of the screen.
You should now be able to access Træfik on port 80 of your minikube instance.
```sh
curl $(minikube ip)
404 page not found
```
> We expect to see a 404 response here as we haven't yet given Træfik any configuration.
## Deploy Træfik using Helm Chart
Instead of installing Træfik via a Deployment object, you can also use the Træfik Helm chart.
Install Træfik chart by:
```sh
helm install stable/traefik
```
For more information, check out [the doc](https://github.com/kubernetes/charts/tree/master/stable/traefik).
## Submitting An Ingress to the cluster.
Let's start by creating a Service and an Ingress that will expose the
[Træfik Web UI](https://github.com/containous/traefik#web-ui).
```yaml
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- port: 80
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-web-ui
namespace: kube-system
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: traefik-ui.local
http:
paths:
- backend:
serviceName: traefik-web-ui
servicePort: 80
```
[examples/k8s/ui.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/ui.yaml)
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
```
Now let's set up an entry in our /etc/hosts file to route `traefik-ui.local`
to our cluster.
> In production you would want to set up real DNS entries.
> You can get the IP address of your minikube instance by running `minikube ip`
```shell
echo "$(minikube ip) traefik-ui.local" | sudo tee -a /etc/hosts
```
We should now be able to visit [traefik-ui.local](http://traefik-ui.local) in the browser and view the Træfik Web UI.
## Name based routing
In this example we are going to set up websites for 3 of the United Kingdom's
best-loved cheeses: Cheddar, Stilton and Wensleydale.
First, let's start by launching the 3 pods for the cheese websites.
```yaml
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: stilton
labels:
app: cheese
cheese: stilton
spec:
replicas: 2
selector:
matchLabels:
app: cheese
task: stilton
template:
metadata:
labels:
app: cheese
task: stilton
version: v0.0.1
spec:
containers:
- name: cheese
image: errm/cheese:stilton
resources:
requests:
cpu: 100m
memory: 50Mi
limits:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: cheddar
labels:
app: cheese
cheese: cheddar
spec:
replicas: 2
selector:
matchLabels:
app: cheese
task: cheddar
template:
metadata:
labels:
app: cheese
task: cheddar
version: v0.0.1
spec:
containers:
- name: cheese
image: errm/cheese:cheddar
resources:
requests:
cpu: 100m
memory: 50Mi
limits:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: wensleydale
labels:
app: cheese
cheese: wensleydale
spec:
replicas: 2
selector:
matchLabels:
app: cheese
task: wensleydale
template:
metadata:
labels:
app: cheese
task: wensleydale
version: v0.0.1
spec:
containers:
- name: cheese
image: errm/cheese:wensleydale
resources:
requests:
cpu: 100m
memory: 50Mi
limits:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 80
```
[examples/k8s/cheese-deployments.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-deployments.yaml)
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-deployments.yaml
```
Next we need to set up a service for each of the cheese pods.
```yaml
---
apiVersion: v1
kind: Service
metadata:
name: stilton
spec:
ports:
- name: http
targetPort: 80
port: 80
selector:
app: cheese
task: stilton
---
apiVersion: v1
kind: Service
metadata:
name: cheddar
spec:
ports:
- name: http
targetPort: 80
port: 80
selector:
app: cheese
task: cheddar
---
apiVersion: v1
kind: Service
metadata:
name: wensleydale
annotations:
traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
spec:
ports:
- name: http
targetPort: 80
port: 80
selector:
app: cheese
task: wensleydale
```
> Notice that we also set a [circuit breaker expression](https://docs.traefik.io/basics/#backends) for one of the backends
> by setting the `traefik.backend.circuitbreaker` annotation on the service.
[examples/k8s/cheese-services.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-services.yaml)
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-services.yaml
```
Now we can submit an ingress for the cheese websites.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cheese
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: stilton.local
http:
paths:
- path: /
backend:
serviceName: stilton
servicePort: http
- host: cheddar.local
http:
paths:
- path: /
backend:
serviceName: cheddar
servicePort: http
- host: wensleydale.local
http:
paths:
- path: /
backend:
serviceName: wensleydale
servicePort: http
```
[examples/k8s/cheese-ingress.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-ingress.yaml)
> Notice that we list each hostname, and add a backend service.
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-ingress.yaml
```
Now visit the [Træfik dashboard](http://traefik-ui.local/) and you should see a frontend for each host, along with a backend listing for each service with a server set up for each pod.
If you edit your `/etc/hosts` again you should be able to access the cheese
websites in your browser.
```shell
echo "$(minikube ip) stilton.local cheddar.local wensleydale.local" | sudo tee -a /etc/hosts
```
* [Stilton](http://stilton.local/)
* [Cheddar](http://cheddar.local/)
* [Wensleydale](http://wensleydale.local/)
## Path based routing
Now let's suppose that our fictional client has decided that while they are
super happy about our cheesy web design, when they asked for 3 websites
they had not really bargained on having to buy 3 domain names.
No problem, we say, why don't we reconfigure the sites to host all 3 under one domain.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cheeses
annotations:
kubernetes.io/ingress.class: traefik
traefik.frontend.rule.type: PathPrefixStrip
spec:
rules:
- host: cheeses.local
http:
paths:
- path: /stilton
backend:
serviceName: stilton
servicePort: http
- path: /cheddar
backend:
serviceName: cheddar
servicePort: http
- path: /wensleydale
backend:
serviceName: wensleydale
servicePort: http
```
[examples/k8s/cheeses-ingress.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheeses-ingress.yaml)
> Notice that we are configuring Træfik to strip the prefix from the URL path
> with the `traefik.frontend.rule.type` annotation so that we can use
> the containers from the previous example without modification.
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheeses-ingress.yaml
```
```shell
echo "$(minikube ip) cheeses.local" | sudo tee -a /etc/hosts
```
You should now be able to visit the websites in your browser.
* [cheeses.local/stilton](http://cheeses.local/stilton/)
* [cheeses.local/cheddar](http://cheeses.local/cheddar/)
* [cheeses.local/wensleydale](http://cheeses.local/wensleydale/)
## Disable passing the Host header
By default Træfik will pass the incoming Host header on to the upstream resource.
There are times, however, when you may not want this to be the case, for example if your service is of the `ExternalName` type.
### Disable entirely
Add the following to your toml config:
```toml
disablePassHostHeaders = true
```
### Disable per ingress
To disable passing the Host header per ingress resource set the `traefik.frontend.passHostHeader`
annotation on your ingress to `false`.
Here is an example ingress definition:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example
annotations:
kubernetes.io/ingress.class: traefik
traefik.frontend.passHostHeader: "false"
spec:
rules:
- host: example.com
http:
paths:
- path: /static
backend:
serviceName: static
servicePort: https
```
And an example service definition:
```yaml
apiVersion: v1
kind: Service
metadata:
name: static
spec:
ports:
- name: https
port: 443
type: ExternalName
externalName: static.otherdomain.com
```
If you were to visit example.com/static, the request would then be passed on to static.otherdomain.com/static, and static.otherdomain.com would receive the request with the Host header being static.otherdomain.com.
Note: The per-ingress annotation overrides whatever the global value is set to.
So you could set `disablePassHostHeaders` to `true` in your TOML file and then enable passing the host header per ingress if you wanted.
## Excluding an ingress from Træfik
You can control which ingress Træfik cares about by using the `kubernetes.io/ingress.class` annotation.
By default, if the annotation is not set at all, Træfik will include the ingress.
If the annotation is set to anything other than `traefik` or a blank string, Træfik will ignore it.
![](http://i.giphy.com/ujUdrdpX7Ok5W.gif)

View File

@@ -1,329 +0,0 @@
# Key-value store configuration
Both the [static global configuration](/user-guide/kv-config/#static-configuration-in-key-value-store) and the [dynamic](/user-guide/kv-config/#dynamic-configuration-in-key-value-store) configuration can be stored in a Key-value store.
This section explains how to launch Træfik using a configuration loaded from a Key-value store.
Træfik supports several Key-value stores:
- [Consul](https://consul.io)
- [etcd](https://coreos.com/etcd/)
- [ZooKeeper](https://zookeeper.apache.org/)
- [boltdb](https://github.com/boltdb/bolt)
# Static configuration in Key-value store
We will see the steps to set it up with an easy example.
Note that we could do the same with any other Key-value Store.
## docker-compose file for Consul
The Træfik global configuration will be retrieved from a [Consul](https://consul.io) store.
First we have to launch Consul in a container.
The [docker-compose file](https://docs.docker.com/compose/compose-file/) allows us to launch Consul and four instances of the trivial app [emilevauge/whoamI](https://github.com/emilevauge/whoamI):
```yaml
consul:
  image: progrium/consul
  command: -server -bootstrap -log-level debug -ui-dir /ui
  ports:
  - "8400:8400"
  - "8500:8500"
  - "8600:53/udp"
  expose:
  - "8300"
  - "8301"
  - "8301/udp"
  - "8302"
  - "8302/udp"
whoami1:
  image: emilevauge/whoami
whoami2:
  image: emilevauge/whoami
whoami3:
  image: emilevauge/whoami
whoami4:
  image: emilevauge/whoami
```
## Upload the configuration in the Key-value store
We should now fill the store with the Træfik global configuration, as we do with a [TOML file configuration](/toml).
To do that, we can send the Key-value pairs via [curl commands](https://www.consul.io/intro/getting-started/kv.html) or via the [Web UI](https://www.consul.io/intro/getting-started/ui.html).
Fortunately, Træfik allows automation of this process using the `storeconfig` subcommand.
Please refer to the [store Træfik configuration](/user-guide/kv-config/#store-configuration-in-key-value-store) section to get documentation on it.
Here is the toml configuration we would like to store in the Key-value Store:
```toml
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.com.cert"
KeyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
CertFile = """-----BEGIN CERTIFICATE-----
<cert file content>
-----END CERTIFICATE-----"""
KeyFile = """-----BEGIN CERTIFICATE-----
<key file content>
-----END CERTIFICATE-----"""
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
[web]
address = ":8081"
```
And here is the same global configuration in the Key-value Store (using `prefix = "traefik"`):
| Key | Value |
|-----------------------------------------------------------|---------------------------------------------------------------|
| `/traefik/loglevel` | `DEBUG` |
| `/traefik/defaultentrypoints/0` | `http` |
| `/traefik/defaultentrypoints/1` | `https` |
| `/traefik/entrypoints/http/address` | `:80` |
| `/traefik/entrypoints/https/address` | `:443` |
| `/traefik/entrypoints/https/tls/certificates/0/certfile` | `integration/fixtures/https/snitest.com.cert` |
| `/traefik/entrypoints/https/tls/certificates/0/keyfile` | `integration/fixtures/https/snitest.com.key` |
| `/traefik/entrypoints/https/tls/certificates/1/certfile` | `--BEGIN CERTIFICATE--<cert file content>--END CERTIFICATE--` |
| `/traefik/entrypoints/https/tls/certificates/1/keyfile` | `--BEGIN CERTIFICATE--<key file content>--END CERTIFICATE--` |
| `/traefik/consul/endpoint` | `127.0.0.1:8500` |
| `/traefik/consul/watch` | `true` |
| `/traefik/consul/prefix` | `traefik` |
| `/traefik/web/address` | `:8081` |
In case you are setting key values manually (a `curl` sketch follows below):
- Remember to specify the indexes (`0`, `1`, `2`, ...) under the prefixes `/traefik/defaultentrypoints/` and `/traefik/entrypoints/https/tls/certificates/` in order to match the global configuration structure.
- Be careful to give the correct IP address and port in the key `/traefik/consul/endpoint`.
Note that we can provide either the path to the certificate file or the file content itself.
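For instance, the Consul HTTP API can be used directly; here is a minimal sketch matching the first few rows of the table above (the Consul address is an assumption, adjust it to your own instance):
```shell
curl -X PUT -d 'DEBUG' http://127.0.0.1:8500/v1/kv/traefik/loglevel
curl -X PUT -d 'http'  http://127.0.0.1:8500/v1/kv/traefik/defaultentrypoints/0
curl -X PUT -d 'https' http://127.0.0.1:8500/v1/kv/traefik/defaultentrypoints/1
curl -X PUT -d ':80'   http://127.0.0.1:8500/v1/kv/traefik/entrypoints/http/address
```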
## Launch Træfik
We will now launch Træfik in a container.
We use CLI flags to set up the connection between Træfik and Consul.
All the rest of the global configuration is stored in Consul.
Here is the [docker-compose file](https://docs.docker.com/compose/compose-file/):
```yaml
traefik:
  image: traefik
  command: --consul --consul.endpoint=127.0.0.1:8500
  ports:
  - "80:80"
  - "8080:8080"
```
NB: Be careful to give the correct IP address and port in the flag `--consul.endpoint`.
## TLS support
So far, only [Consul](https://consul.io) and [etcd](https://coreos.com/etcd/) support TLS connections.
To set it up, we should enable [consul security](https://www.consul.io/docs/internals/security.html) (or [etcd security](https://coreos.com/etcd/docs/latest/security.html)).
Then, we have to provide the CA, cert and key to Træfik using the `consul` flags (a TOML sketch is shown after the flag lists):
- `--consul.tls`
- `--consul.tls.ca=path/to/the/file`
- `--consul.tls.cert=path/to/the/file`
- `--consul.tls.key=path/to/the/file`
Or the etcd flags:
- `--etcd.tls`
- `--etcd.tls.ca=path/to/the/file`
- `--etcd.tls.cert=path/to/the/file`
- `--etcd.tls.key=path/to/the/file`
Note that, in a TOML file configuration, we can provide the file content directly instead of the path to the certificate.
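As a sketch, the equivalent `[consul.tls]` section in a TOML file could look like this (the certificate paths are assumptions):
```toml
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
  [consul.tls]
  ca = "/etc/ssl/consul/ca.pem"
  cert = "/etc/ssl/consul/client.pem"
  key = "/etc/ssl/consul/client-key.pem"
```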
Remember the command `traefik --help` to display the updated list of flags.
# Dynamic configuration in Key-value store
Following our example, we will provide backends/frontends rules to Træfik.
Note that this section is independent of the way Træfik got its static configuration.
It means that the static configuration can either come from the same Key-value store or from any other sources.
## Key-value storage structure
Here is the toml configuration we would like to store in the Key-value Store:
```toml
[file]
# rules
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
[backends.backend2.maxconn]
amount = 10
extractorfunc = "request.host"
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
[backends.backend2.servers.server2]
url = "http://172.17.0.5:80"
weight = 2
[frontends]
[frontends.frontend1]
backend = "backend2"
[frontends.frontend1.routes.test_1]
rule = "Host:test.localhost"
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
priority = 10
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "Host:{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
entrypoints = ["http", "https"] # overrides defaultEntryPoints
backend = "backend2"
rule = "Path:/test"
```
And here is the same dynamic configuration in a KV Store (using `prefix = "traefik"`):
- backend 1
| Key | Value |
|--------------------------------------------------------|-----------------------------|
| `/traefik/backends/backend1/circuitbreaker/expression` | `NetworkErrorRatio() > 0.5` |
| `/traefik/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik/backends/backend1/servers/server1/weight` | `10` |
| `/traefik/backends/backend1/servers/server2/url` | `http://172.17.0.3:80` |
| `/traefik/backends/backend1/servers/server2/weight` | `1` |
| `/traefik/backends/backend1/servers/server2/tags` | `api,helloworld` |
- backend 2
| Key | Value |
|-----------------------------------------------------|------------------------|
| `/traefik/backends/backend2/maxconn/amount` | `10` |
| `/traefik/backends/backend2/maxconn/extractorfunc` | `request.host` |
| `/traefik/backends/backend2/loadbalancer/method` | `drr` |
| `/traefik/backends/backend2/servers/server1/url` | `http://172.17.0.4:80` |
| `/traefik/backends/backend2/servers/server1/weight` | `1` |
| `/traefik/backends/backend2/servers/server2/url` | `http://172.17.0.5:80` |
| `/traefik/backends/backend2/servers/server2/weight` | `2` |
| `/traefik/backends/backend2/servers/server2/tags` | `web` |
- frontend 1
| Key | Value |
|---------------------------------------------------|-----------------------|
| `/traefik/frontends/frontend1/backend` | `backend2` |
| `/traefik/frontends/frontend1/routes/test_1/rule` | `Host:test.localhost` |
- frontend 2
| Key | Value |
|----------------------------------------------------|--------------------|
| `/traefik/frontends/frontend2/backend` | `backend1` |
| `/traefik/frontends/frontend2/passHostHeader` | `true` |
| `/traefik/frontends/frontend2/priority` | `10` |
| `/traefik/frontends/frontend2/entrypoints`         | `https`                             |
| `/traefik/frontends/frontend2/routes/test_1/rule`  | `Host:{subdomain:[a-z]+}.localhost` |
## Atomic configuration changes
Træfik can watch the backends/frontends configuration changes and generate its configuration automatically.
Note that only the backends/frontends rules are dynamic; the rest of the Træfik configuration stays static.
The [Etcd](https://github.com/coreos/etcd/issues/860) and [Consul](https://github.com/hashicorp/consul/issues/886) backends do not support updating multiple keys atomically. As a result, it may be possible for Træfik to read an intermediate configuration state despite judicious use of the `--providersThrottleDuration` flag. To solve this problem, Træfik supports a special key called `/traefik/alias`. If set, Træfik uses the value as an alternative key prefix.
Given the key structure below, Træfik will use `http://172.17.0.2:80` as its only backend server (frontend keys have been omitted for brevity).
| Key | Value |
|-------------------------------------------------------------------------|-----------------------------|
| `/traefik/alias` | `/traefik_configurations/1` |
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
When an atomic configuration change is required, you may write a new configuration at an alternative prefix. Here, although the `/traefik_configurations/2/...` keys have been set, the old configuration is still active because the `/traefik/alias` key still points to `/traefik_configurations/1`:
| Key | Value |
|-------------------------------------------------------------------------|-----------------------------|
| `/traefik/alias` | `/traefik_configurations/1` |
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
| `/traefik_configurations/2/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/2/backends/backend1/servers/server1/weight` | `5` |
| `/traefik_configurations/2/backends/backend1/servers/server2/url` | `http://172.17.0.3:80` |
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5` |
Once the `/traefik/alias` key is updated, the new `/traefik_configurations/2` configuration becomes active atomically. Here, we have a 50% balance between the `http://172.17.0.3:80` and the `http://172.17.0.4:80` hosts while no traffic is sent to the `172.17.0.2:80` host:
| Key | Value |
|-------------------------------------------------------------------------|-----------------------------|
| `/traefik/alias` | `/traefik_configurations/2` |
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
| `/traefik_configurations/2/backends/backend1/servers/server1/url` | `http://172.17.0.3:80` |
| `/traefik_configurations/2/backends/backend1/servers/server1/weight` | `5` |
| `/traefik_configurations/2/backends/backend1/servers/server2/url` | `http://172.17.0.4:80` |
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5` |
Note that Træfik *will not watch for key changes in the `/traefik_configurations` prefix*. It will only watch for changes to the `/traefik/alias` key.
Further, if the `/traefik/alias` key is set, any other configuration under the `/traefik/backends` or `/traefik/frontends` prefixes is ignored.
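As a sketch, an atomic switch against Consul can be driven with plain `curl` calls: write the whole candidate tree under a new prefix first, and flip the alias as the very last write (the endpoint is an assumption; only two keys of the new tree are shown):
```shell
# 1. Write the candidate configuration under a fresh prefix.
curl -X PUT -d 'http://172.17.0.3:80' \
  http://127.0.0.1:8500/v1/kv/traefik_configurations/2/backends/backend1/servers/server1/url
curl -X PUT -d '5' \
  http://127.0.0.1:8500/v1/kv/traefik_configurations/2/backends/backend1/servers/server1/weight
# 2. Activate the new tree with a single write to the alias key.
curl -X PUT -d '/traefik_configurations/2' http://127.0.0.1:8500/v1/kv/traefik/alias
```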
# Store configuration in Key-value store
Don't forget to [setup the connection between Træfik and Key-value store](/user-guide/kv-config/#launch-trfk).
The static Træfik configuration in a key-value store can be automatically created and updated, using the [`storeconfig` subcommand](/basics/#commands).
```bash
$ traefik storeconfig [flags] ...
```
This command is here only to automate the [process of uploading the configuration into the Key-value store](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store).
Træfik will not start but the [static configuration](/basics/#static-trfk-configuration) will be uploaded into the Key-value store.
If you configured ACME (Let's Encrypt), your registration account and your certificates will also be uploaded.
To upload your ACME certificates to the KV store, get your traefik TOML file and add the new `storage` option in the `acme` section:
```toml
[acme]
email = "test@traefik.io"
storage = "traefik/acme/account" # the key where to store your certificates in the KV store
storageFile = "acme.json" # your old certificates store
```
Call `traefik storeconfig` to upload your config in the KV store.
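For instance, with the Consul store used throughout this guide, the call might look like this (a sketch; the config file name and endpoint are assumptions):
```shell
traefik storeconfig --configFile=traefik.toml \
  --consul --consul.endpoint=127.0.0.1:8500
```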
Then remove the line `storageFile = "acme.json"` from your TOML config file.
That's it!
![](http://i.giphy.com/ujUdrdpX7Ok5W.gif)

@@ -1,96 +0,0 @@
# Marathon
This guide explains how to integrate Marathon and operate the cluster in a reliable way from Traefik's standpoint.
# Host detection
Marathon offers multiple ways to run (Docker-containerized) applications, the most popular ones being
- BRIDGE-networked containers with dynamic high ports exposed
- HOST-networked containers with host machine ports
- containers with dedicated IP addresses ([IP-per-task](https://mesosphere.github.io/marathon/docs/ip-per-task.html)).
Traefik tries to detect the configured mode and route traffic to the right IP addresses. It is possible to force using task hosts with the `forceTaskHostname` option.
Given the complexity of the subject, it is possible that the heuristic fails. Apart from filing an issue and waiting for the feature request / bug report to get addressed, one workaround for such situations is to customize the Marathon template file to the individual needs. (Note that this does _not_ require rebuilding Traefik; it only requires pointing the `filename` configuration parameter to a customized version of the `marathon.tmpl` file on Traefik startup.)
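A sketch of such an override in the TOML configuration, assuming the customized template has been saved as `my_marathon.tmpl` next to the Traefik configuration (both the file name and the endpoint are assumptions):
```toml
[marathon]
endpoint = "http://127.0.0.1:8080"
filename = "my_marathon.tmpl"
```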
# Port detection
Traefik also attempts to determine the right port (which is a [non-trivial matter in Marathon](https://mesosphere.github.io/marathon/docs/ports.html)). Following is the order by which Traefik tries to identify the port (the first one that yields a positive result will be used):
1. An arbitrary port specified through the `traefik.port` label (see the sketch after this list).
1. The task port (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
1. The port from the application's `portDefinitions` field (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
1. The port from the application's `ipAddressPerTask` field (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
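For the first option, a fragment of a (hypothetical) Marathon application definition pinning the port explicitly might look as follows; `traefik.portIndex` would go into the same `labels` map if you preferred to select a task port by index instead:
```json
{
  "id": "/whoami",
  "labels": {
    "traefik.port": "80"
  }
}
```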
# Achieving high availability
## Scenarios
There are three scenarios where the availability of a Marathon application could be impaired along with the risk of losing or failing requests:
- During the startup phase when Traefik already routes requests to the backend even though it has not completed its bootstrapping process yet.
- During the shutdown phase when Traefik still routes requests to the backend while the backend is already terminating.
- During a failure of the application when Traefik has not yet identified the backend as being erroneous.
The first two scenarios are common with every rolling upgrade of an application (i.e., a new version release or configuration update).
The following sub-sections describe how to resolve or mitigate each scenario.
### Startup
In general, it is possible to define [readiness checks](https://mesosphere.github.io/marathon/docs/readiness-checks.html) (available since Marathon version 1.1) per application and have Marathon take these into account during the startup phase. The idea is that each application provides an HTTP endpoint that Marathon queries periodically during an ongoing deployment in order to mark the associated readiness check result as successful if and only if the endpoint returns a response within the configured HTTP code range. As long as the check keeps failing, Marathon will not proceed with the deployment (within the configured upgrade strategy bounds).
Unfortunately, Traefik does not respect the result of the readiness check yet. Support is expected to land in a not-too-distant future release of Traefik, however, as tracked by [issue 1559](https://github.com/containous/traefik/issues/1559).
A current mitigation strategy is to enable [retries](http://docs.traefik.io/toml/#retry-configuration) and make sure that a sufficient number of healthy application tasks exist so that one retry will likely hit one of those. Apart from its probabilistic nature, the workaround comes at the price of increased latency.
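A minimal sketch of that mitigation in the Træfik TOML configuration (the attempt count is only an example):
```toml
# Retry a failed request against another backend server.
[retry]
attempts = 2
```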
### Shutdown
It is possible to install a [termination handler](https://mesosphere.github.io/marathon/docs/health-checks.html) (available since Marathon version 1.3) with each application whose responsibility it is to delay the shutdown process long enough until the backend has been taken out of load-balancing rotation with reasonable confidence (i.e., Traefik has received an update from the Marathon event bus, recomputes the available Marathon backends, and applies the new configuration). Specifically, each termination handler should install a signal handler listening for a SIGTERM signal and implement the following steps on signal reception:
1. Disable Keep-Alive HTTP connections.
1. Keep accepting HTTP requests for a certain period of time.
1. Stop accepting new connections.
1. Finish serving any in-flight requests.
1. Shut down.
Traefik already ignores Marathon tasks whose state does not match `TASK_RUNNING`; since terminating tasks transition into the `TASK_KILLING` and eventually `TASK_KILLED` state, there is nothing further that needs to be done on Traefik's end.
How long HTTP requests should continue to be accepted in step 2 depends on how long Traefik needs to receive and process the Marathon configuration update. Under regular operational conditions, it should be on the order of seconds, with 10 seconds possibly being a good default value.
Again, configuring Traefik to do retries (as discussed in the previous section) can serve as a decent workaround strategy. Paired with termination handlers, they would cover for those cases where either the termination sequence or Traefik cannot complete their part of the orchestration process in time.
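As an illustration, here is a minimal sketch of such a termination handler for a Go HTTP backend; the 10-second grace period, the port, and the shutdown timeout are assumptions, not values mandated by Traefik or Marathon:
```go
package main

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	done := make(chan struct{})

	go func() {
		sigs := make(chan os.Signal, 1)
		signal.Notify(sigs, syscall.SIGTERM)
		<-sigs

		// 1. Disable Keep-Alive so that clients re-dial and get re-balanced.
		srv.SetKeepAlivesEnabled(false)

		// 2. Keep accepting requests while Traefik processes the Marathon update.
		time.Sleep(10 * time.Second)

		// 3.-5. Stop accepting new connections, finish in-flight requests, shut down.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		srv.Shutdown(ctx)
		close(done)
	}()

	srv.ListenAndServe()
	<-done
}
```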
### Failure
A failing application always happens unexpectedly, and hence, it is very difficult or even impossible to rule out the adverse effects categorically. Failure reasons vary broadly, ranging from unacceptable slowness to a task crash or a network split.
There are two mitigation efforts:
1. Configure [Marathon health checks](https://mesosphere.github.io/marathon/docs/health-checks.html) on each application.
1. Configure Traefik health checks (possibly via the `traefik.backend.healthcheck.*` labels) and make sure they probe with proper frequency.
The Marathon health check makes sure that applications once deemed dysfunctional are rescheduled to different slaves. However, they might take a while to get triggered and for the follow-up processes to complete. For that reason, the Traefik health check provides an additional check that responds more rapidly and does not require a configuration reload to happen. Additionally, it protects from cases that the Marathon health check may not be able to cover, such as a network split.
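For the second point, a sketch of what those labels might look like on a Marathon application (the path and interval values are illustrative, not defaults):
```json
{
  "labels": {
    "traefik.backend.healthcheck.path": "/health",
    "traefik.backend.healthcheck.interval": "10s"
  }
}
```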
## (Non-)Alternatives
There are a few alternatives of varying quality that are frequently asked for. The remaining section is going to explore them along with a benefit/cost trade-off.
### Reusing Marathon health checks
It may seem obvious to reuse the Marathon health checks as a signal to Traefik whether an application should be taken into load-balancing rotation or not.
Apart from the increased latency a failing health check may have, a major problem with this is that Marathon does not persist the health check results. Consequently, if a master re-election occurs in the Marathon cluster, all health check results will revert to the _unknown_ state, effectively causing all applications inside the cluster to become unavailable and leading to a complete cluster failure. Re-elections do not only happen during regular maintenance work (often requiring rolling upgrades of the Marathon nodes) but also when the Marathon leader fails spontaneously. As such, there is no way to handle this situation deterministically.
Finally, Marathon health checks are not mandatory (the default is to use the task state as reported by Mesos), so requiring them for Traefik would raise the entry barrier for Marathon users.
Traefik used to use the health check results but moved away from it as [users reported the dramatic consequences](https://github.com/containous/traefik/issues/653).
### Draining
Another common approach is to let a proxy drain backends that are supposed to shut down. That is, once a backend is supposed to shut down, Traefik would stop forwarding requests.
On the plus side, this would not require any modifications to the application in question. However, implementing this fully within Traefik seems like a non-trivial undertaking. Additionally, the approach is less flexible compared to a custom termination handler since only the latter allows for the implementation of custom termination sequences that go beyond simple request draining (e.g., persisting a snapshot state to disk prior to terminating).
The feature is currently not implemented; a request for draining in general is at [issue 41](https://github.com/containous/traefik/issues/41).

@@ -1,305 +0,0 @@
# Docker Swarm (mode) cluster
This section explains how to create a multi-host docker cluster with
swarm mode using [docker-machine](https://docs.docker.com/machine) and
how to deploy Træfik on it.
The cluster consists of:
- 3 servers
- 1 manager
- 2 workers
- 1 [overlay](https://docs.docker.com/engine/userguide/networking/dockernetworks/#an-overlay-network) network
(multi-host networking)
## Prerequisites
1. You will need to install [docker-machine](https://docs.docker.com/machine/)
2. You will need the latest [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
## Cluster provisioning
First, let's create all the required nodes. It's a shorter version of
the [swarm tutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/).
```shell
docker-machine create -d virtualbox manager
docker-machine create -d virtualbox worker1
docker-machine create -d virtualbox worker2
```
Then, let's set up the cluster, in order:
1. initialize the cluster
2. get the token for the other hosts to join
3. on both workers, join the cluster with the token
```shell
docker-machine ssh manager "docker swarm init \
--listen-addr $(docker-machine ip manager) \
--advertise-addr $(docker-machine ip manager)"
export worker_token=$(docker-machine ssh manager "docker swarm \
join-token worker -q")
docker-machine ssh worker1 "docker swarm join \
--token=${worker_token} \
--listen-addr $(docker-machine ip worker1) \
--advertise-addr $(docker-machine ip worker1) \
$(docker-machine ip manager)"
docker-machine ssh worker2 "docker swarm join \
--token=${worker_token} \
--listen-addr $(docker-machine ip worker2) \
--advertise-addr $(docker-machine ip worker2) \
$(docker-machine ip manager)"
```
Let's validate the cluster is up and running.
```shell
docker-machine ssh manager docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
2a770ov9vixeadep674265u1n worker1 Ready Active
dbi3or4q8ii8elbws70g4hkdh * manager Ready Active Leader
esbhhy6vnqv90xomjaomdgy46 worker2 Ready Active
```
Finally, let's create a network for Træfik to use.
```shell
docker-machine ssh manager "docker network create --driver=overlay traefik-net"
```
## Deploy Træfik
Let's deploy Træfik as a docker service in our cluster. The only
requirement for Træfik to work with swarm mode is that it needs to run
on a manager node — we are going to use a
[constraint](https://docs.docker.com/engine/reference/commandline/service_create/#/specify-service-constraints-constraint) for
that.
```shell
docker-machine ssh manager "docker service create \
--name traefik \
--constraint=node.role==manager \
--publish 80:80 --publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik \
--docker \
--docker.swarmmode \
--docker.domain=traefik \
--docker.watch \
--web"
```
Let's explain this command:
- `--publish 80:80 --publish 8080:8080`: we publish port `80` and
`8080` on the cluster.
- `--constraint=node.role==manager`: we ask docker to schedule Træfik
on a manager node.
- `--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock`:
we bind mount the docker socket where Træfik is scheduled to be able
to speak to the daemon.
- `--network traefik-net`: we attach the Træfik service (and thus
the underlying container) to the `traefik-net` network.
- `--docker`: enable docker backend, and `--docker.swarmmode` to
enable the swarm mode on Træfik.
- `--web`: activate the webUI on port 8080
## Deploy your apps
We can now deploy our app on the cluster,
here [whoami](https://github.com/emilevauge/whoami), a simple web
server in Go. We start 2 services, on the `traefik-net` network.
```shell
docker-machine ssh manager "docker service create \
--name whoami0 \
--label traefik.port=80 \
--network traefik-net \
emilevauge/whoami"
docker-machine ssh manager "docker service create \
--name whoami1 \
--label traefik.port=80 \
--network traefik-net \
--label traefik.backend.loadbalancer.sticky=true \
emilevauge/whoami"
```
Note that we set whoami1 to use sticky sessions (`--label traefik.backend.loadbalancer.sticky=true`). We'll demonstrate that later.
If using `docker stack deploy`, there is [a specific way that the labels must be defined in the docker-compose file](https://github.com/containous/traefik/issues/994#issuecomment-269095109).
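The gist is that, with `docker stack deploy`, the labels have to be declared under the service's `deploy` section so that they end up on the service rather than on its containers. A hedged sketch of a compose file for the sticky `whoami1` service (the compose version and network setup are assumptions):
```yaml
version: "3"
services:
  whoami1:
    image: emilevauge/whoami
    networks:
      - traefik-net
    deploy:
      labels:
        traefik.port: "80"
        traefik.backend.loadbalancer.sticky: "true"
networks:
  traefik-net:
    external: true
```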
Check that everything is scheduled and started:
```shell
docker-machine ssh manager "docker service ls"
ID NAME REPLICAS IMAGE COMMAND
ab046gpaqtln whoami0 1/1 emilevauge/whoami
cgfg5ifzrpgm whoami1 1/1 emilevauge/whoami
dtpl249tfghc traefik 1/1 traefik --docker --docker.swarmmode --docker.domain=traefik --docker.watch --web
```
## Access to your apps through Træfik
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip manager)
Hostname: 8147a7746e7a
IP: 127.0.0.1
IP: ::1
IP: 10.0.9.3
IP: fe80::42:aff:fe00:903
IP: 172.18.0.3
IP: fe80::42:acff:fe12:3
GET / HTTP/1.1
Host: 10.0.9.3:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.99.1
X-Forwarded-Host: 10.0.9.3:80
X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
curl -H Host:whoami1.traefik http://$(docker-machine ip manager)
Hostname: ba2c21488299
IP: 127.0.0.1
IP: ::1
IP: 10.0.9.4
IP: fe80::42:aff:fe00:904
IP: 172.18.0.2
IP: fe80::42:acff:fe12:2
GET / HTTP/1.1
Host: 10.0.9.4:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.99.1
X-Forwarded-Host: 10.0.9.4:80
X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
```
Note that as Træfik is published, you can access it from any machine
and not only the manager.
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip worker1)
Hostname: 8147a7746e7a
IP: 127.0.0.1
IP: ::1
IP: 10.0.9.3
IP: fe80::42:aff:fe00:903
IP: 172.18.0.3
IP: fe80::42:acff:fe12:3
GET / HTTP/1.1
Host: 10.0.9.3:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.99.1
X-Forwarded-Host: 10.0.9.3:80
X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
curl -H Host:whoami1.traefik http://$(docker-machine ip worker2)
Hostname: ba2c21488299
IP: 127.0.0.1
IP: ::1
IP: 10.0.9.4
IP: fe80::42:aff:fe00:904
IP: 172.18.0.2
IP: fe80::42:acff:fe12:2
GET / HTTP/1.1
Host: 10.0.9.4:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.99.1
X-Forwarded-Host: 10.0.9.4:80
X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
```
## Scale both services
```shell
docker-machine ssh manager "docker service scale whoami0=5"
docker-machine ssh manager "docker service scale whoami1=5"
```
Check that we now have 5 replicas of each `whoami` service:
```shell
docker-machine ssh manager "docker service ls"
ID NAME REPLICAS IMAGE COMMAND
ab046gpaqtln whoami0 5/5 emilevauge/whoami
cgfg5ifzrpgm whoami1 5/5 emilevauge/whoami
dtpl249tfghc traefik 1/1 traefik --docker --docker.swarmmode --docker.domain=traefik --docker.watch --web
```
## Access to your whoami0 through Træfik multiple times.
Repeat the following command multiple times and note that the Hostname changes each time as Traefik load balances each request against the 5 tasks.
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip manager)
Hostname: 8147a7746e7a
IP: 127.0.0.1
IP: ::1
IP: 10.0.9.3
IP: fe80::42:aff:fe00:903
IP: 172.18.0.3
IP: fe80::42:acff:fe12:3
GET / HTTP/1.1
Host: 10.0.9.3:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.99.1
X-Forwarded-Host: 10.0.9.3:80
X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
```
Do the same against whoami1.
```shell
curl -H Host:whoami1.traefik http://$(docker-machine ip manager)
Hostname: ba2c21488299
IP: 127.0.0.1
IP: ::1
IP: 10.0.9.4
IP: fe80::42:aff:fe00:904
IP: 172.18.0.2
IP: fe80::42:acff:fe12:2
GET / HTTP/1.1
Host: 10.0.9.4:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.99.1
X-Forwarded-Host: 10.0.9.4:80
X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
```
Wait, I thought we added the sticky flag to whoami1? Traefik relies on a cookie to maintain stickiness so you'll need to test this with a browser.
First you need to add whoami1.traefik to your hosts file:
```shell
if [ -n "$(grep whoami1.traefik /etc/hosts)" ];
then
  echo "whoami1.traefik already exists (make sure the ip is current)";
else
  sudo -- sh -c -e "echo '$(docker-machine ip manager)\twhoami1.traefik' >> /etc/hosts";
fi
```
Now open your browser and go to http://whoami1.traefik/
You will now see that stickiness is maintained.
![](http://i.giphy.com/ujUdrdpX7Ok5W.gif)
@@ -1,7 +1,7 @@
# Swarm cluster
This section explains how to create a multi-host [swarm](https://docs.docker.com/swarm) cluster using [docker-machine](https://docs.docker.com/machine/) and how to deploy Træfik on it.
The cluster consists of:
This section explains how to create a multi-host [swarm](https://docs.docker.com/swarm) cluster using [docker-machine](https://docs.docker.com/machine/) and how to deploy Træfɪk on it.
The cluster will be made of:
- 2 servers
- 1 swarm master
@@ -10,24 +10,24 @@ The cluster consists of:
## Prerequisites
1. You need to install [docker-machine](https://docs.docker.com/machine/)
2. You need the latest [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
1. You will need to install [docker-machine](https://docs.docker.com/machine/)
2. You will need the latest [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
## Cluster provisioning
We first follow [this guide](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) to create the cluster.
We will first follow [this guide](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) to create the cluster.
### Create machine `mh-keystore`
This machine is the service registry of our cluster.
This machine will be the service registry of our cluster.
```shell
```sh
docker-machine create -d virtualbox mh-keystore
```
Then we install the service registry [Consul](https://consul.io) on this machine:
```shell
```sh
eval "$(docker-machine env mh-keystore)"
docker run -d \
-p "8500:8500" \
@@ -37,9 +37,9 @@ docker run -d \
### Create machine `mhs-demo0`
This machine is a swarm master and a swarm agent on it.
This machine will have a swarm master and a swarm agent on it.
```shell
```sh
docker-machine create -d virtualbox \
--swarm --swarm-master \
--swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
@@ -50,9 +50,9 @@ docker-machine create -d virtualbox \
### Create machine `mhs-demo1`
This machine have a swarm agent on it.
This machine will have a swarm agent on it.
```shell
```sh
docker-machine create -d virtualbox \
--swarm \
--swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
@@ -65,16 +65,16 @@ docker-machine create -d virtualbox \
Create the overlay network on the swarm master:
```shell
```sh
eval $(docker-machine env --swarm mhs-demo0)
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
```
## Deploy Træfik
## Deploy Træfɪk
Deploy Træfik:
Deploy Træfɪk:
```shell
```sh
docker $(docker-machine config mhs-demo0) run \
-d \
-p 80:80 -p 8080:8080 \
@@ -84,14 +84,14 @@ docker $(docker-machine config mhs-demo0) run \
-l DEBUG \
-c /dev/null \
--docker \
--docker.domain=traefik \
--docker.endpoint=tcp://$(docker-machine ip mhs-demo0):3376 \
--docker.domain traefik \
--docker.endpoint tcp://$(docker-machine ip mhs-demo0):3376 \
--docker.tls \
--docker.tls.ca=/ssl/ca.pem \
--docker.tls.cert=/ssl/server.pem \
--docker.tls.key=/ssl/server-key.pem \
--docker.tls.ca /ssl/ca.pem \
--docker.tls.cert /ssl/server.pem \
--docker.tls.key /ssl/server-key.pem \
--docker.tls.insecureSkipVerify \
--docker.watch \
--docker.watch \
--web
```
@@ -102,7 +102,7 @@ Let's explain this command:
- `-v /var/lib/boot2docker/:/ssl`: mount the ssl keys generated by docker-machine
- `-c /dev/null`: empty config file
- `--docker`: enable docker backend
- `--docker.endpoint=tcp://172.18.0.1:3376`: connect to the swarm master using the docker_gwbridge network
- `--docker.endpoint tcp://172.18.0.1:3376`: connect to the swarm master using the docker_gwbridge network
- `--docker.tls`: enable TLS using the docker-machine keys
- `--web`: activate the webUI on port 8080
@@ -110,7 +110,7 @@ Let's explain this command:
We can now deploy our app on the cluster, here [whoami](https://github.com/emilevauge/whoami), a simple web server in GO, on the network `my-net`:
```shell
```sh
eval $(docker-machine env --swarm mhs-demo0)
docker run -d --name=whoami0 --net=my-net --env="constraint:node==mhs-demo0" emilevauge/whoami
docker run -d --name=whoami1 --net=my-net --env="constraint:node==mhs-demo1" emilevauge/whoami
@@ -118,7 +118,7 @@ docker run -d --name=whoami1 --net=my-net --env="constraint:node==mhs-demo1" emi
Check that everything is started:
```shell
```sh
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba2c21488299 emilevauge/whoami "/whoamI" 8 seconds ago Up 9 seconds 80/tcp mhs-demo1/whoami1
@@ -126,9 +126,9 @@ ba2c21488299 emilevauge/whoami "/whoamI" 8 seconds ago
8fbc39271b4c traefik "/traefik -l DEBUG -c" 36 seconds ago Up 37 seconds 192.168.99.101:80->80/tcp, 192.168.99.101:8080->8080/tcp mhs-demo0/serene_bhabha
```
## Access to your apps through Træfik
## Access to your apps through Træfɪk
```shell
```sh
curl -H Host:whoami0.traefik http://$(docker-machine ip mhs-demo0)
Hostname: 8147a7746e7a
IP: 127.0.0.1
@@ -167,3 +167,4 @@ X-Forwarded-Server: 8fbc39271b4c
```
![](http://i.giphy.com/ujUdrdpX7Ok5W.gif)

@@ -1,11 +1,12 @@
kubelet:
image: gcr.io/google_containers/hyperkube-amd64:v1.5.2
image: gcr.io/google_containers/hyperkube-amd64:v1.2.2
privileged: true
pid: host
net : host
volumes:
- /sys:/sys:rw
- /:/rootfs:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:rw
- /var/lib/kubelet/:/var/lib/kubelet:rw,shared
- /var/lib/kubelet/:/var/lib/kubelet:rw
- /var/run:/var/run:rw
command: ['/hyperkube', 'kubelet', '--hostname-override=127.0.0.1', '--api-servers=http://localhost:8080', '--config=/etc/kubernetes/manifests', '--allow-privileged=true', '--v=2', '--cluster-dns=10.0.0.10', '--cluster-domain=cluster.local']
command: ['/hyperkube', 'kubelet', '--containerized', '--hostname-override=127.0.0.1', '--address=0.0.0.0', '--api-servers=http://localhost:8080', '--config=/etc/kubernetes/manifests', '--allow-privileged=true', '--v=2']

@@ -1,59 +1,43 @@
zk:
image: bobrik/zookeeper
net: host
environment:
ZK_CONFIG: tickTime=2000,initLimit=10,syncLimit=5,maxClientCnxns=128,forceSync=no,clientPort=2181
ZK_ID: 1
version: '2'
services:
zookeeper:
image: netflixoss/exhibitor:1.5.2
hostname: zookeeper
ports:
- "2181:2181"
mesos-master:
image: mesosphere/marathon:v1.2.0-RC6
hostname: mesos-master
entrypoint: [ "mesos-master" ]
ports:
- "5050:5050"
links:
- zookeeper
environment:
- MESOS_CLUSTER=local
- MESOS_HOSTNAME=mesos-master.docker
- MESOS_LOG_DIR=/var/log
- MESOS_WORK_DIR=/var/lib/mesos
- MESOS_QUORUM=1
- MESOS_ZK=zk://zookeeper:2181/mesos
mesos-slave:
image: mesosphere/mesos-slave-dind:0.2.4_mesos-0.27.2_docker-1.8.2_ubuntu-14.04.4
entrypoint:
- mesos-slave
privileged: true
hostname: mesos-slave
ports:
- "5051:5051"
links:
- zookeeper
- mesos-master
environment:
- MESOS_CONTAINERIZERS=docker,mesos
- MESOS_ISOLATOR=cgroups/cpu,cgroups/mem
- MESOS_LOG_DIR=/var/log
- MESOS_MASTER=zk://zookeeper:2181/mesos
- MESOS_PORT=5051
- MESOS_WORK_DIR=/var/lib/mesos
- MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
- MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD=90secs
- MESOS_DOCKER_STOP_TIMEOUT=60secs
- MESOS_RESOURCES=cpus:2;mem:2048;disk:20480;ports(*):[12000-12999]
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
marathon:
image: mesosphere/marathon:v1.2.0-RC6
ports:
- "8080:8080"
links:
- zookeeper
- mesos-master
extra_hosts:
- "mesos-slave:172.17.0.1"
environment:
- MARATHON_ZK=zk://zookeeper:2181/marathon
- MARATHON_MASTER=zk://zookeeper:2181/mesos
master:
image: mesosphere/mesos-master:0.28.1-2.0.20.ubuntu1404
net: host
environment:
MESOS_ZK: zk://127.0.0.1:2181/mesos
MESOS_HOSTNAME: 127.0.0.1
MESOS_IP: 127.0.0.1
MESOS_QUORUM: 1
MESOS_CLUSTER: docker-compose
MESOS_WORK_DIR: /var/lib/mesos
slave:
image: mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
net: host
pid: host
privileged: true
environment:
MESOS_MASTER: zk://127.0.0.1:2181/mesos
MESOS_HOSTNAME: 127.0.0.1
MESOS_IP: 127.0.0.1
MESOS_CONTAINERIZERS: docker,mesos
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup
- /usr/bin/docker:/usr/bin/docker:ro
- /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1:ro
- /var/run/docker.sock:/var/run/docker.sock
- /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
marathon:
image: mesosphere/marathon:v1.1.1
net: host
environment:
MARATHON_MASTER: zk://127.0.0.1:2181/mesos
MARATHON_ZK: zk://127.0.0.1:2181/marathon
MARATHON_HOSTNAME: 127.0.0.1
command: --event_subscriber http_callback

@@ -1,7 +0,0 @@
traefik:
image: traefik
command: --web --rancher --rancher.domain=rancher.localhost --rancher.endpoint=http://example.com --rancher.accesskey=XXXXXXX --rancher.secretkey=YYYYYY --logLevel=DEBUG
ports:
- "80:80"
- "443:443"
- "8080:8080"

examples/k8s.ingress.yaml
@@ -0,0 +1,111 @@
# 3 Services for the 3 endpoints of the Ingress
apiVersion: v1
kind: Service
metadata:
name: service1
labels:
app: whoami
spec:
type: NodePort
ports:
- port: 80
nodePort: 30283
targetPort: 80
protocol: TCP
name: https
selector:
app: whoami
---
apiVersion: v1
kind: Service
metadata:
name: service2
labels:
app: whoami
spec:
type: NodePort
ports:
- port: 80
nodePort: 30284
targetPort: 80
protocol: TCP
name: http
selector:
app: whoami
---
apiVersion: v1
kind: Service
metadata:
name: service3
labels:
app: whoami
spec:
type: NodePort
ports:
- port: 80
nodePort: 30285
targetPort: 80
protocol: TCP
name: http
selector:
app: whoami
---
# A single RC matching all Services
apiVersion: v1
kind: ReplicationController
metadata:
name: whoami
spec:
replicas: 1
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: emilevauge/whoami
ports:
- containerPort: 80
---
# An Ingress with 2 hosts and 3 endpoints
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: whoami-ingress
spec:
rules:
- host: foo.localhost
http:
paths:
- path: /bar
backend:
serviceName: service1
servicePort: 80
- host: bar.localhost
http:
paths:
- backend:
serviceName: service2
servicePort: 80
- backend:
serviceName: service3
servicePort: 80
---
# Another Ingress with PathPrefixStrip
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: whoami-ingress-stripped
annotations:
traefik.frontend.rule.type: "PathPrefixStrip"
spec:
rules:
- host: foo.localhost
http:
paths:
- path: /prefixWillBeStripped
backend:
serviceName: service1
servicePort: 80

@@ -1,6 +1,10 @@
#!/bin/bash
kubectl create -f - << EOF
kind: Namespace
apiVersion: v1
metadata:
name: kube-system
labels:
name: kube-system
name: kube-system
EOF

examples/k8s.rc.yaml
@@ -0,0 +1,31 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: traefik-ingress-controller
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
imagePullPolicy: Always
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
- containerPort: 8080
args:
- --web
- --kubernetes
- --logLevel=DEBUG

@@ -1,99 +0,0 @@
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: stilton
labels:
app: cheese
cheese: stilton
spec:
replicas: 2
selector:
matchLabels:
app: cheese
task: stilton
template:
metadata:
labels:
app: cheese
task: stilton
version: v0.0.1
spec:
containers:
- name: cheese
image: errm/cheese:stilton
resources:
requests:
cpu: 100m
memory: 50Mi
limits:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: cheddar
labels:
app: cheese
cheese: cheddar
spec:
replicas: 2
selector:
matchLabels:
app: cheese
task: cheddar
template:
metadata:
labels:
app: cheese
task: cheddar
version: v0.0.1
spec:
containers:
- name: cheese
image: errm/cheese:cheddar
resources:
requests:
cpu: 100m
memory: 50Mi
limits:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: wensleydale
labels:
app: cheese
cheese: wensleydale
spec:
replicas: 2
selector:
matchLabels:
app: cheese
task: wensleydale
template:
metadata:
labels:
app: cheese
task: wensleydale
version: v0.0.1
spec:
containers:
- name: cheese
image: errm/cheese:wensleydale
resources:
requests:
cpu: 100m
memory: 50Mi
limits:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 80

@@ -1,27 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cheese
spec:
rules:
- host: stilton.local
http:
paths:
- path: /
backend:
serviceName: stilton
servicePort: http
- host: cheddar.local
http:
paths:
- path: /
backend:
serviceName: cheddar
servicePort: http
- host: wensleydale.local
http:
paths:
- path: /
backend:
serviceName: wensleydale
servicePort: http

@@ -1,39 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: stilton
spec:
ports:
- name: http
targetPort: 80
port: 80
selector:
app: cheese
task: stilton
---
apiVersion: v1
kind: Service
metadata:
name: cheddar
spec:
ports:
- name: http
targetPort: 80
port: 80
selector:
app: cheese
task: cheddar
---
apiVersion: v1
kind: Service
metadata:
name: wensleydale
spec:
ports:
- name: http
targetPort: 80
port: 80
selector:
app: cheese
task: wensleydale

@@ -1,23 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cheeses
annotations:
traefik.frontend.rule.type: PathPrefixStrip
spec:
rules:
- host: cheeses.local
http:
paths:
- path: /stilton
backend:
serviceName: stilton
servicePort: http
- path: /cheddar
backend:
serviceName: cheddar
servicePort: http
- path: /wensleydale
backend:
serviceName: wensleydale
servicePort: http

@@ -1,37 +0,0 @@
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- pods
- services
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

@@ -1,47 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
hostNetwork: true
containers:
- image: traefik
name: traefik-ingress-lb
resources:
limits:
cpu: 200m
memory: 30Mi
requests:
cpu: 100m
memory: 20Mi
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8081
securityContext:
privileged: true
args:
- -d
- --web
- --web.address=:8081
- --kubernetes

@@ -1,28 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- name: web
port: 80
targetPort: 8081
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
rules:
- host: traefik-ui.local
http:
paths:
- path: /
backend:
serviceName: traefik-web-ui
servicePort: web

@@ -6,4 +6,7 @@ Copyright
//go:generate mkdir -p static
//go:generate go-bindata -pkg autogen -o autogen/gen.go ./static/... ./templates/...
//go:generate mkdir -p vendor/github.com/docker/docker/autogen/dockerversion
//go:generate cp script/dockerversion vendor/github.com/docker/docker/autogen/dockerversion/dockerversion.go
package main

glide.lock
@@ -1,176 +1,143 @@
hash: bfc5801ed56be5f703a0924d8832dcccc42bf02f9e2b035ef77eab62c0cb4884
updated: 2017-06-29T16:47:14.848940186+02:00
hash: c7c28fa3f095cd3e31f8531dd5badeb196256965f003f5cbadd0f509960aa647
updated: 2016-08-01T17:16:21.884990443+02:00
imports:
- name: cloud.google.com/go
version: 2e6a95edb1071d750f6d7db777bf66cd2997af6c
subpackages:
- compute/metadata
- internal
- name: github.com/abbot/go-http-auth
version: d45c47bedec736d172957bd394786b76626fa8ac
- name: github.com/ArthurHlt/go-eureka-client
version: 9d0a49cbd39aa3634ae1977e9f519a262b10adaf
subpackages:
- eureka
- name: github.com/ArthurHlt/gominlog
version: 72eebf980f467d3ab3a8b4ddf660f664911ce519
- name: github.com/aws/aws-sdk-go
version: 3f8f870ec9939e32b3372abf74d24e468bcd285d
subpackages:
- aws
- aws/awserr
- aws/awsutil
- aws/client
- aws/client/metadata
- aws/corehandlers
- aws/credentials
- aws/credentials/ec2rolecreds
- aws/credentials/endpointcreds
- aws/credentials/stscreds
- aws/defaults
- aws/ec2metadata
- aws/endpoints
- aws/request
- aws/session
- aws/signer/v4
- private/protocol
- private/protocol/ec2query
- private/protocol/json/jsonutil
- private/protocol/jsonrpc
- private/protocol/query
- private/protocol/query/queryutil
- private/protocol/rest
- private/protocol/restxml
- private/protocol/xml/xmlutil
- private/waiter
- service/dynamodb
- service/dynamodb/dynamodbattribute
- service/dynamodb/dynamodbiface
- service/dynamodbattribute
- service/ec2
- service/ecs
- service/route53
- service/sts
- name: github.com/Azure/azure-sdk-for-go
version: 088007b3b08cc02b27f2eadfdcd870958460ce7e
subpackages:
- arm/dns
- name: github.com/Azure/go-autorest
version: a2fdd780c9a50455cecd249b00bdc3eb73a78e31
subpackages:
- autorest
- autorest/azure
- autorest/date
- autorest/to
- name: github.com/beorn7/perks
version: 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
subpackages:
- quantile
- name: github.com/blang/semver
version: 31b736133b98f26d5e078ec9eb591666edfd091f
- name: github.com/boltdb/bolt
version: e9cf4fae01b5a8ff89d0ec6b32f0d9c9f79aefdd
version: 5cc10bbbc5c141029940133bb33c9e969512a698
- name: github.com/BurntSushi/toml
version: b26d9c308763d68093482582cea63d69be07a0f0
version: 99064174e013895bbd9b025c31100bd1d9b590ca
- name: github.com/BurntSushi/ty
version: 6add9cd6ad42d389d6ead1dde60b4ad71e46fd74
subpackages:
- fun
- name: github.com/cenk/backoff
version: 5d150e7eec023ce7a124856b37c68e54b4050ac7
- name: github.com/cenkalti/backoff
version: cdf48bbc1eb78d1349cbda326a4a037f7ba565c6
- name: github.com/codahale/hdrhistogram
version: 9208b142303c12d8899bae836fd524ac9338b4fd
version: f8ad88b59a584afeee9d334eff879b104439117b
- name: github.com/codegangsta/cli
version: bf4a526f48af7badd25d2cb02d587e1b01be3b50
version: 1efa31f08b9333f1bd4882d61f9d668a70cd902e
- name: github.com/codegangsta/negroni
version: c0db5feaa33826cd5117930c8f4ee5c0f565eec6
version: dc6b9d037e8dab60cbfc09c61d6932537829be8b
- name: github.com/containous/flaeg
version: b5d2dc5878df07c2d74413348186982e7b865871
version: b98687da5c323650f4513fda6b6203fcbdec9313
- name: github.com/containous/mux
version: a819b77bba13f0c0cbe36e437bc2e948411b3996
- name: github.com/containous/staert
version: 1e26a71803e428fd933f5f9c8e50a26878f53147
version: e2aa88e235a02dd52aa1d5d9de75f9d9139d1602
- name: github.com/coreos/etcd
version: c400d05d0aa73e21e431c16145e558d624098018
version: 1c9e0a0e33051fed6c05c141e6fcbfe5c7f2a899
subpackages:
- Godeps/_workspace/src/github.com/coreos/go-systemd/journal
- Godeps/_workspace/src/github.com/coreos/pkg/capnslog
- Godeps/_workspace/src/github.com/ugorji/go/codec
- Godeps/_workspace/src/golang.org/x/net/context
- client
- pkg/fileutil
- pkg/pathutil
- pkg/types
- version
- name: github.com/coreos/go-oidc
version: 5644a2f50e2d2d5ba0b474bc5bc55fea1925936d
subpackages:
- http
- jose
- key
- oauth2
- oidc
- name: github.com/coreos/go-systemd
version: 48702e0da86bd25e76cfef347e2adeb434a0d0a6
subpackages:
- daemon
- name: github.com/coreos/pkg
version: fa29b1d70f0beaddd4c7021607cc3c3be8ce94b8
subpackages:
- health
- httputil
- timeutil
- name: github.com/davecgh/go-spew
version: 04cdfd42973bb9c8589fd6a731800cf222fde1a9
version: 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
subpackages:
- spew
- name: github.com/decker502/dnspod-go
version: 68650ee11e182e30773781d391c66a0c80ccf9f2
- name: github.com/dgrijalva/jwt-go
version: d2709f9f1f31ebcda9651b03077758c1f3a0018c
- name: github.com/dnsimple/dnsimple-go
version: 5a5b427618a76f9eed5ede0f3e6306fbd9311d2e
subpackages:
- dnsimple
- name: github.com/docker/distribution
version: 325b0804fef3a66309d962357aac3c2ce3f4d329
version: 857d0f15c0a4d8037175642e0ca3660829551cb5
subpackages:
- digest
- reference
- digest
- registry/api/errcode
- registry/client/auth
- registry/client/transport
- registry/client
- context
- registry/api/v2
- registry/storage/cache
- registry/storage/cache/memory
- uuid
- name: github.com/docker/docker
version: 49bf474f9ed7ce7143a59d1964ff7b7fd9b52178
version: 9837ec4da53f15f9120d53a6e1517491ba8b0261
subpackages:
- namesgenerator
- pkg/namesgenerator
- pkg/random
- cliconfig
- cliconfig/configfile
- pkg/jsonmessage
- pkg/promise
- pkg/stdcopy
- pkg/term
- reference
- registry
- runconfig/opts
- pkg/homedir
- pkg/jsonlog
- pkg/system
- pkg/term/windows
- image
- image/v1
- pkg/ioutils
- opts
- pkg/httputils
- pkg/mflag
- pkg/stringid
- pkg/tarsum
- pkg/mount
- pkg/signal
- pkg/urlutil
- builder
- builder/dockerignore
- pkg/archive
- pkg/fileutils
- pkg/progress
- pkg/streamformatter
- layer
- pkg/longpath
- api/types/backend
- pkg/chrootarchive
- pkg/gitutils
- pkg/symlink
- pkg/idtools
- pkg/pools
- daemon/graphdriver
- pkg/reexec
- pkg/plugins
- pkg/plugins/transport
- name: github.com/docker/engine-api
version: 3d1601b9d2436a70b0dfc045a23f6503d19195df
version: 3d3d0b6c9d2651aac27f416a6da0224c1875b3eb
subpackages:
- client
- client/transport
- client/transport/cancellable
- types
- types/blkiodev
- types/container
- types/events
- types/filters
- types/container
- types/network
- client/transport
- client/transport/cancellable
- types/reference
- types/registry
- types/strslice
- types/swarm
- types/time
- types/versions
- types/blkiodev
- types/strslice
- name: github.com/docker/go-connections
version: 990a1a1a70b0da4c4cb70e117971a4f0babfbf1a
subpackages:
- nat
- sockets
- tlsconfig
- nat
- name: github.com/docker/go-units
version: 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
- name: github.com/docker/leadership
version: 0a913e2d71a12fd14a028452435cb71ac8d82cb6
version: f2d77a61e3c169b43402a0a1e84f06daf29b8190
- name: github.com/docker/libcompose
version: 8ee7bcc364f7b8194581a3c6bd9fa019467c7873
subpackages:
- docker
- project
- project/events
- project/options
- config
- docker/builder
- docker/client
- labels
- logger
- lookup
- utils
- yaml
- version
- name: github.com/docker/libkv
version: 1d8431073ae03cdaedb198a89722f3aab6d418ef
version: 35d3e2084c650109e7bcc7282655b1bc8ba924ff
subpackages:
- store
- store/boltdb
@@ -178,498 +145,164 @@ imports:
- store/etcd
- store/zookeeper
- name: github.com/donovanhide/eventsource
version: 441a03aa37b3329bbb79f43de81914ea18724718
- name: github.com/eapache/channels
version: 47238d5aae8c0fefd518ef2bee46290909cf8263
- name: github.com/eapache/queue
version: 44cc805cf13205b55f69e14bcb69867d1ae92f98
- name: github.com/edeckers/auroradnsclient
version: 8b777c170cfd377aa16bb4368f093017dddef3f9
subpackages:
- records
- requests
- requests/errors
- tokens
- zones
version: fd1de70867126402be23c306e1ce32828455d85b
- name: github.com/elazarl/go-bindata-assetfs
version: 30f82fa23fd844bd5bb1e5f216db87fd77b5eb43
- name: github.com/emicklei/go-restful
version: 892402ba11a2e2fd5e1295dd633481f27365f14d
subpackages:
- log
- swagger
- name: github.com/fatih/color
version: 9131ab34cf20d2f6d83fdc67168a5430d1c7dc23
version: 57eb5e1fc594ad4b0b1dbea7b286d299e0cb43c2
- name: github.com/gambol99/go-marathon
version: dd6cbd4c2d71294a19fb89158f2a00d427f174ab
- name: github.com/ghodss/yaml
version: 73d445a93680fa1a78ae23a5839bad48f32ba1ee
- name: github.com/go-ini/ini
version: e7fea39b01aea8d5671f6858f0532f56e8bff3a5
- name: github.com/go-kit/kit
version: f66b0e13579bfc5a48b9e2a94b1209c107ea1f41
subpackages:
- metrics
- metrics/internal/lv
- metrics/prometheus
- name: github.com/go-openapi/jsonpointer
version: 46af16f9f7b149af66e5d1bd010e3574dc06de98
- name: github.com/go-openapi/jsonreference
version: 13c6e3589ad90f49bd3e3bbe2c2cb3d7a4142272
- name: github.com/go-openapi/spec
version: 6aced65f8501fe1217321abf0749d354824ba2ff
- name: github.com/go-openapi/swag
version: 1d0bd113de87027671077d3c71eb3ac5d7dbba72
- name: github.com/gogo/protobuf
version: 909568be09de550ed094403c2bf8a261b5bb730a
subpackages:
- proto
- sortkeys
- name: github.com/golang/glog
version: fca8c8854093a154ff1eb580aae10276ad6b1b5f
- name: github.com/golang/protobuf
version: 2bba0603135d7d7f5cb73b2125beeda19c09f4ef
subpackages:
- proto
- name: github.com/google/go-github
version: 6896997c7c9fe603fb9d2e8e92303bb18481e60a
subpackages:
- github
version: a558128c87724cd7430060ef5aedf39f83937f55
- name: github.com/go-check/check
version: 4f90aeace3a26ad7021961c297b22c42160c7b25
- name: github.com/google/go-querystring
version: 53e6ce116135b80d037921a7fdd5138cf32d7a8a
version: 9235644dd9e52eeae6fa48efd539fdc351a0af53
subpackages:
- query
- name: github.com/google/gofuzz
version: bbcb9da2d746f8bdbd6a936686a0a6067ada0ec5
- name: github.com/googleapis/gax-go
version: 9af46dd5a1713e8b5cd71106287eba3cefdde50b
- name: github.com/gorilla/context
version: 08b5f424b9271eedf6f9f0ce86cb9396ed337a42
- name: github.com/gorilla/websocket
version: a91eba7f97777409bc2c443f5534d41dd20c5720
version: aed02d124ae4a0e94fea4541c8effd05bf0c8296
- name: github.com/hashicorp/consul
version: 3f92cc70e8163df866873c16c6d89889b5c95fc4
version: 8a8271fd81cdaa1bbc20e4ced86531b90c7eaf79
subpackages:
- api
- name: github.com/hashicorp/go-cleanhttp
version: 3573b8b52aa7b37b9358d966a898feb387f62437
- name: github.com/hashicorp/go-version
version: 03c5bf6be031b6dd45afec16b1cf94fc8938bc77
version: 875fb671b3ddc66f8e2f0acc33829c8cb989a38d
- name: github.com/hashicorp/serf
version: 19f2c401e122352c047a84d6584dd51e2fb8fcc4
version: 6c4672d66fc6312ddde18399262943e21175d831
subpackages:
- coordinate
- name: github.com/JamesClonk/vultr
version: 0f156dd232bc4ebf8a32ba83fec57c0e4c9db69f
- serf
- name: github.com/libkermit/docker
version: 3b5eb2973efff7af33cfb65141deaf4ed25c6d02
subpackages:
- lib
- name: github.com/jmespath/go-jmespath
version: bd40a432e4c76585ef6b72d3fd96fb9b6dc7b68d
- name: github.com/jonboulle/clockwork
version: 72f9bd7c4e0c2a40055ab3d0f09654f730cce982
- name: github.com/juju/ratelimit
version: 77ed1c8a01217656d2080ad51981f6e99adaa177
- compose
- name: github.com/libkermit/docker-check
version: bb75a86b169c6c5d22c0ee98278124036f272d7b
subpackages:
- compose
- name: github.com/mailgun/manners
version: fada45142db3f93097ca917da107aa3fad0ffcb5
- name: github.com/mailgun/timetools
version: fd192d755b00c968d312d23f521eb0cdc6f66bd0
- name: github.com/mailru/easyjson
version: d5b7844b561a7bc640052f1b935f7b800330d7e0
subpackages:
- buffer
- jlexer
- jwriter
- name: github.com/mattn/go-colorable
version: 5411d3eea5978e6cdc258b30de592b60df6aba96
repo: https://github.com/mattn/go-colorable
- name: github.com/mattn/go-isatty
version: 57fdcb988a5c543893cc61bce354a6e24ab70022
repo: https://github.com/mattn/go-isatty
- name: github.com/mattn/go-shellwords
version: 02e3cf038dcea8290e44424da473dd12be796a8a
- name: github.com/matttproud/golang_protobuf_extensions
version: c12348ce28de40eed0136aa2b644d0ee0650e56c
subpackages:
- pbutil
- name: github.com/mesos/mesos-go
version: 068d5470506e3780189fe607af40892814197c5e
subpackages:
- detector
- detector/zoo
- mesos
- mesosproto
- mesosutil
- upid
- name: github.com/mesosphere/mesos-dns
version: b47dc4c19f215e98da687b15b4c64e70f629bea5
repo: https://github.com/containous/mesos-dns.git
vcs: git
subpackages:
- detect
- errorutil
- logging
- models
- records
- records/labels
- records/state
- util
version: 525bedee691b5a8df547cb5cf9f86b7fb1883e24
- name: github.com/Microsoft/go-winio
version: fff283ad5116362ca252298cfc9b95828956d85d
version: ce2922f643c8fd76b46cadc7f404a06282678b34
- name: github.com/miekg/dns
version: 8060d9f51305bbe024b99679454e62f552cd0b0b
- name: github.com/mitchellh/mapstructure
version: 53818660ed4955e899c0bcafa97299a388bd7c8e
- name: github.com/mvdan/xurls
version: db96455566f05ffe42bd6ac671f05eeb1152b45d
- name: github.com/NYTimes/gziphandler
version: 316adfc72ed3b0157975917adf62ba2dc31842ce
repo: https://github.com/containous/gziphandler.git
vcs: git
version: 5d001d020961ae1c184f9f8152fdc73810481677
- name: github.com/moul/http2curl
version: b1479103caacaa39319f75e7f57fc545287fca0d
- name: github.com/ogier/pflag
version: 45c278ab3607870051a2ea9040bb85fcb8557481
- name: github.com/opencontainers/runc
version: 50401b5b4c2e01e4f1372b73a021742deeaf4e2d
version: bd1d3ac0480c5d3babac10dc32cff2886563219c
subpackages:
- libcontainer/user
- name: github.com/ovh/go-ovh
version: d2207178e10e4527e8f222fd8707982df8c3af17
subpackages:
- ovh
- name: github.com/pborman/uuid
version: ca53cad383cad2479bbba7f7a1a05797ec1386e4
- name: github.com/pkg/errors
version: ff09b135c25aae272398c51a07235b90a75aa4f0
- name: github.com/parnurzeal/gorequest
version: 045012d33ef41ea146c1b675df9296d0dc1a212d
- name: github.com/pmezard/go-difflib
version: d8ed2627bdf02c080bf22230dbb337003b7aba2d
subpackages:
- difflib
- name: github.com/prometheus/client_golang
version: 08fd2e12372a66e68e30523c7642e0cbc3e4fbde
subpackages:
- prometheus
- prometheus/promhttp
- name: github.com/prometheus/client_model
version: 6f3806018612930941127f2a7c6c453ba2c527d2
subpackages:
- go
- name: github.com/prometheus/common
version: 49fee292b27bfff7f354ee0f64e1bc4850462edf
subpackages:
- expfmt
- internal/bitbucket.org/ww/goautoneg
- model
- name: github.com/prometheus/procfs
version: a1dba9ce8baed984a2495b658c82687f8157b98f
subpackages:
- xfs
- name: github.com/PuerkitoBio/purell
version: 8a290539e2e8629dbc4e6bad948158f790ec31f4
- name: github.com/PuerkitoBio/urlesc
version: 5bd2802263f21d8788851d5305584c82a5c75d7e
- name: github.com/pyr/egoscale
version: 987e683a7552f34ee586217d1cc8507d52e80ab9
subpackages:
- src/egoscale
- name: github.com/rancher/go-rancher
version: 5b8f6cc26b355ba03d7611fce3844155b7baf05b
subpackages:
- client
- name: github.com/ryanuber/go-glob
version: 256dc444b735e061061cf46c809487313d5b0065
version: 572520ed46dbddaed19ea3d9541bdd0494163693
- name: github.com/samuel/go-zookeeper
version: 1d7be4effb13d2d908342d349d71a284a7542693
version: e64db453f3512cade908163702045e0f31137843
subpackages:
- zk
- name: github.com/satori/go.uuid
version: 879c5887cd475cd7864858769793b2ceb0d44feb
- name: github.com/Sirupsen/logrus
version: 10f801ebc38b33738c9d17d50860f484a0988ff5
- name: github.com/spf13/pflag
version: 5ccb023bc27df288a957c5e994cd44fd19619465
version: a283a10442df8dc09befd873fab202bf8a253d6a
- name: github.com/streamrail/concurrent-map
version: 8bf1e9bacbf65b10c81d0f4314cf2b1ebef728b5
version: 65a174a3a4188c0b7099acbc6cfa0c53628d3287
- name: github.com/stretchr/objx
version: cbeaeb16a013161a98496fad62933b1d21786672
- name: github.com/stretchr/testify
version: 4d4bfba8f1d1027c4fdbe371823030df51419987
version: d77da356e56a7428ad25149ca77381849a6a5232
subpackages:
- assert
- mock
- require
- assert
- name: github.com/thoas/stats
version: 152b5d051953fdb6e45f14b6826962aadc032324
- name: github.com/timewasted/linode
version: 37e84520dcf74488f67654f9c775b9752c232dc1
subpackages:
- dns
- name: github.com/tv42/zbase32
version: 03389da7e0bf9844767f82690f4d68fc097a1306
- name: github.com/ugorji/go
version: ea9cd21fa0bc41ee4bdd50ac7ed8cbc7ea2ed960
version: b94837a2404ab90efe9289e77a70694c355739cb
subpackages:
- codec
- name: github.com/unrolled/render
version: 50716a0a853771bb36bfce61a45cdefdb98c2e6e
version: 198ad4d8b8a4612176b804ca10555b222a086b40
- name: github.com/vdemeester/docker-events
version: be74d4929ec1ad118df54349fda4b0cba60f849b
version: 20e6d2db238723e68197a9e3c6c34c99a9893a9c
- name: github.com/vdemeester/shakers
version: 24d7f1d6a71aa5d9cbe7390e4afb66b7eef9e1b3
- name: github.com/vulcand/oxy
version: 49f1894c20d972f5c73ff44b859f87deb83f0076
version: 4298f24d572dc554eb984f2ffdf6bdd54d4bd613
repo: https://github.com/containous/oxy.git
vcs: git
subpackages:
- cbreaker
- connlimit
- forward
- memmetrics
- roundrobin
- stream
- utils
- memmetrics
- name: github.com/vulcand/predicate
version: 19b9dde14240d94c804ae5736ad0e1de10bf8fe6
- name: github.com/vulcand/route
version: cb89d787ddbb1c5849a7ac9f79004c1fd12a4a32
- name: github.com/vulcand/vulcand
version: 42492a3a85e294bdbdd1bcabb8c12769a81ea284
version: 28a4e5c0892167589737b95ceecbcef00295be50
subpackages:
- conntracker
- plugin
- plugin/rewrite
- plugin
- conntracker
- router
- name: github.com/xenolf/lego
version: 5dfe609afb1ebe9da97c9846d97a55415e5a5ccd
version: b2fad6198110326662e9e356a97199078a4a775c
subpackages:
- acme
- providers/dns
- providers/dns/auroradns
- providers/dns/azure
- providers/dns/cloudflare
- providers/dns/digitalocean
- providers/dns/dnsimple
- providers/dns/dnsmadeeasy
- providers/dns/dnspod
- providers/dns/dyn
- providers/dns/exoscale
- providers/dns/gandi
- providers/dns/googlecloud
- providers/dns/linode
- providers/dns/namecheap
- providers/dns/ns1
- providers/dns/ovh
- providers/dns/pdns
- providers/dns/rackspace
- providers/dns/rfc2136
- providers/dns/route53
- providers/dns/vultr
- name: golang.org/x/crypto
version: 4ed45ec682102c643324fae5dff8dab085b6c300
version: d81fdb778bf2c40a91b24519d60cdc5767318829
subpackages:
- bcrypt
- blowfish
- ocsp
- name: golang.org/x/net
version: 242b6b35177ec3909636b6cf6a47e8c2c6324b5d
version: b400c2eff1badec7022a8c8f5bea058b6315eed7
subpackages:
- context
- context/ctxhttp
- http2
- http2/hpack
- idna
- internal/timeseries
- lex/httplex
- proxy
- publicsuffix
- trace
- name: golang.org/x/oauth2
version: 7fdf09982454086d5570c7db3e11f360194830ca
subpackages:
- google
- internal
- jws
- jwt
- proxy
- name: golang.org/x/sys
version: 8d1157a435470616f975ff9bb013bea8d0962067
version: 62bee037599929a6e9146f29d10dd5208c43507d
subpackages:
- unix
- windows
- name: golang.org/x/text
version: 2910a502d2bf9e43193af9d68ca516529614eed3
subpackages:
- cases
- internal/tag
- language
- runes
- secure/bidirule
- secure/precis
- transform
- unicode/bidi
- unicode/norm
- width
- name: google.golang.org/api
version: 9bf6e6e569ff057f75d9604a46c52928f17d2b54
subpackages:
- dns/v1
- gensupport
- googleapi
- googleapi/internal/uritemplates
- name: google.golang.org/appengine
version: 4f7eeb5305a4ba1966344836ba4af9996b7b4e05
subpackages:
- internal
- internal/app_identity
- internal/base
- internal/datastore
- internal/log
- internal/modules
- internal/remote_api
- internal/urlfetch
- urlfetch
- name: google.golang.org/grpc
version: cdee119ee21e61eef7093a41ba148fa83585e143
subpackages:
- codes
- credentials
- grpclog
- internal
- keepalive
- metadata
- naming
- peer
- stats
- tap
- transport
- name: gopkg.in/fsnotify.v1
version: 629574ca2a5df945712d3079857300b5e4da0236
- name: gopkg.in/inf.v0
version: 3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4
- name: gopkg.in/ini.v1
version: e7fea39b01aea8d5671f6858f0532f56e8bff3a5
version: a8a77c9133d2d6fd8334f3260d06f60e8d80a5fb
- name: gopkg.in/mgo.v2
version: 3f83fa5005286a7fe593b055f0d7771a7dce4655
version: 29cc868a5ca65f401ff318143f9408d02f4799cc
subpackages:
- bson
- internal/json
- name: gopkg.in/ns1/ns1-go.v2
version: 2abc76c60bf88ba33b15d1d87a13f624d8dff956
subpackages:
- rest
- rest/model/account
- rest/model/data
- rest/model/dns
- rest/model/filter
- rest/model/monitor
- name: gopkg.in/square/go-jose.v1
version: aa2e30fdd1fe9dd3394119af66451ae790d50e0d
version: e3f973b66b91445ec816dd7411ad1b6495a5a2fc
subpackages:
- cipher
- json
- name: gopkg.in/yaml.v2
version: 53feefa2559fb8dfa8d81baad31be332c97d6c77
- name: k8s.io/client-go
version: e121606b0d09b2e1c467183ee46217fa85a6b672
testImports:
- name: github.com/Azure/go-ansiterm
version: fa152c58bc15761d0200cb75fe958b89a9d4888e
subpackages:
- discovery
- kubernetes
- kubernetes/typed/apps/v1beta1
- kubernetes/typed/authentication/v1beta1
- kubernetes/typed/authorization/v1beta1
- kubernetes/typed/autoscaling/v1
- kubernetes/typed/batch/v1
- kubernetes/typed/batch/v2alpha1
- kubernetes/typed/certificates/v1alpha1
- kubernetes/typed/core/v1
- kubernetes/typed/extensions/v1beta1
- kubernetes/typed/policy/v1beta1
- kubernetes/typed/rbac/v1alpha1
- kubernetes/typed/storage/v1beta1
- pkg/api
- pkg/api/errors
- pkg/api/install
- pkg/api/meta
- pkg/api/meta/metatypes
- pkg/api/resource
- pkg/api/unversioned
- pkg/api/v1
- pkg/api/validation/path
- pkg/apimachinery
- pkg/apimachinery/announced
- pkg/apimachinery/registered
- pkg/apis/apps
- pkg/apis/apps/install
- pkg/apis/apps/v1beta1
- pkg/apis/authentication
- pkg/apis/authentication/install
- pkg/apis/authentication/v1beta1
- pkg/apis/authorization
- pkg/apis/authorization/install
- pkg/apis/authorization/v1beta1
- pkg/apis/autoscaling
- pkg/apis/autoscaling/install
- pkg/apis/autoscaling/v1
- pkg/apis/batch
- pkg/apis/batch/install
- pkg/apis/batch/v1
- pkg/apis/batch/v2alpha1
- pkg/apis/certificates
- pkg/apis/certificates/install
- pkg/apis/certificates/v1alpha1
- pkg/apis/extensions
- pkg/apis/extensions/install
- pkg/apis/extensions/v1beta1
- pkg/apis/policy
- pkg/apis/policy/install
- pkg/apis/policy/v1beta1
- pkg/apis/rbac
- pkg/apis/rbac/install
- pkg/apis/rbac/v1alpha1
- pkg/apis/storage
- pkg/apis/storage/install
- pkg/apis/storage/v1beta1
- pkg/auth/user
- pkg/conversion
- pkg/conversion/queryparams
- pkg/fields
- pkg/genericapiserver/openapi/common
- pkg/labels
- pkg/runtime
- pkg/runtime/serializer
- pkg/runtime/serializer/json
- pkg/runtime/serializer/protobuf
- pkg/runtime/serializer/recognizer
- pkg/runtime/serializer/streaming
- pkg/runtime/serializer/versioning
- pkg/selection
- pkg/third_party/forked/golang/reflect
- pkg/third_party/forked/golang/template
- pkg/types
- pkg/util
- pkg/util/cert
- pkg/util/clock
- pkg/util/diff
- pkg/util/errors
- pkg/util/flowcontrol
- pkg/util/framer
- pkg/util/integer
- pkg/util/intstr
- pkg/util/json
- pkg/util/jsonpath
- pkg/util/labels
- pkg/util/net
- pkg/util/parsers
- pkg/util/rand
- pkg/util/runtime
- pkg/util/sets
- pkg/util/uuid
- pkg/util/validation
- pkg/util/validation/field
- pkg/util/wait
- pkg/util/yaml
- pkg/version
- pkg/watch
- pkg/watch/versioned
- plugin/pkg/client/auth
- plugin/pkg/client/auth/gcp
- plugin/pkg/client/auth/oidc
- rest
- tools/cache
- tools/clientcmd/api
- tools/metrics
- transport
testImports: []
- winterm
- name: github.com/cloudfoundry-incubator/candiedyaml
version: 99c3df83b51532e3615f851d8c2dbb638f5313bf
- name: github.com/flynn/go-shlex
version: 3f9db97f856818214da2e1057f8ad84803971cff
- name: github.com/gorilla/mux
version: 9fa818a44c2bf1396a17f9d5a3c0f6dd39d2ff8e
- name: github.com/vbatts/tar-split
version: 28bc4c32f9fa9725118a685c9ddd7ffdbdbfe2c8
subpackages:
- tar/asm
- tar/storage
- archive/tar
- name: github.com/xeipuuv/gojsonpointer
version: e0fe6f68307607d540ed8eac07a342c33fa1b54a
- name: github.com/xeipuuv/gojsonreference
version: e02fc20de94c78484cd5ffb007f8af96be030a45
- name: github.com/xeipuuv/gojsonschema
version: 66a3de92def23708184148ae337750915875e7c1

@@ -5,10 +5,12 @@ import:
subpackages:
- fun
- package: github.com/Sirupsen/logrus
- package: github.com/cenk/backoff
- package: github.com/cenkalti/backoff
- package: github.com/codegangsta/negroni
- package: github.com/containous/flaeg
version: b98687da5c323650f4513fda6b6203fcbdec9313
- package: github.com/vulcand/oxy
version: 49f1894c20d972f5c73ff44b859f87deb83f0076
version: 4298f24d572dc554eb984f2ffdf6bdd54d4bd613
repo: https://github.com/containous/oxy.git
vcs: git
subpackages:
@@ -19,21 +21,18 @@ import:
- stream
- utils
- package: github.com/containous/staert
version: 1e26a71803e428fd933f5f9c8e50a26878f53147
version: e2aa88e235a02dd52aa1d5d9de75f9d9139d1602
- package: github.com/docker/engine-api
version: v0.4.0
version: 3d3d0b6c9d2651aac27f416a6da0224c1875b3eb
subpackages:
- client
- types
- types/events
- types/filters
- package: github.com/docker/go-connections
version: v0.2.1
subpackages:
- sockets
- tlsconfig
- package: github.com/docker/go-units
version: 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
- package: github.com/docker/libkv
subpackages:
- store
@@ -42,113 +41,44 @@ import:
- store/etcd
- store/zookeeper
- package: github.com/elazarl/go-bindata-assetfs
- package: github.com/gambol99/go-marathon
version: a558128c87724cd7430060ef5aedf39f83937f55
- package: github.com/containous/mux
- package: github.com/hashicorp/consul
subpackages:
- api
- package: github.com/mailgun/manners
- package: github.com/parnurzeal/gorequest
- package: github.com/streamrail/concurrent-map
- package: github.com/stretchr/testify
subpackages:
- assert
- mock
- require
- package: github.com/thoas/stats
version: 152b5d051953fdb6e45f14b6826962aadc032324
- package: github.com/unrolled/render
- package: github.com/vdemeester/docker-events
version: be74d4929ec1ad118df54349fda4b0cba60f849b
version: 20e6d2db238723e68197a9e3c6c34c99a9893a9c
- package: github.com/vulcand/vulcand
version: 42492a3a85e294bdbdd1bcabb8c12769a81ea284
subpackages:
- plugin/rewrite
- package: github.com/xenolf/lego
version: 5dfe609afb1ebe9da97c9846d97a55415e5a5ccd
version: b2fad6198110326662e9e356a97199078a4a775c
subpackages:
- acme
- package: golang.org/x/net
subpackages:
- context
- package: gopkg.in/fsnotify.v1
- package: github.com/libkermit/docker-check
version: bb75a86b169c6c5d22c0ee98278124036f272d7b
- package: github.com/libkermit/docker
version: 3b5eb2973efff7af33cfb65141deaf4ed25c6d02
- package: github.com/docker/docker
version: v1.13.0
version: 9837ec4da53f15f9120d53a6e1517491ba8b0261
subpackages:
- namesgenerator
- package: github.com/go-check/check
- package: github.com/docker/libcompose
version: 8ee7bcc364f7b8194581a3c6bd9fa019467c7873
- package: github.com/mattn/go-shellwords
- package: github.com/vdemeester/shakers
- package: github.com/ryanuber/go-glob
- package: github.com/mesos/mesos-go
subpackages:
- mesosproto
- mesos
- upid
- mesosutil
- detector
- package: github.com/miekg/dns
version: 8060d9f51305bbe024b99679454e62f552cd0b0b
- package: github.com/mesosphere/mesos-dns
version: b47dc4c19f215e98da687b15b4c64e70f629bea5
repo: https://github.com/containous/mesos-dns.git
vcs: git
- package: github.com/abbot/go-http-auth
- package: github.com/NYTimes/gziphandler
repo: https://github.com/containous/gziphandler.git
vcs: git
- package: github.com/docker/leadership
- package: github.com/satori/go.uuid
version: ^1.1.0
- package: k8s.io/client-go
version: v2.0.0
- package: github.com/gambol99/go-marathon
version: dd6cbd4c2d71294a19fb89158f2a00d427f174ab
- package: github.com/ArthurHlt/go-eureka-client
subpackages:
- eureka
- package: github.com/coreos/go-systemd
version: v14
subpackages:
- daemon
- package: github.com/google/go-github
- package: github.com/hashicorp/go-version
- package: github.com/mvdan/xurls
- package: github.com/go-kit/kit
version: v0.3.0
subpackages:
- metrics
- package: github.com/eapache/channels
version: v1.1.0
- package: golang.org/x/sys
version: 8d1157a435470616f975ff9bb013bea8d0962067
- package: golang.org/x/net
version: 242b6b35177ec3909636b6cf6a47e8c2c6324b5d
subpackages:
- http2
- context
- package: github.com/docker/distribution
version: v2.6.0
- package: github.com/aws/aws-sdk-go
version: v1.6.18
subpackages:
- aws
- aws/credentials
- aws/defaults
- aws/ec2metadata
- aws/endpoints
- aws/request
- aws/session
- service/dynamodb
- service/dynamodb/dynamodbiface
- service/dynamodbattribute
- service/ec2
- service/ecs
- package: cloud.google.com/go
version: v0.7.0
subpackages:
- compute/metadata
- package: github.com/gogo/protobuf
version: v0.3
subpackages:
- proto
- package: github.com/rancher/go-rancher
version: 5b8f6cc26b355ba03d7611fce3844155b7baf05b
- package: golang.org/x/oauth2/google
version: 7fdf09982454086d5570c7db3e11f360194830ca
- package: github.com/googleapis/gax-go
version: 9af46dd5a1713e8b5cd71106287eba3cefdde50b
- package: google.golang.org/grpc
version: v1.2.0

@@ -1,139 +0,0 @@
package healthcheck
import (
"context"
"fmt"
"net/http"
"net/url"
"sync"
"time"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/vulcand/oxy/roundrobin"
)
var singleton *HealthCheck
var once sync.Once
// GetHealthCheck returns the health check which is guaranteed to be a singleton.
func GetHealthCheck() *HealthCheck {
once.Do(func() {
singleton = newHealthCheck()
})
return singleton
}
// Options are the public health check options.
type Options struct {
Path string
Interval time.Duration
LB LoadBalancer
}
func (opt Options) String() string {
return fmt.Sprintf("[Path: %s Interval: %s]", opt.Path, opt.Interval)
}
// BackendHealthCheck HealthCheck configuration for a backend
type BackendHealthCheck struct {
Options
disabledURLs []*url.URL
requestTimeout time.Duration
}
//HealthCheck struct
type HealthCheck struct {
Backends map[string]*BackendHealthCheck
cancel context.CancelFunc
}
// LoadBalancer includes functionality for load-balancing management.
type LoadBalancer interface {
RemoveServer(u *url.URL) error
UpsertServer(u *url.URL, options ...roundrobin.ServerOption) error
Servers() []*url.URL
}
func newHealthCheck() *HealthCheck {
return &HealthCheck{
Backends: make(map[string]*BackendHealthCheck),
}
}
// NewBackendHealthCheck Instantiate a new BackendHealthCheck
func NewBackendHealthCheck(options Options) *BackendHealthCheck {
return &BackendHealthCheck{
Options: options,
requestTimeout: 5 * time.Second,
}
}
//SetBackendsConfiguration set backends configuration
func (hc *HealthCheck) SetBackendsConfiguration(parentCtx context.Context, backends map[string]*BackendHealthCheck) {
hc.Backends = backends
if hc.cancel != nil {
hc.cancel()
}
ctx, cancel := context.WithCancel(parentCtx)
hc.cancel = cancel
for backendID, backend := range hc.Backends {
currentBackendID := backendID
currentBackend := backend
safe.Go(func() {
hc.execute(ctx, currentBackendID, currentBackend)
})
}
}
func (hc *HealthCheck) execute(ctx context.Context, backendID string, backend *BackendHealthCheck) {
log.Debugf("Initial healthcheck for currentBackend %s ", backendID)
checkBackend(backend)
ticker := time.NewTicker(backend.Interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
log.Debugf("Stopping all current Healthcheck goroutines")
return
case <-ticker.C:
log.Debugf("Refreshing healthcheck for currentBackend %s ", backendID)
checkBackend(backend)
}
}
}
func checkBackend(currentBackend *BackendHealthCheck) {
enabledURLs := currentBackend.LB.Servers()
var newDisabledURLs []*url.URL
for _, url := range currentBackend.disabledURLs {
if checkHealth(url, currentBackend) {
log.Debugf("HealthCheck is up [%s]: Upsert in server list", url.String())
currentBackend.LB.UpsertServer(url, roundrobin.Weight(1))
} else {
log.Warnf("HealthCheck is still failing [%s]", url.String())
newDisabledURLs = append(newDisabledURLs, url)
}
}
currentBackend.disabledURLs = newDisabledURLs
for _, url := range enabledURLs {
if !checkHealth(url, currentBackend) {
log.Warnf("HealthCheck has failed [%s]: Remove from server list", url.String())
currentBackend.LB.RemoveServer(url)
currentBackend.disabledURLs = append(currentBackend.disabledURLs, url)
}
}
}
func checkHealth(serverURL *url.URL, backend *BackendHealthCheck) bool {
client := http.Client{
Timeout: backend.requestTimeout,
}
resp, err := client.Get(serverURL.String() + backend.Path)
if err == nil {
defer resp.Body.Close()
}
return err == nil && resp.StatusCode == 200
}
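
To make the removed health-check API above easier to follow, here is a minimal, hypothetical usage sketch: a stub LoadBalancer (standing in for oxy's round-robin balancer, whose methods the interface mirrors) is registered with the health-check singleton. The backend name, server URL, `/health` path and interval below are illustrative only, not taken from Traefik's own wiring.

```go
package main

import (
	"context"
	"net/url"
	"time"

	"github.com/containous/traefik/healthcheck"
	"github.com/vulcand/oxy/roundrobin"
)

// stubLB is an illustrative in-memory LoadBalancer implementation;
// it only tracks the server list, which is all the checker needs.
type stubLB struct{ servers []*url.URL }

func (lb *stubLB) Servers() []*url.URL { return lb.servers }

func (lb *stubLB) UpsertServer(u *url.URL, _ ...roundrobin.ServerOption) error {
	lb.servers = append(lb.servers, u) // duplicates are not handled in this sketch
	return nil
}

func (lb *stubLB) RemoveServer(u *url.URL) error {
	for i, s := range lb.servers {
		if *s == *u {
			lb.servers = append(lb.servers[:i], lb.servers[i+1:]...)
			return nil
		}
	}
	return nil
}

func main() {
	server, _ := url.Parse("http://127.0.0.1:9010")
	lb := &stubLB{servers: []*url.URL{server}}

	// One BackendHealthCheck per backend, keyed by backend name.
	backends := map[string]*healthcheck.BackendHealthCheck{
		"backend1": healthcheck.NewBackendHealthCheck(healthcheck.Options{
			Path:     "/health",
			Interval: 30 * time.Second,
			LB:       lb,
		}),
	}

	// GetHealthCheck returns the process-wide singleton; SetBackendsConfiguration
	// cancels any previously started checkers and launches one goroutine per backend.
	healthcheck.GetHealthCheck().SetBackendsConfiguration(context.Background(), backends)

	select {} // block so the checker goroutines keep running (illustration only)
}
```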

@@ -1,202 +0,0 @@
package healthcheck
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"net/url"
"sync"
"testing"
"time"
"github.com/vulcand/oxy/roundrobin"
)
const healthCheckInterval = 100 * time.Millisecond
type testLoadBalancer struct {
// RWMutex needed due to parallel test execution: Both the system-under-test
// and the test assertions reference the counters.
*sync.RWMutex
numRemovedServers int
numUpsertedServers int
servers []*url.URL
}
func (lb *testLoadBalancer) RemoveServer(u *url.URL) error {
lb.Lock()
defer lb.Unlock()
lb.numRemovedServers++
lb.removeServer(u)
return nil
}
func (lb *testLoadBalancer) UpsertServer(u *url.URL, options ...roundrobin.ServerOption) error {
lb.Lock()
defer lb.Unlock()
lb.numUpsertedServers++
lb.servers = append(lb.servers, u)
return nil
}
func (lb *testLoadBalancer) Servers() []*url.URL {
return lb.servers
}
func (lb *testLoadBalancer) removeServer(u *url.URL) {
var i int
var serverURL *url.URL
for i, serverURL = range lb.servers {
if *serverURL == *u {
break
}
}
lb.servers = append(lb.servers[:i], lb.servers[i+1:]...)
}
type testHandler struct {
done func()
healthSequence []bool
}
func newTestServer(done func(), healthSequence []bool) *httptest.Server {
handler := &testHandler{
done: done,
healthSequence: healthSequence,
}
return httptest.NewServer(handler)
}
// ServeHTTP returns 200 or 503 HTTP response codes depending on whether the
// current request is marked as healthy or not.
// It calls the given 'done' function once all request health indicators have
// been depleted.
func (th *testHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if len(th.healthSequence) == 0 {
panic("received unexpected request")
}
healthy := th.healthSequence[0]
if healthy {
w.WriteHeader(http.StatusOK)
} else {
w.WriteHeader(http.StatusServiceUnavailable)
}
th.healthSequence = th.healthSequence[1:]
if len(th.healthSequence) == 0 {
th.done()
}
}
func TestSetBackendsConfiguration(t *testing.T) {
tests := []struct {
desc string
startHealthy bool
healthSequence []bool
wantNumRemovedServers int
wantNumUpsertedServers int
}{
{
desc: "healthy server staying healthy",
startHealthy: true,
healthSequence: []bool{true},
wantNumRemovedServers: 0,
wantNumUpsertedServers: 0,
},
{
desc: "healthy server becoming sick",
startHealthy: true,
healthSequence: []bool{false},
wantNumRemovedServers: 1,
wantNumUpsertedServers: 0,
},
{
desc: "sick server becoming healthy",
startHealthy: false,
healthSequence: []bool{true},
wantNumRemovedServers: 0,
wantNumUpsertedServers: 1,
},
{
desc: "sick server staying sick",
startHealthy: false,
healthSequence: []bool{false},
wantNumRemovedServers: 0,
wantNumUpsertedServers: 0,
},
{
desc: "healthy server toggling to sick and back to healthy",
startHealthy: true,
healthSequence: []bool{false, true},
wantNumRemovedServers: 1,
wantNumUpsertedServers: 1,
},
}
for _, test := range tests {
test := test
t.Run(test.desc, func(t *testing.T) {
t.Parallel()
// The context is passed to the health check and canonically cancelled by
// the test server once all expected requests have been received.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ts := newTestServer(cancel, test.healthSequence)
defer ts.Close()
lb := &testLoadBalancer{RWMutex: &sync.RWMutex{}}
backend := NewBackendHealthCheck(Options{
Path: "/path",
Interval: healthCheckInterval,
LB: lb,
})
serverURL := MustParseURL(ts.URL)
if test.startHealthy {
lb.servers = append(lb.servers, serverURL)
} else {
backend.disabledURLs = append(backend.disabledURLs, serverURL)
}
healthCheck := HealthCheck{
Backends: make(map[string]*BackendHealthCheck),
}
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
healthCheck.execute(ctx, "id", backend)
wg.Done()
}()
// Make test timeout dependent on number of expected requests, health
// check interval, and a safety margin.
timeout := time.Duration(len(test.healthSequence)*int(healthCheckInterval) + 500)
select {
case <-time.After(timeout):
t.Fatal("test did not complete in time")
case <-ctx.Done():
wg.Wait()
}
lb.Lock()
defer lb.Unlock()
if lb.numRemovedServers != test.wantNumRemovedServers {
t.Errorf("got %d removed servers, wanted %d", lb.numRemovedServers, test.wantNumRemovedServers)
}
if lb.numUpsertedServers != test.wantNumUpsertedServers {
t.Errorf("got %d upserted servers, wanted %d", lb.numUpsertedServers, test.wantNumUpsertedServers)
}
})
}
}
func MustParseURL(rawurl string) *url.URL {
u, err := url.Parse(rawurl)
if err != nil {
panic(fmt.Sprintf("failed to parse URL '%s': %s", rawurl, err))
}
return u
}

@@ -75,7 +75,7 @@ func (s *AccessLogSuite) TestAccessLog(c *check.C) {
c.Assert(tokens[9], checker.Equals, fmt.Sprintf("%d", i+1))
c.Assert(strings.HasPrefix(tokens[10], "frontend"), checker.True)
c.Assert(strings.HasPrefix(tokens[11], "http://127.0.0.1:808"), checker.True)
c.Assert(regexp.MustCompile("^\\d+ms$").MatchString(tokens[12]), checker.True)
c.Assert(regexp.MustCompile("^\\d+\\.\\d+.*s$").MatchString(tokens[12]), checker.True)
}
}
c.Assert(count, checker.Equals, 3)

@@ -1,160 +0,0 @@
package main
import (
"crypto/tls"
"errors"
"net/http"
"os"
"os/exec"
"time"
"github.com/containous/traefik/integration/utils"
"github.com/go-check/check"
checker "github.com/vdemeester/shakers"
)
// ACME test suites (using libcompose)
type AcmeSuite struct {
BaseSuite
boulderIP string
}
// Acme tests configuration
type AcmeTestCase struct {
onDemand bool
traefikConfFilePath string
domainToCheck string
}
// Domain to check
const acmeDomain = "traefik.acme.wtf"
// Wildcard domain to check
const wildcardDomain = "*.acme.wtf"
func (s *AcmeSuite) SetUpSuite(c *check.C) {
s.createComposeProject(c, "boulder")
s.composeProject.Start(c)
s.boulderIP = s.composeProject.Container(c, "boulder").NetworkSettings.IPAddress
// wait for boulder
err := utils.Try(120*time.Second, func() error {
resp, err := http.Get("http://" + s.boulderIP + ":4000/directory")
if err != nil {
return err
}
if resp.StatusCode != 200 {
return errors.New("Expected http 200 from boulder")
}
return nil
})
c.Assert(err, checker.IsNil)
}
func (s *AcmeSuite) TearDownSuite(c *check.C) {
// shutdown and delete compose project
if s.composeProject != nil {
s.composeProject.Stop(c)
}
}
// Test OnDemand option with none provided certificate
func (s *AcmeSuite) TestOnDemandRetrieveAcmeCertificate(c *check.C) {
aTestCase := AcmeTestCase{
traefikConfFilePath: "fixtures/acme/acme.toml",
onDemand: true,
domainToCheck: acmeDomain}
s.retrieveAcmeCertificate(c, aTestCase)
}
// Test OnHostRule option with none provided certificate
func (s *AcmeSuite) TestOnHostRuleRetrieveAcmeCertificate(c *check.C) {
aTestCase := AcmeTestCase{
traefikConfFilePath: "fixtures/acme/acme.toml",
onDemand: false,
domainToCheck: acmeDomain}
s.retrieveAcmeCertificate(c, aTestCase)
}
// Test OnDemand option with a wildcard provided certificate
func (s *AcmeSuite) TestOnDemandRetrieveAcmeCertificateWithWildcard(c *check.C) {
aTestCase := AcmeTestCase{
traefikConfFilePath: "fixtures/acme/acme_provided.toml",
onDemand: true,
domainToCheck: wildcardDomain}
s.retrieveAcmeCertificate(c, aTestCase)
}
// Test onHostRule option with a wildcard provided certificate
func (s *AcmeSuite) TestOnHostRuleRetrieveAcmeCertificateWithWildcard(c *check.C) {
aTestCase := AcmeTestCase{
traefikConfFilePath: "fixtures/acme/acme_provided.toml",
onDemand: false,
domainToCheck: wildcardDomain}
s.retrieveAcmeCertificate(c, aTestCase)
}
// Doing an HTTPS request and test the response certificate
func (s *AcmeSuite) retrieveAcmeCertificate(c *check.C, a AcmeTestCase) {
file := s.adaptFile(c, a.traefikConfFilePath, struct {
BoulderHost string
OnDemand, OnHostRule bool
}{s.boulderIP, a.onDemand, !a.onDemand})
defer os.Remove(file)
cmd := exec.Command(traefikBinary, "--configFile="+file)
err := cmd.Start()
c.Assert(err, checker.IsNil)
defer cmd.Process.Kill()
backend := startTestServer("9010", 200)
defer backend.Close()
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client := &http.Client{Transport: tr}
// wait for traefik (generating acme account take some seconds)
err = utils.Try(30*time.Second, func() error {
_, err := client.Get("https://127.0.0.1:5001")
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
tr = &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
ServerName: acmeDomain,
},
}
client = &http.Client{Transport: tr}
req, _ := http.NewRequest("GET", "https://127.0.0.1:5001/", nil)
req.Host = acmeDomain
req.Header.Set("Host", acmeDomain)
req.Header.Set("Accept", "*/*")
var resp *http.Response
// Retry to send a Request which uses the LE generated certificate
err = utils.Try(60*time.Second, func() error {
resp, err = client.Do(req)
// /!\ If connection is not closed, SSLHandshake will only be done during the first trial /!\
req.Close = true
if err != nil {
return err
} else if resp.TLS.PeerCertificates[0].Subject.CommonName != a.domainToCheck {
return errors.New("Domain " + resp.TLS.PeerCertificates[0].Subject.CommonName + " found in place of " + a.domainToCheck)
}
return nil
})
c.Assert(err, checker.IsNil)
// Check Domain into response certificate
c.Assert(resp.TLS.PeerCertificates[0].Subject.CommonName, checker.Equals, a.domainToCheck)
// Expected a 200
c.Assert(resp.StatusCode, checker.Equals, 200)
}

@@ -26,7 +26,7 @@ func (s *SimpleSuite) TestInvalidConfigShouldFail(c *check.C) {
defer cmd.Process.Kill()
output := b.Bytes()
c.Assert(string(output), checker.Contains, "Near line 0 (last key parsed ''): bare keys cannot contain '{'")
c.Assert(string(output), checker.Contains, "Near line 0 (last key parsed ''): Bare keys cannot contain '{'")
}
func (s *SimpleSuite) TestSimpleDefaultConfig(c *check.C) {
@@ -70,7 +70,7 @@ func (s *SimpleSuite) TestDefaultEntryPoints(c *check.C) {
defer cmd.Process.Kill()
output := b.Bytes()
c.Assert(string(output), checker.Contains, "\"DefaultEntryPoints\":[\"http\"]")
c.Assert(string(output), checker.Contains, "\\\"DefaultEntryPoints\\\":[\\\"http\\\"]")
}
func (s *SimpleSuite) TestPrintHelp(c *check.C) {

@@ -1,25 +1,21 @@
package main
import (
"context"
"errors"
"io/ioutil"
"net/http"
"os"
"os/exec"
"strings"
"sync"
"time"
"github.com/containous/staert"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/integration/utils"
"github.com/containous/traefik/provider"
"github.com/docker/libkv"
"github.com/docker/libkv/store"
"github.com/docker/libkv/store/consul"
"github.com/go-check/check"
"errors"
"github.com/containous/traefik/integration/utils"
checker "github.com/vdemeester/shakers"
"io/ioutil"
"os"
"strings"
)
// Consul test suites (using libcompose)
@@ -28,7 +24,7 @@ type ConsulSuite struct {
kv store.Store
}
func (s *ConsulSuite) setupConsul(c *check.C) {
func (s *ConsulSuite) SetUpSuite(c *check.C) {
s.createComposeProject(c, "consul")
s.composeProject.Start(c)
@@ -56,56 +52,7 @@ func (s *ConsulSuite) setupConsul(c *check.C) {
c.Assert(err, checker.IsNil)
}
func (s *ConsulSuite) setupConsulTLS(c *check.C) {
s.createComposeProject(c, "consul_tls")
s.composeProject.Start(c)
consul.Register()
clientTLS := &provider.ClientTLS{
CA: "resources/tls/ca.cert",
Cert: "resources/tls/consul.cert",
Key: "resources/tls/consul.key",
InsecureSkipVerify: true,
}
TLSConfig, err := clientTLS.CreateTLSConfig()
c.Assert(err, checker.IsNil)
kv, err := libkv.NewStore(
store.CONSUL,
[]string{s.composeProject.Container(c, "consul").NetworkSettings.IPAddress + ":8585"},
&store.Config{
ConnectionTimeout: 10 * time.Second,
TLS: TLSConfig,
},
)
if err != nil {
c.Fatal("Cannot create store consul")
}
s.kv = kv
// wait for consul
err = utils.Try(60*time.Second, func() error {
_, err := kv.Exists("test")
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
}
func (s *ConsulSuite) TearDownTest(c *check.C) {
// shutdown and delete compose project
if s.composeProject != nil {
s.composeProject.Stop(c)
}
}
func (s *ConsulSuite) TearDownSuite(c *check.C) {}
func (s *ConsulSuite) TestSimpleConfiguration(c *check.C) {
s.setupConsul(c)
consulHost := s.composeProject.Container(c, "consul").NetworkSettings.IPAddress
file := s.adaptFile(c, "fixtures/consul/simple.toml", struct{ ConsulHost string }{consulHost})
defer os.Remove(file)
@@ -123,7 +70,6 @@ func (s *ConsulSuite) TestSimpleConfiguration(c *check.C) {
}
func (s *ConsulSuite) TestNominalConfiguration(c *check.C) {
s.setupConsul(c)
consulHost := s.composeProject.Container(c, "consul").NetworkSettings.IPAddress
file := s.adaptFile(c, "fixtures/consul/simple.toml", struct{ ConsulHost string }{consulHost})
defer os.Remove(file)
@@ -244,279 +190,3 @@ func (s *ConsulSuite) TestNominalConfiguration(c *check.C) {
c.Assert(err, checker.IsNil)
c.Assert(resp.StatusCode, checker.Equals, 404)
}
func (s *ConsulSuite) TestGlobalConfiguration(c *check.C) {
s.setupConsul(c)
consulHost := s.composeProject.Container(c, "consul").NetworkSettings.IPAddress
err := s.kv.Put("traefik/entrypoints/http/address", []byte(":8001"), nil)
c.Assert(err, checker.IsNil)
// wait for consul
err = utils.Try(60*time.Second, func() error {
_, err := s.kv.Exists("traefik/entrypoints/http/address")
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
// start traefik
cmd := exec.Command(traefikBinary, "--configFile=fixtures/simple_web.toml", "--consul", "--consul.endpoint="+consulHost+":8500")
// cmd.Stdout = os.Stdout
// cmd.Stderr = os.Stderr
err = cmd.Start()
c.Assert(err, checker.IsNil)
defer cmd.Process.Kill()
whoami1 := s.composeProject.Container(c, "whoami1")
whoami2 := s.composeProject.Container(c, "whoami2")
whoami3 := s.composeProject.Container(c, "whoami3")
whoami4 := s.composeProject.Container(c, "whoami4")
backend1 := map[string]string{
"traefik/backends/backend1/circuitbreaker/expression": "NetworkErrorRatio() > 0.5",
"traefik/backends/backend1/servers/server1/url": "http://" + whoami1.NetworkSettings.IPAddress + ":80",
"traefik/backends/backend1/servers/server1/weight": "10",
"traefik/backends/backend1/servers/server2/url": "http://" + whoami2.NetworkSettings.IPAddress + ":80",
"traefik/backends/backend1/servers/server2/weight": "1",
}
backend2 := map[string]string{
"traefik/backends/backend2/loadbalancer/method": "drr",
"traefik/backends/backend2/servers/server1/url": "http://" + whoami3.NetworkSettings.IPAddress + ":80",
"traefik/backends/backend2/servers/server1/weight": "1",
"traefik/backends/backend2/servers/server2/url": "http://" + whoami4.NetworkSettings.IPAddress + ":80",
"traefik/backends/backend2/servers/server2/weight": "2",
}
frontend1 := map[string]string{
"traefik/frontends/frontend1/backend": "backend2",
"traefik/frontends/frontend1/entrypoints": "http",
"traefik/frontends/frontend1/priority": "1",
"traefik/frontends/frontend1/routes/test_1/rule": "Host:test.localhost",
}
frontend2 := map[string]string{
"traefik/frontends/frontend2/backend": "backend1",
"traefik/frontends/frontend2/entrypoints": "http",
"traefik/frontends/frontend2/priority": "10",
"traefik/frontends/frontend2/routes/test_2/rule": "Path:/test",
}
for key, value := range backend1 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range backend2 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range frontend1 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range frontend2 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
// wait for consul
err = utils.Try(60*time.Second, func() error {
_, err := s.kv.Exists("traefik/frontends/frontend2/routes/test_2/rule")
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
// wait for traefik
err = utils.TryRequest("http://127.0.0.1:8080/api/providers", 60*time.Second, func(res *http.Response) error {
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
}
if !strings.Contains(string(body), "Path:/test") {
return errors.New("Incorrect traefik config")
}
return nil
})
c.Assert(err, checker.IsNil)
//check
client := &http.Client{}
req, err := http.NewRequest("GET", "http://127.0.0.1:8001/", nil)
c.Assert(err, checker.IsNil)
req.Host = "test.localhost"
response, err := client.Do(req)
c.Assert(err, checker.IsNil)
c.Assert(response.StatusCode, checker.Equals, 200)
}
func (s *ConsulSuite) skipTestGlobalConfigurationWithClientTLS(c *check.C) {
c.Skip("wait for relative path issue in the composefile")
s.setupConsulTLS(c)
consulHost := s.composeProject.Container(c, "consul").NetworkSettings.IPAddress
err := s.kv.Put("traefik/web/address", []byte(":8081"), nil)
c.Assert(err, checker.IsNil)
// wait for consul
err = utils.Try(60*time.Second, func() error {
_, err := s.kv.Exists("traefik/web/address")
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
// start traefik
cmd := exec.Command(traefikBinary, "--configFile=fixtures/simple_web.toml",
"--consul", "--consul.endpoint="+consulHost+":8585",
"--consul.tls.ca=resources/tls/ca.cert",
"--consul.tls.cert=resources/tls/consul.cert",
"--consul.tls.key=resources/tls/consul.key",
"--consul.tls.insecureskipverify")
// cmd.Stdout = os.Stdout
// cmd.Stderr = os.Stderr
err = cmd.Start()
c.Assert(err, checker.IsNil)
defer cmd.Process.Kill()
// wait for traefik
err = utils.TryRequest("http://127.0.0.1:8081/api/providers", 60*time.Second, func(res *http.Response) error {
_, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
}
func (s *ConsulSuite) TestCommandStoreConfig(c *check.C) {
s.setupConsul(c)
consulHost := s.composeProject.Container(c, "consul").NetworkSettings.IPAddress
cmd := exec.Command(traefikBinary, "storeconfig", "--configFile=fixtures/simple_web.toml", "--consul.endpoint="+consulHost+":8500")
err := cmd.Start()
c.Assert(err, checker.IsNil)
// wait for traefik finish without error
cmd.Wait()
//CHECK
checkmap := map[string]string{
"/traefik/loglevel": "DEBUG",
"/traefik/defaultentrypoints/0": "http",
"/traefik/entrypoints/http/address": ":8000",
"/traefik/web/address": ":8080",
"/traefik/consul/endpoint": (consulHost + ":8500"),
}
for key, value := range checkmap {
var p *store.KVPair
err = utils.Try(60*time.Second, func() error {
p, err = s.kv.Get(key)
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
c.Assert(string(p.Value), checker.Equals, value)
}
}
type TestStruct struct {
String string
Int int
}
func (s *ConsulSuite) TestDatastore(c *check.C) {
s.setupConsul(c)
consulHost := s.composeProject.Container(c, "consul").NetworkSettings.IPAddress
kvSource, err := staert.NewKvSource(store.CONSUL, []string{consulHost + ":8500"}, &store.Config{
ConnectionTimeout: 10 * time.Second,
}, "traefik")
c.Assert(err, checker.IsNil)
ctx := context.Background()
datastore1, err := cluster.NewDataStore(ctx, *kvSource, &TestStruct{}, nil)
c.Assert(err, checker.IsNil)
datastore2, err := cluster.NewDataStore(ctx, *kvSource, &TestStruct{}, nil)
c.Assert(err, checker.IsNil)
setter1, _, err := datastore1.Begin()
c.Assert(err, checker.IsNil)
err = setter1.Commit(&TestStruct{
String: "foo",
Int: 1,
})
c.Assert(err, checker.IsNil)
time.Sleep(2 * time.Second)
test1 := datastore1.Get().(*TestStruct)
c.Assert(test1.String, checker.Equals, "foo")
test2 := datastore2.Get().(*TestStruct)
c.Assert(test2.String, checker.Equals, "foo")
setter2, _, err := datastore2.Begin()
c.Assert(err, checker.IsNil)
err = setter2.Commit(&TestStruct{
String: "bar",
Int: 2,
})
c.Assert(err, checker.IsNil)
time.Sleep(2 * time.Second)
test1 = datastore1.Get().(*TestStruct)
c.Assert(test1.String, checker.Equals, "bar")
test2 = datastore2.Get().(*TestStruct)
c.Assert(test2.String, checker.Equals, "bar")
wg := &sync.WaitGroup{}
wg.Add(4)
go func() {
for i := 0; i < 100; i++ {
setter1, _, err := datastore1.Begin()
c.Assert(err, checker.IsNil)
err = setter1.Commit(&TestStruct{
String: "datastore1",
Int: i,
})
c.Assert(err, checker.IsNil)
}
wg.Done()
}()
go func() {
for i := 0; i < 100; i++ {
setter2, _, err := datastore2.Begin()
c.Assert(err, checker.IsNil)
err = setter2.Commit(&TestStruct{
String: "datastore2",
Int: i,
})
c.Assert(err, checker.IsNil)
}
wg.Done()
}()
go func() {
for i := 0; i < 100; i++ {
test1 := datastore1.Get().(*TestStruct)
c.Assert(test1, checker.NotNil)
}
wg.Done()
}()
go func() {
for i := 0; i < 100; i++ {
test2 := datastore2.Get().(*TestStruct)
c.Assert(test2, checker.NotNil)
}
wg.Done()
}()
wg.Wait()
}

@@ -110,7 +110,7 @@ func (s *DockerSuite) TestDefaultDockerContainers(c *check.C) {
client := &http.Client{}
req, err := http.NewRequest("GET", "http://127.0.0.1:8000/version", nil)
c.Assert(err, checker.IsNil)
req.Host = fmt.Sprintf("%s.docker.localhost", strings.Replace(name, "_", "-", -1))
req.Host = fmt.Sprintf("%s.docker.localhost", name)
resp, err := client.Do(req)
c.Assert(err, checker.IsNil)

@@ -1,181 +0,0 @@
package main
import (
"errors"
"io/ioutil"
"net/http"
"os"
"os/exec"
"strings"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/dynamodb"
"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
"github.com/containous/traefik/integration/utils"
"github.com/containous/traefik/types"
"github.com/go-check/check"
checker "github.com/vdemeester/shakers"
)
type DynamoDBSuite struct {
BaseSuite
}
type DynamoDBItem struct {
ID string `dynamodbav:"id"`
Name string `dynamodbav:"name"`
}
type DynamoDBBackendItem struct {
DynamoDBItem
Backend types.Backend `dynamodbav:"backend"`
}
type DynamoDBFrontendItem struct {
DynamoDBItem
Frontend types.Frontend `dynamodbav:"frontend"`
}
func (s *DynamoDBSuite) SetUpSuite(c *check.C) {
s.createComposeProject(c, "dynamodb")
s.composeProject.Start(c)
dynamoURL := "http://" + s.composeProject.Container(c, "dynamo").NetworkSettings.IPAddress + ":8000"
config := &aws.Config{
Region: aws.String("us-east-1"),
Credentials: credentials.NewStaticCredentials("id", "secret", ""),
Endpoint: aws.String(dynamoURL),
}
var sess *session.Session
err := utils.Try(60*time.Second, func() error {
_, err := session.NewSession(config)
if err != nil {
return err
}
sess = session.New(config)
return nil
})
svc := dynamodb.New(sess)
// create dynamodb table
params := &dynamodb.CreateTableInput{
AttributeDefinitions: []*dynamodb.AttributeDefinition{
{
AttributeName: aws.String("id"),
AttributeType: aws.String("S"),
},
},
KeySchema: []*dynamodb.KeySchemaElement{
{
AttributeName: aws.String("id"),
KeyType: aws.String("HASH"),
},
},
ProvisionedThroughput: &dynamodb.ProvisionedThroughput{
ReadCapacityUnits: aws.Int64(1),
WriteCapacityUnits: aws.Int64(1),
},
TableName: aws.String("traefik"),
}
_, err = svc.CreateTable(params)
if err != nil {
c.Error(err)
return
}
// load config into dynamodb
whoami1 := "http://" + s.composeProject.Container(c, "whoami1").NetworkSettings.IPAddress + ":80"
whoami2 := "http://" + s.composeProject.Container(c, "whoami2").NetworkSettings.IPAddress + ":80"
whoami3 := "http://" + s.composeProject.Container(c, "whoami3").NetworkSettings.IPAddress + ":80"
backend := DynamoDBBackendItem{
Backend: types.Backend{
Servers: map[string]types.Server{
"whoami1": {
URL: whoami1,
},
"whoami2": {
URL: whoami2,
},
"whoami3": {
URL: whoami3,
},
},
},
DynamoDBItem: DynamoDBItem{
ID: "whoami_backend",
Name: "whoami",
},
}
frontend := DynamoDBFrontendItem{
Frontend: types.Frontend{
EntryPoints: []string{
"http",
},
Backend: "whoami",
Routes: map[string]types.Route{
"hostRule": {
Rule: "Host:test.traefik.io",
},
},
},
DynamoDBItem: DynamoDBItem{
ID: "whoami_frontend",
Name: "whoami",
},
}
backendAttributeValue, err := dynamodbattribute.MarshalMap(backend)
c.Assert(err, checker.IsNil)
frontendAttributeValue, err := dynamodbattribute.MarshalMap(frontend)
c.Assert(err, checker.IsNil)
putParams := &dynamodb.PutItemInput{
Item: backendAttributeValue,
TableName: aws.String("traefik"),
}
_, err = svc.PutItem(putParams)
c.Assert(err, checker.IsNil)
putParams = &dynamodb.PutItemInput{
Item: frontendAttributeValue,
TableName: aws.String("traefik"),
}
_, err = svc.PutItem(putParams)
c.Assert(err, checker.IsNil)
}
func (s *DynamoDBSuite) TestSimpleConfiguration(c *check.C) {
dynamoURL := "http://" + s.composeProject.Container(c, "dynamo").NetworkSettings.IPAddress + ":8000"
file := s.adaptFile(c, "fixtures/dynamodb/simple.toml", struct{ DynamoURL string }{dynamoURL})
defer os.Remove(file)
cmd := exec.Command(traefikBinary, "--configFile="+file)
err := cmd.Start()
c.Assert(err, checker.IsNil)
defer cmd.Process.Kill()
err = utils.TryRequest("http://127.0.0.1:8081/api/providers", 120*time.Second, func(res *http.Response) error {
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
}
if !strings.Contains(string(body), "Host:test.traefik.io") {
return errors.New("incorrect traefik config")
}
return nil
})
c.Assert(err, checker.IsNil)
client := &http.Client{}
req, err := http.NewRequest("GET", "http://127.0.0.1:8080", nil)
c.Assert(err, checker.IsNil)
req.Host = "test.traefik.io"
response, err := client.Do(req)
c.Assert(err, checker.IsNil)
c.Assert(response.StatusCode, checker.Equals, 200)
}
func (s *DynamoDBSuite) TearDownSuite(c *check.C) {
if s.composeProject != nil {
s.composeProject.Stop(c)
}
}

@@ -8,7 +8,6 @@ import (
checker "github.com/vdemeester/shakers"
"crypto/tls"
"errors"
"fmt"
"github.com/containous/traefik/integration/utils"
@@ -26,7 +25,7 @@ type EtcdSuite struct {
kv store.Store
}
func (s *EtcdSuite) SetUpTest(c *check.C) {
func (s *EtcdSuite) SetUpSuite(c *check.C) {
s.createComposeProject(c, "etcd")
s.composeProject.Start(c)
@@ -55,15 +54,6 @@ func (s *EtcdSuite) SetUpTest(c *check.C) {
c.Assert(err, checker.IsNil)
}
func (s *EtcdSuite) TearDownTest(c *check.C) {
// shutdown and delete compose project
if s.composeProject != nil {
s.composeProject.Stop(c)
}
}
func (s *EtcdSuite) TearDownSuite(c *check.C) {}
func (s *EtcdSuite) TestSimpleConfiguration(c *check.C) {
etcdHost := s.composeProject.Container(c, "etcd").NetworkSettings.IPAddress
file := s.adaptFile(c, "fixtures/etcd/simple.toml", struct{ EtcdHost string }{etcdHost})
@@ -203,266 +193,3 @@ func (s *EtcdSuite) TestNominalConfiguration(c *check.C) {
c.Assert(err, checker.IsNil)
c.Assert(resp.StatusCode, checker.Equals, 404)
}
func (s *EtcdSuite) TestGlobalConfiguration(c *check.C) {
etcdHost := s.composeProject.Container(c, "etcd").NetworkSettings.IPAddress
err := s.kv.Put("/traefik/entrypoints/http/address", []byte(":8001"), nil)
c.Assert(err, checker.IsNil)
// wait for etcd
err = utils.Try(60*time.Second, func() error {
_, err := s.kv.Exists("/traefik/entrypoints/http/address")
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
// start traefik
cmd := exec.Command(traefikBinary, "--configFile=fixtures/simple_web.toml", "--etcd", "--etcd.endpoint="+etcdHost+":4001")
// cmd.Stdout = os.Stdout
// cmd.Stderr = os.Stderr
err = cmd.Start()
c.Assert(err, checker.IsNil)
defer cmd.Process.Kill()
whoami1 := s.composeProject.Container(c, "whoami1")
whoami2 := s.composeProject.Container(c, "whoami2")
whoami3 := s.composeProject.Container(c, "whoami3")
whoami4 := s.composeProject.Container(c, "whoami4")
backend1 := map[string]string{
"/traefik/backends/backend1/circuitbreaker/expression": "NetworkErrorRatio() > 0.5",
"/traefik/backends/backend1/servers/server1/url": "http://" + whoami1.NetworkSettings.IPAddress + ":80",
"/traefik/backends/backend1/servers/server1/weight": "10",
"/traefik/backends/backend1/servers/server2/url": "http://" + whoami2.NetworkSettings.IPAddress + ":80",
"/traefik/backends/backend1/servers/server2/weight": "1",
}
backend2 := map[string]string{
"/traefik/backends/backend2/loadbalancer/method": "drr",
"/traefik/backends/backend2/servers/server1/url": "http://" + whoami3.NetworkSettings.IPAddress + ":80",
"/traefik/backends/backend2/servers/server1/weight": "1",
"/traefik/backends/backend2/servers/server2/url": "http://" + whoami4.NetworkSettings.IPAddress + ":80",
"/traefik/backends/backend2/servers/server2/weight": "2",
}
frontend1 := map[string]string{
"/traefik/frontends/frontend1/backend": "backend2",
"/traefik/frontends/frontend1/entrypoints": "http",
"/traefik/frontends/frontend1/priority": "1",
"/traefik/frontends/frontend1/routes/test_1/rule": "Host:test.localhost",
}
frontend2 := map[string]string{
"/traefik/frontends/frontend2/backend": "backend1",
"/traefik/frontends/frontend2/entrypoints": "http",
"/traefik/frontends/frontend2/priority": "10",
"/traefik/frontends/frontend2/routes/test_2/rule": "Path:/test",
}
for key, value := range backend1 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range backend2 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range frontend1 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range frontend2 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
// wait for etcd
err = utils.Try(60*time.Second, func() error {
_, err := s.kv.Exists("/traefik/frontends/frontend2/routes/test_2/rule")
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
// wait for traefik
err = utils.TryRequest("http://127.0.0.1:8080/api/providers", 60*time.Second, func(res *http.Response) error {
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
}
if !strings.Contains(string(body), "Path:/test") {
return errors.New("Incorrect traefik config")
}
return nil
})
c.Assert(err, checker.IsNil)
//check
client := &http.Client{}
req, err := http.NewRequest("GET", "http://127.0.0.1:8001/", nil)
c.Assert(err, checker.IsNil)
req.Host = "test.localhost"
response, err := client.Do(req)
c.Assert(err, checker.IsNil)
c.Assert(response.StatusCode, checker.Equals, 200)
}
func (s *EtcdSuite) TestCertificatesContentstWithSNIConfigHandshake(c *check.C) {
etcdHost := s.composeProject.Container(c, "etcd").NetworkSettings.IPAddress
// start traefik
cmd := exec.Command(traefikBinary, "--configFile=fixtures/simple_web.toml", "--etcd", "--etcd.endpoint="+etcdHost+":4001")
// cmd.Stdout = os.Stdout
// cmd.Stderr = os.Stderr
whoami1 := s.composeProject.Container(c, "whoami1")
whoami2 := s.composeProject.Container(c, "whoami2")
whoami3 := s.composeProject.Container(c, "whoami3")
whoami4 := s.composeProject.Container(c, "whoami4")
//Copy the contents of the certificate files into ETCD
snitestComCert, err := ioutil.ReadFile("fixtures/https/snitest.com.cert")
c.Assert(err, checker.IsNil)
snitestComKey, err := ioutil.ReadFile("fixtures/https/snitest.com.key")
c.Assert(err, checker.IsNil)
snitestOrgCert, err := ioutil.ReadFile("fixtures/https/snitest.org.cert")
c.Assert(err, checker.IsNil)
snitestOrgKey, err := ioutil.ReadFile("fixtures/https/snitest.org.key")
c.Assert(err, checker.IsNil)
globalConfig := map[string]string{
"/traefik/entrypoints/https/address": ":4443",
"/traefik/entrypoints/https/tls/certificates/0/certfile": string(snitestComCert),
"/traefik/entrypoints/https/tls/certificates/0/keyfile": string(snitestComKey),
"/traefik/entrypoints/https/tls/certificates/1/certfile": string(snitestOrgCert),
"/traefik/entrypoints/https/tls/certificates/1/keyfile": string(snitestOrgKey),
"/traefik/defaultentrypoints/0": "https",
}
backend1 := map[string]string{
"/traefik/backends/backend1/circuitbreaker/expression": "NetworkErrorRatio() > 0.5",
"/traefik/backends/backend1/servers/server1/url": "http://" + whoami1.NetworkSettings.IPAddress + ":80",
"/traefik/backends/backend1/servers/server1/weight": "10",
"/traefik/backends/backend1/servers/server2/url": "http://" + whoami2.NetworkSettings.IPAddress + ":80",
"/traefik/backends/backend1/servers/server2/weight": "1",
}
backend2 := map[string]string{
"/traefik/backends/backend2/loadbalancer/method": "drr",
"/traefik/backends/backend2/servers/server1/url": "http://" + whoami3.NetworkSettings.IPAddress + ":80",
"/traefik/backends/backend2/servers/server1/weight": "1",
"/traefik/backends/backend2/servers/server2/url": "http://" + whoami4.NetworkSettings.IPAddress + ":80",
"/traefik/backends/backend2/servers/server2/weight": "2",
}
frontend1 := map[string]string{
"/traefik/frontends/frontend1/backend": "backend2",
"/traefik/frontends/frontend1/entrypoints": "http",
"/traefik/frontends/frontend1/priority": "1",
"/traefik/frontends/frontend1/routes/test_1/rule": "Host:snitest.com",
}
frontend2 := map[string]string{
"/traefik/frontends/frontend2/backend": "backend1",
"/traefik/frontends/frontend2/entrypoints": "http",
"/traefik/frontends/frontend2/priority": "10",
"/traefik/frontends/frontend2/routes/test_2/rule": "Host:snitest.org",
}
for key, value := range globalConfig {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range backend1 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range backend2 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range frontend1 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
for key, value := range frontend2 {
err := s.kv.Put(key, []byte(value), nil)
c.Assert(err, checker.IsNil)
}
// wait for etcd
err = utils.Try(60*time.Second, func() error {
_, err := s.kv.Exists("/traefik/frontends/frontend2/routes/test_2/rule")
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
err = cmd.Start()
c.Assert(err, checker.IsNil)
defer cmd.Process.Kill()
// wait for traefik
err = utils.TryRequest("http://127.0.0.1:8080/api/providers", 60*time.Second, func(res *http.Response) error {
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
}
if !strings.Contains(string(body), "Host:snitest.org") {
return errors.New("Incorrect traefik config")
}
return nil
})
c.Assert(err, checker.IsNil)
//check
tlsConfig := &tls.Config{
InsecureSkipVerify: true,
ServerName: "snitest.com",
}
conn, err := tls.Dial("tcp", "127.0.0.1:4443", tlsConfig)
c.Assert(err, checker.IsNil, check.Commentf("failed to connect to server"))
defer conn.Close()
err = conn.Handshake()
c.Assert(err, checker.IsNil, check.Commentf("TLS handshake error"))
cs := conn.ConnectionState()
err = cs.PeerCertificates[0].VerifyHostname("snitest.com")
c.Assert(err, checker.IsNil, check.Commentf("certificate did not match SNI servername"))
}
func (s *EtcdSuite) TestCommandStoreConfig(c *check.C) {
etcdHost := s.composeProject.Container(c, "etcd").NetworkSettings.IPAddress
cmd := exec.Command(traefikBinary, "storeconfig", "--configFile=fixtures/simple_web.toml", "--etcd.endpoint="+etcdHost+":4001")
err := cmd.Start()
c.Assert(err, checker.IsNil)
// wait for traefik finish without error
cmd.Wait()
//CHECK
checkmap := map[string]string{
"/traefik/loglevel": "DEBUG",
"/traefik/defaultentrypoints/0": "http",
"/traefik/entrypoints/http/address": ":8000",
"/traefik/web/address": ":8080",
"/traefik/etcd/endpoint": (etcdHost + ":4001"),
}
for key, value := range checkmap {
var p *store.KVPair
err = utils.Try(60*time.Second, func() error {
p, err = s.kv.Get(key)
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
c.Assert(string(p.Value), checker.Equals, value)
}
}

@@ -1,111 +0,0 @@
package main
import (
"bytes"
"errors"
"io/ioutil"
"net/http"
"os"
"os/exec"
"strings"
"text/template"
"time"
"github.com/containous/traefik/integration/utils"
"github.com/go-check/check"
checker "github.com/vdemeester/shakers"
)
// Eureka test suites (using libcompose)
type EurekaSuite struct{ BaseSuite }
func (s *EurekaSuite) SetUpSuite(c *check.C) {
s.createComposeProject(c, "eureka")
s.composeProject.Start(c)
}
func (s *EurekaSuite) TestSimpleConfiguration(c *check.C) {
eurekaHost := s.composeProject.Container(c, "eureka").NetworkSettings.IPAddress
whoami1Host := s.composeProject.Container(c, "whoami1").NetworkSettings.IPAddress
file := s.adaptFile(c, "fixtures/eureka/simple.toml", struct{ EurekaHost string }{eurekaHost})
defer os.Remove(file)
cmd := exec.Command(traefikBinary, "--configFile="+file)
err := cmd.Start()
c.Assert(err, checker.IsNil)
defer cmd.Process.Kill()
eurekaURL := "http://" + eurekaHost + ":8761/eureka/apps"
// wait for eureka
err = utils.TryRequest(eurekaURL, 60*time.Second, func(res *http.Response) error {
if err != nil {
return err
}
return nil
})
c.Assert(err, checker.IsNil)
eurekaTemplate := `
{
"instance": {
"hostName": "{{ .IP }}",
"app": "{{ .ID }}",
"ipAddr": "{{ .IP }}",
"status": "UP",
"port": {
"$": {{ .Port }},
"@enabled": "true"
},
"dataCenterInfo": {
"name": "MyOwn"
}
}
}`
tmpl, err := template.New("eurekaTemplate").Parse(eurekaTemplate)
c.Assert(err, checker.IsNil)
buf := new(bytes.Buffer)
templateVars := map[string]string{
"ID": "tests-integration-traefik",
"IP": whoami1Host,
"Port": "80",
}
// add in eureka
err = tmpl.Execute(buf, templateVars)
resp, err := http.Post(eurekaURL+"/tests-integration-traefik", "application/json", strings.NewReader(buf.String()))
c.Assert(err, checker.IsNil)
c.Assert(resp.StatusCode, checker.Equals, 204)
// wait for traefik
err = utils.TryRequest("http://127.0.0.1:8080/api/providers", 60*time.Second, func(res *http.Response) error {
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
}
if !strings.Contains(string(body), "Host:tests-integration-traefik") {
return errors.New("Incorrect traefik config")
}
return nil
})
c.Assert(err, checker.IsNil)
client := &http.Client{}
req, err := http.NewRequest("GET", "http://127.0.0.1:8000/", nil)
c.Assert(err, checker.IsNil)
req.Host = "tests-integration-traefik"
resp, err = client.Do(req)
c.Assert(err, checker.IsNil)
c.Assert(resp.StatusCode, checker.Equals, 200)
// TODO validate : run on 80
resp, err = http.Get("http://127.0.0.1:8000/")
// Expected a 404 as we did not configure anything
c.Assert(err, checker.IsNil)
c.Assert(resp.StatusCode, checker.Equals, 404)
}


@@ -1,37 +0,0 @@
# How to generate the self-signed wildcard certificate
```bash
#!/usr/bin/env bash
# Specify where we will install
# the wildcard certificate
SSL_DIR="./ssl"
# Set the wildcarded domain
# we want to use
DOMAIN="*.acme.wtf"
# A blank passphrase
PASSPHRASE=""
# Set our CSR variables
SUBJ="
C=FR
ST=MP
O=
localityName=Toulouse
commonName=$DOMAIN
organizationalUnitName=Traefik
emailAddress=
"
# Create our SSL directory
# in case it doesn't exist
sudo mkdir -p "$SSL_DIR"
# Generate our Private Key, CSR and Certificate
sudo openssl genrsa -out "$SSL_DIR/wildcard.key" 2048
sudo openssl req -new -subj "$(echo -n "$SUBJ" | tr "\n" "/")" -key "$SSL_DIR/wildcard.key" -out "$SSL_DIR/wildcard.csr" -passin pass:$PASSPHRASE
sudo openssl x509 -req -days 3650 -in "$SSL_DIR/wildcard.csr" -signkey "$SSL_DIR/wildcard.key" -out "$SSL_DIR/wildcard.crt"
sudo rm -f "$SSL_DIR/wildcard.csr"
```
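Not part of the original script: as a quick sanity check, a short Go program can confirm that the generated key actually matches the certificate and that the wildcard covers the host used in the ACME fixtures. The `ssl/` paths and the `traefik.acme.wtf` hostname below are assumptions carried over from the script and fixtures, not output of the script itself.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
)

func main() {
	// LoadX509KeyPair fails if the private key does not match the certificate.
	pair, err := tls.LoadX509KeyPair("ssl/wildcard.crt", "ssl/wildcard.key")
	if err != nil {
		log.Fatalf("certificate/key mismatch: %v", err)
	}

	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatalf("cannot parse certificate: %v", err)
	}
	fmt.Println("subject:", leaf.Subject.CommonName)
	fmt.Println("expires:", leaf.NotAfter)

	// The wildcard *.acme.wtf should cover the host used in the ACME fixtures.
	if err := leaf.VerifyHostname("traefik.acme.wtf"); err != nil {
		log.Fatalf("hostname not covered: %v", err)
	}
	fmt.Println("traefik.acme.wtf is covered by the wildcard certificate")
}
```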


@@ -1,33 +0,0 @@
logLevel = "DEBUG"

defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":8080"
  [entryPoints.https]
  address = ":5001"
    [entryPoints.https.tls]

[acme]
email = "test@traefik.io"
storage = "/dev/null"
entryPoint = "https"
onDemand = {{.OnDemand}}
OnHostRule = {{.OnHostRule}}
caServer = "http://{{.BoulderHost}}:4000/directory"

[file]

[backends]
  [backends.backend]
    [backends.backend.servers.server1]
    url = "http://127.0.0.1:9010"

[frontends]
  [frontends.frontend]
  backend = "backend"
    [frontends.frontend.routes.test]
    rule = "Host:traefik.acme.wtf"


@@ -1,35 +0,0 @@
logLevel = "DEBUG"

defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":8080"
  [entryPoints.https]
  address = ":5001"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      CertFile = "fixtures/acme/ssl/wildcard.crt"
      KeyFile = "fixtures/acme/ssl/wildcard.key"

[acme]
email = "test@traefik.io"
storage = "/dev/null"
entryPoint = "https"
onDemand = {{.OnDemand}}
OnHostRule = {{.OnHostRule}}
caServer = "http://{{.BoulderHost}}:4000/directory"

[file]

[backends]
  [backends.backend]
    [backends.backend.servers.server1]
    url = "http://127.0.0.1:9010"

[frontends]
  [frontends.frontend]
  backend = "backend"
    [frontends.frontend.routes.test]
    rule = "Host:traefik.acme.wtf"


@@ -1,19 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCS90TE7NuTqzANBgkqhkiG9w0BAQsFADBUMQswCQYDVQQGEwJG
UjELMAkGA1UECAwCTVAxETAPBgNVBAcMCFRvdWxvdXNlMRMwEQYDVQQDDAoqLmFj
bWUud3RmMRAwDgYDVQQLDAdUcmFlZmlrMB4XDTE3MDYyMzE0NTE0MVoXDTI3MDYy
MTE0NTE0MVowVDELMAkGA1UEBhMCRlIxCzAJBgNVBAgMAk1QMREwDwYDVQQHDAhU
b3Vsb3VzZTETMBEGA1UEAwwKKi5hY21lLnd0ZjEQMA4GA1UECwwHVHJhZWZpazCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAODqsVCLhauFZPhPXqZDIKST
wqoJST+jO5O/WmA7oC4S6JlecRoNsHAXyddd3cQW3yZqB0ryOHrMOpMX0PPXf3jS
OOXoXA6xsq+RXlR4hDrBkOrj/LR/g62Eiuj2JVO2uy6tKJIetSB/Wzl6OgRkY/um
EXIc7zQS81/QKg+pg7Z4AYJht5J88nOFHJ3RspUMaH1vJ6LhH3MOUkgFj+I1OiqX
Tnkd7EDWbkYxAJa0xI2qbmY5VYv8dsIUN+IlPFDtBt87Fc2qv5dQkOz11FDYxWnz
+kxX6+MESLBaTvJjXvG+bzTfh9xCExFQFiN+Us0JuLX8HKQ4MqWL2IiVLsko2osC
AwEAATANBgkqhkiG9w0BAQsFAAOCAQEAl2jTX2yzUpiufrJ6WtZjKIAH8GF817hS
dWvt2eyLrBPvllMUj8zqCE5uNVUDVuXQvOhOyx+3zZzfcgfYqbTD8G8amNWcSiRA
vonoOn1p1pW2OonSi32h3qv5i4gCyh/6cBneYi03lkQ7uLCsJK9+dXTAvoKL6s23
IXhZGS0Qkvs4vkORA2MX9tyJdyfCCaCx3GpPCGkKrKJ8ePTEvq1ZE2xdhERnV5pz
L1PRY2QthXXVjMz7AXw0gkHvAbtrKVKR1Tv4ZK34bFBh/kyGAjkcn0zdeFKITqTF
tCoXWEArmiRqGuXwbqU3mEA9Cv6aMM+0YX89K2InhOnBU80OWs0uMQ==
-----END CERTIFICATE-----


@@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA4OqxUIuFq4Vk+E9epkMgpJPCqglJP6M7k79aYDugLhLomV5x
Gg2wcBfJ113dxBbfJmoHSvI4esw6kxfQ89d/eNI45ehcDrGyr5FeVHiEOsGQ6uP8
tH+DrYSK6PYlU7a7Lq0okh61IH9bOXo6BGRj+6YRchzvNBLzX9AqD6mDtngBgmG3
knzyc4UcndGylQxofW8nouEfcw5SSAWP4jU6KpdOeR3sQNZuRjEAlrTEjapuZjlV
i/x2whQ34iU8UO0G3zsVzaq/l1CQ7PXUUNjFafP6TFfr4wRIsFpO8mNe8b5vNN+H
3EITEVAWI35SzQm4tfwcpDgypYvYiJUuySjaiwIDAQABAoIBAQCs9Ex9v4x+pQlL
2NzTxXLom6dp0dI92WwK5W696Zv3UhsDNRiMDFLNH73amxfZnizjAU2yWCkOZNX2
Hq5TlDc11ZJjWRbRRdw+He8HzdUAybCCr+a3dgbv+6hGFGIHydCOyCEWm/50ivq/
bDoI/pnT/ZQUyCM5TAlSeGSfvp7GRHi9v3HOl85H1Pn2Dvyk9gj4y3BIFrKuv8fJ
o6aEzlfgWGROCzshU2m8fB9P0B4hWDlJsc1D01sW60zhjLo9+XoWznmw5mczz7sc
S5sdDh47rSJsNRuFd7YDjeLzJWPqLrKVB5nn6nRbvrnBqhfsknkO4VIXhmEMSs1u
RMYOJ9ShAoGBAPinA6ktIeez1t5IsfxGwbCeZzFI1suZqZeX6ezNKaMpeykyAPuh
CqN7H+a4NCKsinsgHJowU98ckHeAsQ22s7R8dFZhyxEXkcBawY2soK29eq2aJHnY
lqKOwjOA7wgElRHwLkNFniQ5lKFPMly8a9NVAqg+Th/J3uR+7wE2t+b1AoGBAOeQ
H/vVkdaNB2ovnCxMh+OfxpcjkfF6KnD2jpn/TKsbR5BtnrtyRLc5+qt52D0CEgSy
qU3zrsZebShej3OIBPrEwIcPN+LezaxnLMf9RXdOde+wWrQLWLkShJaSTwSoGqZB
fcO0/sc1lzhGxm++ByP5mWbHr/VM9IdTQQH5Bct/AoGBAMhmOrIXeNL4Az2FU0Vi
dWp2T+7NqKfRAXj264Z5V4xzuxpZfadPhHZ7nhth7Erhyn4vRD4UoxQXPmvB4XCP
Bkh5YX3ZNUNiPorL2mDnd1xvcLcHm0xEfisnaWb/DCbnIomhjHeVXT4O1jYn0Qwi
o7hgNFMKXAaMuUJo9xGAWzkdAoGASxC4nY2tOiz7k1udt+qTPqHj4cjhHbOpoHb8
4UUWmH0+ZL50b3Vqey8raH0WMSjDqIw2QBPXu2yO3EBTJnOYkaZIdz/isQPjDplf
tfEPnM5tgubbcHQhLdWn75u8S9km0nB2kYPR98gSnmarGzwx2mKmbOAc1Vs+BcRi
VX5hd4cCgYAubBq0VsFT0KVU3Rva3dgPR1K5bp4r4hE5cGXm4HvLiOgv995CwPy1
27eONF9GN7hvjI6C17jA1Gyx5sN0QrsMv/1BZqiGaragMOPXFD+tVecWuKH4lZQi
VbKTOWHlGkrDCpiYWpfetQAjouj+0c6d+wigcoC8e5dwxBPI2f3rGw==
-----END RSA PRIVATE KEY-----


@@ -12,4 +12,3 @@ logLevel = "DEBUG"
endpoint = "{{.DockerHost}}"
domain = "docker.localhost"
exposedbydefault = true


@@ -1,16 +0,0 @@
defaultEntryPoints = ["http"]
logLevel = "DEBUG"

[entryPoints]
  [entryPoints.http]
  address = ":8080"

[dynamodb]
AccessKeyID = "key"
SecretAccessKey = "secret"
Endpoint = "{{.DynamoURL}}"
Region = "us-east-1"

[web]
address = ":8081"


@@ -1,14 +0,0 @@
defaultEntryPoints = ["http"]
logLevel = "DEBUG"

[entryPoints]
  [entryPoints.http]
  address = ":8000"

[eureka]
endpoint = "http://{{.EurekaHost}}:8761/eureka"
delay = "1s"

[web]
address = ":8080"


@@ -1,27 +0,0 @@
defaultEntryPoints = ["http"]
logLevel = "DEBUG"

[entryPoints]
  [entryPoints.http]
  address = ":8000"

[web]
address = ":8080"

[file]

[backends]
  [backends.backend1]
    [backends.backend1.healthcheck]
    path = "/health"
    interval = "1s"
    [backends.backend1.servers.server1]
    url = "http://{{.Server1}}:80"
    [backends.backend1.servers.server2]
    url = "http://{{.Server2}}:80"

[frontends]
  [frontends.frontend1]
  backend = "backend1"
    [frontends.frontend1.routes.test_1]
    rule = "Host:test.localhost"
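For context, the health-check fixture above only makes sense against backends that answer on `/health`. The following is a hypothetical, minimal stand-in for such a backend (in the real test, the compose services substituted for `{{.Server1}}` and `{{.Server2}}` play this role); the port and response bodies are illustrative assumptions.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Health endpoint probed by Traefik at the configured interval ("1s" in the fixture).
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "OK")
	})

	// Regular traffic routed by the frontend rule Host:test.localhost.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a healthy backend")
	})

	log.Fatal(http.ListenAndServe(":80", nil))
}
```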


@@ -1,25 +0,0 @@
# This is how the certs were created
```bash
openssl req -new -newkey rsa:2048 -x509 -days 3650 -extensions v3_ca -keyout ca1.pem -out ca1.crt
openssl req -new -newkey rsa:2048 -x509 -days 3650 -extensions v3_ca -keyout ca2.pem -out ca2.crt
openssl req -new -newkey rsa:2048 -x509 -days 3650 -extensions v3_ca -keyout ca3.pem -out ca3.crt
openssl rsa -in ca1.pem -out ca1.key
openssl rsa -in ca2.pem -out ca2.key
openssl rsa -in ca3.pem -out ca3.key
cat ca1.crt ca2.crt > ca1and2.crt
rm ca1.pem ca2.pem ca3.pem
openssl genrsa -out client1.key 2048
openssl genrsa -out client2.key 2048
openssl genrsa -out client3.key 2048
openssl req -key client1.key -new -out client1.csr
openssl req -key client2.key -new -out client2.csr
openssl req -key client3.key -new -out client3.csr
openssl x509 -req -days 3650 -in client1.csr -CA ca1.crt -CAkey ca1.key -CAcreateserial -out client1.crt
openssl x509 -req -days 3650 -in client2.csr -CA ca2.crt -CAkey ca2.key -CAcreateserial -out client2.crt
openssl x509 -req -days 3650 -in client3.csr -CA ca3.crt -CAkey ca3.key -CAcreateserial -out client3.crt
```
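Again as an optional, hypothetical check that is not part of the original README: the sketch below verifies that `client1.crt` chains to `ca1.crt` using Go's `crypto/x509`. File names follow the script above; `ExtKeyUsageAny` is used because the generated client certificates carry no extended-key-usage extension.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"io/ioutil"
	"log"
)

func main() {
	caPEM, err := ioutil.ReadFile("ca1.crt")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		log.Fatal("cannot parse ca1.crt")
	}

	clientPEM, err := ioutil.ReadFile("client1.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(clientPEM)
	if block == nil {
		log.Fatal("cannot decode client1.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Accept any key usage: the client certificates were issued without an EKU extension.
	_, err = cert.Verify(x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
	})
	if err != nil {
		log.Fatalf("client1.crt does not chain to ca1.crt: %v", err)
	}
	fmt.Println("client1.crt is signed by ca1.crt")
}
```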


@@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDMjCCAhqgAwIBAgIJAKXHiSnQw6LqMA0GCSqGSIb3DQEBBQUAMBoxGDAWBgNV
BAMTD2NhMS5leGFtcGxlLmNvbTAeFw0xNjA2MTgxMzAyNDdaFw0yNjA2MTYxMzAy
NDdaMBoxGDAWBgNVBAMTD2NhMS5leGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBAL9ZNf1Pqu30i/DUyAAbEVFfCvGEmN9hfGAK44IrBqfC
1ziW2Lfg2AkswNIC/T6M+lcoN0ftPhJpnP2Cdz9U/gF9FMd/XAGY/SOiun7wC8so
qdab7CMDlHP1c/XiL7lGEdm9RfynLcJ5JJn2X7mXwEZTviFtiJVmaoAl3TVNy3MZ
ZyfjNac9sA5idpX66TpVO9tE1gu71nRkBvTEzO/IYv8rcWQmogvH7DN3UurP3RUK
weij01rekG3OOOXUlQgZO6mhuvrKes9Xoc901bmTkOgTq7wIFf2AZozU4wy6kZfM
0sdzmjMpuEr7oROepvtzFiVyNIEGDJ3QvEEY4QJaFvcCAwEAAaN7MHkwHQYDVR0O
BBYEFFyJ/cSOOvcsfu+WLZbi/u3t8W/uMEoGA1UdIwRDMEGAFFyJ/cSOOvcsfu+W
LZbi/u3t8W/uoR6kHDAaMRgwFgYDVQQDEw9jYTEuZXhhbXBsZS5jb22CCQClx4kp
0MOi6jAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQCOBLJJF0esBVLX
xmj0xa0TREXTxco40e/fmUU1cGYgl1UCCZI7MLDcl6k6Km9Sbp/LCpZx88mtLwGY
wUss2mQ058kqiUrpb/U8xEbglLrRtsp1y8z7lood/8ru39zj1/9X4MFyqNi6390I
zxZNf2QauUS1TMxgv6UhVE52JaAL+sn2hqA6IaSYeT9NFzFsulCr29mxlIC9SzUr
Mbqri9LKX5aciy78+hQBKdXoJ5raRwttBvULabOrLhZdyvvL6QfcdgRV+JOT7vKn
htQahWSKoqhdpM6Q2pXP42/MyuKXFB5Nk8fnFiIoXH0Bs9vlPLOvToM2jYJ+LlDd
85qbL4eP
-----END CERTIFICATE-----


@@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAv1k1/U+q7fSL8NTIABsRUV8K8YSY32F8YArjgisGp8LXOJbY
t+DYCSzA0gL9Poz6Vyg3R+0+Emmc/YJ3P1T+AX0Ux39cAZj9I6K6fvALyyip1pvs
IwOUc/Vz9eIvuUYR2b1F/KctwnkkmfZfuZfARlO+IW2IlWZqgCXdNU3LcxlnJ+M1
pz2wDmJ2lfrpOlU720TWC7vWdGQG9MTM78hi/ytxZCaiC8fsM3dS6s/dFQrB6KPT
Wt6Qbc445dSVCBk7qaG6+sp6z1ehz3TVuZOQ6BOrvAgV/YBmjNTjDLqRl8zSx3Oa
Mym4SvuhE56m+3MWJXI0gQYMndC8QRjhAloW9wIDAQABAoIBAGJ9g8mn6R5kImfK
zksno4lTt2lLS/im0AMLd8E3bkyJgIgTNOeopupKC9HNUhaRMAYOoC24kpudmv3t
2n1RvRB9FmX9SxlTavCdwQq3egqPGqRpS2lWXWI2dAKa8t+VjniZ8N00G9yeyFUr
OGhqEMDiN9oy6/uiZK0jUDIwocjS5FZMBh+epM7/CnKj3uvqarmFXKcJ4ni28ww4
RPrXDm+VvXa30/hK8q8Eo3C3u39TMvNEaRqMP/zqRY89fbpd1+Okno79dugFhz7D
r/Jae9z4ChFBXegDmA/OkWOdLY5LyvwvpJpONjD/5wImY1OAJlFTg7S+2FcSVvCF
diUJ7/ECgYEA9pHYlJsWAo/izRUVhKRtBAVVjnlidxExuvOGNXpyPjZd5ruXochu
J6tAKA0rSE4RsISFVCrkQmjDgjyKa2D+o/hsTTlW3yrD4TSLI8/MrDtfCw9XRqeE
KqfeqT79Hh0icnsUVYH4eoND9CKuJ/B9NcdyUqRPm7Pnrx07SnhGHd8CgYEAxqqy
MPIDO2dadRqUIhWwMPIBegkZC1eeuv4pNEyukZc4+pXRshKXhvhmvz5NgsaSsKxZ
O6FgqzgTceLEubVYF4hvy1TC+3Fc/PFvh4Fo3SKjtiJRJjRREDWBu6hl16Cw/83j
k6Im//8WD1ri9iFf8RjrBwYH1xHqGTkNEUHl+ekCgYEAzlIWD6uCDFzIJGGLIvXP
fvjTsadivE039r7Fw8QVCnfFtUetxyOHAUysH5d9a0BgTvtk8Zv+ao9tYXI1RUrh
aOV8AlaDmbQYOj8UWsAL/OalTgTlO+r6jhLwH2DkvqkUZQUWa8KY4DMszoGihysW
KsUcpYh2UMyGhqKINXVU/rMCgYEAqJxbG9trDtHLHjRuoPcTUJc01aQ/EzdMSpxH
0FF8n6he/Z6GGMJaxHyyh4GTO3jZKwU7vrZaWzb+mdvC53KXz3FGoKXRzqIKL8uh
wrn8jCJIG97ITMp+OmmPL/veY8HIN3NAwR4QR5jx2hpjIk51JSTm5FEj+k8EBmA7
TPhG/XECgYA9e9B0jgR2aFSAWzpGMZYPW+NdGQlySv94AJmfF8U5J7PmU2BojvVn
bhWNSQk2LI/mTjLgB+liYtLqFGkgIrJdbBOQ8hKSBPGQltSR0Dvf0ZK/0F1hqDTW
m3AUvPZthNMNJIYkTav5a246tyKkmg11nUQsgoqdxCrEiLyv48PFnw==
-----END RSA PRIVATE KEY-----


@@ -1 +0,0 @@
83E81F36599F4400


@@ -1,40 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDMjCCAhqgAwIBAgIJAKXHiSnQw6LqMA0GCSqGSIb3DQEBBQUAMBoxGDAWBgNV
BAMTD2NhMS5leGFtcGxlLmNvbTAeFw0xNjA2MTgxMzAyNDdaFw0yNjA2MTYxMzAy
NDdaMBoxGDAWBgNVBAMTD2NhMS5leGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBAL9ZNf1Pqu30i/DUyAAbEVFfCvGEmN9hfGAK44IrBqfC
1ziW2Lfg2AkswNIC/T6M+lcoN0ftPhJpnP2Cdz9U/gF9FMd/XAGY/SOiun7wC8so
qdab7CMDlHP1c/XiL7lGEdm9RfynLcJ5JJn2X7mXwEZTviFtiJVmaoAl3TVNy3MZ
ZyfjNac9sA5idpX66TpVO9tE1gu71nRkBvTEzO/IYv8rcWQmogvH7DN3UurP3RUK
weij01rekG3OOOXUlQgZO6mhuvrKes9Xoc901bmTkOgTq7wIFf2AZozU4wy6kZfM
0sdzmjMpuEr7oROepvtzFiVyNIEGDJ3QvEEY4QJaFvcCAwEAAaN7MHkwHQYDVR0O
BBYEFFyJ/cSOOvcsfu+WLZbi/u3t8W/uMEoGA1UdIwRDMEGAFFyJ/cSOOvcsfu+W
LZbi/u3t8W/uoR6kHDAaMRgwFgYDVQQDEw9jYTEuZXhhbXBsZS5jb22CCQClx4kp
0MOi6jAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQCOBLJJF0esBVLX
xmj0xa0TREXTxco40e/fmUU1cGYgl1UCCZI7MLDcl6k6Km9Sbp/LCpZx88mtLwGY
wUss2mQ058kqiUrpb/U8xEbglLrRtsp1y8z7lood/8ru39zj1/9X4MFyqNi6390I
zxZNf2QauUS1TMxgv6UhVE52JaAL+sn2hqA6IaSYeT9NFzFsulCr29mxlIC9SzUr
Mbqri9LKX5aciy78+hQBKdXoJ5raRwttBvULabOrLhZdyvvL6QfcdgRV+JOT7vKn
htQahWSKoqhdpM6Q2pXP42/MyuKXFB5Nk8fnFiIoXH0Bs9vlPLOvToM2jYJ+LlDd
85qbL4eP
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDMjCCAhqgAwIBAgIJAKjhXgiuPQexMA0GCSqGSIb3DQEBBQUAMBoxGDAWBgNV
BAMTD2NhMi5leGFtcGxlLmNvbTAeFw0xNjA2MTgxMzAzMjJaFw0yNjA2MTYxMzAz
MjJaMBoxGDAWBgNVBAMTD2NhMi5leGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBAMx8S4U3tdeMGn1NEUNWCmD7pIYUCUhtORrn2rqF5b2M
ZQJZXAIfWJ7KrGjn8W7KPx8/V2FREHF1Z6v1fpB2rfCIFo97HszhQEt6lduKup2j
09ItpFjec7RahwaMksYDwl4PaxgKe2OYdLFJ/QIv8+I01vWPXFmHgZkBHQWhR5nV
TvGM6MU834e+PXxCXfcaC8VYpbHYKYxHmM5Sxa5V9WlppBBshB0OL+KrCPXwPqHl
StZPkG2p2qJUjCZ38uDx605RYaORZ0eDhrKj4M3lJzOTTcC4I77BzTb74+GcRT+R
lJMrWrS22jNZONnawBdbTWIFM4PzaqVvE7qVwZK1M5UCAwEAAaN7MHkwHQYDVR0O
BBYEFPooSq3ZvoyIzRQ96/dwUC0LDBvRMEoGA1UdIwRDMEGAFPooSq3ZvoyIzRQ9
6/dwUC0LDBvRoR6kHDAaMRgwFgYDVQQDEw9jYTIuZXhhbXBsZS5jb22CCQCo4V4I
rj0HsTAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQCvRgu11LrF7G9X
yuvUwBZJ8FgjAMPwXQIAYg47tlvD9ZDiZgXVulWOm6aHpT520MjNO9f0oKpsrSsh
7bsO4GSkbTPgGekbw4P3JtXAvlBEB5uabpdmF37Pg9s7dU/MeXCElzWF+yLVAo7o
Hj1UlENxh08FzlErNw6Djy2FZAADeSZ3LmHUl+50rrp5/DxrEhkHFm8dTTjFVPnK
KrnYLM8R7+v2Ysk6hTy4kwyiTKVZurK7ELRvS0RxWhtbVCXJ2HS1lv/LgEH1hyIP
SwvyZ25JhcGrBAL/jpzTxdDEGsPfUSVfrUhrhDWxg0dzY+ptwdTWHqxyR2YKmOgU
dKYIz/nK
-----END CERTIFICATE-----


@@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDMjCCAhqgAwIBAgIJAKjhXgiuPQexMA0GCSqGSIb3DQEBBQUAMBoxGDAWBgNV
BAMTD2NhMi5leGFtcGxlLmNvbTAeFw0xNjA2MTgxMzAzMjJaFw0yNjA2MTYxMzAz
MjJaMBoxGDAWBgNVBAMTD2NhMi5leGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBAMx8S4U3tdeMGn1NEUNWCmD7pIYUCUhtORrn2rqF5b2M
ZQJZXAIfWJ7KrGjn8W7KPx8/V2FREHF1Z6v1fpB2rfCIFo97HszhQEt6lduKup2j
09ItpFjec7RahwaMksYDwl4PaxgKe2OYdLFJ/QIv8+I01vWPXFmHgZkBHQWhR5nV
TvGM6MU834e+PXxCXfcaC8VYpbHYKYxHmM5Sxa5V9WlppBBshB0OL+KrCPXwPqHl
StZPkG2p2qJUjCZ38uDx605RYaORZ0eDhrKj4M3lJzOTTcC4I77BzTb74+GcRT+R
lJMrWrS22jNZONnawBdbTWIFM4PzaqVvE7qVwZK1M5UCAwEAAaN7MHkwHQYDVR0O
BBYEFPooSq3ZvoyIzRQ96/dwUC0LDBvRMEoGA1UdIwRDMEGAFPooSq3ZvoyIzRQ9
6/dwUC0LDBvRoR6kHDAaMRgwFgYDVQQDEw9jYTIuZXhhbXBsZS5jb22CCQCo4V4I
rj0HsTAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQCvRgu11LrF7G9X
yuvUwBZJ8FgjAMPwXQIAYg47tlvD9ZDiZgXVulWOm6aHpT520MjNO9f0oKpsrSsh
7bsO4GSkbTPgGekbw4P3JtXAvlBEB5uabpdmF37Pg9s7dU/MeXCElzWF+yLVAo7o
Hj1UlENxh08FzlErNw6Djy2FZAADeSZ3LmHUl+50rrp5/DxrEhkHFm8dTTjFVPnK
KrnYLM8R7+v2Ysk6hTy4kwyiTKVZurK7ELRvS0RxWhtbVCXJ2HS1lv/LgEH1hyIP
SwvyZ25JhcGrBAL/jpzTxdDEGsPfUSVfrUhrhDWxg0dzY+ptwdTWHqxyR2YKmOgU
dKYIz/nK
-----END CERTIFICATE-----


@@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAzHxLhTe114wafU0RQ1YKYPukhhQJSG05GufauoXlvYxlAllc
Ah9YnsqsaOfxbso/Hz9XYVEQcXVnq/V+kHat8IgWj3sezOFAS3qV24q6naPT0i2k
WN5ztFqHBoySxgPCXg9rGAp7Y5h0sUn9Ai/z4jTW9Y9cWYeBmQEdBaFHmdVO8Yzo
xTzfh749fEJd9xoLxVilsdgpjEeYzlLFrlX1aWmkEGyEHQ4v4qsI9fA+oeVK1k+Q
banaolSMJnfy4PHrTlFho5FnR4OGsqPgzeUnM5NNwLgjvsHNNvvj4ZxFP5GUkyta
tLbaM1k42drAF1tNYgUzg/NqpW8TupXBkrUzlQIDAQABAoIBAGFMg2LQL2Zw8+nL
UfuIZUfgdViXEBO2ZQW4bQtzyu12cFm9y1n3MGPebEs+klL1STPFH/7eY8SY6MuZ
9K8oyXs6RgHfw7gZNk6z9bqROFrqKVBJB3qB3uxiZv1mxjASednn3D2EP1IUqPHz
EsCHsLRiECaoIHk5USFMtlKHe1pmmsvQrQX7EV9Qg0VSGvQlgxc/Pcg/WeB6uT6u
CS2serWpUE2dBUTJisnUuL7F5/3JbPEPbUG4eeTcO8IafvgdOgFEc5qUlYCFFai0
fvjSabXrJO9QE1Huw0gyC/5FHlVr5x4aJ8NzPKcMRYqn7jpdwA0eyLyBo/KtPIbJ
6s0PFAECgYEA98cKuyaBXpPyG7/Y0C89Mzlt5+Qr0fpPksH6GEelPJVdhrdXP32W
66ROgCVZpf2pQeCCHfXyWdZQwEdSf+8ee1DJMSNgIm4Usqp6yIDS0iZ7pPWz0KSI
un/dm3lRE7hFMIQfbNf3rA0WD8Ani3c76eZruwQ5DNdXNOM+z1DN38UCgYEA00V4
6UOCcA3romkXuIyeyh/tuJ6K1J3ApUxA+E42f4raSMSMgnlAwpL0Wmt11bBOmToi
UAtwFcTfJRJSOvfmM/nd66592FAV/D4xcDIiNGh4xNDi8LSKmSj0WRYPU3YjkdFN
SwI48LmQKMfj3P8fClazKsdcDccfO4pyhEK98ZECgYEAt8QZw1/1hw22/Lm2tgCz
JTCswNXLYjqBldjkAenxNROaf/WucdpVeoMr7YLGEIQnakJ2fn4QtmxrC5BaMaRJ
OTBbZ2RTQnXeR/yEf/x7X31HKrtIF7BP7/Ixi8PYTAXY2vjCzdkHScWS3S+opJlU
CE/rCpNBNLLpbMI1rVDCv/kCgYEAkP3/sg67yQ00prx7JBOVsl/hNK/R1YMCQC8p
838x1axEjGYfjDeM4zwZaKiRMPsTpgMIo2iGHtqCzh1Zw9B38znLPMD+6uJjhD5m
jXpKkS8VmvVEmi89Y0mBEFacZAoS9TLwWccHruWa8vHkBror4luIEJbLLUV3wNQO
LYjkdJECgYBcIjZ1iQiOmFL8lm/JlPOs2JcT33fjnubreHkiG42dZFN2S8D5MdU5
JBP6IVVllPmbptw9T4wcw+bjVa0LQtQMGZLMxdx5nJp5dmFE0Pj8MjLpLy641Vlx
5sv2O+eRpt4yCiuHcuvDrKPGTyM2YqF7ilQwSC5Cfki155InnU2QUg==
-----END RSA PRIVATE KEY-----


@@ -1 +0,0 @@
9EAC6D05226C2216


@@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDMjCCAhqgAwIBAgIJAK/JGxwwmv1jMA0GCSqGSIb3DQEBBQUAMBoxGDAWBgNV
BAMTD2NhMy5leGFtcGxlLmNvbTAeFw0xNjA2MTgxMzAzNTNaFw0yNjA2MTYxMzAz
NTNaMBoxGDAWBgNVBAMTD2NhMy5leGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBAO0B9iUp5w0m1NWC9QYWhxSE/emmmKcx99DWnzKZoIbj
TSRQtyhx+9c2z1dYAFZQpdVRSKQFn1IO8s51wlIc01KLFflz4EvSfAKZiAnkOOez
wzVQ8JWgKfOJV/ZctFPo4xtdhQmO1+U+YgSfU0ASEhHvHbIPTUJNRTfkJsGygq4q
/p9uA1TsjM4bh6AkiD1OlGjp0lbkzn3LLYpXWvgGsuejsdVkJS5pn2NKjkqVhhEg
g7hKKqm8Nc3mb+vGhw/fNppN/xeOswpMPaW77LppyFoDd/OmqqWrbzn2Fqw1nELh
zfo7AkKPyRm8eU3wSTIdmaXx1R5qPjqEmYrrDZ2HXa8CAwEAAaN7MHkwHQYDVR0O
BBYEFMR6dBZAeGgkxwSC/62xGwLEdXCdMEoGA1UdIwRDMEGAFMR6dBZAeGgkxwSC
/62xGwLEdXCdoR6kHDAaMRgwFgYDVQQDEw9jYTMuZXhhbXBsZS5jb22CCQCvyRsc
MJr9YzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQB3VgvPnLEEfbj4
Z61q8oKneklZV+WpDyWSodI6M1l/0pXJCTDRROJ37KaQHLJRQo+rMJiYKvQkCU+y
9JhLdRdMEzy++9hIWiNbDiy3BNMUiQOS1234WVFBosQ6uXNhXbL/Anl4xgiFFRZG
FehjPo0XRvxmBHnrnE1Rce0EmU/1bwVglu8e7mG5bs0gQrXTRlTkxvucyi+B6npF
2vuzxj4q+KgeEYURxCt95JoULtMY2c0VifcdweYDO/2sYEhOVi1N+PhPvZxJD6vR
CxIuT6K3nRe58b1J/f7TH/dvURIb1mVG8+EDQVqa1bzH3JfytsIVG5VL1hppQlgZ
Y0G4haMn
-----END CERTIFICATE-----


@@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA7QH2JSnnDSbU1YL1BhaHFIT96aaYpzH30NafMpmghuNNJFC3
KHH71zbPV1gAVlCl1VFIpAWfUg7yznXCUhzTUosV+XPgS9J8ApmICeQ457PDNVDw
laAp84lX9ly0U+jjG12FCY7X5T5iBJ9TQBISEe8dsg9NQk1FN+QmwbKCrir+n24D
VOyMzhuHoCSIPU6UaOnSVuTOfcstilda+Aay56Ox1WQlLmmfY0qOSpWGESCDuEoq
qbw1zeZv68aHD982mk3/F46zCkw9pbvsumnIWgN386aqpatvOfYWrDWcQuHN+jsC
Qo/JGbx5TfBJMh2ZpfHVHmo+OoSZiusNnYddrwIDAQABAoIBAD87j71YkaFro8sX
NmIabo2l8cx9uyqYZUKdkDnCzRZP3Iv80PEEgClqISVvgB+HQsdH+XZxXZFaFaPJ
vT+FG0hhfUphhQ0VqipTZf0lm50N094MqzNwWOD12rcLAr2EW9s4Nz9WkflCjIop
K9/jMlkAj86q0HUJApen0kNJah4nLPnkqKC9BQipGe2goERHA5N8MS/k/ODJrOzI
qdD77wE5oov5sIePsGp3zCKNw89qoVTfkH8eYos6lPsAibYfgm5z7LwEtfe0ZizG
myQfAYZx3Orl2eNxAb0c1dw+hNYKfeNAwn6h4J8AKuBHawZMb2ztlTj0ZludrhQC
VuwAcrkCgYEA9sFsszjoSO8pXDnbaQ8UNGwy+C1t0fcZIOxIebKPcfipGio0R9vr
SXEEfRQb+YdIFkQpe4hwAHt1Q75zh8z+oOTq8EHprxAwI9bzgyaEIHtGibvs99XT
iWSPtL274CISiwSL8NzMl/orD6sDhmJqiXhwtf2SDubUJu3gz13CeRsCgYEA9eMM
CYiOc4wLxKqyCqe3R86vnBFVauxp9eq9XTLvD+XoGqOksXupP8rE0jx26ILmKiQZ
z99MGJoQicEpo+BW3L9wr6OJQZSrs+NqWCxlmFRJL+p3sw53B4zjgYaimNl5KH4G
8pn7XbyRXtqhSBQ2kuNrkVI4SNxdEi1K+PoZ6v0CgYEAkwVcRsy5WftloVW3rTkW
yMVO+R/YNyoLBtrBtAD4BugpmTVcQRR/dBqqmfvJTzuTb/Dc5oW8dg0ZKWvoWhmB
/Utn0A71tSDoDfKc1J+2ScQpmxclceUtTMdl+EK0Fi827S2gU7q7DDI6RfOW/hLV
d2MThNu4krhl32wMboFmxdECgYACwAhZbvKQ7kcPaw1Uuy18mx4xs6vt5zkELBz0
Fua/mcWvzpa/+W8aLI1pAI4f6Z7jZ8X2Ijw6pjZ7I/LwR0kRbP64qC6X0i7dczS0
ScLVIlQzOf8evJGuPvAoebYF2aDWSBqRyhEaqkpB8lYNdVRq7io81NuWTQipdGI7
SKjTjQKBgBNSbDUWS2CAc+fsM/fBvYHKgrigVcKyvWwvb5LRXpWgPQH4LbqhG4uA
g/mFTB5B1UBg9exN/dX6uegREdRA1/X+jRAzCqXYTFESo0/UrJhJQZ3waFKJ5PZK
WChrSl6Lg5IMF2jYP5W0HwzbPPgRGibyELYBS5gZdAZpHgZToXeT
-----END RSA PRIVATE KEY-----


@@ -1 +0,0 @@
813218563E2DA0DE


@@ -1,17 +0,0 @@
-----BEGIN CERTIFICATE-----
MIICszCCAZsCCQCD6B82WZ9D/jANBgkqhkiG9w0BAQUFADAaMRgwFgYDVQQDEw9j
YTEuZXhhbXBsZS5jb20wHhcNMTYwNjE4MTMzODQ0WhcNMjYwNjE2MTMzODQ0WjAd
MRswGQYDVQQDExJjbGllbjEuZXhhbXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDH75aclHoZkQfmeH3XpapxyF2/K73SpesY8Y8I3B33WnQc
vIy5y554pPJMtGH3ZwiN6ifo3TBEs/2WjSOWYwxfXh3utllYArApelSgUrI7SBkw
0MqVm9NG+X9cCTeWsCf+nldHOCnCARuyBEpLeRDPVlNmfgdNK2ar0KqqEPnN5UV+
k968nAuqSDtRL7Yl7R/uxEq4MglM/ocxOpGIrLTFh1eclPVaQ/dNsEJpkrnYQlFZ
aI1sWDzWoqtpAO15PgBBNnkW9EJGrF8dAds64U2jYBZLMKuHwvuERkEgOKEdUrB3
uu1dWJxS5BCumWM1C3xs6qsLeonWxZ5GXjjWObZNAgMBAAEwDQYJKoZIhvcNAQEF
BQADggEBAJKME0zm/0eokmXMCLJhKYgm8hDKOHKRFRZl7vwy9SC9cwhdlhcPEeeP
5M+dXQCtEQWgo7phoJX8nBipZ/Y0lsvDD/I3XucIkUlbOW4rk18L83nBIN4paKzW
I4CMJ6FQ72thP7L7wC/lzp3+qUCxmcpGjw9pkU3b1pQPkxBfOvfGtRFMG6E5+xj/
MtL3owJzpIH2f7vtmIszBPcgFWpvB0Sq0eJ+TwuC1huvcnmP+YZ7Iz0JhsSRw+pU
yiO9ByItBbGfK8x+DfUwCVsCL7vNscpjvTCgT3x2FNvS+XmiHZmZtpRGJPzvdI0m
Bd615VD5z+SoG/SiemqDGmt2Ank/zcI=
-----END CERTIFICATE-----


@@ -1,15 +0,0 @@
-----BEGIN CERTIFICATE REQUEST-----
MIICYjCCAUoCAQAwHTEbMBkGA1UEAxMSY2xpZW4xLmV4YW1wbGUuY29tMIIBIjAN
BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAx++WnJR6GZEH5nh916Wqcchdvyu9
0qXrGPGPCNwd91p0HLyMucueeKTyTLRh92cIjeon6N0wRLP9lo0jlmMMX14d7rZZ
WAKwKXpUoFKyO0gZMNDKlZvTRvl/XAk3lrAn/p5XRzgpwgEbsgRKS3kQz1ZTZn4H
TStmq9CqqhD5zeVFfpPevJwLqkg7US+2Je0f7sRKuDIJTP6HMTqRiKy0xYdXnJT1
WkP3TbBCaZK52EJRWWiNbFg81qKraQDteT4AQTZ5FvRCRqxfHQHbOuFNo2AWSzCr
h8L7hEZBIDihHVKwd7rtXVicUuQQrpljNQt8bOqrC3qJ1sWeRl441jm2TQIDAQAB
oAAwDQYJKoZIhvcNAQEFBQADggEBAEZ67vahAVydtW6LTXFI0cVY88vqunCWpOzz
UgJAzUnWG84CGDiyezj/llv/Nq3YbEEpBuxp/prOEwrJXAi/+tjx7wCh2iLJDqo2
aNRUiAvR/XZgafxq4NUrAze70u7BWR3QX+XSaxmIEEX1z1KJDGTfY6tYpCZNlUr+
/Hl6MXwlpWX0WR26zIrjx5u0dEsY4pviN6NxTZRQJxbQO1H1wHr6poVngOhIdErp
h2ZcqvTcASTkIEdKR6R8E2iYklgxIHNLWKaHZ6aBqW7lW17WKNSiGPfPVAtFhUTk
tBmgdVreAwMj+AdaweBVt0uBqb/9UKhqNThEnh4kJn1I0pMJzP4=
-----END CERTIFICATE REQUEST-----


@@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAx++WnJR6GZEH5nh916Wqcchdvyu90qXrGPGPCNwd91p0HLyM
ucueeKTyTLRh92cIjeon6N0wRLP9lo0jlmMMX14d7rZZWAKwKXpUoFKyO0gZMNDK
lZvTRvl/XAk3lrAn/p5XRzgpwgEbsgRKS3kQz1ZTZn4HTStmq9CqqhD5zeVFfpPe
vJwLqkg7US+2Je0f7sRKuDIJTP6HMTqRiKy0xYdXnJT1WkP3TbBCaZK52EJRWWiN
bFg81qKraQDteT4AQTZ5FvRCRqxfHQHbOuFNo2AWSzCrh8L7hEZBIDihHVKwd7rt
XVicUuQQrpljNQt8bOqrC3qJ1sWeRl441jm2TQIDAQABAoIBAQCtD942uu7VooxM
GpATUfsvclhzWdF9vNC7TpyY9q+ZpFpNZYgKaw5JL73sV1dVZ4IoFT9mec+GKKag
4pqjWikjg7w1HPJJFEqYHKOUAwDz/3yOnKw+xBslnGF5sSDE9sYnx7eUljDPFVZ7
yOrmWW0Li5W1afG4ApFkt8KCYx9X8E0Mren2nfqoobM2l2LKFcF1Xs+M0iUAOoeN
ojS/NTvxjZibm92CMblp7x6e51y+oq3TJFoUwFSAj3U26jyubL5sYpJeAeTxyZg6
Y+UcEGmCpW5gsZSvRxvNxzCS4bCl9KOZXvyFtcVswHppfTynba6x8hDF7LkfJS7H
z/Ut+e+hAoGBAO2d2M2316eBpwP7x99DQSFpg7E/emdcfdRuHDEonBeJr2X2Fw0a
O/ZtUxcccovy4wrJcqiZsmjqetRUZ6ymsOaaASsPG21ChegFjm9D6+ZtejHpbuo3
8HQ//LW7hqoiRQh3ODihfYCTwwxIIuwAdUzoxpM9Yu57Zg7reYuNh48LAoGBANdn
c003ay1cq1fuuDJKENj10UZGotRdBxt+X1A7MhpMAqSaYJ+V2XOXpuMbIiX/qDfF
M4hcQhJygoCozNzsynztyIjpGtl57AG95igOi2Hah0OOMt/1Z9UwaukIaHHo8Tyk
sPZYoxBTstZcdsyHnGdU6n90SA9oYLBB89E3AocHAoGBAITR27M6FTCLl2jxn0qc
FFbx3OwB2JDYMXnBxr5vvbimfMWYpk/rnyLi/zQG8bxqmyCXdCDsML7WeqwfNgha
8L0lzotcGW+cZK9KE9D7/WvDPC+UFSyU8jJ45fBLjz2ghEf0JBf7pORvM/K0i9ix
dN/1qbH5+Ufm8Chc1Yb9KI37AoGAAbxDoYugwWzNtJenxD/0gsr4NKi9Bxj4xa/u
9KaFcNDL9KeJv79lURkXrxy42bWFlW1xTNfxcFSb2I2DmQQPXZJM202FedsRm7H7
+LalSNSJ4nFy13sSqxUIx3fZ35EQ4HwzMMjmB2ulNTTpgBxXlj2I5h35tqYQoVrm
q/jVfGECgYEAkU0L9bp1NPMzXgVJ2Os1VPSzoOywUQfx4NCJhTA1oZR+20JFsQBN
b6g0q6xean0xDuXjDRrjPET5V/GPOQ7stAPTLtqN42XPuRcFzSNj7Skh+ALTP5JS
bNZgBMwxQsbS89bUjRTDlRK/isuNIyUn0Zn7QlEsZEvJw0cNR3wPOc4=
-----END RSA PRIVATE KEY-----


@@ -1,17 +0,0 @@
-----BEGIN CERTIFICATE-----
MIICtDCCAZwCCQCerG0FImwiFjANBgkqhkiG9w0BAQUFADAaMRgwFgYDVQQDEw9j
YTIuZXhhbXBsZS5jb20wHhcNMTYwNjE4MTgxNDUxWhcNMjYwNjE2MTgxNDUxWjAe
MRwwGgYDVQQDExNjbGllbnQyLmV4YW1wbGUuY29tMIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEA9LaFxeiG1qpBTUIEl2vxvojOSWR1xkayaGtGsXBuqucf
wifulsVC7dw+t7lW42pWutIuR98iflAZ9+tFX7TsEITNIyV64ePn9TF935LW2DFm
AFkqYdZcTJ4qOkdsbbiHznlaIDPbMhIZFd9L8NEhrDTHuTtCav3g5B5V4okJfeNh
iSpKm3WLHP6lFwG1RISLhHTCeIMFxer49iHiQ+A33TV2l0bQGcv4e1+OoRsXGKGs
3oY6RJZ0GqzjeYqoybsLGBvZPPd2e8nH3RZac66XHMexsHHTV7L2tpWZm+JunMRg
hMXONc0b3V9mbUdrjHY/aGDPADEevZA4LLztGUc3VQIDAQABMA0GCSqGSIb3DQEB
BQUAA4IBAQCFo6IXUznH/iSuJtWrMtMkJTEg7o2qKRDgFApzw1J2URdvyYery15w
6FddKFvkYhNLFl9Nb3Z8HLxruZrrItqwjR2kIG9lW00uxnwIcgwTibmwDQL5nr7m
1cWzelhY/TVwBpLXRMg1YOXU8NRkT1VjkTUCpyIETI8b+wed67MkrofOadaY+FUL
gk1F3yDKz35UYIKnlxKwvrdySE8WFza0PmiXQDtTG1moTpe1BDEK1b60vhfudMBK
9vhE8kTooF01+su9gLUcrjVknI9H5PHtXID7FDiZ/disIAaWqSQLuvg/Kvb/cAFd
PwTKgnJQVcTKXkz2leJ6fsvzYwlANob1
-----END CERTIFICATE-----

Some files were not shown because too many files have changed in this diff.