mirror of https://github.com/containous/traefik.git synced 2025-09-14 09:44:21 +03:00

Compare commits


5 Commits
v3.0.1 ... v1.4

Author              SHA1        Message                                   Date
mmatur              7b6f1c2e09  fix: docs build trusted host              2024-05-16 21:08:36 +02:00
mmatur              631568cc01  fix: docs build alpine version            2024-05-16 16:24:22 +02:00
Fernandez Ludovic   145a485255  fix: mkdocs.yml                           2023-01-23 11:04:07 +01:00
mmatur              d3839eae6a  feat: add dockerfile for documentation    2021-10-05 10:23:25 +02:00
Fernandez Ludovic   3c96dd3014  doc: fix version in requirements.txt      2018-09-28 23:16:28 +02:00
5097 changed files with 1211947 additions and 288904 deletions


@@ -1,5 +1,3 @@
dist/
!dist/**/traefik
!dist/traefik
site/
vendor/
.idea/

.gitattributes vendored

@@ -1 +0,0 @@
# vendor/github.com/go-acme/lego/providers/dns/cloudxns/cloudxns.go eol=crlf

.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,24 @@
provider/kubernetes/** @containous/kubernetes
provider/rancher/** @containous/rancher
provider/marathon/** @containous/marathon
provider/docker/** @containous/docker
docs/user-guide/kubernetes.md @containous/kubernetes
docs/user-guide/marathon.md @containous/marathon
docs/user-guide/swarm.md @containous/docker
docs/user-guide/swarm-mode.md @containous/docker
docs/configuration/backends/docker.md @containous/docker
docs/configuration/backends/kubernetes.md @containous/kubernetes
docs/configuration/backends/marathon.md @containous/marathon
docs/configuration/backends/rancher.md @containous/rancher
examples/k8s/ @containous/kubernetes
examples/compose-k8s.yaml @containous/kubernetes
examples/k8s.namespace.yaml @containous/kubernetes
examples/compose-rancher.yml @containous/rancher
examples/compose-marathon.yml @containous/marathon
vendor/github.com/gambol99/go-marathon @containous/marathon
vendor/github.com/rancher @containous/rancher
vendor/k8s.io/ @containous/kubernetes


@@ -1,35 +1,31 @@
<!-- PLEASE FOLLOW THE ISSUE TEMPLATE TO HELP TRIAGE AND SUPPORT! -->
### Do you want to request a *feature* or report a *bug*?
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, please refer to one of the following:
For end-user related support questions, refer to one of the following:
- the Traefik community forum: https://community.traefik.io/
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com
-->
Bug
### Do you want to request a *feature* or report a *bug*?
<!--
The configurations between 1.X and 2.X are NOT compatible.
Please have a look here https://doc.traefik.io/traefik/getting-started/configuration-overview/.
If you intend to ask a support question: DO NOT FILE AN ISSUE.
-->
### What did you do?
<!--
HOW TO WRITE A GOOD BUG REPORT?
HOW TO WRITE A GOOD ISSUE?
- Respect the issue template as much as possible.
- The title should be short and descriptive.
- Explain the conditions which led you to report this issue: the context.
- Respect the issue template as much as possible.
- If it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
@@ -47,12 +43,9 @@ HOW TO WRITE A GOOD BUG REPORT?
### Output of `traefik version`: (_What version of Traefik are you using?_)
<!--
`latest` is not considered as a valid version.
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
-->
```
@@ -64,13 +57,12 @@ For the Traefik Docker image:
```toml
# (paste your configuration here)
```
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output in DEBUG level (`--log.level=DEBUG` switch)
### If applicable, please paste the log output in debug mode (`--debug` switch)
```
(paste your output here)


@@ -1,82 +0,0 @@
name: Bug Report (Traefik)
description: Create a report to help us improve.
body:
- type: checkboxes
id: terms
attributes:
label: Welcome!
description: |
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, please use the [Traefik community forum](https://community.traefik.io/).
All new/updated issues are triaged regularly by the maintainers.
All issues closed by a bot are subsequently double-checked by the maintainers.
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
options:
- label: Yes, I've searched similar issues on [GitHub](https://github.com/traefik/traefik/issues) and didn't find any.
required: true
- label: Yes, I've searched similar issues on the [Traefik community forum](https://community.traefik.io) and didn't find any.
required: true
- type: textarea
attributes:
label: What did you do?
description: |
How to write a good bug report?
- Respect the issue template as much as possible.
- The title should be short and descriptive.
- Explain the conditions which led you to report this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use [Markdown syntax](https://help.github.com/articles/github-flavored-markdown)
placeholder: What did you do?
validations:
required: true
- type: textarea
attributes:
label: What did you see instead?
placeholder: What did you see instead?
validations:
required: true
- type: textarea
attributes:
label: What version of Traefik are you using?
description: |
`latest` is not considered as a valid version.
Output of `traefik version`.
For the Traefik Docker image (`docker run [IMAGE] version`), example:
```console
$ docker run traefik version
```
placeholder: Paste your output here.
validations:
required: true
- type: textarea
attributes:
label: What is your environment & configuration?
description: arguments, toml, provider, platform, ...
placeholder: Add information here.
value: |
```yaml
# (paste your configuration here)
```
Add more configuration information here.
validations:
required: true
- type: textarea
attributes:
label: If applicable, please paste the log output in DEBUG level
description: "`--log.level=DEBUG` switch."
placeholder: Paste your output here.
validations:
required: false


@@ -1,8 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Traefik Community Support
url: https://community.traefik.io/
about: If you have a question, or are looking for advice, please post on our Discuss forum! The community loves to chime in to help. Happy Coding!
- name: Traefik Helm Chart Issues
url: https://github.com/traefik/traefik-helm-chart
about: Are you submitting an issue or feature enhancement for the Traefik helm chart? Please post in the traefik-helm-chart GitHub Issues.


@@ -1,33 +0,0 @@
name: Feature Request (Traefik)
description: Suggest an idea for this project.
body:
- type: checkboxes
id: terms
attributes:
label: Welcome!
description: |
The issue tracker is for reporting bugs and feature requests only. For end-user related support questions, please refer to one of the following:
- the Traefik community forum: https://community.traefik.io/
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
options:
- label: Yes, I've searched similar issues on [GitHub](https://github.com/traefik/traefik/issues) and didn't find any.
required: true
- label: Yes, I've searched similar issues on the [Traefik community forum](https://community.traefik.io) and didn't find any.
required: true
- type: textarea
attributes:
label: What did you expect to see?
description: |
How to write a good issue?
- Respect the issue template as much as possible.
- The title should be short and descriptive.
- Explain the conditions which led you to report this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use [Markdown syntax](https://help.github.com/articles/github-flavored-markdown)
placeholder: What did you expect to see?
validations:
required: true


@@ -1,19 +1,18 @@
<!--
PLEASE READ THIS MESSAGE.
Documentation fixes or enhancements:
- for Traefik v2: use branch v2.11
- for Traefik v3: use branch v3.0
HOW TO WRITE A GOOD PULL REQUEST?
Bug fixes:
- for Traefik v2: use branch v2.11
- for Traefik v3: use branch v3.0
Enhancements:
- for Traefik v2: we only accept bug fixes
- for Traefik v3: use branch master
HOW TO WRITE A GOOD PULL REQUEST? https://doc.traefik.io/traefik/contributing/submitting-pull-requests/
- Make it small.
- Do only one thing.
- Avoid re-formatting.
- Make sure the code builds.
- Make sure all tests pass.
- Add tests.
- Write useful descriptions and titles.
- Address review comments in terms of additional commits.
- Do not amend/squash existing ones unless the PR is trivial.
- Read the contributing guide: https://github.com/containous/traefik/blob/master/.github/CONTRIBUTING.md.
-->


@@ -1,7 +0,0 @@
### What does this PR do?
Merge v{{.Version}} into master
### Motivation
Keep branches in sync.


@@ -1,7 +0,0 @@
### What does this PR do?
Prepare release v{{.Version}}.
### Motivation
Create a new release.


@@ -1,62 +0,0 @@
name: Build Binaries
on:
pull_request:
branches:
- '*'
env:
GO_VERSION: '1.22'
CGO_ENABLED: 0
jobs:
build-webui:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build webui
run: |
make clean-webui generate-webui
tar czvf webui.tar.gz ./webui/static/
- name: Artifact webui
uses: actions/upload-artifact@v4
with:
name: webui.tar.gz
path: webui.tar.gz
build:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-latest, macos-latest, windows-latest ]
needs:
- build-webui
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go ${{ env.GO_VERSION }}
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Artifact webui
uses: actions/download-artifact@v4
with:
name: webui.tar.gz
- name: Untar webui
run: tar xvf webui.tar.gz
- name: Build
run: make binary


@@ -1,25 +0,0 @@
name: Check Documentation
on:
pull_request:
branches:
- '*'
jobs:
docs:
name: Check, verify and build documentation
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check documentation
run: make docs-pull-images docs
env:
# These variables are not passed to workflows that are triggered by a pull request from a fork.
DOCS_VERIFY_SKIP: ${{ vars.DOCS_VERIFY_SKIP }}
DOCS_LINT_SKIP: ${{ vars.DOCS_LINT_SKIP }}


@@ -1,70 +0,0 @@
name: "CodeQL"
on:
push:
branches:
- master
- v*
schedule:
- cron: '11 22 * * 1'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'javascript', 'go' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
# Use only 'java' to analyze code written in Java, Kotlin or both
# Use only 'javascript' to analyze code written in JavaScript, TypeScript or both
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: setup go
uses: actions/setup-go@v5
if: ${{ matrix.language == 'go' }}
with:
go-version-file: 'go.mod'
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v3
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"


@@ -1,52 +0,0 @@
name: Build and Publish Documentation
on:
push:
branches:
- master
- v*
env:
STRUCTOR_VERSION: v1.13.2
MIXTUS_VERSION: v0.4.1
jobs:
docs:
name: Doc Process
runs-on: ubuntu-latest
if: github.repository == 'traefik/traefik'
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Install Structor ${{ env.STRUCTOR_VERSION }}
run: curl -sSfL https://raw.githubusercontent.com/traefik/structor/master/godownloader.sh | sh -s -- -b $HOME/bin ${STRUCTOR_VERSION}
- name: Install Seo-doc
run: curl -sSfL https://raw.githubusercontent.com/traefik/seo-doc/master/godownloader.sh | sh -s -- -b "${HOME}/bin"
- name: Install Mixtus ${{ env.MIXTUS_VERSION }}
run: curl -sSfL https://raw.githubusercontent.com/traefik/mixtus/master/godownloader.sh | sh -s -- -b $HOME/bin ${MIXTUS_VERSION}
- name: Build documentation
run: $HOME/bin/structor -o traefik -r traefik --dockerfile-url="https://raw.githubusercontent.com/traefik/traefik/v1.7/docs.Dockerfile" --menu.js-url="https://raw.githubusercontent.com/traefik/structor/master/traefik-menu.js.gotmpl" --rqts-url="https://raw.githubusercontent.com/traefik/structor/master/requirements-override.txt" --force-edit-url --exp-branch=master --debug
env:
STRUCTOR_LATEST_TAG: ${{ vars.STRUCTOR_LATEST_TAG }}
- name: Apply seo
run: $HOME/bin/seo -path=./site -product=traefik
- name: Publish documentation
run: $HOME/bin/mixtus --dst-doc-path="./traefik" --dst-owner=traefik --dst-repo-name=doc --git-user-email="30906710+traefiker@users.noreply.github.com" --git-user-name=traefiker --src-doc-path="./site" --src-owner=traefik --src-repo-name=traefik
env:
GITHUB_TOKEN: ${{ secrets.GH_TOKEN_REPO }}


@@ -1,59 +0,0 @@
name: Build experimental image on branch
on:
push:
branches:
- master
- v*
env:
GO_VERSION: '1.22'
CGO_ENABLED: 0
jobs:
experimental:
if: github.repository == 'traefik/traefik'
name: Build experimental image on branch
runs-on: ubuntu-latest
steps:
# https://github.com/marketplace/actions/checkout
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build webui
run: |
make clean-webui generate-webui
- name: Set up Go ${{ env.GO_VERSION }}
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Build
run: make generate binary
- name: Branch name
run: echo ${GITHUB_REF##*/}
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build docker experimental image
env:
DOCKER_BUILDX_ARGS: "--push"
run: |
make multi-arch-image-experimental-${GITHUB_REF##*/}


@@ -1,35 +0,0 @@
name: Test K8s Gateway API conformance
on:
pull_request:
branches:
- '*'
paths:
- 'pkg/provider/kubernetes/gateway/**'
- 'integration/k8s_conformance_test.go'
env:
GO_VERSION: '1.22'
CGO_ENABLED: 0
jobs:
test-conformance:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go ${{ env.GO_VERSION }}
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Avoid generating webui
run: touch webui/static/index.html
- name: K8s Gateway API conformance test
run: make test-gateway-api-conformance


@@ -1,72 +0,0 @@
name: Test Integration
on:
pull_request:
branches:
- '*'
env:
GO_VERSION: '1.22'
CGO_ENABLED: 0
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go ${{ env.GO_VERSION }}
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Avoid generating webui
run: touch webui/static/index.html
- name: Build binary
run: make binary
test-integration:
runs-on: ubuntu-latest
needs:
- build
strategy:
fail-fast: true
matrix:
parallel: [12]
index: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go ${{ env.GO_VERSION }}
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Avoid generating webui
run: touch webui/static/index.html
- name: Build binary
run: make binary
- name: Generate go test Slice
id: test_split
uses: hashicorp-forge/go-test-split-action@v1
with:
packages: ./integration
total: ${{ matrix.parallel }}
index: ${{ matrix.index }}
- name: Run Integration tests
run: |
TESTS=$(echo "${{ steps.test_split.outputs.run}}" | sed 's/\$/\$\$/g')
TESTFLAGS="-run \"${TESTS}\"" make test-integration


@@ -1,31 +0,0 @@
name: Test Unit
on:
pull_request:
branches:
- '*'
env:
GO_VERSION: '1.22'
jobs:
test-unit:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go ${{ env.GO_VERSION }}
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Avoid generating webui
run: touch webui/static/index.html
- name: Tests
run: make test-unit


@@ -1,68 +0,0 @@
name: Validate
on:
pull_request:
branches:
- '*'
env:
GO_VERSION: '1.22'
GOLANGCI_LINT_VERSION: v1.57.0
MISSSPELL_VERSION: v0.4.1
jobs:
validate:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go ${{ env.GO_VERSION }}
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Install golangci-lint ${{ env.GOLANGCI_LINT_VERSION }}
run: curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin ${GOLANGCI_LINT_VERSION}
- name: Install missspell ${{ env.MISSSPELL_VERSION }}
run: curl -sfL https://raw.githubusercontent.com/golangci/misspell/master/install-misspell.sh | sh -s -- -b $(go env GOPATH)/bin ${MISSSPELL_VERSION}
- name: Avoid generating webui
run: touch webui/static/index.html
- name: Validate
run: make validate
validate-generate:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go ${{ env.GO_VERSION }}
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: go generate
run: |
make generate
git diff --exit-code
- name: go mod tidy
run: |
go mod tidy
git diff --exit-code
- name: make generate-crd
run: |
make generate-crd
git diff --exit-code

.gitignore vendored

@@ -1,22 +1,14 @@
.idea/
.intellij/
*.iml
.vscode/
.DS_Store
/dist
/webui/.tmp/
/site/
/docs/site/
/autogen/
/autogen/gen.go
.idea
.intellij
*.iml
/traefik
/traefik.toml
/traefik.yml
/static/
.vscode/
/site/
*.log
*.exe
cover.out
vendor/
plugins-storage/
plugins-local/
traefik_changelog.md
integration/tailscale.secret
integration/conformance-reports/
.DS_Store
/example/acme/acme.json


@@ -1,287 +0,0 @@
run:
timeout: 10m
linters-settings:
govet:
enable-all: true
disable:
- shadow
- fieldalignment
gocyclo:
min-complexity: 14
goconst:
min-len: 3
min-occurrences: 4
misspell:
locale: US
funlen:
lines: -1
statements: 120
forbidigo:
forbid:
- ^print(ln)?$
- ^spew\.Print(f|ln)?$
- ^spew\.Dump$
depguard:
rules:
main:
deny:
- pkg: "github.com/instana/testify"
desc: not allowed
- pkg: "github.com/pkg/errors"
desc: Should be replaced by standard lib errors package
- pkg: "k8s.io/api/networking/v1beta1"
desc: This API is deprecated
- pkg: "k8s.io/api/extensions/v1beta1"
desc: This API is deprecated
godox:
keywords:
- FIXME
importas:
no-unaliased: true
alias:
- alias: composeapi
pkg: github.com/docker/compose/v2/pkg/api
# Standard Kubernetes rewrites:
- alias: corev1
pkg: "k8s.io/api/core/v1"
- alias: netv1
pkg: "k8s.io/api/networking/v1"
- alias: admv1
pkg: "k8s.io/api/admission/v1"
- alias: admv1beta1
pkg: "k8s.io/api/admission/v1beta1"
- alias: metav1
pkg: "k8s.io/apimachinery/pkg/apis/meta/v1"
- alias: ktypes
pkg: "k8s.io/apimachinery/pkg/types"
- alias: kerror
pkg: "k8s.io/apimachinery/pkg/api/errors"
- alias: kclientset
pkg: "k8s.io/client-go/kubernetes"
- alias: kinformers
pkg: "k8s.io/client-go/informers"
- alias: ktesting
pkg: "k8s.io/client-go/testing"
- alias: kschema
pkg: "k8s.io/apimachinery/pkg/runtime/schema"
- alias: kscheme
pkg: "k8s.io/client-go/kubernetes/scheme"
- alias: kversion
pkg: "k8s.io/apimachinery/pkg/version"
- alias: kubefake
pkg: "k8s.io/client-go/kubernetes/fake"
- alias: discoveryfake
pkg: "k8s.io/client-go/discovery/fake"
# Kubernetes Gateway rewrites:
- alias: gateclientset
pkg: "sigs.k8s.io/gateway-api/pkg/client/clientset/gateway/versioned"
- alias: gateinformers
pkg: "sigs.k8s.io/gateway-api/pkg/client/informers/gateway/externalversions"
- alias: gatev1alpha2
pkg: "sigs.k8s.io/gateway-api/apis/v1alpha2"
# Traefik Kubernetes rewrites:
- alias: traefikv1alpha1
pkg: "github.com/traefik/traefik/v3/pkg/provider/kubernetes/crd/traefikio/v1alpha1"
- alias: traefikclientset
pkg: "github.com/traefik/traefik/v3/pkg/provider/kubernetes/crd/generated/clientset/versioned"
- alias: traefikinformers
pkg: "github.com/traefik/traefik/v3/pkg/provider/kubernetes/crd/generated/informers/externalversions"
- alias: traefikscheme
pkg: "github.com/traefik/traefik/v3/pkg/provider/kubernetes/crd/generated/clientset/versioned/scheme"
- alias: traefikcrdfake
pkg: "github.com/traefik/traefik/v3/pkg/provider/kubernetes/crd/generated/clientset/versioned/fake"
tagalign:
align: false
sort: true
order:
- description
- json
- toml
- yaml
- yml
- label
- label-slice-as-struct
- file
- kv
- export
revive:
rules:
- name: struct-tag
- name: blank-imports
- name: context-as-argument
- name: context-keys-type
- name: dot-imports
- name: error-return
- name: error-strings
- name: error-naming
- name: exported
disabled: true
- name: if-return
- name: increment-decrement
- name: var-naming
- name: var-declaration
- name: package-comments
disabled: true
- name: range
- name: receiver-naming
- name: time-naming
- name: unexported-return
- name: indent-error-flow
- name: errorf
- name: empty-block
- name: superfluous-else
- name: unused-parameter
disabled: true
- name: unreachable-code
- name: redefines-builtin-id
gomoddirectives:
replace-allow-list:
- github.com/abbot/go-http-auth
- github.com/gorilla/mux
- github.com/mailgun/minheap
- github.com/mailgun/multibuf
- github.com/jaguilar/vt100
- github.com/cucumber/godog
testifylint:
disable:
- suite-dont-use-pkg
- require-error
- go-require
staticcheck:
checks:
- all
- -SA1019
linters:
enable-all: true
disable:
- deadcode # deprecated
- exhaustivestruct # deprecated
- golint # deprecated
- ifshort # deprecated
- interfacer # deprecated
- maligned # deprecated
- nosnakecase # deprecated
- scopelint # deprecated
- structcheck # deprecated
- varcheck # deprecated
- sqlclosecheck # not relevant (SQL)
- rowserrcheck # not relevant (SQL)
- execinquery # not relevant (SQL)
- cyclop # duplicate of gocyclo
- lll # Not relevant
- gocyclo # FIXME must be fixed
- gocognit # Too strict
- nestif # Too many false-positive.
- prealloc # Too many false-positive.
- makezero # Not relevant
- dupl # Too strict
- gosec # Too strict
- gochecknoinits
- gochecknoglobals
- wsl # Too strict
- nlreturn # Not relevant
- gomnd # Too strict
- stylecheck # skip because report issues related to some generated files.
- testpackage # Too strict
- tparallel # Not relevant
- paralleltest # Not relevant
- exhaustive # Not relevant
- exhaustruct # Not relevant
- goerr113 # Too strict
- wrapcheck # Too strict
- noctx # Too strict
- bodyclose # too many false-positive
- forcetypeassert # Too strict
- tagliatelle # Too strict
- varnamelen # Not relevant
- nilnil # Not relevant
- ireturn # Not relevant
- contextcheck # too many false-positive
- containedctx # too many false-positive
- maintidx # kind of duplicate of gocyclo
- nonamedreturns # Too strict
- gosmopolitan # not relevant
- exportloopref # Useless with go1.22
- musttag
- intrange # bug (fixed in golangci-lint v1.58)
issues:
exclude-use-default: false
max-issues-per-linter: 0
max-same-issues: 0
exclude-dirs:
- pkg/provider/kubernetes/crd/generated/
exclude:
- 'Error return value of .((os\.)?std(out|err)\..*|.*Close|.*Flush|os\.Remove(All)?|.*printf?|os\.(Un)?Setenv). is not checked'
- "should have a package comment, unless it's in another file for this package"
- 'fmt.Sprintf can be replaced with string'
exclude-rules:
- path: '(.+)_test.go'
linters:
- goconst
- funlen
- godot
- path: '(.+)_test.go'
text: ' always receives '
linters:
- unparam
- path: '(.+)\.go'
text: 'struct-tag: unknown option ''inline'' in JSON tag'
linters:
- revive
- path: pkg/server/service/bufferpool.go
text: 'SA6002: argument should be pointer-like to avoid allocations'
- path: pkg/server/middleware/middlewares.go
text: "Function 'buildConstructor' has too many statements"
linters:
- funlen
- path: pkg/logs/haystack.go
linters:
- goprintffuncname
- path: pkg/tracing/tracing.go
text: "printf-like formatting function 'SetErrorWithEvent' should be named 'SetErrorWithEventf'"
linters:
- goprintffuncname
- path: pkg/tls/tlsmanager_test.go
text: 'SA1019: config.ClientCAs.Subjects has been deprecated since Go 1.18'
- path: pkg/types/tls_test.go
text: 'SA1019: tlsConfig.RootCAs.Subjects has been deprecated since Go 1.18'
- path: pkg/provider/kubernetes/crd/kubernetes.go
text: 'SA1019: middleware.Spec.IPWhiteList is deprecated: please use IPAllowList instead.'
- path: pkg/server/middleware/tcp/middlewares.go
text: 'SA1019: config.IPWhiteList is deprecated: please use IPAllowList instead.'
- path: pkg/server/middleware/middlewares.go
text: 'SA1019: config.IPWhiteList is deprecated: please use IPAllowList instead.'
- path: pkg/provider/kubernetes/(crd|gateway)/client.go
linters:
- interfacebloat
- path: pkg/metrics/metrics.go
linters:
- interfacebloat
- path: integration/healthcheck_test.go
text: 'Duplicate words \(wsp2,\) found'
linters:
- dupword
- path: pkg/types/domain_test.go
text: 'Duplicate words \(sub\) found'
linters:
- dupword
- path: pkg/provider/kubernetes/crd/kubernetes.go
text: "Function 'loadConfigurationFromCRD' has too many statements"
linters:
- funlen
- path: pkg/provider/kubernetes/gateway/client_mock_test.go
text: 'unusedwrite: unused write to field'
linters:
- govet
- path: pkg/cli/deprecation.go
linters:
- goconst
- path: pkg/cli/loader_file.go
linters:
- goconst


@@ -1,66 +0,0 @@
project_name: traefik
dist: "./dist/[[ .GOOS ]]"
[[ if eq .GOOS "linux" ]]
before:
hooks:
- go generate
[[ end ]]
builds:
- binary: traefik
main: ./cmd/traefik/
env:
- CGO_ENABLED=0
ldflags:
- -s -w -X github.com/traefik/traefik/v3/pkg/version.Version={{.Version}} -X github.com/traefik/traefik/v3/pkg/version.Codename={{.Env.CODENAME}} -X github.com/traefik/traefik/v3/pkg/version.BuildDate={{.Date}}
flags:
- -trimpath
goos:
- "[[ .GOOS ]]"
goarch:
- amd64
- '386'
- arm
- arm64
- ppc64le
- s390x
- riscv64
goarm:
- '7'
- '6'
ignore:
- goos: darwin
goarch: '386'
- goos: openbsd
goarch: arm
- goos: openbsd
goarch: arm64
- goos: freebsd
goarch: arm
- goos: freebsd
goarch: arm64
- goos: windows
goarch: arm
changelog:
disable: true
archives:
- id: traefik
name_template: '{{ .ProjectName }}_v{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ if .Arm }}v{{ .Arm }}{{ end }}'
format: tar.gz
format_overrides:
- goos: windows
format: zip
files:
- LICENSE.md
- CHANGELOG.md
checksum:
name_template: "{{ .ProjectName }}_v{{ .Version }}_checksums.txt"
release:
disable: true

.pre-commit-config.yaml Normal file

@@ -0,0 +1,10 @@
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: 44e1753f98b0da305332abe26856c3e621c5c439
hooks:
- id: detect-private-key
- repo: git://github.com/containous/pre-commit-hooks
sha: 35e641b5107671e94102b0ce909648559e568d61
hooks:
- id: goFmt
- id: goLint
- id: goErrcheck
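A minimal usage sketch for this hook configuration (assuming `pre-commit` is installed via pip; the hook IDs come from the config above):

```bash
pip install pre-commit        # assumption: pre-commit installed from PyPI
pre-commit install            # register the git pre-commit hook in the clone
pre-commit run --all-files    # run detect-private-key, goFmt, goLint and goErrcheck once
```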


@@ -1,63 +0,0 @@
version: v1.0
name: Traefik
agent:
machine:
type: e1-standard-4
os_image: ubuntu2004
fail_fast:
stop:
when: "branch != 'master'"
auto_cancel:
queued:
when: "branch != 'master'"
running:
when: "branch != 'master'"
global_job_config:
prologue:
commands:
- curl -sSfL https://raw.githubusercontent.com/ldez/semgo/master/godownloader.sh | sudo sh -s -- -b "/usr/local/bin"
- sudo semgo go1.22
- export "GOPATH=$(go env GOPATH)"
- export "SEMAPHORE_GIT_DIR=${GOPATH}/src/github.com/traefik/${SEMAPHORE_PROJECT_NAME}"
- export "PATH=${GOPATH}/bin:${PATH}"
- mkdir -vp "${SEMAPHORE_GIT_DIR}" "${GOPATH}/bin"
- export GOPROXY=https://proxy.golang.org,direct
- curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b "${GOPATH}/bin" v1.57.0
- curl -sSfL https://gist.githubusercontent.com/traefiker/6d7ac019c11d011e4f131bb2cca8900e/raw/goreleaser.sh | bash -s -- -b "${GOPATH}/bin"
- checkout
- cache restore traefik-$(checksum go.sum)
blocks:
- name: Release
dependencies: []
run:
when: "tag =~ '.*'"
task:
agent:
machine:
type: e1-standard-8
os_image: ubuntu2004
secrets:
- name: traefik
env_vars:
- name: GH_VERSION
value: 2.32.1
- name: CODENAME
value: "beaufort"
prologue:
commands:
- export VERSION=${SEMAPHORE_GIT_TAG_NAME}
- curl -sSL -o /tmp/gh_${GH_VERSION}_linux_amd64.tar.gz https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_amd64.tar.gz
- tar -zxvf /tmp/gh_${GH_VERSION}_linux_amd64.tar.gz -C /tmp
- sudo mv /tmp/gh_${GH_VERSION}_linux_amd64/bin/gh /usr/local/bin/gh
- sudo rm -rf ~/.phpbrew ~/.kerl ~/.sbt ~/.nvm ~/.npm ~/.kiex /usr/lib/jvm /opt/az /opt/firefox /usr/lib/google-cloud-sdk ~/.rbenv ~/.pip_download_cache # Remove unnecessary data.
- sudo service docker stop && sudo umount /var/lib/docker && sudo service docker start # Unmounts the docker disk and the whole system disk is usable.
jobs:
- name: Release
commands:
- make release-packages
- gh release create ${SEMAPHORE_GIT_TAG_NAME} ./dist/**/traefik*.{zip,tar.gz} ./dist/traefik*.{tar.gz,txt} --repo traefik/traefik --title ${SEMAPHORE_GIT_TAG_NAME} --notes ${SEMAPHORE_GIT_TAG_NAME}
- ./script/deploy.sh

.semaphoreci/setup.sh Executable file

@@ -0,0 +1,11 @@
#!/usr/bin/env bash
set -e
sudo -E apt-get -yq update
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*
docker version
pip install --user -r requirements.txt
make pull-images
ci_retry make validate

.semaphoreci/tests.sh Executable file

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
set -e
make test-unit
ci_retry make test-integration
make -j${N_MAKE_JOBS} crossbinary-default-parallel

.semaphoreci/vars Normal file

@@ -0,0 +1,37 @@
#!/usr/bin/env bash
set -e
export REPO='containous/traefik'
if VERSION=$(git describe --exact-match --abbrev=0 --tags);
then
export VERSION
else
export VERSION=''
fi
export CODENAME=roquefort
export N_MAKE_JOBS=2
function ci_retry {
local NRETRY=3
local NSLEEP=5
local n=0
until [ $n -ge $NRETRY ]
do
"$@" && break
n=$[$n+1]
echo "$@ failed, attempt ${n}/${NRETRY}"
sleep $NSLEEP
done
[ $n -lt $NRETRY ]
}
export -f ci_retry
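A usage sketch matching `.semaphoreci/setup.sh` and `.semaphoreci/tests.sh` above: `ci_retry` re-runs its arguments until they succeed, up to `NRETRY` attempts with `NSLEEP` seconds between tries, and fails only if every attempt fails.

```bash
# Retry flaky targets up to 3 times with a 5-second pause between attempts.
ci_retry make validate
ci_retry make test-integration
```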

.travis.yml Normal file

@@ -0,0 +1,58 @@
sudo: required
dist: trusty
services:
- docker
env:
global:
- REPO: $TRAVIS_REPO_SLUG
- VERSION: $TRAVIS_TAG
- CODENAME: roquefort
- N_MAKE_JOBS: 2
script:
- echo "Skipping tests... (Tests are executed on SemaphoreCI)"
before_deploy:
- >
if ! [ "$BEFORE_DEPLOY_RUN" ]; then
export BEFORE_DEPLOY_RUN=1;
sudo -E apt-get -yq update;
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*;
docker version;
pip install --user -r requirements.txt;
make -j${N_MAKE_JOBS} crossbinary-parallel;
make image-dirty;
mkdocs build --clean;
tar cfz dist/traefik-${VERSION}.src.tar.gz --exclude-vcs --exclude dist .;
fi
deploy:
- provider: pages
edge: true
github_token: ${GITHUB_TOKEN}
local_dir: site
skip_cleanup: true
on:
repo: containous/traefik
tags: true
condition: ${TRAVIS_TAG} =~ ^v[0-9]+\.[0-9]+\.[0-9]+$
- provider: releases
api_key: ${GITHUB_TOKEN}
file: dist/traefik*
skip_cleanup: true
file_glob: true
on:
repo: containous/traefik
tags: true
- provider: script
script: sh script/deploy.sh
skip_cleanup: true
on:
repo: containous/traefik
tags: true
- provider: script
script: sh script/deploy-docker.sh
skip_cleanup: true
on:
repo: containous/traefik

.travis/traefiker_rsa.enc Normal file

Binary file not shown.

File diff suppressed because it is too large.


@@ -2,11 +2,17 @@
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
@@ -16,52 +22,53 @@ Examples of behavior that contributes to creating a positive environment include
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or our community.
Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Representation of a project may be further defined and clarified by project maintainers.
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at contact@traefik.io
All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances.
The project team is obligated to maintain confidentiality with regard to the reporter of an incident.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at contact@containo.us
All complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
When inappropriate behavior is reported, maintainers will discuss it on the Maintainer's Discord before marking the message as "abuse".
This conversation beforehand avoids one-sided decisions.
The first message will be edited and marked as abuse.
The second edited message and marked as abuse results in a 7-day ban.
The third edited message and marked as abuse results in a permanent ban.
The content of edited messages is:
`Dear user, we want traefik to provide a welcoming and respectful environment. Your [comment/issue/PR] has been reported and marked as abuse according to our [Code of Conduct](./CODE_OF_CONDUCT.md). Thank you.`
The [report must be resolved](https://docs.github.com/en/communities/moderating-comments-and-conversations/managing-reported-content-in-your-organizations-repository#resolving-a-report) accordingly.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/


@@ -1,11 +1,228 @@
# Contributing
Here are some guidelines that should help to start contributing to the project.
## Building
- [Submitting pull Requests](https://doc.traefik.io/traefik/contributing/submitting-pull-requests/)
- [Submitting issues](https://doc.traefik.io/traefik/contributing/submitting-issues/)
- [Submitting security issues](https://doc.traefik.io/traefik/contributing/submitting-security-issues/)
- [Advocating for Traefik](https://doc.traefik.io/traefik/contributing/advocating/)
- [Triage Process](https://github.com/traefik/contributors-guide/blob/master/issue_triage.md)
You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` (Method 2) in order to build Traefik. For changes to its dependencies, the `glide` dependency management tool and `glide-vc` plugin are required.
If you are willing to become a maintainer of the project, please take a look at the [maintainers guidelines](docs/content/contributing/maintainers-guidelines.md).
### Method 1: Using `Docker` and `Makefile`
You need to run the `binary` target. This will create binaries for the Linux platform in the `dist` folder.
```bash
$ make binary
docker build -t "traefik-dev:no-more-godep-ever" -f build.Dockerfile .
Sending build context to Docker daemon 295.3 MB
Step 0 : FROM golang:1.9-alpine
---> 8c6473912976
Step 1 : RUN go get github.com/Masterminds/glide
[...]
docker run --rm -v "/var/run/docker.sock:/var/run/docker.sock" -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/emile/dev/go/src/github.com/containous/traefik/"dist":/go/src/github.com/containous/traefik/"dist"" "traefik-dev:no-more-godep-ever" ./script/make.sh generate binary
---> Making bundle: generate (in .)
removed 'gen.go'
---> Making bundle: binary (in .)
$ ls dist/
traefik*
```
### Method 2: Using `go`
##### Setting up your `go` environment
- You need `go` v1.9+
- It is recommended you clone Træfik into a directory like `~/go/src/github.com/containous/traefik` (This is the official golang workspace hierarchy, and will allow dependencies to resolve properly)
- Set your `GOPATH` and `PATH` variables via:
```bash
export GOPATH=~/go
export PATH=$PATH:$GOPATH/bin
```
> Note: You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`
- Verify your environment is set up properly by running `$ go env`. Depending on your OS and environment, you should see output similar to:
```bash
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/<yourusername>/go"
GORACE=""
## more go env's will be listed
```
##### Build Træfik
Once your environment is set up and the Træfik repository is cloned, you can build Træfik. You need to get `go-bindata` once to be able to use the `go generate` command as part of the build. The steps to build are:
```bash
cd ~/go/src/github.com/containous/traefik
# Get go-bindata. Please note, the ellipses are required
go get github.com/jteeuwen/go-bindata/...
# Start build
go generate
# Standard go build
go build ./cmd/traefik
# run other commands like tests
```
You will find the Træfik executable in the `~/go/src/github.com/containous/traefik` folder as `traefik`.
### Setting up `glide` and `glide-vc` for dependency management
- Glide is not required for building; however, it is necessary to modify dependencies (i.e., add, update, or remove third-party packages)
- Glide can be installed either via homebrew: `$ brew install glide` or via the official glide script: `$ curl https://glide.sh/get | sh`
- The glide plugin `glide-vc` must be installed from source: `go get github.com/sgotti/glide-vc`
If you want to add a dependency, use `$ glide get` to have glide put it into the vendor folder and update the glide manifest/lock files (`glide.yaml` and `glide.lock`, respectively). A following `glide-vc` run should be triggered to trim down the size of the vendor folder. The final result must be committed into VCS.
Care must be taken to choose the right arguments to `glide` when dealing with dependencies, or otherwise risk ending up with a broken build. For that reason, the helper script `script/glide.sh` encapsulates the gory details and conveniently calls `glide-vc` as well. Call it without parameters for basic usage instructions.
Here's a full example using glide to add a new dependency:
```bash
# install the new main dependency github.com/foo/bar and minimize vendor size
$ ./script/glide.sh get github.com/foo/bar
# generate (Only required to integrate other components such as web dashboard)
$ go generate
# Standard go build
$ go build ./cmd/traefik
# run other commands like tests
```
### Tests
#### Method 1: `Docker` and `make`
You can run the unit tests using the `test-unit` target and the integration tests using the `test-integration` target.
```bash
$ make test-unit
docker build -t "traefik-dev:your-feature-branch" -f build.Dockerfile .
# […]
docker run --rm -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/vincent/src/github/vdemeester/traefik/dist:/go/src/github.com/containous/traefik/dist" "traefik-dev:your-feature-branch" ./script/make.sh generate test-unit
---> Making bundle: generate (in .)
removed 'gen.go'
---> Making bundle: test-unit (in .)
+ go test -cover -coverprofile=cover.out .
ok github.com/containous/traefik 0.005s coverage: 4.1% of statements
Test success
```
For development purposes, you can specify which tests to run by using:
```bash
# Run every test in the MyTest suite
TESTFLAGS="-check.f MyTestSuite" make test-integration
# Run the test "MyTest" in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.MyTest" make test-integration
# Run every test starting with "My", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.My" make test-integration
# Run every test ending with "Test", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
```
More: https://labix.org/gocheck
#### Method 2: `go`
- Tests can be run from the cloned directory, by `$ go test ./...` which should return `ok` similar to:
```
ok _/home/vincent/src/github/vdemeester/traefik 0.004s
```
## Documentation
The [documentation site](http://docs.traefik.io/) is built with [mkdocs](http://mkdocs.org/)
First make sure you have python and pip installed
```shell
$ python --version
Python 2.7.2
$ pip --version
pip 1.5.2
```
Then install mkdocs with pip
```shell
$ pip install mkdocs
```
To test the documentation locally, run `mkdocs serve` in the root directory; this starts a local server where you can preview your changes.
```shell
$ mkdocs serve
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
[I 160505 22:31:24 server:281] Serving on http://127.0.0.1:8000
[I 160505 22:31:24 handlers:59] Start watching changes
[I 160505 22:31:24 handlers:61] Start detecting changes
```
## How to Write a Good Issue
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
For end-user related support questions, refer to one of the following:
- the Traefik community Slack channel: [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
- [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)
### Title
The title must be short and descriptive. (~60 characters)
### Description
- Respect the issue template as much as possible. [template](.github/ISSUE_TEMPLATE.md)
- If it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use [Markdown syntax](https://help.github.com/articles/github-flavored-markdown)
## How to Write a Good Pull Request
### Title
The title must be short and descriptive. (~60 characters)
### Description
- Respect the pull request template as much as possible. [template](.github/PULL_REQUEST_TEMPLATE.md)
- Explain the conditions which led you to write this PR: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use [Markdown syntax](https://help.github.com/articles/github-flavored-markdown)
### Content
- Make it small.
- Do only one thing.
- Write useful descriptions and titles.
- Avoid re-formatting.
- Make sure the code builds.
- Make sure all tests pass.
- Add tests.
- Address review comments in terms of additional commits.
- Do not amend/squash existing ones unless the PR is trivial.
- If a PR involves changes to third-party dependencies, the commits pertaining to the vendor folder and the manifest/lock file(s) should be committed separately.
Read [10 tips for better pull requests](http://blog.ploeh.dk/2015/01/15/10-tips-for-better-pull-requests/).


@@ -1,13 +1,5 @@
# syntax=docker/dockerfile:1.2
FROM alpine:3.19
RUN apk --no-cache --no-progress add ca-certificates tzdata \
&& rm -rf /var/cache/apk/*
ARG TARGETPLATFORM
COPY ./dist/$TARGETPLATFORM/traefik /
FROM scratch
COPY script/ca-certificates.crt /etc/ssl/certs/
COPY dist/traefik /
EXPOSE 80
VOLUME ["/tmp"]
ENTRYPOINT ["/traefik"]


@@ -1,6 +1,6 @@
The MIT License (MIT)
Copyright (c) 2016-2020 Containous SAS; 2020-2024 Traefik Labs
Copyright (c) 2016-2017 Containous SAS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

MAINTAINER.md Normal file

@@ -0,0 +1,144 @@
# Maintainers
## The team
* Emile Vauge [@emilevauge](https://github.com/emilevauge)
* Vincent Demeester [@vdemeester](https://github.com/vdemeester)
* Ed Robinson [@errm](https://github.com/errm)
* Daniel Tomcej [@dtomcej](https://github.com/dtomcej)
* Manuel Zapf [@SantoDE](https://github.com/SantoDE)
* Timo Reimann [@timoreimann](https://github.com/timoreimann)
* Ludovic Fernandez [@ldez](https://github.com/ldez)
* Julien Salleyron [@juliens](https://github.com/juliens)
* Nicolas Mengin [@nmengin](https://github.com/nmengin)
* Marco Jantke [@marco-jantke](https://github.com/marco-jantke)
## PR review process:
* The status `needs-design-review` is only used in complex/heavy/tricky PRs.
* From `1` to `2`: 1 design LGTM in comment, by a senior maintainer, if needed.
* From `2` to `3`: 3 LGTM by any maintainer.
* If needed, a specific maintainer familiar with a particular domain can be requested for the review.
We use [PRM](https://github.com/ldez/prm) to manage pull requests locally.
## Bots
### [Myrmica Lobicornis](https://github.com/containous/lobicornis/)
**Update and Merge Pull Request**
The maintainer giving the final LGTM must add the `status/3-needs-merge` label to trigger the merge bot.
By default, a squash-rebase merge will be carried out.
If you want to preserve commits you must add `bot/merge-method-rebase` before `status/3-needs-merge`.
The status `status/4-merge-in-progress` is only for the bot.
If the bot is not able to perform the merge, the label `bot/need-human-merge` is added.
In this case, you must resolve the conflicts/CI/... and then you only need to remove `bot/need-human-merge`.
### [Myrmica Bibikoffi](https://github.com/containous/bibikoffi/)
* closes stale issues [cron]
* uses criteria such as the number of days since creation, the last update, labels, ...
### [Myrmica Aloba](https://github.com/containous/aloba)
**Manage GitHub labels**
* Add labels on new PR [GitHub WebHook]
* Add and remove `contributor/waiting-for-corrections` label when a review request changes [GitHub WebHook]
* Weekly report of PR status on Slack (CaptainPR) [cron]
## Labels
If we open/look at an issue/PR, we must add a `kind/*` and an `area/*`.
### Contributor
* `contributor/need-more-information`: we need more information from the contributor in order to analyze a problem.
* `contributor/waiting-for-feedback`: we need the contributor to give us feedback.
* `contributor/waiting-for-corrections`: we need the contributor to take actions in order to move forward with a PR. **(only for PR)**
* `contributor/needs-resolve-conflicts`: use it only when there is some conflicts (and an automatic rebase is not possible). **(only for PR)** _[bot, humans]_
### Kind
* `kind/enhancement`: a new or improved feature.
* `kind/question`: It's a question. **(only for issue)**
* `kind/proposal`: proposal PR/issues need a public debate.
* _Proposal issues_ are design proposals that need to be refined with multiple contributors.
* _Proposal PRs_ are technical prototypes that need to be refined with multiple contributors.
* `kind/bug/possible`: if we need to analyze to understand if it's a bug or not. **(only for issues)** _[bot only]_
* `kind/bug/confirmed`: we are sure, it's a bug. **(only for issues)**
* `kind/bug/fix`: it's a bug fix. **(only for PR)**
### Resolution
* `resolution/duplicate`: it's a duplicate issue/PR.
* `resolution/declined`: Rule #1 of open-source: no is temporary, yes is forever.
* `WIP`: Work In Progress. **(only for PR)**
### Platform
* `platform/windows`: Windows related.
### Area
* `area/acme`: ACME related.
* `area/api`: Traefik API related.
* `area/authentication`: Authentication related.
* `area/cluster`: Traefik clustering related.
* `area/documentation`: regards improving/adding documentation.
* `area/infrastructure`: related to CI or Traefik building scripts.
* `area/healthcheck`: Health-check related.
* `area/logs`: Traefik logs related.
* `area/middleware`: Middleware related.
* `area/middleware/metrics`: Metrics related. (Prometheus, StatsD, ...)
* `area/oxy`: Oxy related.
* `area/provider`: related to all providers.
* `area/provider/boltdb`: BoltDB related.
* `area/provider/consul`: Consul related.
* `area/provider/docker`: Docker and Swarm related.
* `area/provider/ecs`: ECS related.
* `area/provider/etcd`: Etcd related.
* `area/provider/eureka`: Eureka related.
* `area/provider/file`: file provider related.
* `area/provider/k8s`: Kubernetes related.
* `area/provider/marathon`: Marathon related.
* `area/provider/mesos`: Mesos related.
* `area/provider/rancher`: Rancher related.
* `area/provider/zk`: Zoo Keeper related.
* `area/sticky-session`: Sticky session related.
* `area/tls`: TLS related.
* `area/websocket`: WebSocket related.
* `area/webui`: Web UI related.
### Priority
* `priority/P0`: needs hot fix. **(only for issue)**
* `priority/P1`: need to be fixed in next release. **(only for issue)**
* `priority/P2`: need to be fixed in the future. **(only for issue)**
* `priority/P3`: maybe. **(only for issue)**
### PR size
* `size/S`: small PR. **(only for PR)** _[bot only]_
* `size/M`: medium PR. **(only for PR)** _[bot only]_
* `size/L`: Large PR. **(only for PR)** _[bot only]_
### Status - Workflow
The `status/*` labels represent the desired state in the workflow.
* `status/0-needs-triage`: all new issues or PRs have this status. _[bot only]_
* `status/1-needs-design-review`: need a design review. **(only for PR)**
* `status/2-needs-review`: need a code/documentation review. **(only for PR)**
* `status/3-needs-merge`: ready to merge. **(only for PR)**
* `status/4-merge-in-progress`: merge in progress. _[bot only]_

Makefile

@@ -1,198 +1,120 @@
.PHONY: all
TRAEFIK_ENVS := \
-e OS_ARCH_ARG \
-e OS_PLATFORM_ARG \
-e TESTFLAGS \
-e VERBOSE \
-e VERSION \
-e CODENAME \
-e TESTDIRS \
-e CI \
-e CONTAINER=DOCKER # Indicator for integration tests that we are running inside a container.
SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/')
TAG_NAME := $(shell git tag -l --contains HEAD)
SHA := $(shell git rev-parse HEAD)
VERSION_GIT := $(if $(TAG_NAME),$(TAG_NAME),$(SHA))
VERSION := $(if $(VERSION),$(VERSION),$(VERSION_GIT))
BIND_DIR := "dist"
TRAEFIK_MOUNT := -v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/containous/traefik/$(BIND_DIR)"
GIT_BRANCH := $(subst heads/,,$(shell git rev-parse --abbrev-ref HEAD 2>/dev/null))
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(subst /,-,$(GIT_BRANCH)))
REPONAME := $(shell echo $(REPO) | tr '[:upper:]' '[:lower:]')
BIN_NAME := traefik
CODENAME ?= cheddar
TRAEFIK_IMAGE := $(if $(REPONAME),$(REPONAME),"containous/traefik")
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -v "/var/run/docker.sock:/var/run/docker.sock")
DATE := $(shell date -u '+%Y-%m-%d_%I:%M:%S%p')
DOCKER_BUILD_ARGS := $(if $(DOCKER_VERSION), "--build-arg=DOCKER_VERSION=$(DOCKER_VERSION)",)
DOCKER_RUN_OPTS := $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(DOCKER_RUN_OPTS)
DOCKER_RUN_TRAEFIK_NOTTY := docker run $(INTEGRATION_OPTS) -i $(DOCKER_RUN_OPTS)
# Default build target
GOOS := $(shell go env GOOS)
GOARCH := $(shell go env GOARCH)
LINT_EXECUTABLES = misspell shellcheck
print-%: ; @echo $*=$($*)
DOCKER_BUILD_PLATFORMS ?= linux/amd64,linux/arm64
default: binary
.PHONY: default
#? default: Run `make generate` and `make binary`
default: generate binary
all: generate-webui build ## validate all checks, build linux binary, run all tests\ncross non-linux binaries
$(DOCKER_RUN_TRAEFIK) ./script/make.sh
#? dist: Create the "dist" directory
dist:
mkdir -p dist
binary: generate-webui build ## build the linux binary
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary
.PHONY: build-webui-image
#? build-webui-image: Build WebUI Docker image
build-webui-image:
crossbinary: generate-webui build ## cross build the non-linux binaries
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate crossbinary
crossbinary-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-default crossbinary-others
crossbinary-default: generate-webui build
$(DOCKER_RUN_TRAEFIK_NOTTY) ./script/make.sh generate crossbinary-default
crossbinary-default-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-default
crossbinary-others: generate-webui build
$(DOCKER_RUN_TRAEFIK_NOTTY) ./script/make.sh generate crossbinary-others
crossbinary-others-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-others
test: build ## run the unit and integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit binary test-integration
test-unit: build ## run the unit tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit
test-integration: build ## run the integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary test-integration
validate: build ## validate gofmt, golint and go vet
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-glide validate-gofmt validate-govet validate-golint validate-misspell validate-vendor
build: dist
docker build $(DOCKER_BUILD_ARGS) -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
build-webui:
docker build -t traefik-webui -f webui/Dockerfile webui
.PHONY: clean-webui
#? clean-webui: Clean WebUI static generated assets
clean-webui:
rm -r webui/static
mkdir -p webui/static
printf 'For more information see `webui/readme.md`' > webui/static/DONT-EDIT-FILES-IN-THIS-DIRECTORY.md
build-no-cache: dist
docker build --no-cache -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
webui/static/index.html:
$(MAKE) build-webui-image
docker run --rm -v "$(PWD)/webui/static":'/src/webui/static' traefik-webui npm run build:nc
docker run --rm -v "$(PWD)/webui/static":'/src/webui/static' traefik-webui chown -R $(shell id -u):$(shell id -g) ./static
shell: build ## start a shell inside the build env
$(DOCKER_RUN_TRAEFIK) /bin/bash
.PHONY: generate-webui
#? generate-webui: Generate WebUI
generate-webui: webui/static/index.html
image-dirty: binary ## build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .
.PHONY: generate
#? generate: Generate code (Dynamic and Static configuration documentation reference files)
generate:
image: clear-static binary ## clean up static directory and build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .
clear-static:
rm -rf static
dist:
mkdir dist
run-dev:
go generate
go build ./cmd/traefik
./traefik
.PHONY: binary
#? binary: Build the binary
binary: generate-webui dist
@echo SHA: $(VERSION) $(CODENAME) $(DATE)
CGO_ENABLED=0 GOGC=off GOOS=${GOOS} GOARCH=${GOARCH} go build ${FLAGS[*]} -ldflags "-s -w \
-X github.com/traefik/traefik/v3/pkg/version.Version=$(VERSION) \
-X github.com/traefik/traefik/v3/pkg/version.Codename=$(CODENAME) \
-X github.com/traefik/traefik/v3/pkg/version.BuildDate=$(DATE)" \
-installsuffix nocgo -o "./dist/${GOOS}/${GOARCH}/$(BIN_NAME)" ./cmd/traefik
generate-webui: build-webui
if [ ! -d "static" ]; then \
mkdir -p static; \
docker run --rm -v "$$PWD/static":'/src/static' traefik-webui npm run build; \
echo 'For more information see `webui/readme.md`' > $$PWD/static/DONT-EDIT-FILES-IN-THIS-DIRECTORY.md; \
fi
binary-linux-arm64: export GOOS := linux
binary-linux-arm64: export GOARCH := arm64
binary-linux-arm64:
@$(MAKE) binary
binary-linux-amd64: export GOOS := linux
binary-linux-amd64: export GOARCH := amd64
binary-linux-amd64:
@$(MAKE) binary
binary-windows-amd64: export GOOS := windows
binary-windows-amd64: export GOARCH := amd64
binary-windows-amd64: export BIN_NAME := traefik.exe
binary-windows-amd64:
@$(MAKE) binary
.PHONY: crossbinary-default
#? crossbinary-default: Build the binary for the standard platforms (linux, darwin, windows)
crossbinary-default: generate generate-webui
$(CURDIR)/script/crossbinary-default.sh
.PHONY: test
#? test: Run the unit and integration tests
test: test-unit test-integration
.PHONY: test-unit
#? test-unit: Run the unit tests
test-unit:
GOOS=$(GOOS) GOARCH=$(GOARCH) go test -cover "-coverprofile=cover.out" -v $(TESTFLAGS) ./pkg/... ./cmd/...
.PHONY: test-integration
#? test-integration: Run the integration tests
test-integration: binary
GOOS=$(GOOS) GOARCH=$(GOARCH) go test ./integration -test.timeout=20m -failfast -v $(TESTFLAGS)
.PHONY: test-gateway-api-conformance
#? test-gateway-api-conformance: Run the conformance tests
test-gateway-api-conformance: build-image-dirty
GOOS=$(GOOS) GOARCH=$(GOARCH) go test ./integration -v -test.run K8sConformanceSuite -k8sConformance $(TESTFLAGS)
.PHONY: pull-images
#? pull-images: Pull all Docker images to avoid timeout during integration tests
pull-images:
grep --no-filename -E '^\s+image:' ./integration/resources/compose/*.yml \
| awk '{print $$2}' \
| sort \
| uniq \
| xargs -P 6 -n 1 docker pull
.PHONY: lint
#? lint: Run golangci-lint
lint:
golangci-lint run
script/validate-golint
.PHONY: validate-files
#? validate-files: Validate code and docs
validate-files: lint
$(foreach exec,$(LINT_EXECUTABLES),\
$(if $(shell which $(exec)),,$(error "No $(exec) in PATH")))
$(CURDIR)/script/validate-misspell.sh
$(CURDIR)/script/validate-shell-script.sh
.PHONY: validate
#? validate: Validate code, docs, and vendor
validate: lint
$(foreach exec,$(EXECUTABLES),\
$(if $(shell which $(exec)),,$(error "No $(exec) in PATH")))
$(CURDIR)/script/validate-vendor.sh
$(CURDIR)/script/validate-misspell.sh
$(CURDIR)/script/validate-shell-script.sh
# Target for building images for multiple architectures.
.PHONY: multi-arch-image-%
multi-arch-image-%: binary-linux-amd64 binary-linux-arm64
docker buildx build $(DOCKER_BUILDX_ARGS) -t traefik/traefik:$* --platform=$(DOCKER_BUILD_PLATFORMS) -f Dockerfile .
.PHONY: build-image
#? build-image: Clean up static directory and build a Docker Traefik image
build-image: export DOCKER_BUILDX_ARGS := --load
build-image: export DOCKER_BUILD_PLATFORMS := linux/$(GOARCH)
build-image: clean-webui
@$(MAKE) multi-arch-image-latest
.PHONY: build-image-dirty
#? build-image-dirty: Build a Docker Traefik image without re-building the webui when it's already built
build-image-dirty: export DOCKER_BUILDX_ARGS := --load
build-image-dirty: export DOCKER_BUILD_PLATFORMS := linux/$(GOARCH)
build-image-dirty:
@$(MAKE) multi-arch-image-latest
.PHONY: docs
#? docs: Build documentation site
docs:
make -C ./docs docs
.PHONY: docs-serve
#? docs-serve: Serve the documentation site locally
docs-serve:
make -C ./docs docs-serve
.PHONY: docs-pull-images
#? docs-pull-images: Pull image for doc building
docs-pull-images:
make -C ./docs docs-pull-images
.PHONY: generate-crd
#? generate-crd: Generate CRD clientset and CRD manifests
generate-crd:
@$(CURDIR)/script/code-gen-docker.sh
.PHONY: generate-genconf
#? generate-genconf: Generate code from dynamic configuration github.com/traefik/genconf
generate-genconf:
go run ./cmd/internal/gen/
.PHONY: release-packages
#? release-packages: Create packages for the release
release-packages: generate-webui
$(CURDIR)/script/release-packages.sh
.PHONY: fmt
#? fmt: Format the Code
fmt:
gofmt -s -l -w $(SRCS)
.PHONY: help
#? help: Get more info on make commands
help: Makefile
@echo " Choose a command run in traefik:"
@sed -n 's/^#?//p' $< | column -t -s ':' | sort | sed -e 's/^/ /'
pull-images:
grep --no-filename -E '^\s+image:' ./integration/resources/compose/*.yml | awk '{print $$2}' | sort | uniq | xargs -P 6 -n 1 docker pull
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
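The `binary` target above injects the version, codename, and build date at link time with `-ldflags "-X ..."` into variables of `github.com/traefik/traefik/v3/pkg/version`. As a rough sketch (not the actual file), such a package only needs plain string variables for the linker to override; the variable names come from the flags above, the defaults here are placeholders:

```go
// Package version holds build metadata. The Makefile's binary target
// overrides these defaults at link time, e.g.:
//   -X github.com/traefik/traefik/v3/pkg/version.Version=$(VERSION)
package version

var (
	// Version is the release version (a git tag or commit SHA).
	Version = "dev"
	// Codename is the release codename (the Makefile defaults to "cheddar").
	Codename = "cheddar"
	// BuildDate is the UTC build timestamp.
	BuildDate = "unknown"
)
```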

README.md

@@ -1,22 +1,19 @@
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/content/assets/img/traefik.logo-dark.png">
<source media="(prefers-color-scheme: light)" srcset="docs/content/assets/img/traefik.logo.png">
<img alt="Traefik" title="Traefik" src="docs/content/assets/img/traefik.logo.png">
</picture>
<img src="docs/img/traefik.logo.png" alt="Træfik" title="Træfik" />
</p>
[![Build Status SemaphoreCI](https://semaphoreci.com/api/v1/containous/traefik/branches/master/shields_badge.svg)](https://semaphoreci.com/containous/traefik)
[![Docs](https://img.shields.io/badge/docs-current-brightgreen.svg)](https://doc.traefik.io/traefik)
[![Go Report Card](https://goreportcard.com/badge/traefik/traefik)](https://goreportcard.com/report/traefik/traefik)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/traefik/traefik/blob/master/LICENSE.md)
[![Join the community support forum at https://community.traefik.io/](https://img.shields.io/badge/style-register-green.svg?style=social&label=Discourse)](https://community.traefik.io/)
[![Twitter](https://img.shields.io/twitter/follow/traefik.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefik)
[![Docs](https://img.shields.io/badge/docs-current-brightgreen.svg)](https://docs.traefik.io)
[![Go Report Card](https://goreportcard.com/badge/containous/traefik)](http://goreportcard.com/report/containous/traefik)
[![](https://images.microbadger.com/badges/image/traefik.svg)](https://microbadger.com/images/traefik)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/containous/traefik/blob/master/LICENSE.md)
[![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
[![Twitter](https://img.shields.io/twitter/follow/traefikproxy.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefikproxy)
Traefik (pronounced _traffic_) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy.
Traefik integrates with your existing infrastructure components ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher v2](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), ...) and configures itself automatically and dynamically.
Pointing Traefik at your orchestrator should be the _only_ configuration step you need.
Træfik (pronounced like [traffic](https://speak-ipa.bearbin.net/speak.cgi?speak=%CB%88tr%C3%A6f%C9%AAk)) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), and a lot more) to manage its configuration automatically and dynamically.
---
@@ -25,137 +22,173 @@ Pointing Traefik at your orchestrator should be the _only_ configuration step yo
**[Supported backends](#supported-backends)** .
**[Quickstart](#quickstart)** .
**[Web UI](#web-ui)** .
**[Test it](#test-it)** .
**[Documentation](#documentation)** .
**[Support](#support)** .
**[Release cycle](#release-cycle)** .
**[Contributing](#contributing)** .
**[Maintainers](#maintainers)** .
**[Plumbing](#plumbing)** .
**[Credits](#credits)** .
---
:warning: Please be aware that the old configurations for Traefik v1.x are NOT compatible with the v2.x config as of now. If you're running v2, please ensure you are using a [v2 configuration](https://doc.traefik.io/traefik/).
## Overview
Imagine that you have deployed a bunch of microservices with the help of an orchestrator (like Swarm or Kubernetes) or a service registry (like etcd or consul).
Now you want users to access these microservices, and you need a reverse proxy.
Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
If you want your users to access some of your microservices from the Internet, you will have to use a reverse proxy and configure it using virtual hosts or prefix paths:
Traditional reverse-proxies require that you configure _each_ route that will connect paths and subdomains to _each_ microservice.
In an environment where you add, remove, kill, upgrade, or scale your services _many_ times a day, the task of keeping the routes up to date becomes tedious.
- domain `api.domain.com` will point the microservice `api` in your private network
- path `domain.com/web` will point the microservice `web` in your private network
- domain `backoffice.domain.com` will point the microservices `backoffice` in your private network, load-balancing between your multiple instances
**This is when Traefik can help you!**
But a microservices architecture is dynamic... Services are added, removed, killed or upgraded often, eventually several times a day.
Traefik listens to your service registry/orchestrator API and instantly generates the routes so your microservices are connected to the outside world -- without further intervention from your part.
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.
**Run Traefik and let it do the work for you!**
_(But if you'd rather configure some of your routes manually, Traefik supports that too!)_
Here enters Træfik.
![Architecture](docs/img/architecture.png)
Træfik can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
Routes to your services will be created instantly.
Run it and forget it!
![Architecture](docs/content/assets/img/traefik-architecture.png)
## Features
- Continuously updates its configuration (No restarts!)
- Supports multiple load balancing algorithms
- Provides HTTPS to your microservices by leveraging [Let's Encrypt](https://letsencrypt.org) (wildcard certificates support)
- [It's fast](https://docs.traefik.io/benchmarks)
- No dependency hell, single binary made with go
- [Tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image
- Rest API
- Hot-reloading of configuration. No need to restart the process
- Circuit breakers, retry
- See the magic through its clean web UI
- Websocket, HTTP/2, gRPC ready
- Provides metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB 2.X)
- Keeps access logs (JSON, CLF)
- Fast
- Exposes a Rest API
- Packaged as a single binary file (made with :heart: with go) and available as an [official](https://hub.docker.com/r/_/traefik/) docker image
- Round Robin, rebalancer load-balancers
- Metrics (Rest, Prometheus, Datadog, Statd)
- Clean AngularJS Web UI
- Websocket, HTTP/2, GRPC ready
- Access Logs (JSON, CLF)
- [Let's Encrypt](https://letsencrypt.org) support (Automatic HTTPS with renewal)
- [Proxy Protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) support
- High Availability with cluster mode (beta)
## Supported Backends
## Supported backends
- [Docker](https://doc.traefik.io/traefik/providers/docker/) / [Swarm mode](https://doc.traefik.io/traefik/providers/docker/)
- [Kubernetes](https://doc.traefik.io/traefik/providers/kubernetes-crd/)
- [ECS](https://doc.traefik.io/traefik/providers/ecs/)
- [File](https://doc.traefik.io/traefik/providers/file/)
- [Docker](https://www.docker.com/) / [Swarm mode](https://docs.docker.com/engine/swarm/)
- [Kubernetes](https://kubernetes.io)
- [Mesos](https://github.com/apache/mesos) / [Marathon](https://mesosphere.github.io/marathon/)
- [Rancher](https://rancher.com) (API, Metadata)
- [Consul](https://www.consul.io/) / [Etcd](https://coreos.com/etcd/) / [Zookeeper](https://zookeeper.apache.org) / [BoltDB](https://github.com/boltdb/bolt)
- [Eureka](https://github.com/Netflix/eureka)
- [Amazon ECS](https://aws.amazon.com/ecs)
- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
- File
- Rest API
## Quickstart
To get your hands on Traefik, you can use the [5-Minute Quickstart](https://doc.traefik.io/traefik/getting-started/quick-start/) in our documentation (you will need Docker).
You can have a quick look at Træfik in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers. If you are looking for a more comprehensive and real use-case example, you can also check [Play-With-Docker](http://training.play-with-docker.com/traefik-load-balancing/) to see how to load balance between multiple nodes.
Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at [GopherCon 2017](https://gophercon.com/).
You will learn Træfik basics in less than 10 minutes.
[![Traefik GopherCon 2017](https://img.youtube.com/vi/RgudiksfL-k/0.jpg)](https://www.youtube.com/watch?v=RgudiksfL-k)
Here is a talk given by [Ed Robinson](https://github.com/errm) at [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfik features and see some demos with Kubernetes.
[![Traefik ContainerCamp UK](https://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
## Web UI
You can access the simple HTML frontend of Traefik.
You can access the simple HTML frontend of Træfik.
![Web UI Providers](docs/content/assets/img/webui-dashboard.png)
![Web UI Providers](docs/img/web.frontend.png)
![Web UI Health](docs/img/traefik-health.png)
## Documentation
You can find the complete documentation of Traefik v2 at [https://doc.traefik.io/traefik/](https://doc.traefik.io/traefik/).
## Test it
A collection of contributions around Traefik can be found at [https://awesome.traefik.io](https://awesome.traefik.io).
## Support
To get community support, you can:
- join the Traefik community forum: [![Join the chat at https://community.traefik.io/](https://img.shields.io/badge/style-register-green.svg?style=social&label=Discourse)](https://community.traefik.io/)
If you need commercial support, please contact [Traefik.io](https://traefik.io) by mail: <mailto:support@traefik.io>.
## Download
- Grab the latest binary from the [releases](https://github.com/traefik/traefik/releases) page and run it with the [sample configuration file](https://raw.githubusercontent.com/traefik/traefik/master/traefik.sample.toml):
- The simple way: grab the latest binary from the [releases](https://github.com/containous/traefik/releases) page and just run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
```shell
./traefik --configFile=traefik.toml
```
- Or use the official tiny Docker image and run it with the [sample configuration file](https://raw.githubusercontent.com/traefik/traefik/master/traefik.sample.toml):
- Use the tiny Docker image and just run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
```shell
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
```
- Or get the sources:
- From sources:
```shell
git clone https://github.com/traefik/traefik
git clone https://github.com/containous/traefik
```
## Introductory Videos
You can find high level and deep dive videos on [videos.traefik.io](https://videos.traefik.io).
## Documentation
## Maintainers
You can find the complete documentation at [https://docs.traefik.io](https://docs.traefik.io).
A collection of contributions around Træfik can be found at [https://awesome.traefik.io](https://awesome.traefik.io).
## Support
To get basic support, you can:
- join the Træfik community Slack channel: [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
- use [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)
If you prefer commercial support, please contact [containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
## Release cycle
- Release: We try to release a new version every 2 months
- i.e.: 1.3.0, 1.4.0, 1.5.0
- Release candidate: we do RC (1.**x**.0-rc**y**) before the final release (1.**x**.0)
- i.e.: 1.1.0-rc1 -> 1.1.0-rc2 -> 1.1.0-rc3 -> 1.1.0-rc4 -> 1.1.0
- Bug-fixes: For each version we release bug fixes
- i.e.: 1.1.1, 1.1.2, 1.1.3
- those versions contain only bug-fixes
- no additional features are delivered in those versions
- Each version is supported until the next one is released
- i.e.: 1.1.x will be supported until 1.2.0 is out
- We use [Semantic Versioning](http://semver.org/)
We are strongly promoting a philosophy of openness and sharing, and firmly standing against the elitist closed approach. Being part of the core team should be accessible to anyone who is motivated and wants to be part of that journey!
This [document](docs/content/contributing/maintainers-guidelines.md) describes how to be part of the [maintainers' team](docs/content/contributing/maintainers.md) as well as various responsibilities and guidelines for Traefik maintainers.
You can also find more information on our process to review pull requests and manage issues [in this document](https://github.com/traefik/contributors-guide/blob/master/issue_triage.md).
## Contributing
If you'd like to contribute to the project, refer to the [contributing documentation](CONTRIBUTING.md).
Please refer to [contributing documentation](CONTRIBUTING.md).
### Code of Conduct
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md).
By participating in this project, you agree to abide by its terms.
By participating in this project you agree to abide by its terms.
## Release Cycle
- We usually release 3/4 new versions (e.g. 1.1.0, 1.2.0, 1.3.0) per year.
- Release Candidates are available before the release (e.g. 1.1.0-rc1, 1.1.0-rc2, 1.1.0-rc3, 1.1.0-rc4, before 1.1.0).
- Bug-fixes (e.g. 1.1.1, 1.1.2, 1.2.1, 1.2.3) are released as needed (no additional features are delivered in those versions, bug-fixes only).
## Maintainers
Each version is supported until the next one is released (e.g. 1.1.x will be supported until 1.2.0 is out).
[Information about process and maintainers](MAINTAINER.md)
We use [Semantic Versioning](https://semver.org/).
## Mailing Lists
## Plumbing
- [Oxy](https://github.com/vulcand/oxy): an awesome proxy library made by Mailgun folks
- [Gorilla mux](https://github.com/gorilla/mux): famous request router
- [Negroni](https://github.com/urfave/negroni): web middlewares made simple
- [Lego](https://github.com/xenolf/lego): the best [Let's Encrypt](https://letsencrypt.org) library in go
- General announcements, new releases: mail at news+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/news).
- Security announcements: mail at security+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/security).
## Credits
Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the gopher's logo!
Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the logo ![logo](docs/img/traefik.icon.png).
Traefik's logo licensed under the Creative Commons 3.0 Attributions license.
The gopher's logo of Traefik is licensed under the Creative Commons 3.0 Attributions license.
The gopher's logo of Traefik was inspired by the gopher stickers made by [Takuya Ueda](https://twitter.com/tenntenn).
The original Go gopher was designed by [Renee French](https://reneefrench.blogspot.com/).
Traefik's logo was inspired by the gopher stickers made by Takuya Ueda (https://twitter.com/tenntenn).
The original Go gopher was designed by Renee French (http://reneefrench.blogspot.com/).


@@ -1,30 +0,0 @@
# Security Policy
You can join our security mailing list to be aware of the latest announcements from our security team.
You can subscribe by sending a mail to security+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/security).
Reported vulnerabilities can be found on [cve.mitre.org](https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=traefik).
## Supported Versions
- We usually release 3/4 new versions (e.g. 1.1.0, 1.2.0, 1.3.0) per year.
- Release Candidates are available before the release (e.g. 1.1.0-rc1, 1.1.0-rc2, 1.1.0-rc3, 1.1.0-rc4, before 1.1.0).
- Bug-fixes (e.g. 1.1.1, 1.1.2, 1.2.1, 1.2.3) are released as needed (no additional features are delivered in those versions, bug-fixes only).
Each version is supported until the next one is released (e.g. 1.1.x will be supported until 1.2.0 is out).
We use [Semantic Versioning](https://semver.org/).
| Version | Supported |
|-----------|--------------------|
| `2.2.x` | :white_check_mark: |
| `< 2.2.x` | :x: |
| `1.7.x` | :white_check_mark: |
| `< 1.7.x` | :x: |
## Reporting a Vulnerability
We want to keep Traefik safe for everyone.
If you've discovered a security vulnerability in Traefik,
we appreciate your help in disclosing it to us in a responsible manner,
by creating a [security advisory](https://github.com/traefik/traefik/security/advisories).

acme/account.go Normal file

@@ -0,0 +1,245 @@
package acme
import (
"crypto"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"fmt"
"reflect"
"sort"
"strings"
"sync"
"time"
"github.com/containous/traefik/log"
"github.com/xenolf/lego/acme"
)
// Account is used to store lets encrypt registration info
type Account struct {
Email string
Registration *acme.RegistrationResource
PrivateKey []byte
DomainsCertificate DomainsCertificates
ChallengeCerts map[string]*ChallengeCert
}
// ChallengeCert stores a challenge certificate
type ChallengeCert struct {
Certificate []byte
PrivateKey []byte
certificate *tls.Certificate
}
// Init inits account struct
func (a *Account) Init() error {
err := a.DomainsCertificate.Init()
if err != nil {
return err
}
for _, cert := range a.ChallengeCerts {
if cert.certificate == nil {
certificate, err := tls.X509KeyPair(cert.Certificate, cert.PrivateKey)
if err != nil {
return err
}
cert.certificate = &certificate
}
if cert.certificate.Leaf == nil {
leaf, err := x509.ParseCertificate(cert.certificate.Certificate[0])
if err != nil {
return err
}
cert.certificate.Leaf = leaf
}
}
return nil
}
// NewAccount creates an account
func NewAccount(email string) (*Account, error) {
// Create a user. New accounts need an email and private key to start
privateKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, err
}
domainsCerts := DomainsCertificates{Certs: []*DomainsCertificate{}}
domainsCerts.Init()
return &Account{
Email: email,
PrivateKey: x509.MarshalPKCS1PrivateKey(privateKey),
DomainsCertificate: DomainsCertificates{Certs: domainsCerts.Certs},
ChallengeCerts: map[string]*ChallengeCert{}}, nil
}
// GetEmail returns email
func (a *Account) GetEmail() string {
return a.Email
}
// GetRegistration returns lets encrypt registration resource
func (a *Account) GetRegistration() *acme.RegistrationResource {
return a.Registration
}
// GetPrivateKey returns private key
func (a *Account) GetPrivateKey() crypto.PrivateKey {
if privateKey, err := x509.ParsePKCS1PrivateKey(a.PrivateKey); err == nil {
return privateKey
}
log.Errorf("Cannot unmarshal private key %+v", a.PrivateKey)
return nil
}
// Certificate is used to store certificate info
type Certificate struct {
Domain string
CertURL string
CertStableURL string
PrivateKey []byte
Certificate []byte
}
// DomainsCertificates stores a certificate for multiple domains
type DomainsCertificates struct {
Certs []*DomainsCertificate
lock sync.RWMutex
}
func (dc *DomainsCertificates) Len() int {
return len(dc.Certs)
}
func (dc *DomainsCertificates) Swap(i, j int) {
dc.Certs[i], dc.Certs[j] = dc.Certs[j], dc.Certs[i]
}
func (dc *DomainsCertificates) Less(i, j int) bool {
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[j].Domains) {
return dc.Certs[i].tlsCert.Leaf.NotAfter.After(dc.Certs[j].tlsCert.Leaf.NotAfter)
}
if dc.Certs[i].Domains.Main == dc.Certs[j].Domains.Main {
return strings.Join(dc.Certs[i].Domains.SANs, ",") < strings.Join(dc.Certs[j].Domains.SANs, ",")
}
return dc.Certs[i].Domains.Main < dc.Certs[j].Domains.Main
}
func (dc *DomainsCertificates) removeDuplicates() {
sort.Sort(dc)
for i := 0; i < len(dc.Certs); i++ {
for i2 := i + 1; i2 < len(dc.Certs); i2++ {
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[i2].Domains) {
// delete
log.Warnf("Remove duplicate cert: %+v, expiration :%s", dc.Certs[i2].Domains, dc.Certs[i2].tlsCert.Leaf.NotAfter.String())
dc.Certs = append(dc.Certs[:i2], dc.Certs[i2+1:]...)
i2--
}
}
}
}
// Init inits DomainsCertificates
func (dc *DomainsCertificates) Init() error {
dc.lock.Lock()
defer dc.lock.Unlock()
for _, domainsCertificate := range dc.Certs {
tlsCert, err := tls.X509KeyPair(domainsCertificate.Certificate.Certificate, domainsCertificate.Certificate.PrivateKey)
if err != nil {
return err
}
domainsCertificate.tlsCert = &tlsCert
if domainsCertificate.tlsCert.Leaf == nil {
leaf, err := x509.ParseCertificate(domainsCertificate.tlsCert.Certificate[0])
if err != nil {
return err
}
domainsCertificate.tlsCert.Leaf = leaf
}
}
dc.removeDuplicates()
return nil
}
func (dc *DomainsCertificates) renewCertificates(acmeCert *Certificate, domain Domain) error {
dc.lock.Lock()
defer dc.lock.Unlock()
for _, domainsCertificate := range dc.Certs {
if reflect.DeepEqual(domain, domainsCertificate.Domains) {
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
if err != nil {
return err
}
domainsCertificate.Certificate = acmeCert
domainsCertificate.tlsCert = &tlsCert
return nil
}
}
return fmt.Errorf("certificate to renew not found for domain %s", domain.Main)
}
func (dc *DomainsCertificates) addCertificateForDomains(acmeCert *Certificate, domain Domain) (*DomainsCertificate, error) {
dc.lock.Lock()
defer dc.lock.Unlock()
tlsCert, err := tls.X509KeyPair(acmeCert.Certificate, acmeCert.PrivateKey)
if err != nil {
return nil, err
}
cert := DomainsCertificate{Domains: domain, Certificate: acmeCert, tlsCert: &tlsCert}
dc.Certs = append(dc.Certs, &cert)
return &cert, nil
}
func (dc *DomainsCertificates) getCertificateForDomain(domainToFind string) (*DomainsCertificate, bool) {
dc.lock.RLock()
defer dc.lock.RUnlock()
for _, domainsCertificate := range dc.Certs {
domains := []string{}
domains = append(domains, domainsCertificate.Domains.Main)
domains = append(domains, domainsCertificate.Domains.SANs...)
for _, domain := range domains {
if domain == domainToFind {
return domainsCertificate, true
}
}
}
return nil, false
}
func (dc *DomainsCertificates) exists(domainToFind Domain) (*DomainsCertificate, bool) {
dc.lock.RLock()
defer dc.lock.RUnlock()
for _, domainsCertificate := range dc.Certs {
if reflect.DeepEqual(domainToFind, domainsCertificate.Domains) {
return domainsCertificate, true
}
}
return nil, false
}
// DomainsCertificate contains a certificate for multiple domains
type DomainsCertificate struct {
Domains Domain
Certificate *Certificate
tlsCert *tls.Certificate
}
func (dc *DomainsCertificate) needRenew() bool {
for _, c := range dc.tlsCert.Certificate {
crt, err := x509.ParseCertificate(c)
if err != nil {
// If there's an error, we assume the cert is broken, and needs update
return true
}
// <= 30 days left, renew certificate
if crt.NotAfter.Before(time.Now().Add(time.Duration(24 * 30 * time.Hour))) {
return true
}
}
return false
}
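For illustration, a minimal standalone sketch of the renewal rule `needRenew` applies above: a certificate is due for renewal when it cannot be parsed or when it expires within the next 30 days. The names here are hypothetical and not part of the package:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"time"
)

// needsRenewal mirrors DomainsCertificate.needRenew: renew when the
// certificate is unreadable or expires within 30 days.
func needsRenewal(certPEM []byte) bool {
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return true // broken cert: assume it needs an update
	}
	crt, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return true
	}
	return crt.NotAfter.Before(time.Now().Add(30 * 24 * time.Hour))
}

func main() {
	// Garbage input is treated as "needs renewal", exactly like needRenew does.
	fmt.Println(needsRenewal([]byte("not a certificate")))
}
```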

acme/acme.go Normal file

@@ -0,0 +1,634 @@
package acme
import (
"context"
"crypto/tls"
"errors"
"fmt"
"io/ioutil"
fmtlog "log"
"os"
"regexp"
"strings"
"time"
"github.com/BurntSushi/ty/fun"
"github.com/cenk/backoff"
"github.com/containous/staert"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/eapache/channels"
"github.com/xenolf/lego/acme"
"github.com/xenolf/lego/providers/dns"
)
var (
// OSCPMustStaple enables OSCP stapling as from https://github.com/xenolf/lego/issues/270
OSCPMustStaple = false
)
// ACME allows connecting to Let's Encrypt and retrieving certs
type ACME struct {
Email string `description:"Email address used for registration"`
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
Storage string `description:"File or key used for certificates storage."`
StorageFile string // deprecated
OnDemand bool `description:"Enable on demand certificate. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."`
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
CAServer string `description:"CA server to use."`
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
DNSProvider string `description:"Use a DNS based challenge provider rather than HTTPS."`
DelayDontCheckDNS int `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."`
ACMELogging bool `description:"Enable debug logging of ACME actions."`
client *acme.Client
defaultCertificate *tls.Certificate
store cluster.Store
challengeProvider *challengeProvider
checkOnDemandDomain func(domain string) bool
jobs *channels.InfiniteChannel
TLSConfig *tls.Config `description:"TLS config in case wildcard certs are used"`
}
//Domains parse []Domain
type Domains []Domain
//Set []Domain
func (ds *Domains) Set(str string) error {
fargs := func(c rune) bool {
return c == ',' || c == ';'
}
// get function
slice := strings.FieldsFunc(str, fargs)
if len(slice) < 1 {
return fmt.Errorf("Parse error ACME.Domain. Impossible to parse %s", str)
}
d := Domain{
Main: slice[0],
SANs: []string{},
}
if len(slice) > 1 {
d.SANs = slice[1:]
}
*ds = append(*ds, d)
return nil
}
//Get []Domain
func (ds *Domains) Get() interface{} { return []Domain(*ds) }
//String returns []Domain in string
func (ds *Domains) String() string { return fmt.Sprintf("%+v", *ds) }
//SetValue sets []Domain into the parser
func (ds *Domains) SetValue(val interface{}) {
*ds = Domains(val.([]Domain))
}
// Domain holds a domain name with SANs
type Domain struct {
Main string
SANs []string
}
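As a usage sketch of the `Set` method above (the same behaviour is exercised by `TestDomainsSet` in `acme/acme_test.go` further down), assuming the snippet lives inside the `acme` package; both `,` and `;` are accepted as separators:

```go
package acme

import "testing"

// Illustrative only: Set parses the --acme.domains format into a Domain
// with a Main name and optional SANs.
func TestDomainsSetExample(t *testing.T) {
	var ds Domains
	if err := ds.Set("example.com,www.example.com;api.example.com"); err != nil {
		t.Fatal(err)
	}
	if ds[0].Main != "example.com" || len(ds[0].SANs) != 2 {
		t.Fatalf("unexpected parse result: %+v", ds)
	}
}
```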
func (a *ACME) init() error {
if a.ACMELogging {
acme.Logger = fmtlog.New(os.Stderr, "legolog: ", fmtlog.LstdFlags)
} else {
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
}
// no certificates in TLS config, so we add a default one
cert, err := generateDefaultCertificate()
if err != nil {
return err
}
a.defaultCertificate = cert
// TODO: to remove in the future
if len(a.StorageFile) > 0 && len(a.Storage) == 0 {
log.Warn("ACME.StorageFile is deprecated, use ACME.Storage instead")
a.Storage = a.StorageFile
}
a.jobs = channels.NewInfiniteChannel()
return nil
}
// CreateClusterConfig creates a tls.config using ACME configuration in cluster mode
func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
err := a.init()
if err != nil {
return err
}
if len(a.Storage) == 0 {
return errors.New("Empty Store, please provide a key for certs storage")
}
a.checkOnDemandDomain = checkOnDemandDomain
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
tlsConfig.GetCertificate = a.getCertificate
a.TLSConfig = tlsConfig
listener := func(object cluster.Object) error {
account := object.(*Account)
account.Init()
if !leadership.IsLeader() {
a.client, err = a.buildACMEClient(account)
if err != nil {
log.Errorf("Error building ACME client %+v: %s", object, err.Error())
}
}
return nil
}
datastore, err := cluster.NewDataStore(
leadership.Pool.Ctx(),
staert.KvSource{
Store: leadership.Store,
Prefix: a.Storage,
},
&Account{},
listener)
if err != nil {
return err
}
a.store = datastore
a.challengeProvider = &challengeProvider{store: a.store}
ticker := time.NewTicker(24 * time.Hour)
leadership.Pool.AddGoCtx(func(ctx context.Context) {
log.Info("Starting ACME renew job...")
defer log.Info("Stopped ACME renew job...")
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
a.renewCertificates()
}
}
})
leadership.AddListener(func(elected bool) error {
if elected {
_, err := a.store.Load()
if err != nil {
return err
}
transaction, object, err := a.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
account.Init()
var needRegister bool
if account == nil || len(account.Email) == 0 {
account, err = NewAccount(a.Email)
if err != nil {
return err
}
needRegister = true
}
if err != nil {
return err
}
a.client, err = a.buildACMEClient(account)
if err != nil {
return err
}
if needRegister {
// New users will need to register; be sure to save it
log.Debug("Register...")
reg, err := a.client.Register()
if err != nil {
return err
}
account.Registration = reg
}
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debug("AgreeToTOS...")
err = a.client.AgreeToTOS()
if err != nil {
// Let's Encrypt Subscriber Agreement renew ?
reg, err := a.client.QueryRegistration()
if err != nil {
return err
}
account.Registration = reg
err = a.client.AgreeToTOS()
if err != nil {
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
}
}
err = transaction.Commit(account)
if err != nil {
return err
}
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
}
return nil
})
return nil
}
// CreateLocalConfig creates a tls.config using local ACME configuration
func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
err := a.init()
if err != nil {
return err
}
if len(a.Storage) == 0 {
return errors.New("Empty Store, please provide a filename for certs storage")
}
a.checkOnDemandDomain = checkOnDemandDomain
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
tlsConfig.GetCertificate = a.getCertificate
a.TLSConfig = tlsConfig
localStore := NewLocalStore(a.Storage)
a.store = localStore
a.challengeProvider = &challengeProvider{store: a.store}
var needRegister bool
var account *Account
if fileInfo, fileErr := os.Stat(a.Storage); fileErr == nil && fileInfo.Size() != 0 {
log.Info("Loading ACME Account...")
// load account
object, err := localStore.Load()
if err != nil {
return err
}
account = object.(*Account)
} else {
log.Info("Generating ACME Account...")
account, err = NewAccount(a.Email)
if err != nil {
return err
}
needRegister = true
}
a.client, err = a.buildACMEClient(account)
if err != nil {
return err
}
if needRegister {
// New users will need to register; be sure to save it
log.Info("Register...")
reg, err := a.client.Register()
if err != nil {
return err
}
account.Registration = reg
}
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debug("AgreeToTOS...")
err = a.client.AgreeToTOS()
if err != nil {
// Let's Encrypt Subscriber Agreement renew ?
reg, err := a.client.QueryRegistration()
if err != nil {
return err
}
account.Registration = reg
err = a.client.AgreeToTOS()
if err != nil {
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
}
}
// save account
transaction, _, err := a.store.Begin()
if err != nil {
return err
}
err = transaction.Commit(account)
if err != nil {
return err
}
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
ticker := time.NewTicker(24 * time.Hour)
safe.Go(func() {
for range ticker.C {
a.renewCertificates()
}
})
return nil
}
func (a *ACME) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
domain := types.CanonicalDomain(clientHello.ServerName)
account := a.store.Get().(*Account)
if providedCertificate := a.getProvidedCertificate([]string{domain}); providedCertificate != nil {
return providedCertificate, nil
}
if challengeCert, ok := a.challengeProvider.getCertificate(domain); ok {
log.Debugf("ACME got challenge %s", domain)
return challengeCert, nil
}
if domainCert, ok := account.DomainsCertificate.getCertificateForDomain(domain); ok {
log.Debugf("ACME got domain cert %s", domain)
return domainCert.tlsCert, nil
}
if a.OnDemand {
if a.checkOnDemandDomain != nil && !a.checkOnDemandDomain(domain) {
return nil, nil
}
return a.loadCertificateOnDemand(clientHello)
}
log.Debugf("ACME got nothing %s", domain)
return nil, nil
}
func (a *ACME) retrieveCertificates() {
a.jobs.In() <- func() {
log.Info("Retrieving ACME certificates...")
for _, domain := range a.Domains {
// check if cert isn't already loaded
account := a.store.Get().(*Account)
if _, exists := account.DomainsCertificate.exists(domain); !exists {
domains := []string{}
domains = append(domains, domain.Main)
domains = append(domains, domain.SANs...)
certificateResource, err := a.getDomainsCertificates(domains)
if err != nil {
log.Errorf("Error getting ACME certificate for domain %s: %s", domains, err.Error())
continue
}
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error creating ACME store transaction from domain %s: %s", domain, err.Error())
continue
}
account = object.(*Account)
_, err = account.DomainsCertificate.addCertificateForDomains(certificateResource, domain)
if err != nil {
log.Errorf("Error adding ACME certificate for domain %s: %s", domains, err.Error())
continue
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
continue
}
}
}
log.Info("Retrieved ACME certificates")
}
}
func (a *ACME) renewCertificates() {
a.jobs.In() <- func() {
log.Debug("Testing certificate renew...")
account := a.store.Get().(*Account)
for _, certificateResource := range account.DomainsCertificate.Certs {
if certificateResource.needRenew() {
log.Debugf("Renewing certificate %+v", certificateResource.Domains)
renewedCert, err := a.client.RenewCertificate(acme.CertificateResource{
Domain: certificateResource.Certificate.Domain,
CertURL: certificateResource.Certificate.CertURL,
CertStableURL: certificateResource.Certificate.CertStableURL,
PrivateKey: certificateResource.Certificate.PrivateKey,
Certificate: certificateResource.Certificate.Certificate,
}, true, OSCPMustStaple)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
log.Debugf("Renewed certificate %+v", certificateResource.Domains)
renewedACMECert := &Certificate{
Domain: renewedCert.Domain,
CertURL: renewedCert.CertURL,
CertStableURL: renewedCert.CertStableURL,
PrivateKey: renewedCert.PrivateKey,
Certificate: renewedCert.Certificate,
}
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
account = object.(*Account)
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
continue
}
}
}
}
}
func dnsOverrideDelay(delay int) error {
var err error
if delay > 0 {
log.Debugf("Delaying %d seconds rather than validating DNS propagation", delay)
acme.PreCheckDNS = func(_, _ string) (bool, error) {
time.Sleep(time.Duration(delay) * time.Second)
return true, nil
}
} else if delay < 0 {
err = fmt.Errorf("Invalid negative DelayDontCheckDNS: %d", delay)
}
return err
}
func (a *ACME) buildACMEClient(account *Account) (*acme.Client, error) {
log.Debug("Building ACME client...")
caServer := "https://acme-v01.api.letsencrypt.org/directory"
if len(a.CAServer) > 0 {
caServer = a.CAServer
}
client, err := acme.NewClient(caServer, account, acme.RSA4096)
if err != nil {
return nil, err
}
if len(a.DNSProvider) > 0 {
log.Debugf("Using DNS Challenge provider: %s", a.DNSProvider)
err = dnsOverrideDelay(a.DelayDontCheckDNS)
if err != nil {
return nil, err
}
var provider acme.ChallengeProvider
provider, err = dns.NewDNSChallengeProviderByName(a.DNSProvider)
if err != nil {
return nil, err
}
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.TLSSNI01})
err = client.SetChallengeProvider(acme.DNS01, provider)
} else {
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.DNS01})
err = client.SetChallengeProvider(acme.TLSSNI01, a.challengeProvider)
}
if err != nil {
return nil, err
}
return client, nil
}
func (a *ACME) loadCertificateOnDemand(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
domain := types.CanonicalDomain(clientHello.ServerName)
account := a.store.Get().(*Account)
if certificateResource, ok := account.DomainsCertificate.getCertificateForDomain(domain); ok {
return certificateResource.tlsCert, nil
}
certificate, err := a.getDomainsCertificates([]string{domain})
if err != nil {
return nil, err
}
log.Debugf("Got certificate on demand for domain %s", domain)
transaction, object, err := a.store.Begin()
if err != nil {
return nil, err
}
account = object.(*Account)
cert, err := account.DomainsCertificate.addCertificateForDomains(certificate, Domain{Main: domain})
if err != nil {
return nil, err
}
if err = transaction.Commit(account); err != nil {
return nil, err
}
return cert.tlsCert, nil
}
// LoadCertificateForDomains loads certificates from ACME for given domains
func (a *ACME) LoadCertificateForDomains(domains []string) {
a.jobs.In() <- func() {
log.Debugf("LoadCertificateForDomains %v...", domains)
if len(domains) == 0 {
// no domain
return
}
domains = fun.Map(types.CanonicalDomain, domains).([]string)
// Check provided certificates
if a.getProvidedCertificate(domains) != nil {
return
}
operation := func() error {
if a.client == nil {
return errors.New("ACME client still not built")
}
return nil
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting ACME client: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 30 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting ACME client: %v", err)
return
}
account := a.store.Get().(*Account)
var domain Domain
if len(domains) > 1 {
domain = Domain{Main: domains[0], SANs: domains[1:]}
} else {
domain = Domain{Main: domains[0]}
}
if _, exists := account.DomainsCertificate.exists(domain); exists {
// domain already exists
return
}
certificate, err := a.getDomainsCertificates(domains)
if err != nil {
log.Errorf("Error getting ACME certificates %+v : %v", domains, err)
return
}
log.Debugf("Got certificate for domains %+v", domains)
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error creating transaction %+v : %v", domains, err)
return
}
account = object.(*Account)
_, err = account.DomainsCertificate.addCertificateForDomains(certificate, domain)
if err != nil {
log.Errorf("Error adding ACME certificates %+v : %v", domains, err)
return
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %v", account, err)
return
}
}
}
// Get provided certificate which checks a domain list (Main and SANs)
func (a *ACME) getProvidedCertificate(domains []string) *tls.Certificate {
// Use regex to test for provided certs that might have been added into TLSConfig
providedCertMatch := false
log.Debugf("Look for provided certificate to validate %s...", domains)
for k := range a.TLSConfig.NameToCertificate {
selector := "^" + strings.Replace(k, "*.", "[^\\.]*\\.?", -1) + "$"
for _, domainToCheck := range domains {
providedCertMatch, _ = regexp.MatchString(selector, domainToCheck)
if !providedCertMatch {
break
}
}
if providedCertMatch {
log.Debugf("Got provided certificate for domains %s", domains)
return a.TLSConfig.NameToCertificate[k]
}
}
log.Debugf("No provided certificate found for domains %s, get ACME certificate.", domains)
return nil
}
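To make the wildcard matching above concrete, here is a small self-contained sketch of the selector that `getProvidedCertificate` builds (same string replacement, hypothetical domain names): `*.containo.us` becomes `^[^\.]*\.?containo.us$`, so one extra label and the apex domain match, while deeper subdomains do not:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Same transformation as getProvidedCertificate: "*." becomes an
	// optional single DNS label.
	name := "*.containo.us"
	selector := "^" + strings.Replace(name, "*.", "[^\\.]*\\.?", -1) + "$"

	for _, domain := range []string{"traefik.containo.us", "containo.us", "a.b.containo.us"} {
		match, _ := regexp.MatchString(selector, domain)
		fmt.Printf("%-20s -> %v\n", domain, match)
	}
	// traefik.containo.us -> true
	// containo.us         -> true
	// a.b.containo.us     -> false
}
```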
func (a *ACME) getDomainsCertificates(domains []string) (*Certificate, error) {
domains = fun.Map(types.CanonicalDomain, domains).([]string)
log.Debugf("Loading ACME certificates %s...", domains)
bundle := true
certificate, failures := a.client.ObtainCertificate(domains, bundle, nil, OSCPMustStaple)
if len(failures) > 0 {
log.Error(failures)
return nil, fmt.Errorf("Cannot obtain certificates %+v", failures)
}
log.Debugf("Loaded ACME certificates %s", domains)
return &Certificate{
Domain: certificate.Domain,
CertURL: certificate.CertURL,
CertStableURL: certificate.CertStableURL,
PrivateKey: certificate.PrivateKey,
Certificate: certificate.Certificate,
}, nil
}
func (a *ACME) runJobs() {
safe.Go(func() {
for job := range a.jobs.Out() {
function := job.(func())
function()
}
})
}
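`runJobs` drains `a.jobs` (an unbounded channel of closures from `eapache/channels`) and executes each job sequentially, so certificate retrieval and renewal never run concurrently against the store. A minimal sketch of the same pattern with a plain buffered channel (the real code uses an unbounded channel, so producers never block):

```go
package main

import "fmt"

func main() {
	// Single worker executing queued closures in order, mirroring runJobs.
	jobs := make(chan func(), 16)
	done := make(chan struct{})

	go func() {
		for job := range jobs {
			job()
		}
		close(done)
	}()

	for i := 0; i < 3; i++ {
		i := i
		jobs <- func() { fmt.Println("job", i) }
	}
	close(jobs)
	<-done
}
```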

acme/acme_test.go Normal file

@@ -0,0 +1,296 @@
package acme
import (
"crypto/tls"
"encoding/base64"
"net/http"
"net/http/httptest"
"reflect"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/xenolf/lego/acme"
)
func TestDomainsSet(t *testing.T) {
checkMap := map[string]Domains{
"": {},
"foo.com": {Domain{Main: "foo.com", SANs: []string{}}},
"foo.com,bar.net": {Domain{Main: "foo.com", SANs: []string{"bar.net"}}},
"foo.com,bar1.net,bar2.net,bar3.net": {Domain{Main: "foo.com", SANs: []string{"bar1.net", "bar2.net", "bar3.net"}}},
}
for in, check := range checkMap {
ds := Domains{}
ds.Set(in)
if !reflect.DeepEqual(check, ds) {
t.Errorf("Expected %+v\nGot %+v", check, ds)
}
}
}
func TestDomainsSetAppend(t *testing.T) {
inSlice := []string{
"",
"foo1.com",
"foo2.com,bar.net",
"foo3.com,bar1.net,bar2.net,bar3.net",
}
checkSlice := []Domains{
{},
{
Domain{
Main: "foo1.com",
SANs: []string{}}},
{
Domain{
Main: "foo1.com",
SANs: []string{}},
Domain{
Main: "foo2.com",
SANs: []string{"bar.net"}}},
{
Domain{
Main: "foo1.com",
SANs: []string{}},
Domain{
Main: "foo2.com",
SANs: []string{"bar.net"}},
Domain{Main: "foo3.com",
SANs: []string{"bar1.net", "bar2.net", "bar3.net"}}},
}
ds := Domains{}
for i, in := range inSlice {
ds.Set(in)
if !reflect.DeepEqual(checkSlice[i], ds) {
t.Errorf("Expected %s %+v\nGot %+v", in, checkSlice[i], ds)
}
}
}
func TestCertificatesRenew(t *testing.T) {
foo1Cert, foo1Key, _ := generateKeyPair("foo1.com", time.Now())
foo2Cert, foo2Key, _ := generateKeyPair("foo2.com", time.Now())
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
{
Domains: Domain{
Main: "foo1.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo1.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo1Key,
Certificate: foo1Cert,
},
},
{
Domains: Domain{
Main: "foo2.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo2.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo2Key,
Certificate: foo2Cert,
},
},
},
}
foo1Cert, foo1Key, _ = generateKeyPair("foo1.com", time.Now())
newCertificate := &Certificate{
Domain: "foo1.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo1Key,
Certificate: foo1Cert,
}
err := domainsCertificates.renewCertificates(
newCertificate,
Domain{
Main: "foo1.com",
SANs: []string{}})
if err != nil {
t.Errorf("Error in renewCertificates :%v", err)
}
if len(domainsCertificates.Certs) != 2 {
t.Errorf("Expected domainsCertificates length %d %+v\nGot %+v", 2, domainsCertificates.Certs, len(domainsCertificates.Certs))
}
if !reflect.DeepEqual(domainsCertificates.Certs[0].Certificate, newCertificate) {
t.Errorf("Expected new certificate %+v \nGot %+v", newCertificate, domainsCertificates.Certs[0].Certificate)
}
}
func TestRemoveDuplicates(t *testing.T) {
now := time.Now()
fooCert, fooKey, _ := generateKeyPair("foo.com", now)
foo24Cert, foo24Key, _ := generateKeyPair("foo.com", now.Add(24*time.Hour))
foo48Cert, foo48Key, _ := generateKeyPair("foo.com", now.Add(48*time.Hour))
barCert, barKey, _ := generateKeyPair("bar.com", now)
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo24Key,
Certificate: foo24Cert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo48Key,
Certificate: foo48Cert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: fooKey,
Certificate: fooCert,
},
},
{
Domains: Domain{
Main: "bar.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "bar.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: barKey,
Certificate: barCert,
},
},
{
Domains: Domain{
Main: "foo.com",
SANs: []string{}},
Certificate: &Certificate{
Domain: "foo.com",
CertURL: "url",
CertStableURL: "url",
PrivateKey: foo48Key,
Certificate: foo48Cert,
},
},
},
}
domainsCertificates.Init()
if len(domainsCertificates.Certs) != 2 {
t.Errorf("Expected domainsCertificates length %d %+v\nGot %+v", 2, domainsCertificates.Certs, len(domainsCertificates.Certs))
}
for _, cert := range domainsCertificates.Certs {
switch cert.Domains.Main {
case "bar.com":
continue
case "foo.com":
if !cert.tlsCert.Leaf.NotAfter.Equal(now.Add(48 * time.Hour).Truncate(1 * time.Second)) {
t.Errorf("Bad expiration %s date for domain %+v, now %s", cert.tlsCert.Leaf.NotAfter.String(), cert, now.Add(48*time.Hour).Truncate(1*time.Second).String())
}
default:
t.Errorf("Unknown domain %+v", cert)
}
}
}
func TestNoPreCheckOverride(t *testing.T) {
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
err := dnsOverrideDelay(0)
if err != nil {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS != nil {
t.Error("Unexpected change to acme.PreCheckDNS when leaving DNS verification as is.")
}
}
func TestSillyPreCheckOverride(t *testing.T) {
err := dnsOverrideDelay(-5)
if err == nil {
t.Error("Missing expected error in dnsOverrideDelay!")
}
}
func TestPreCheckOverride(t *testing.T) {
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
err := dnsOverrideDelay(5)
if err != nil {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS == nil {
t.Error("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}
func TestAcmeClientCreation(t *testing.T) {
acme.PreCheckDNS = nil // Irreversible - but not expecting real calls into this during testing process
// Lengthy setup to avoid external web requests - oh for easier golang testing!
account := &Account{Email: "f@f"}
account.PrivateKey, _ = base64.StdEncoding.DecodeString(`
MIIBPAIBAAJBAMp2Ni92FfEur+CAvFkgC12LT4l9D53ApbBpDaXaJkzzks+KsLw9zyAxvlrfAyTCQ
7tDnEnIltAXyQ0uOFUUdcMCAwEAAQJAK1FbipATZcT9cGVa5x7KD7usytftLW14heQUPXYNV80r/3
lmnpvjL06dffRpwkYeN8DATQF/QOcy3NNNGDw/4QIhAPAKmiZFxA/qmRXsuU8Zhlzf16WrNZ68K64
asn/h3qZrAiEA1+wFR3WXCPIolOvd7AHjfgcTKQNkoMPywU4FYUNQ1AkCIQDv8yk0qPjckD6HVCPJ
llJh9MC0svjevGtNlxJoE3lmEQIhAKXy1wfZ32/XtcrnENPvi6lzxI0T94X7s5pP3aCoPPoJAiEAl
cijFkALeQp/qyeXdFld2v9gUN3eCgljgcl0QweRoIc=---`)
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(`{
"new-authz": "https://foo/acme/new-authz",
"new-cert": "https://foo/acme/new-cert",
"new-reg": "https://foo/acme/new-reg",
"revoke-cert": "https://foo/acme/revoke-cert"
}`))
}))
defer ts.Close()
a := ACME{DNSProvider: "manual", DelayDontCheckDNS: 10, CAServer: ts.URL}
client, err := a.buildACMEClient(account)
if err != nil {
t.Errorf("Error in buildACMEClient: %v", err)
}
if client == nil {
t.Error("No client from buildACMEClient!")
}
if acme.PreCheckDNS == nil {
t.Error("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}
func TestAcme_getProvidedCertificate(t *testing.T) {
mm := make(map[string]*tls.Certificate)
mm["*.containo.us"] = &tls.Certificate{}
mm["traefik.acme.io"] = &tls.Certificate{}
a := ACME{TLSConfig: &tls.Config{NameToCertificate: mm}}
domains := []string{"traefik.containo.us", "trae.containo.us"}
certificate := a.getProvidedCertificate(domains)
assert.NotNil(t, certificate)
domains = []string{"traefik.acme.io", "trae.acme.io"}
certificate = a.getProvidedCertificate(domains)
assert.Nil(t, certificate)
}

97
acme/challengeProvider.go Normal file

@@ -0,0 +1,97 @@
package acme
import (
"crypto/tls"
"fmt"
"strings"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/xenolf/lego/acme"
)
var _ acme.ChallengeProviderTimeout = (*challengeProvider)(nil)
type challengeProvider struct {
store cluster.Store
lock sync.RWMutex
}
func (c *challengeProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
log.Debugf("Challenge GetCertificate %s", domain)
if !strings.HasSuffix(domain, ".acme.invalid") {
return nil, false
}
c.lock.RLock()
defer c.lock.RUnlock()
account := c.store.Get().(*Account)
if account.ChallengeCerts == nil {
return nil, false
}
account.Init()
var result *tls.Certificate
operation := func() error {
for _, cert := range account.ChallengeCerts {
for _, dns := range cert.certificate.Leaf.DNSNames {
if domain == dns {
result = cert.certificate
return nil
}
}
}
return fmt.Errorf("cannot find challenge cert for domain %s", domain)
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting cert: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting cert: %v", err)
return nil, false
}
return result, true
}
func (c *challengeProvider) Present(domain, token, keyAuth string) error {
log.Debugf("Challenge Present %s", domain)
cert, _, err := TLSSNI01ChallengeCert(keyAuth)
if err != nil {
return err
}
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
if account.ChallengeCerts == nil {
account.ChallengeCerts = map[string]*ChallengeCert{}
}
account.ChallengeCerts[domain] = &cert
return transaction.Commit(account)
}
func (c *challengeProvider) CleanUp(domain, token, keyAuth string) error {
log.Debugf("Challenge CleanUp %s", domain)
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
delete(account.ChallengeCerts, domain)
return transaction.Commit(account)
}
func (c *challengeProvider) Timeout() (timeout, interval time.Duration) {
return 60 * time.Second, 5 * time.Second
}

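A minimal sketch of how the provider above could be queried during a TLS handshake; exampleGetCertificate is a hypothetical helper and would have to live in the acme package, since getCertificate is unexported:
func exampleGetCertificate(provider *challengeProvider, hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
	// Serve the temporary challenge certificate for *.acme.invalid SNI names,
	// and fall through to the regular certificate store otherwise.
	if cert, ok := provider.getCertificate(hello.ServerName); ok {
		return cert, nil
	}
	return nil, nil
}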
133
acme/crypto.go Normal file

@@ -0,0 +1,133 @@
package acme
import (
"crypto"
"crypto/ecdsa"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/hex"
"encoding/pem"
"fmt"
"math/big"
"time"
)
func generateDefaultCertificate() (*tls.Certificate, error) {
randomBytes := make([]byte, 100)
_, err := rand.Read(randomBytes)
if err != nil {
return nil, err
}
zBytes := sha256.Sum256(randomBytes)
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.traefik.default", z[:32], z[32:])
certPEM, keyPEM, err := generateKeyPair(domain, time.Time{})
if err != nil {
return nil, err
}
certificate, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
return nil, err
}
return &certificate, nil
}
func generateKeyPair(domain string, expiration time.Time) ([]byte, []byte, error) {
rsaPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, nil, err
}
keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(rsaPrivKey)})
certPEM, err := generatePemCert(rsaPrivKey, domain, expiration)
if err != nil {
return nil, nil, err
}
return certPEM, keyPEM, nil
}
func generatePemCert(privKey *rsa.PrivateKey, domain string, expiration time.Time) ([]byte, error) {
derBytes, err := generateDerCert(privKey, expiration, domain)
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derBytes}), nil
}
func generateDerCert(privKey *rsa.PrivateKey, expiration time.Time, domain string) ([]byte, error) {
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
if err != nil {
return nil, err
}
if expiration.IsZero() {
expiration = time.Now().Add(24 * 365 * time.Hour)
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: "TRAEFIK DEFAULT CERT",
},
NotBefore: time.Now(),
NotAfter: expiration,
KeyUsage: x509.KeyUsageKeyEncipherment,
BasicConstraintsValid: true,
DNSNames: []string{domain},
}
return x509.CreateCertificate(rand.Reader, &template, &template, &privKey.PublicKey, privKey)
}
// TLSSNI01ChallengeCert returns a certificate and target domain for the `tls-sni-01` challenge
func TLSSNI01ChallengeCert(keyAuth string) (ChallengeCert, string, error) {
// generate a new RSA key for the certificates
var tempPrivKey crypto.PrivateKey
tempPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return ChallengeCert{}, "", err
}
rsaPrivKey := tempPrivKey.(*rsa.PrivateKey)
rsaPrivPEM := pemEncode(rsaPrivKey)
zBytes := sha256.Sum256([]byte(keyAuth))
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.acme.invalid", z[:32], z[32:])
tempCertPEM, err := generatePemCert(rsaPrivKey, domain, time.Time{})
if err != nil {
return ChallengeCert{}, "", err
}
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
if err != nil {
return ChallengeCert{}, "", err
}
return ChallengeCert{Certificate: tempCertPEM, PrivateKey: rsaPrivPEM, certificate: &certificate}, domain, nil
}
func pemEncode(data interface{}) []byte {
var pemBlock *pem.Block
switch key := data.(type) {
case *ecdsa.PrivateKey:
keyBytes, _ := x509.MarshalECPrivateKey(key)
pemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}
case *rsa.PrivateKey:
pemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
case *x509.CertificateRequest:
pemBlock = &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: key.Raw}
case []byte:
pemBlock = &pem.Block{Type: "CERTIFICATE", Bytes: []byte(data.([]byte))}
}
return pem.EncodeToMemory(pemBlock)
}

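As an illustration of the helpers above, a minimal sketch inside the acme package (generateDefaultCertificate is unexported; exampleDefaultTLSConfig is a hypothetical name):
func exampleDefaultTLSConfig() (*tls.Config, error) {
	// Generate the self-signed "TRAEFIK DEFAULT CERT" used as a fallback.
	cert, err := generateDefaultCertificate()
	if err != nil {
		return nil, err
	}
	return &tls.Config{Certificates: []tls.Certificate{*cert}}, nil
}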
97
acme/localStore.go Normal file

@@ -0,0 +1,97 @@
package acme
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"sync"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
)
var _ cluster.Store = (*LocalStore)(nil)
// LocalStore is a store using a file as storage
type LocalStore struct {
file string
storageLock sync.RWMutex
account *Account
}
// NewLocalStore creates a LocalStore
func NewLocalStore(file string) *LocalStore {
return &LocalStore{
file: file,
}
}
// Get atomically returns the struct from the file storage
func (s *LocalStore) Get() cluster.Object {
s.storageLock.RLock()
defer s.storageLock.RUnlock()
return s.account
}
// Load loads file into store
func (s *LocalStore) Load() (cluster.Object, error) {
s.storageLock.Lock()
defer s.storageLock.Unlock()
account := &Account{}
err := checkPermissions(s.file)
if err != nil {
return nil, err
}
f, err := os.Open(s.file)
if err != nil {
return nil, err
}
defer f.Close()
file, err := ioutil.ReadAll(f)
if err != nil {
return nil, err
}
if err := json.Unmarshal(file, &account); err != nil {
return nil, err
}
account.Init()
s.account = account
log.Infof("Loaded ACME config from store %s", s.file)
return account, nil
}
// Begin creates a transaction with the file storage.
func (s *LocalStore) Begin() (cluster.Transaction, cluster.Object, error) {
s.storageLock.Lock()
return &localTransaction{LocalStore: s}, s.account, nil
}
var _ cluster.Transaction = (*localTransaction)(nil)
type localTransaction struct {
*LocalStore
dirty bool
}
// Commit sets an object in the file storage
func (t *localTransaction) Commit(object cluster.Object) error {
t.LocalStore.account = object.(*Account)
defer t.storageLock.Unlock()
if t.dirty {
return fmt.Errorf("transaction already used, please begin a new one")
}
// write account to file
data, err := json.MarshalIndent(object, "", " ")
if err != nil {
return err
}
err = ioutil.WriteFile(t.file, data, 0600)
if err != nil {
return err
}
t.dirty = true
return nil
}

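A minimal usage sketch of the LocalStore above, written as if inside the acme package; the file path and the exampleLocalStore name are illustrative:
func exampleLocalStore() error {
	store := NewLocalStore("/etc/traefik/acme/account.json")
	if _, err := store.Load(); err != nil { // fails if the file is missing or its permissions are too open
		return err
	}
	transaction, object, err := store.Begin()
	if err != nil {
		return err
	}
	account := object.(*Account)
	account.Email = "admin@example.com"
	return transaction.Commit(account) // writes the account back to the file with 0600 permissions
}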
25
acme/localStore_unix.go Normal file

@@ -0,0 +1,25 @@
// +build !windows
package acme
import (
"fmt"
"os"
)
// Check file permissions
func checkPermissions(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
fi, err := f.Stat()
if err != nil {
return err
}
if fi.Mode().Perm()&0077 != 0 {
return fmt.Errorf("permissions %o for %s are too open, please use 600", fi.Mode().Perm(), name)
}
return nil
}

acme/localStore_windows.go Normal file

@@ -0,0 +1,6 @@
package acme
// Do not check file permissions on Windows right now
func checkPermissions(name string) error {
return nil
}

31
build.Dockerfile Normal file

@@ -0,0 +1,31 @@
FROM golang:1.9-alpine
RUN apk --update upgrade \
&& apk --no-cache --no-progress add git mercurial bash gcc musl-dev curl tar \
&& rm -rf /var/cache/apk/*
RUN go get github.com/jteeuwen/go-bindata/... \
&& go get github.com/golang/lint/golint \
&& go get github.com/kisielk/errcheck \
&& go get github.com/client9/misspell/cmd/misspell \
&& go get github.com/mattfarina/glide-hash \
&& go get github.com/sgotti/glide-vc
# Which docker version to test on
ARG DOCKER_VERSION=17.03.2
# Which glide version to test on
ARG GLIDE_VERSION=v0.12.3
# Download glide
RUN mkdir -p /usr/local/bin \
&& curl -fL https://github.com/Masterminds/glide/releases/download/${GLIDE_VERSION}/glide-${GLIDE_VERSION}-linux-amd64.tar.gz \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
# Download docker
RUN mkdir -p /usr/local/bin \
&& curl -fL https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}-ce.tgz \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
WORKDIR /go/src/github.com/containous/traefik
COPY . /go/src/github.com/containous/traefik

247
cluster/datastore.go Normal file

@@ -0,0 +1,247 @@
package cluster
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/staert"
"github.com/containous/traefik/job"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/docker/libkv/store"
"github.com/satori/go.uuid"
)
// Metadata stores Object plus metadata
type Metadata struct {
object Object
Object []byte
Lock string
}
// NewMetadata returns a new Metadata
func NewMetadata(object Object) *Metadata {
return &Metadata{object: object}
}
// Marshall marshals the object
func (m *Metadata) Marshall() error {
var err error
m.Object, err = json.Marshal(m.object)
return err
}
func (m *Metadata) unmarshall() error {
if len(m.Object) == 0 {
return nil
}
return json.Unmarshal(m.Object, m.object)
}
// Listener is called when the Object has been changed in the KV store
type Listener func(Object) error
var _ Store = (*Datastore)(nil)
// Datastore holds a struct synced in a KV store
type Datastore struct {
kv staert.KvSource
ctx context.Context
localLock *sync.RWMutex
meta *Metadata
lockKey string
listener Listener
}
// NewDataStore creates a Datastore
func NewDataStore(ctx context.Context, kvSource staert.KvSource, object Object, listener Listener) (*Datastore, error) {
datastore := Datastore{
kv: kvSource,
ctx: ctx,
meta: &Metadata{object: object},
lockKey: kvSource.Prefix + "/lock",
localLock: &sync.RWMutex{},
listener: listener,
}
err := datastore.watchChanges()
if err != nil {
return nil, err
}
return &datastore, nil
}
func (d *Datastore) watchChanges() error {
stopCh := make(chan struct{})
kvCh, err := d.kv.Watch(d.lockKey, stopCh)
if err != nil {
return err
}
safe.Go(func() {
ctx, cancel := context.WithCancel(d.ctx)
operation := func() error {
for {
select {
case <-ctx.Done():
stopCh <- struct{}{}
return nil
case _, ok := <-kvCh:
if !ok {
cancel()
return err
}
err = d.reload()
if err != nil {
return err
}
if d.listener != nil {
err := d.listener(d.meta.object)
if err != nil {
log.Errorf("Error calling datastore listener: %s", err)
}
}
}
}
}
notify := func(err error, time time.Duration) {
log.Errorf("Error in watch datastore: %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
if err != nil {
log.Errorf("Error in watch datastore: %v", err)
}
})
return nil
}
func (d *Datastore) reload() error {
log.Debug("Datastore reload")
_, err := d.Load()
return err
}
// Begin creates a transaction with the KV store.
func (d *Datastore) Begin() (Transaction, Object, error) {
id := uuid.NewV4().String()
log.Debugf("Transaction %s begins", id)
remoteLock, err := d.kv.NewLock(d.lockKey, &store.LockOptions{TTL: 20 * time.Second, Value: []byte(id)})
if err != nil {
return nil, nil, err
}
stopCh := make(chan struct{})
ctx, cancel := context.WithCancel(d.ctx)
var errLock error
go func() {
_, errLock = remoteLock.Lock(stopCh)
cancel()
}()
select {
case <-ctx.Done():
if errLock != nil {
return nil, nil, errLock
}
case <-d.ctx.Done():
stopCh <- struct{}{}
return nil, nil, d.ctx.Err()
}
// we got the lock! Now make sure we are synced with KV store
operation := func() error {
meta := d.get()
if meta.Lock != id {
return fmt.Errorf("Object lock value: expected %s, got %s", id, meta.Lock)
}
return nil
}
notify := func(err error, time time.Duration) {
log.Errorf("Datastore sync error: %v, retrying in %s", err, time)
err = d.reload()
if err != nil {
log.Errorf("Error reloading: %+v", err)
}
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err = backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
return nil, nil, fmt.Errorf("Datastore cannot sync: %v", err)
}
// we synced with KV store, we can now return Setter
return &datastoreTransaction{
Datastore: d,
remoteLock: remoteLock,
id: id,
}, d.meta.object, nil
}
func (d *Datastore) get() *Metadata {
d.localLock.RLock()
defer d.localLock.RUnlock()
return d.meta
}
// Load atomically loads the struct from the KV store
func (d *Datastore) Load() (Object, error) {
d.localLock.Lock()
defer d.localLock.Unlock()
// clear Object first, as mapstructure's decoder doesn't have ZeroFields set to true for merging purposes
d.meta.Object = d.meta.Object[:0]
err := d.kv.LoadConfig(d.meta)
if err != nil {
return nil, err
}
err = d.meta.unmarshall()
if err != nil {
return nil, err
}
return d.meta.object, nil
}
// Get atomically returns the struct from the KV store
func (d *Datastore) Get() Object {
d.localLock.RLock()
defer d.localLock.RUnlock()
return d.meta.object
}
var _ Transaction = (*datastoreTransaction)(nil)
type datastoreTransaction struct {
*Datastore
remoteLock store.Locker
dirty bool
id string
}
// Commit sets an object in the KV store
func (s *datastoreTransaction) Commit(object Object) error {
s.localLock.Lock()
defer s.localLock.Unlock()
if s.dirty {
return fmt.Errorf("Transaction already used, please begin a new one")
}
s.Datastore.meta.object = object
err := s.Datastore.meta.Marshall()
if err != nil {
return fmt.Errorf("Marshall error: %s", err)
}
err = s.kv.StoreConfig(s.Datastore.meta)
if err != nil {
return fmt.Errorf("StoreConfig error: %s", err)
}
err = s.remoteLock.Unlock()
if err != nil {
return fmt.Errorf("Unlock error: %s", err)
}
s.dirty = true
log.Debugf("Transaction committed %s", s.id)
return nil
}

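A minimal sketch of the Begin/Commit cycle on the Datastore above; exampleObject and exampleUpdate are hypothetical names, and the staert.KvSource is assumed to be already configured:
type exampleObject struct {
	Value string
}

func exampleUpdate(ctx context.Context, kv staert.KvSource) error {
	datastore, err := NewDataStore(ctx, kv, &exampleObject{}, func(obj Object) error {
		log.Debugf("datastore object changed: %+v", obj)
		return nil
	})
	if err != nil {
		return err
	}
	transaction, object, err := datastore.Begin() // acquires the distributed lock and syncs with the KV store
	if err != nil {
		return err
	}
	current := object.(*exampleObject)
	current.Value = "updated"
	return transaction.Commit(current) // marshals, stores the new value and releases the lock
}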
104
cluster/leadership.go Normal file

@@ -0,0 +1,104 @@
package cluster
import (
"context"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/docker/leadership"
)
// Leadership allows leadership election using a KV store
type Leadership struct {
*safe.Pool
*types.Cluster
candidate *leadership.Candidate
leader *safe.Safe
listeners []LeaderListener
}
// NewLeadership creates a Leadership
func NewLeadership(ctx context.Context, cluster *types.Cluster) *Leadership {
return &Leadership{
Pool: safe.NewPool(ctx),
Cluster: cluster,
candidate: leadership.NewCandidate(cluster.Store, cluster.Store.Prefix+"/leader", cluster.Node, 20*time.Second),
listeners: []LeaderListener{},
leader: safe.New(false),
}
}
// LeaderListener is called when leadership has changed
type LeaderListener func(elected bool) error
// Participate tries to be a leader
func (l *Leadership) Participate(pool *safe.Pool) {
pool.GoCtx(func(ctx context.Context) {
log.Debugf("Node %s running for election", l.Cluster.Node)
defer log.Debugf("Node %s no more running for election", l.Cluster.Node)
backOff := backoff.NewExponentialBackOff()
operation := func() error {
return l.run(ctx, l.candidate)
}
notify := func(err error, time time.Duration) {
log.Errorf("Leadership election error %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), backOff, notify)
if err != nil {
log.Errorf("Cannot elect leadership %+v", err)
}
})
}
// AddListener adds a leadership listener
func (l *Leadership) AddListener(listener LeaderListener) {
l.listeners = append(l.listeners, listener)
}
// Resign resigns from being a leader
func (l *Leadership) Resign() {
l.candidate.Resign()
log.Infof("Node %s resigned", l.Cluster.Node)
}
func (l *Leadership) run(ctx context.Context, candidate *leadership.Candidate) error {
electedCh, errCh := candidate.RunForElection()
for {
select {
case elected := <-electedCh:
l.onElection(elected)
case err := <-errCh:
return err
case <-ctx.Done():
l.candidate.Resign()
return nil
}
}
}
func (l *Leadership) onElection(elected bool) {
if elected {
log.Infof("Node %s elected leader ♚", l.Cluster.Node)
l.leader.Set(true)
l.Start()
} else {
log.Infof("Node %s elected worker ♝", l.Cluster.Node)
l.leader.Set(false)
l.Stop()
}
for _, listener := range l.listeners {
err := listener(elected)
if err != nil {
log.Errorf("Error calling Leadership listener: %s", err)
}
}
}
// IsLeader returns true if current node is leader
func (l *Leadership) IsLeader() bool {
return l.leader.Get().(bool)
}

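A short sketch of wiring the Leadership type above into a routine pool; exampleElection is a hypothetical name and the cluster/pool values are assumed to exist:
func exampleElection(ctx context.Context, clusterConfig *types.Cluster, pool *safe.Pool) {
	leadership := NewLeadership(ctx, clusterConfig)
	leadership.AddListener(func(elected bool) error {
		log.Infof("leadership changed, elected: %v", elected)
		return nil
	})
	// Participate runs the election loop in the given pool; the Leadership's own
	// embedded pool is started only while this node is the leader.
	leadership.Participate(pool)
}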
16
cluster/store.go Normal file

@@ -0,0 +1,16 @@
package cluster
// Object is the struct to store
type Object interface{}
// Store is a generic interface representing a storage backend
type Store interface {
Load() (Object, error)
Get() Object
Begin() (Transaction, Object, error)
}
// Transaction allows setting a struct in the KV store
type Transaction interface {
Commit(object Object) error
}

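A small sketch of the intended contract of the interfaces above, usable with either the ACME LocalStore or the KV-backed Datastore; updateObject is a hypothetical helper:
func updateObject(s Store, mutate func(Object) Object) error {
	// Begin takes the (local or distributed) lock and returns the current object;
	// Commit persists the new value exactly once and releases the lock.
	transaction, object, err := s.Begin()
	if err != nil {
		return err
	}
	return transaction.Commit(mutate(object))
}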
cmd/configuration.go

@@ -1,38 +0,0 @@
package cmd
import (
"time"
ptypes "github.com/traefik/paerser/types"
"github.com/traefik/traefik/v3/pkg/config/static"
)
// TraefikCmdConfiguration wraps the static configuration and extra parameters.
type TraefikCmdConfiguration struct {
static.Configuration `export:"true"`
// ConfigFile is the path to the configuration file.
ConfigFile string `description:"Configuration file to use. If specified all other flags are ignored." export:"true"`
}
// NewTraefikConfiguration creates a TraefikCmdConfiguration with default values.
func NewTraefikConfiguration() *TraefikCmdConfiguration {
return &TraefikCmdConfiguration{
Configuration: static.Configuration{
Global: &static.Global{
CheckNewVersion: true,
},
EntryPoints: make(static.EntryPoints),
Providers: &static.Providers{
ProvidersThrottleDuration: ptypes.Duration(2 * time.Second),
},
ServersTransport: &static.ServersTransport{
MaxIdleConnsPerHost: 200,
},
TCPServersTransport: &static.TCPServersTransport{
DialTimeout: ptypes.Duration(30 * time.Second),
DialKeepAlive: ptypes.Duration(15 * time.Second),
},
},
ConfigFile: "",
}
}

cmd/healthcheck/healthcheck.go

@@ -1,79 +0,0 @@
package healthcheck
import (
"errors"
"fmt"
"net/http"
"os"
"time"
"github.com/traefik/paerser/cli"
"github.com/traefik/traefik/v3/pkg/config/static"
)
// NewCmd builds a new HealthCheck command.
func NewCmd(traefikConfiguration *static.Configuration, loaders []cli.ResourceLoader) *cli.Command {
return &cli.Command{
Name: "healthcheck",
Description: `Calls Traefik /ping endpoint (disabled by default) to check the health of Traefik.`,
Configuration: traefikConfiguration,
Run: runCmd(traefikConfiguration),
Resources: loaders,
}
}
func runCmd(traefikConfiguration *static.Configuration) func(_ []string) error {
return func(_ []string) error {
traefikConfiguration.SetEffectiveConfiguration()
resp, errPing := Do(*traefikConfiguration)
if resp != nil {
resp.Body.Close()
}
if errPing != nil {
fmt.Printf("Error calling healthcheck: %s\n", errPing)
os.Exit(1)
}
if resp.StatusCode != http.StatusOK {
fmt.Printf("Bad healthcheck status: %s\n", resp.Status)
os.Exit(1)
}
fmt.Printf("OK: %s\n", resp.Request.URL)
os.Exit(0)
return nil
}
}
// Do tries to perform a healthcheck.
func Do(staticConfiguration static.Configuration) (*http.Response, error) {
if staticConfiguration.Ping == nil {
return nil, errors.New("please enable `ping` to use health check")
}
ep := staticConfiguration.Ping.EntryPoint
if ep == "" {
ep = "traefik"
}
pingEntryPoint, ok := staticConfiguration.EntryPoints[ep]
if !ok {
return nil, fmt.Errorf("ping: missing %s entry point", ep)
}
client := &http.Client{Timeout: 5 * time.Second}
protocol := "http"
// TODO Handle TLS on ping etc...
// if pingEntryPoint.TLS != nil {
// protocol = "https"
// tr := &http.Transport{
// TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
// }
// client.Transport = tr
// }
path := "/"
return client.Head(protocol + "://" + pingEntryPoint.GetAddress() + path + "ping")
}


@@ -1,332 +0,0 @@
package main
import (
"bytes"
"fmt"
"go/format"
"go/importer"
"go/token"
"go/types"
"io"
"log"
"os"
"path"
"path/filepath"
"reflect"
"slices"
"sort"
"strings"
"golang.org/x/tools/imports"
)
// File is a kind of AST element that represents a file.
type File struct {
Package string
Imports []string
Elements []Element
}
// Element is a simplified version of a symbol.
type Element struct {
Name string
Value string
}
// Centrifuge is a centrifuge.
// It generates Go structures from Go structures.
type Centrifuge struct {
IncludedImports []string
ExcludedTypes []string
ExcludedFiles []string
TypeCleaner func(types.Type, string) string
PackageCleaner func(string) string
rootPkg string
fileSet *token.FileSet
pkg *types.Package
}
// NewCentrifuge creates a new Centrifuge.
func NewCentrifuge(rootPkg string) (*Centrifuge, error) {
fileSet := token.NewFileSet()
pkg, err := importer.ForCompiler(fileSet, "source", nil).Import(rootPkg)
if err != nil {
return nil, err
}
return &Centrifuge{
fileSet: fileSet,
pkg: pkg,
rootPkg: rootPkg,
TypeCleaner: func(typ types.Type, _ string) string {
return typ.String()
},
PackageCleaner: func(s string) string {
return s
},
}, nil
}
// Run runs the code extraction and the code generation.
func (c Centrifuge) Run(dest string, pkgName string) error {
files := c.run(c.pkg.Scope(), c.rootPkg, pkgName)
err := fileWriter{baseDir: dest}.Write(files)
if err != nil {
return err
}
for _, p := range c.pkg.Imports() {
if slices.Contains(c.IncludedImports, p.Path()) {
fls := c.run(p.Scope(), p.Path(), p.Name())
err = fileWriter{baseDir: filepath.Join(dest, p.Name())}.Write(fls)
if err != nil {
return err
}
}
}
return err
}
func (c Centrifuge) run(sc *types.Scope, rootPkg string, pkgName string) map[string]*File {
files := map[string]*File{}
for _, name := range sc.Names() {
if slices.Contains(c.ExcludedTypes, name) {
continue
}
o := sc.Lookup(name)
if !o.Exported() {
continue
}
filename := filepath.Base(c.fileSet.File(o.Pos()).Name())
if slices.Contains(c.ExcludedFiles, path.Join(rootPkg, filename)) {
continue
}
fl, ok := files[filename]
if !ok {
files[filename] = &File{Package: pkgName}
fl = files[filename]
}
elt := Element{
Name: name,
}
switch ob := o.(type) {
case *types.TypeName:
switch obj := ob.Type().(*types.Named).Underlying().(type) {
case *types.Struct:
elt.Value = c.writeStruct(name, obj, rootPkg, fl)
case *types.Map:
elt.Value = fmt.Sprintf("type %s map[%s]%s\n", name, obj.Key().String(), c.TypeCleaner(obj.Elem(), rootPkg))
case *types.Slice:
elt.Value = fmt.Sprintf("type %s []%v\n", name, c.TypeCleaner(obj.Elem(), rootPkg))
case *types.Basic:
elt.Value = fmt.Sprintf("type %s %v\n", name, obj.Name())
default:
log.Printf("OTHER TYPE::: %s %T\n", name, o.Type().(*types.Named).Underlying())
continue
}
default:
log.Printf("OTHER::: %s %T\n", name, o)
continue
}
if len(elt.Value) > 0 {
fl.Elements = append(fl.Elements, elt)
}
}
return files
}
func (c Centrifuge) writeStruct(name string, obj *types.Struct, rootPkg string, elt *File) string {
b := strings.Builder{}
b.WriteString(fmt.Sprintf("type %s struct {\n", name))
for i := range obj.NumFields() {
field := obj.Field(i)
if !field.Exported() {
continue
}
fPkg := c.PackageCleaner(extractPackage(field.Type()))
if fPkg != "" && fPkg != rootPkg {
elt.Imports = append(elt.Imports, fPkg)
}
fType := c.TypeCleaner(field.Type(), rootPkg)
if field.Embedded() {
b.WriteString(fmt.Sprintf("\t%s\n", fType))
continue
}
values, ok := lookupTagValue(obj.Tag(i), "json")
if len(values) > 0 && values[0] == "-" {
continue
}
b.WriteString(fmt.Sprintf("\t%s %s", field.Name(), fType))
if ok {
b.WriteString(fmt.Sprintf(" `json:\"%s\"`", strings.Join(values, ",")))
}
b.WriteString("\n")
}
b.WriteString("}\n")
return b.String()
}
func lookupTagValue(raw, key string) ([]string, bool) {
value, ok := reflect.StructTag(raw).Lookup(key)
if !ok {
return nil, ok
}
values := strings.Split(value, ",")
if len(values) < 1 {
return nil, true
}
return values, true
}
func extractPackage(t types.Type) string {
switch tu := t.(type) {
case *types.Named:
return tu.Obj().Pkg().Path()
case *types.Slice:
if v, ok := tu.Elem().(*types.Named); ok {
return v.Obj().Pkg().Path()
}
return ""
case *types.Map:
if v, ok := tu.Elem().(*types.Named); ok {
return v.Obj().Pkg().Path()
}
return ""
case *types.Pointer:
return extractPackage(tu.Elem())
default:
return ""
}
}
type fileWriter struct {
baseDir string
}
func (f fileWriter) Write(files map[string]*File) error {
err := os.MkdirAll(f.baseDir, 0o755)
if err != nil {
return err
}
for name, file := range files {
err = f.writeFile(name, file)
if err != nil {
return err
}
}
return nil
}
func (f fileWriter) writeFile(name string, desc *File) error {
if len(desc.Elements) == 0 {
return nil
}
filename := filepath.Join(f.baseDir, name)
file, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to create file: %w", err)
}
defer func() { _ = file.Close() }()
b := bytes.NewBufferString("package ")
b.WriteString(desc.Package)
b.WriteString("\n")
b.WriteString("// Code generated by centrifuge. DO NOT EDIT.\n")
b.WriteString("\n")
f.writeImports(b, desc.Imports)
b.WriteString("\n")
for _, elt := range desc.Elements {
b.WriteString(elt.Value)
b.WriteString("\n")
}
// gofmt
source, err := format.Source(b.Bytes())
if err != nil {
log.Println(b.String())
return fmt.Errorf("failed to format sources: %w", err)
}
// goimports
process, err := imports.Process(filename, source, nil)
if err != nil {
log.Println(string(source))
return fmt.Errorf("failed to format imports: %w", err)
}
_, err = file.Write(process)
if err != nil {
return err
}
return nil
}
func (f fileWriter) writeImports(b io.StringWriter, imports []string) {
if len(imports) == 0 {
return
}
uniq := map[string]struct{}{}
sort.Strings(imports)
_, _ = b.WriteString("import (\n")
for _, s := range imports {
if _, exist := uniq[s]; exist {
continue
}
uniq[s] = struct{}{}
_, _ = b.WriteString(fmt.Sprintf(` "%s"`+"\n", s))
}
_, _ = b.WriteString(")\n")
}


@@ -1,124 +0,0 @@
package main
import (
"fmt"
"go/build"
"go/types"
"log"
"os"
"path"
"path/filepath"
"strings"
)
const rootPkg = "github.com/traefik/traefik/v3/pkg/config/dynamic"
const (
destModuleName = "github.com/traefik/genconf"
destPkg = "dynamic"
)
const marsh = `package %s
import "encoding/json"
type JSONPayload struct {
*Configuration
}
func (c JSONPayload) MarshalJSON() ([]byte, error) {
if c.Configuration == nil {
return nil, nil
}
return json.Marshal(c.Configuration)
}
`
// main generates Go structures from Go structures.
// It allows creating an external module (destModuleName) used by the plugin's providers
// that contains Go structs of the dynamic configuration and nothing else.
// These Go structs do not have any non-exported fields and do not rely on any external dependencies.
func main() {
dest := filepath.Join(path.Join(build.Default.GOPATH, "src"), destModuleName, destPkg)
log.Println("Output:", dest)
err := run(dest)
if err != nil {
log.Fatal(err)
}
}
func run(dest string) error {
centrifuge, err := NewCentrifuge(rootPkg)
if err != nil {
return err
}
centrifuge.IncludedImports = []string{
"github.com/traefik/traefik/v3/pkg/tls",
"github.com/traefik/traefik/v3/pkg/types",
}
centrifuge.ExcludedTypes = []string{
// tls
"CertificateStore", "Manager",
// dynamic
"Message", "Configurations",
// types
"HTTPCodeRanges", "HostResolverConfig",
}
centrifuge.ExcludedFiles = []string{
"github.com/traefik/traefik/v3/pkg/types/logs.go",
"github.com/traefik/traefik/v3/pkg/types/metrics.go",
}
centrifuge.TypeCleaner = cleanType
centrifuge.PackageCleaner = cleanPackage
err = centrifuge.Run(dest, destPkg)
if err != nil {
return err
}
return os.WriteFile(filepath.Join(dest, "marshaler.go"), []byte(fmt.Sprintf(marsh, destPkg)), 0o666)
}
func cleanType(typ types.Type, base string) string {
if typ.String() == "github.com/traefik/traefik/v3/pkg/types.FileOrContent" {
return "string"
}
if typ.String() == "[]github.com/traefik/traefik/v3/pkg/types.FileOrContent" {
return "[]string"
}
if typ.String() == "github.com/traefik/paerser/types.Duration" {
return "string"
}
if strings.Contains(typ.String(), base) {
return strings.ReplaceAll(typ.String(), base+".", "")
}
if strings.Contains(typ.String(), "github.com/traefik/traefik/v3/pkg/") {
return strings.ReplaceAll(typ.String(), "github.com/traefik/traefik/v3/pkg/", "")
}
return typ.String()
}
func cleanPackage(src string) string {
switch src {
case "github.com/traefik/paerser/types":
return ""
case "github.com/traefik/traefik/v3/pkg/tls":
return path.Join(destModuleName, destPkg, "tls")
case "github.com/traefik/traefik/v3/pkg/types":
return path.Join(destModuleName, destPkg, "types")
default:
return src
}
}

cmd/traefik/anonymize/anonymize.go Normal file

@@ -0,0 +1,136 @@
package anonymize
import (
"encoding/json"
"fmt"
"reflect"
"regexp"
"github.com/mitchellh/copystructure"
"github.com/mvdan/xurls"
)
const (
maskShort = "xxxx"
maskLarge = maskShort + maskShort + maskShort + maskShort + maskShort + maskShort + maskShort + maskShort
)
// Do anonymizes the given configuration and returns it as a JSON string.
func Do(baseConfig interface{}, indent bool) (string, error) {
anomConfig, err := copystructure.Copy(baseConfig)
if err != nil {
return "", err
}
val := reflect.ValueOf(anomConfig)
err = doOnStruct(val)
if err != nil {
return "", err
}
configJSON, err := marshal(anomConfig, indent)
if err != nil {
return "", err
}
return doOnJSON(string(configJSON)), nil
}
func doOnJSON(input string) string {
mailExp := regexp.MustCompile(`\w[-._\w]*\w@\w[-._\w]*\w\.\w{2,3}"`)
return xurls.Relaxed.ReplaceAllString(mailExp.ReplaceAllString(input, maskLarge+"\""), maskLarge)
}
func doOnStruct(field reflect.Value) error {
switch field.Kind() {
case reflect.Ptr:
if !field.IsNil() {
if err := doOnStruct(field.Elem()); err != nil {
return err
}
}
case reflect.Struct:
for i := 0; i < field.NumField(); i++ {
fld := field.Field(i)
stField := field.Type().Field(i)
if !isExported(stField) {
continue
}
if stField.Tag.Get("export") == "true" {
if err := doOnStruct(fld); err != nil {
return err
}
} else {
if err := reset(fld, stField.Name); err != nil {
return err
}
}
}
case reflect.Map:
for _, key := range field.MapKeys() {
if err := doOnStruct(field.MapIndex(key)); err != nil {
return err
}
}
case reflect.Slice:
for j := 0; j < field.Len(); j++ {
if err := doOnStruct(field.Index(j)); err != nil {
return err
}
}
}
return nil
}
func reset(field reflect.Value, name string) error {
if !field.CanSet() {
return fmt.Errorf("cannot reset field %s", name)
}
switch field.Kind() {
case reflect.Ptr:
if !field.IsNil() {
field.Set(reflect.Zero(field.Type()))
}
case reflect.Struct:
if field.IsValid() {
field.Set(reflect.Zero(field.Type()))
}
case reflect.String:
if field.String() != "" {
field.Set(reflect.ValueOf(maskShort))
}
case reflect.Map:
if field.Len() > 0 {
field.Set(reflect.MakeMap(field.Type()))
}
case reflect.Slice:
if field.Len() > 0 {
field.Set(reflect.MakeSlice(field.Type(), 0, 0))
}
case reflect.Interface:
if !field.IsNil() {
return reset(field.Elem(), "")
}
default:
// Primitive type
field.Set(reflect.Zero(field.Type()))
}
return nil
}
// isExported returns true if a struct field is exported, else false
func isExported(f reflect.StructField) bool {
if f.PkgPath != "" && !f.Anonymous {
return false
}
return true
}
func marshal(anomConfig interface{}, indent bool) ([]byte, error) {
if indent {
return json.MarshalIndent(anomConfig, "", " ")
}
return json.Marshal(anomConfig)
}

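A minimal sketch of the export-tag semantics implemented above; exampleConfig, its fields and exampleAnonymize are hypothetical:
type exampleConfig struct {
	Email    string                 // no export tag: reset to "xxxx"
	Endpoint string `export:"true"` // kept, but emails/URLs in the JSON output are still masked
}

func exampleAnonymize() (string, error) {
	cfg := &exampleConfig{
		Email:    "admin@example.com",
		Endpoint: "https://traefik.example.com",
	}
	return Do(cfg, true) // indented JSON with sensitive values replaced
}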

@@ -0,0 +1,670 @@
package anonymize
import (
"crypto/tls"
"testing"
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/middlewares"
"github.com/containous/traefik/provider"
"github.com/containous/traefik/provider/boltdb"
"github.com/containous/traefik/provider/consul"
"github.com/containous/traefik/provider/docker"
"github.com/containous/traefik/provider/dynamodb"
"github.com/containous/traefik/provider/ecs"
"github.com/containous/traefik/provider/etcd"
"github.com/containous/traefik/provider/eureka"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/provider/kv"
"github.com/containous/traefik/provider/marathon"
"github.com/containous/traefik/provider/mesos"
"github.com/containous/traefik/provider/rancher"
"github.com/containous/traefik/provider/web"
"github.com/containous/traefik/provider/zk"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
thoas_stats "github.com/thoas/stats"
)
func TestDo_globalConfiguration(t *testing.T) {
config := &configuration.GlobalConfiguration{}
config.GraceTimeOut = flaeg.Duration(666 * time.Second)
config.Debug = true
config.CheckNewVersion = true
config.AccessLogsFile = "AccessLogsFile"
config.AccessLog = &types.AccessLog{
FilePath: "AccessLog FilePath",
Format: "AccessLog Format",
}
config.TraefikLogsFile = "TraefikLogsFile"
config.LogLevel = "LogLevel"
config.EntryPoints = configuration.EntryPoints{
"foo": {
Network: "foo Network",
Address: "foo Address",
TLS: &configuration.TLS{
MinVersion: "foo MinVersion",
CipherSuites: []string{"foo CipherSuites 1", "foo CipherSuites 2", "foo CipherSuites 3"},
Certificates: configuration.Certificates{
{CertFile: "CertFile 1", KeyFile: "KeyFile 1"},
{CertFile: "CertFile 2", KeyFile: "KeyFile 2"},
},
ClientCAFiles: []string{"foo ClientCAFiles 1", "foo ClientCAFiles 2", "foo ClientCAFiles 3"},
},
Redirect: &configuration.Redirect{
Replacement: "foo Replacement",
Regex: "foo Regex",
EntryPoint: "foo EntryPoint",
},
Auth: &types.Auth{
Basic: &types.Basic{
UsersFile: "foo Basic UsersFile",
Users: types.Users{"foo Basic Users 1", "foo Basic Users 2", "foo Basic Users 3"},
},
Digest: &types.Digest{
UsersFile: "foo Digest UsersFile",
Users: types.Users{"foo Digest Users 1", "foo Digest Users 2", "foo Digest Users 3"},
},
Forward: &types.Forward{
Address: "foo Address",
TLS: &types.ClientTLS{
CA: "foo CA",
Cert: "foo Cert",
Key: "foo Key",
InsecureSkipVerify: true,
},
TrustForwardHeader: true,
},
},
WhitelistSourceRange: []string{"foo WhitelistSourceRange 1", "foo WhitelistSourceRange 2", "foo WhitelistSourceRange 3"},
Compress: true,
ProxyProtocol: &configuration.ProxyProtocol{
TrustedIPs: []string{"127.0.0.1/32", "192.168.0.1"},
},
},
"fii": {
Network: "fii Network",
Address: "fii Address",
TLS: &configuration.TLS{
MinVersion: "fii MinVersion",
CipherSuites: []string{"fii CipherSuites 1", "fii CipherSuites 2", "fii CipherSuites 3"},
Certificates: configuration.Certificates{
{CertFile: "CertFile 1", KeyFile: "KeyFile 1"},
{CertFile: "CertFile 2", KeyFile: "KeyFile 2"},
},
ClientCAFiles: []string{"fii ClientCAFiles 1", "fii ClientCAFiles 2", "fii ClientCAFiles 3"},
},
Redirect: &configuration.Redirect{
Replacement: "fii Replacement",
Regex: "fii Regex",
EntryPoint: "fii EntryPoint",
},
Auth: &types.Auth{
Basic: &types.Basic{
UsersFile: "fii Basic UsersFile",
Users: types.Users{"fii Basic Users 1", "fii Basic Users 2", "fii Basic Users 3"},
},
Digest: &types.Digest{
UsersFile: "fii Digest UsersFile",
Users: types.Users{"fii Digest Users 1", "fii Digest Users 2", "fii Digest Users 3"},
},
Forward: &types.Forward{
Address: "fii Address",
TLS: &types.ClientTLS{
CA: "fii CA",
Cert: "fii Cert",
Key: "fii Key",
InsecureSkipVerify: true,
},
TrustForwardHeader: true,
},
},
WhitelistSourceRange: []string{"fii WhitelistSourceRange 1", "fii WhitelistSourceRange 2", "fii WhitelistSourceRange 3"},
Compress: true,
ProxyProtocol: &configuration.ProxyProtocol{
TrustedIPs: []string{"127.0.0.1/32", "192.168.0.1"},
},
},
}
config.Cluster = &types.Cluster{
Node: "Cluster Node",
Store: &types.Store{
Prefix: "Cluster Store Prefix",
// ...
},
}
config.Constraints = types.Constraints{
{
Key: "Constraints Key 1",
Regex: "Constraints Regex 2",
MustMatch: true,
},
{
Key: "Constraints Key 1",
Regex: "Constraints Regex 2",
MustMatch: true,
},
}
config.ACME = &acme.ACME{
Email: "acme Email",
Domains: []acme.Domain{
{
Main: "Domains Main",
SANs: []string{"Domains acme SANs 1", "Domains acme SANs 2", "Domains acme SANs 3"},
},
},
Storage: "Storage",
StorageFile: "StorageFile",
OnDemand: true,
OnHostRule: true,
CAServer: "CAServer",
EntryPoint: "EntryPoint",
DNSProvider: "DNSProvider",
DelayDontCheckDNS: 666,
ACMELogging: true,
TLSConfig: &tls.Config{
InsecureSkipVerify: true,
// ...
},
}
config.DefaultEntryPoints = configuration.DefaultEntryPoints{"DefaultEntryPoints 1", "DefaultEntryPoints 2", "DefaultEntryPoints 3"}
config.ProvidersThrottleDuration = flaeg.Duration(666 * time.Second)
config.MaxIdleConnsPerHost = 666
config.IdleTimeout = flaeg.Duration(666 * time.Second)
config.InsecureSkipVerify = true
config.RootCAs = configuration.RootCAs{"RootCAs 1", "RootCAs 2", "RootCAs 3"}
config.Retry = &configuration.Retry{
Attempts: 666,
}
config.HealthCheck = &configuration.HealthCheckConfig{
Interval: flaeg.Duration(666 * time.Second),
}
config.RespondingTimeouts = &configuration.RespondingTimeouts{
ReadTimeout: flaeg.Duration(666 * time.Second),
WriteTimeout: flaeg.Duration(666 * time.Second),
IdleTimeout: flaeg.Duration(666 * time.Second),
}
config.ForwardingTimeouts = &configuration.ForwardingTimeouts{
DialTimeout: flaeg.Duration(666 * time.Second),
ResponseHeaderTimeout: flaeg.Duration(666 * time.Second),
}
config.Docker = &docker.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "docker Filename",
Constraints: types.Constraints{
{
Key: "docker Constraints Key 1",
Regex: "docker Constraints Regex 2",
MustMatch: true,
},
{
Key: "docker Constraints Key 1",
Regex: "docker Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "docker Endpoint",
Domain: "docker Domain",
TLS: &types.ClientTLS{
CA: "docker CA",
Cert: "docker Cert",
Key: "docker Key",
InsecureSkipVerify: true,
},
ExposedByDefault: true,
UseBindPortIP: true,
SwarmMode: true,
}
config.File = &file.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "file Filename",
Constraints: types.Constraints{
{
Key: "file Constraints Key 1",
Regex: "file Constraints Regex 2",
MustMatch: true,
},
{
Key: "file Constraints Key 1",
Regex: "file Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Directory: "file Directory",
}
config.Web = &web.Provider{
Address: "web Address",
CertFile: "web CertFile",
KeyFile: "web KeyFile",
ReadOnly: true,
Statistics: &types.Statistics{
RecentErrors: 666,
},
Metrics: &types.Metrics{
Prometheus: &types.Prometheus{
Buckets: types.Buckets{6.5, 6.6, 6.7},
},
Datadog: &types.Datadog{
Address: "Datadog Address",
PushInterval: "Datadog PushInterval",
},
StatsD: &types.Statsd{
Address: "StatsD Address",
PushInterval: "StatsD PushInterval",
},
},
Path: "web Path",
Auth: &types.Auth{
Basic: &types.Basic{
UsersFile: "web Basic UsersFile",
Users: types.Users{"web Basic Users 1", "web Basic Users 2", "web Basic Users 3"},
},
Digest: &types.Digest{
UsersFile: "web Digest UsersFile",
Users: types.Users{"web Digest Users 1", "web Digest Users 2", "web Digest Users 3"},
},
Forward: &types.Forward{
Address: "web Address",
TLS: &types.ClientTLS{
CA: "web CA",
Cert: "web Cert",
Key: "web Key",
InsecureSkipVerify: true,
},
TrustForwardHeader: true,
},
},
Debug: true,
CurrentConfigurations: &safe.Safe{},
Stats: &thoas_stats.Stats{
Uptime: time.Now(),
Pid: 666,
ResponseCounts: map[string]int{"foo": 1, "fii": 2, "fuu": 3},
TotalResponseCounts: map[string]int{"foo": 1, "fii": 2, "fuu": 3},
TotalResponseTime: time.Now(),
},
StatsRecorder: &middlewares.StatsRecorder{},
}
config.Marathon = &marathon.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "marathon Filename",
Constraints: types.Constraints{
{
Key: "marathon Constraints Key 1",
Regex: "marathon Constraints Regex 2",
MustMatch: true,
},
{
Key: "marathon Constraints Key 1",
Regex: "marathon Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "",
Domain: "",
ExposedByDefault: true,
GroupsAsSubDomains: true,
DCOSToken: "",
MarathonLBCompatibility: true,
TLS: &types.ClientTLS{
CA: "marathon CA",
Cert: "marathon Cert",
Key: "marathon Key",
InsecureSkipVerify: true,
},
DialerTimeout: flaeg.Duration(666 * time.Second),
KeepAlive: flaeg.Duration(666 * time.Second),
ForceTaskHostname: true,
Basic: &marathon.Basic{
HTTPBasicAuthUser: "marathon HTTPBasicAuthUser",
HTTPBasicPassword: "marathon HTTPBasicPassword",
},
RespectReadinessChecks: true,
}
config.ConsulCatalog = &consul.CatalogProvider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "ConsulCatalog Filename",
Constraints: types.Constraints{
{
Key: "ConsulCatalog Constraints Key 1",
Regex: "ConsulCatalog Constraints Regex 2",
MustMatch: true,
},
{
Key: "ConsulCatalog Constraints Key 1",
Regex: "ConsulCatalog Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "ConsulCatalog Endpoint",
Domain: "ConsulCatalog Domain",
ExposedByDefault: true,
Prefix: "ConsulCatalog Prefix",
FrontEndRule: "ConsulCatalog FrontEndRule",
}
config.Kubernetes = &kubernetes.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "k8s Filename",
Constraints: types.Constraints{
{
Key: "k8s Constraints Key 1",
Regex: "k8s Constraints Regex 2",
MustMatch: true,
},
{
Key: "k8s Constraints Key 1",
Regex: "k8s Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "k8s Endpoint",
Token: "k8s Token",
CertAuthFilePath: "k8s CertAuthFilePath",
DisablePassHostHeaders: true,
Namespaces: kubernetes.Namespaces{"k8s Namespaces 1", "k8s Namespaces 2", "k8s Namespaces 3"},
LabelSelector: "k8s LabelSelector",
}
config.Mesos = &mesos.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "mesos Filename",
Constraints: types.Constraints{
{
Key: "mesos Constraints Key 1",
Regex: "mesos Constraints Regex 2",
MustMatch: true,
},
{
Key: "mesos Constraints Key 1",
Regex: "mesos Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "mesos Endpoint",
Domain: "mesos Domain",
ExposedByDefault: true,
GroupsAsSubDomains: true,
ZkDetectionTimeout: 666,
RefreshSeconds: 666,
IPSources: "mesos IPSources",
StateTimeoutSecond: 666,
Masters: []string{"mesos Masters 1", "mesos Masters 2", "mesos Masters 3"},
}
config.Eureka = &eureka.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "eureka Filename",
Constraints: types.Constraints{
{
Key: "eureka Constraints Key 1",
Regex: "eureka Constraints Regex 2",
MustMatch: true,
},
{
Key: "eureka Constraints Key 1",
Regex: "eureka Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "eureka Endpoint",
Delay: "eureka Delay",
}
config.ECS = &ecs.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "ecs Filename",
Constraints: types.Constraints{
{
Key: "ecs Constraints Key 1",
Regex: "ecs Constraints Regex 2",
MustMatch: true,
},
{
Key: "ecs Constraints Key 1",
Regex: "ecs Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Domain: "ecs Domain",
ExposedByDefault: true,
RefreshSeconds: 666,
Clusters: ecs.Clusters{"ecs Clusters 1", "ecs Clusters 2", "ecs Clusters 3"},
Cluster: "ecs Cluster",
AutoDiscoverClusters: true,
Region: "ecs Region",
AccessKeyID: "ecs AccessKeyID",
SecretAccessKey: "ecs SecretAccessKey",
}
config.Rancher = &rancher.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "rancher Filename",
Constraints: types.Constraints{
{
Key: "rancher Constraints Key 1",
Regex: "rancher Constraints Regex 2",
MustMatch: true,
},
{
Key: "rancher Constraints Key 1",
Regex: "rancher Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
APIConfiguration: rancher.APIConfiguration{
Endpoint: "rancher Endpoint",
AccessKey: "rancher AccessKey",
SecretKey: "rancher SecretKey",
},
API: &rancher.APIConfiguration{
Endpoint: "rancher Endpoint",
AccessKey: "rancher AccessKey",
SecretKey: "rancher SecretKey",
},
Metadata: &rancher.MetadataConfiguration{
IntervalPoll: true,
Prefix: "rancher Metadata Prefix",
},
Domain: "rancher Domain",
RefreshSeconds: 666,
ExposedByDefault: true,
EnableServiceHealthFilter: true,
}
config.DynamoDB = &dynamodb.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "dynamodb Filename",
Constraints: types.Constraints{
{
Key: "dynamodb Constraints Key 1",
Regex: "dynamodb Constraints Regex 2",
MustMatch: true,
},
{
Key: "dynamodb Constraints Key 1",
Regex: "dynamodb Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
AccessKeyID: "dynamodb AccessKeyID",
RefreshSeconds: 666,
Region: "dynamodb Region",
SecretAccessKey: "dynamodb SecretAccessKey",
TableName: "dynamodb TableName",
Endpoint: "dynamodb Endpoint",
}
config.Etcd = &etcd.Provider{
Provider: kv.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "etcd Filename",
Constraints: types.Constraints{
{
Key: "etcd Constraints Key 1",
Regex: "etcd Constraints Regex 2",
MustMatch: true,
},
{
Key: "etcd Constraints Key 1",
Regex: "etcd Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "etcd Endpoint",
Prefix: "etcd Prefix",
TLS: &types.ClientTLS{
CA: "etcd CA",
Cert: "etcd Cert",
Key: "etcd Key",
InsecureSkipVerify: true,
},
Username: "etcd Username",
Password: "etcd Password",
},
}
config.Zookeeper = &zk.Provider{
Provider: kv.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "zk Filename",
Constraints: types.Constraints{
{
Key: "zk Constraints Key 1",
Regex: "zk Constraints Regex 2",
MustMatch: true,
},
{
Key: "zk Constraints Key 1",
Regex: "zk Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "zk Endpoint",
Prefix: "zk Prefix",
TLS: &types.ClientTLS{
CA: "zk CA",
Cert: "zk Cert",
Key: "zk Key",
InsecureSkipVerify: true,
},
Username: "zk Username",
Password: "zk Password",
},
}
config.Boltdb = &boltdb.Provider{
Provider: kv.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "boltdb Filename",
Constraints: types.Constraints{
{
Key: "boltdb Constraints Key 1",
Regex: "boltdb Constraints Regex 2",
MustMatch: true,
},
{
Key: "boltdb Constraints Key 1",
Regex: "boltdb Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "boltdb Endpoint",
Prefix: "boltdb Prefix",
TLS: &types.ClientTLS{
CA: "boltdb CA",
Cert: "boltdb Cert",
Key: "boltdb Key",
InsecureSkipVerify: true,
},
Username: "boltdb Username",
Password: "boltdb Password",
},
}
config.Consul = &consul.Provider{
Provider: kv.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "consul Filename",
Constraints: types.Constraints{
{
Key: "consul Constraints Key 1",
Regex: "consul Constraints Regex 2",
MustMatch: true,
},
{
Key: "consul Constraints Key 1",
Regex: "consul Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "consul Endpoint",
Prefix: "consul Prefix",
TLS: &types.ClientTLS{
CA: "consul CA",
Cert: "consul Cert",
Key: "consul Key",
InsecureSkipVerify: true,
},
Username: "consul Username",
Password: "consul Password",
},
}
cleanJSON, err := Do(config, true)
if err != nil {
t.Fatal(err, cleanJSON)
}
}


@@ -0,0 +1,239 @@
package anonymize
import (
"testing"
"github.com/stretchr/testify/assert"
)
func Test_doOnJSON(t *testing.T) {
baseConfiguration := `
{
"GraceTimeOut": 10000000000,
"Debug": false,
"CheckNewVersion": true,
"AccessLogsFile": "",
"TraefikLogsFile": "",
"LogLevel": "ERROR",
"EntryPoints": {
"http": {
"Network": "",
"Address": ":80",
"TLS": null,
"Redirect": {
"EntryPoint": "https",
"Regex": "",
"Replacement": ""
},
"Auth": null,
"Compress": false
},
"https": {
"Network": "",
"Address": ":443",
"TLS": {
"MinVersion": "",
"CipherSuites": null,
"Certificates": null,
"ClientCAFiles": null
},
"Redirect": null,
"Auth": null,
"Compress": false
}
},
"Cluster": null,
"Constraints": [],
"ACME": {
"Email": "foo@bar.com",
"Domains": [
{
"Main": "foo@bar.com",
"SANs": null
},
{
"Main": "foo@bar.com",
"SANs": null
}
],
"Storage": "",
"StorageFile": "/acme/acme.json",
"OnDemand": true,
"OnHostRule": true,
"CAServer": "",
"EntryPoint": "https",
"DNSProvider": "",
"DelayDontCheckDNS": 0,
"ACMELogging": false,
"TLSConfig": null
},
"DefaultEntryPoints": [
"https",
"http"
],
"ProvidersThrottleDuration": 2000000000,
"MaxIdleConnsPerHost": 200,
"IdleTimeout": 180000000000,
"InsecureSkipVerify": false,
"Retry": null,
"HealthCheck": {
"Interval": 30000000000
},
"Docker": null,
"File": null,
"Web": null,
"Marathon": null,
"Consul": null,
"ConsulCatalog": null,
"Etcd": null,
"Zookeeper": null,
"Boltdb": null,
"Kubernetes": null,
"Mesos": null,
"Eureka": null,
"ECS": null,
"Rancher": null,
"DynamoDB": null,
"ConfigFile": "/etc/traefik/traefik.toml"
}
`
expectedConfiguration := `
{
"GraceTimeOut": 10000000000,
"Debug": false,
"CheckNewVersion": true,
"AccessLogsFile": "",
"TraefikLogsFile": "",
"LogLevel": "ERROR",
"EntryPoints": {
"http": {
"Network": "",
"Address": ":80",
"TLS": null,
"Redirect": {
"EntryPoint": "https",
"Regex": "",
"Replacement": ""
},
"Auth": null,
"Compress": false
},
"https": {
"Network": "",
"Address": ":443",
"TLS": {
"MinVersion": "",
"CipherSuites": null,
"Certificates": null,
"ClientCAFiles": null
},
"Redirect": null,
"Auth": null,
"Compress": false
}
},
"Cluster": null,
"Constraints": [],
"ACME": {
"Email": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"Domains": [
{
"Main": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"SANs": null
},
{
"Main": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"SANs": null
}
],
"Storage": "",
"StorageFile": "/acme/acme.json",
"OnDemand": true,
"OnHostRule": true,
"CAServer": "",
"EntryPoint": "https",
"DNSProvider": "",
"DelayDontCheckDNS": 0,
"ACMELogging": false,
"TLSConfig": null
},
"DefaultEntryPoints": [
"https",
"http"
],
"ProvidersThrottleDuration": 2000000000,
"MaxIdleConnsPerHost": 200,
"IdleTimeout": 180000000000,
"InsecureSkipVerify": false,
"Retry": null,
"HealthCheck": {
"Interval": 30000000000
},
"Docker": null,
"File": null,
"Web": null,
"Marathon": null,
"Consul": null,
"ConsulCatalog": null,
"Etcd": null,
"Zookeeper": null,
"Boltdb": null,
"Kubernetes": null,
"Mesos": null,
"Eureka": null,
"ECS": null,
"Rancher": null,
"DynamoDB": null,
"ConfigFile": "/etc/traefik/traefik.toml"
}
`
anomConfiguration := doOnJSON(baseConfiguration)
if anomConfiguration != expectedConfiguration {
t.Errorf("Got %s, want %s.", anomConfiguration, expectedConfiguration)
}
}
func Test_doOnJSON_simple(t *testing.T) {
testCases := []struct {
name string
input string
expectedOutput string
}{
{
name: "email",
input: `{
"email1": "goo@example.com",
"email2": "foo.bargoo@example.com",
"email3": "foo.bargoo@example.com.us"
}`,
expectedOutput: `{
"email1": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"email2": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"email3": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}`,
},
{
name: "url",
input: `{
"URL": "foo domain.com foo",
"URL": "foo sub.domain.com foo",
"URL": "foo sub.sub.domain.com foo",
"URL": "foo sub.sub.sub.domain.com.us foo"
}`,
expectedOutput: `{
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo"
}`,
},
}
for _, test := range testCases {
t.Run(test.name, func(t *testing.T) {
output := doOnJSON(test.input)
assert.Equal(t, test.expectedOutput, output)
})
}
}


@@ -0,0 +1,176 @@
package anonymize
import (
"reflect"
"testing"
"github.com/stretchr/testify/assert"
)
type Courgette struct {
Ji string
Ho string
}
type Tomate struct {
Ji string
Ho string
}
type Carotte struct {
Name string
Value int
Courgette Courgette
ECourgette Courgette `export:"true"`
Pourgette *Courgette
EPourgette *Courgette `export:"true"`
Aubergine map[string]string
EAubergine map[string]string `export:"true"`
SAubergine map[string]Tomate
ESAubergine map[string]Tomate `export:"true"`
PSAubergine map[string]*Tomate
EPAubergine map[string]*Tomate `export:"true"`
}
func Test_doOnStruct(t *testing.T) {
testCase := []struct {
name string
base *Carotte
expected *Carotte
hasError bool
}{
{
name: "primitive",
base: &Carotte{
Name: "koko",
Value: 666,
},
expected: &Carotte{
Name: "xxxx",
},
},
{
name: "struct",
base: &Carotte{
Name: "koko",
Courgette: Courgette{
Ji: "huu",
},
},
expected: &Carotte{
Name: "xxxx",
},
},
{
name: "pointer",
base: &Carotte{
Name: "koko",
Pourgette: &Courgette{
Ji: "hoo",
},
},
expected: &Carotte{
Name: "xxxx",
Pourgette: nil,
},
},
{
name: "export struct",
base: &Carotte{
Name: "koko",
ECourgette: Courgette{
Ji: "huu",
},
},
expected: &Carotte{
Name: "xxxx",
ECourgette: Courgette{
Ji: "xxxx",
},
},
},
{
name: "export pointer struct",
base: &Carotte{
Name: "koko",
ECourgette: Courgette{
Ji: "huu",
},
},
expected: &Carotte{
Name: "xxxx",
ECourgette: Courgette{
Ji: "xxxx",
},
},
},
{
name: "export map string/string",
base: &Carotte{
Name: "koko",
EAubergine: map[string]string{
"foo": "bar",
},
},
expected: &Carotte{
Name: "xxxx",
EAubergine: map[string]string{
"foo": "bar",
},
},
},
{
name: "export map string/pointer",
base: &Carotte{
Name: "koko",
EPAubergine: map[string]*Tomate{
"foo": {
Ji: "fdskljf",
},
},
},
expected: &Carotte{
Name: "xxxx",
EPAubergine: map[string]*Tomate{
"foo": {
Ji: "xxxx",
},
},
},
},
{
name: "export map string/struct (UNSAFE)",
base: &Carotte{
Name: "koko",
ESAubergine: map[string]Tomate{
"foo": {
Ji: "JiJiJi",
},
},
},
expected: &Carotte{
Name: "xxxx",
ESAubergine: map[string]Tomate{
"foo": {
Ji: "JiJiJi",
},
},
},
hasError: true,
},
}
for _, test := range testCase {
t.Run(test.name, func(t *testing.T) {
val := reflect.ValueOf(test.base).Elem()
err := doOnStruct(val)
if !test.hasError && err != nil {
t.Fatal(err)
}
if test.hasError && err == nil {
t.Fatal("Got no error but want an error.")
}
assert.EqualValues(t, test.expected, test.base)
})
}
}

169
cmd/traefik/bug.go Normal file

@@ -0,0 +1,169 @@
package main
import (
"bytes"
"fmt"
"net/url"
"os/exec"
"runtime"
"text/template"
"github.com/containous/flaeg"
"github.com/containous/traefik/cmd/traefik/anonymize"
)
const (
bugTracker = "https://github.com/containous/traefik/issues/new"
bugTemplate = `<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com
-->
### Do you want to request a *feature* or report a *bug*?
(If you intend to ask a support question: **DO NOT FILE AN ISSUE**.
Use [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik)
or [Slack](https://traefik.herokuapp.com) instead.)
### What did you do?
<!--
HOW TO WRITE A GOOD ISSUE?
- Respect the issue template as much as possible.
- If possible, use the command ` + "`" + "traefik bug" + "`" + `. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that youre facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->
### What did you expect to see?
### What did you see instead?
### Output of ` + "`" + `traefik version` + "`" + `: (_What version of Traefik are you using?_)
` + "```" + `
{{.Version}}
` + "```" + `
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
` + "```" + `json
{{.Configuration}}
` + "```" + `
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output in debug mode (` + "`" + `--debug` + "`" + ` switch)
` + "```" + `
(paste your output here)
` + "```" + `
`
)
// newBugCmd builds a new Bug command
func newBugCmd(traefikConfiguration *TraefikConfiguration, traefikPointersConfiguration *TraefikConfiguration) *flaeg.Command {
//bug Command init
return &flaeg.Command{
Name: "bug",
Description: `Report an issue on Traefik bugtracker`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: runBugCmd(traefikConfiguration),
Metadata: map[string]string{
"parseAllSources": "true",
},
}
}
func runBugCmd(traefikConfiguration *TraefikConfiguration) func() error {
return func() error {
body, err := createBugReport(traefikConfiguration)
if err != nil {
return err
}
sendBugReport(body)
return nil
}
}
func createBugReport(traefikConfiguration *TraefikConfiguration) (string, error) {
var version bytes.Buffer
if err := getVersionPrint(&version); err != nil {
return "", err
}
tmpl, err := template.New("bug").Parse(bugTemplate)
if err != nil {
return "", err
}
config, err := anonymize.Do(traefikConfiguration, true)
if err != nil {
return "", err
}
v := struct {
Version string
Configuration string
}{
Version: version.String(),
Configuration: config,
}
var bug bytes.Buffer
if err := tmpl.Execute(&bug, v); err != nil {
return "", err
}
return bug.String(), nil
}
func sendBugReport(body string) {
URL := bugTracker + "?body=" + url.QueryEscape(body)
if err := openBrowser(URL); err != nil {
fmt.Printf("Please file a new issue at %s using this template:\n\n", bugTracker)
fmt.Print(body)
}
}
func openBrowser(URL string) error {
var err error
switch runtime.GOOS {
case "linux":
err = exec.Command("xdg-open", URL).Start()
case "windows":
err = exec.Command("rundll32", "url.dll,FileProtocolHandler", URL).Start()
case "darwin":
err = exec.Command("open", URL).Start()
default:
err = fmt.Errorf("unsupported platform")
}
return err
}
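
createBugReport renders bugTemplate with the version banner and the anonymized configuration; sendBugReport then prefills the GitHub issue form by passing that body as a URL-escaped query parameter, falling back to printing it when no browser can be opened. A self-contained sketch of that prefill step, using a placeholder template and version string instead of the real ones:

package main

import (
	"bytes"
	"fmt"
	"net/url"
	"text/template"
)

func main() {
	const tracker = "https://github.com/containous/traefik/issues/new"

	// Stand-in for bugTemplate: only the version section, for brevity.
	tmpl := template.Must(template.New("bug").Parse(
		"### Output of `traefik version`:\n```\n{{.Version}}\n```\n"))

	var body bytes.Buffer
	if err := tmpl.Execute(&body, struct{ Version string }{Version: "x.y.z (placeholder)"}); err != nil {
		panic(err)
	}

	// Same shape as sendBugReport: the rendered report becomes the
	// pre-filled body of a new issue.
	fmt.Println(tracker + "?body=" + url.QueryEscape(body.String()))
}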

65
cmd/traefik/bug_test.go Normal file
View File

@@ -0,0 +1,65 @@
package main
import (
"testing"
"github.com/containous/traefik/cmd/traefik/anonymize"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/types"
"github.com/stretchr/testify/assert"
)
func Test_createBugReport(t *testing.T) {
traefikConfiguration := &TraefikConfiguration{
ConfigFile: "FOO",
GlobalConfiguration: configuration.GlobalConfiguration{
EntryPoints: configuration.EntryPoints{
"goo": &configuration.EntryPoint{
Address: "hoo.bar",
Auth: &types.Auth{
Basic: &types.Basic{
UsersFile: "foo Basic UsersFile",
Users: types.Users{"foo Basic Users 1", "foo Basic Users 2", "foo Basic Users 3"},
},
Digest: &types.Digest{
UsersFile: "foo Digest UsersFile",
Users: types.Users{"foo Digest Users 1", "foo Digest Users 2", "foo Digest Users 3"},
},
},
},
},
File: &file.Provider{
Directory: "BAR",
},
RootCAs: configuration.RootCAs{"fllf"},
},
}
report, err := createBugReport(traefikConfiguration)
assert.NoError(t, err, report)
// anonymized exported configuration
assert.NotContains(t, report, "foo Basic Users ")
assert.NotContains(t, report, "foo Digest Users ")
assert.NotContains(t, report, "hoo.bar")
}
func Test_anonymize_traefikConfiguration(t *testing.T) {
traefikConfiguration := &TraefikConfiguration{
ConfigFile: "FOO",
GlobalConfiguration: configuration.GlobalConfiguration{
EntryPoints: configuration.EntryPoints{
"goo": &configuration.EntryPoint{
Address: "hoo.bar",
},
},
File: &file.Provider{
Directory: "BAR",
},
},
}
_, err := anonymize.Do(traefikConfiguration, true)
assert.NoError(t, err)
assert.Equal(t, "hoo.bar", traefikConfiguration.GlobalConfiguration.EntryPoints["goo"].Address)
}

View File

@@ -0,0 +1,226 @@
package main
import (
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/middlewares/accesslog"
"github.com/containous/traefik/provider/boltdb"
"github.com/containous/traefik/provider/consul"
"github.com/containous/traefik/provider/docker"
"github.com/containous/traefik/provider/dynamodb"
"github.com/containous/traefik/provider/ecs"
"github.com/containous/traefik/provider/etcd"
"github.com/containous/traefik/provider/eureka"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/provider/marathon"
"github.com/containous/traefik/provider/mesos"
"github.com/containous/traefik/provider/rancher"
"github.com/containous/traefik/provider/web"
"github.com/containous/traefik/provider/zk"
"github.com/containous/traefik/types"
)
// TraefikConfiguration holds GlobalConfiguration and other stuff
type TraefikConfiguration struct {
configuration.GlobalConfiguration `mapstructure:",squash" export:"true"`
ConfigFile string `short:"c" description:"Configuration file to use (TOML)." export:"true"`
}
// NewTraefikDefaultPointersConfiguration creates a TraefikConfiguration with pointers default values
func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
//default Docker
var defaultDocker docker.Provider
defaultDocker.Watch = true
defaultDocker.ExposedByDefault = true
defaultDocker.Endpoint = "unix:///var/run/docker.sock"
defaultDocker.SwarmMode = false
// default File
var defaultFile file.Provider
defaultFile.Watch = true
defaultFile.Filename = "" //needs equivalent to viper.ConfigFileUsed()
// default Web
var defaultWeb web.Provider
defaultWeb.Address = ":8080"
defaultWeb.Statistics = &types.Statistics{
RecentErrors: 10,
}
// default Metrics
defaultWeb.Metrics = &types.Metrics{
Prometheus: &types.Prometheus{
Buckets: types.Buckets{0.1, 0.3, 1.2, 5},
},
Datadog: &types.Datadog{
Address: "localhost:8125",
PushInterval: "10s",
},
StatsD: &types.Statsd{
Address: "localhost:8125",
PushInterval: "10s",
},
}
// default Marathon
var defaultMarathon marathon.Provider
defaultMarathon.Watch = true
defaultMarathon.Endpoint = "http://127.0.0.1:8080"
defaultMarathon.ExposedByDefault = true
defaultMarathon.Constraints = types.Constraints{}
defaultMarathon.DialerTimeout = flaeg.Duration(60 * time.Second)
defaultMarathon.KeepAlive = flaeg.Duration(10 * time.Second)
// default Consul
var defaultConsul consul.Provider
defaultConsul.Watch = true
defaultConsul.Endpoint = "127.0.0.1:8500"
defaultConsul.Prefix = "traefik"
defaultConsul.Constraints = types.Constraints{}
// default CatalogProvider
var defaultConsulCatalog consul.CatalogProvider
defaultConsulCatalog.Endpoint = "127.0.0.1:8500"
defaultConsulCatalog.ExposedByDefault = true
defaultConsulCatalog.Constraints = types.Constraints{}
defaultConsulCatalog.Prefix = "traefik"
defaultConsulCatalog.FrontEndRule = "Host:{{.ServiceName}}.{{.Domain}}"
// default Etcd
var defaultEtcd etcd.Provider
defaultEtcd.Watch = true
defaultEtcd.Endpoint = "127.0.0.1:2379"
defaultEtcd.Prefix = "/traefik"
defaultEtcd.Constraints = types.Constraints{}
//default Zookeeper
var defaultZookeeper zk.Provider
defaultZookeeper.Watch = true
defaultZookeeper.Endpoint = "127.0.0.1:2181"
defaultZookeeper.Prefix = "/traefik"
defaultZookeeper.Constraints = types.Constraints{}
//default Boltdb
var defaultBoltDb boltdb.Provider
defaultBoltDb.Watch = true
defaultBoltDb.Endpoint = "127.0.0.1:4001"
defaultBoltDb.Prefix = "/traefik"
defaultBoltDb.Constraints = types.Constraints{}
//default Kubernetes
var defaultKubernetes kubernetes.Provider
defaultKubernetes.Watch = true
defaultKubernetes.Endpoint = ""
defaultKubernetes.LabelSelector = ""
defaultKubernetes.Constraints = types.Constraints{}
// default Mesos
var defaultMesos mesos.Provider
defaultMesos.Watch = true
defaultMesos.Endpoint = "http://127.0.0.1:5050"
defaultMesos.ExposedByDefault = true
defaultMesos.Constraints = types.Constraints{}
defaultMesos.RefreshSeconds = 30
defaultMesos.ZkDetectionTimeout = 30
defaultMesos.StateTimeoutSecond = 30
//default ECS
var defaultECS ecs.Provider
defaultECS.Watch = true
defaultECS.ExposedByDefault = true
defaultECS.AutoDiscoverClusters = false
defaultECS.Clusters = ecs.Clusters{"default"}
defaultECS.RefreshSeconds = 15
defaultECS.Constraints = types.Constraints{}
//default Rancher
var defaultRancher rancher.Provider
defaultRancher.Watch = true
defaultRancher.ExposedByDefault = true
defaultRancher.RefreshSeconds = 15
// default DynamoDB
var defaultDynamoDB dynamodb.Provider
defaultDynamoDB.Constraints = types.Constraints{}
defaultDynamoDB.RefreshSeconds = 15
defaultDynamoDB.TableName = "traefik"
defaultDynamoDB.Watch = true
// default Eureka
var defaultEureka eureka.Provider
defaultEureka.Delay = "30s"
// default AccessLog
defaultAccessLog := types.AccessLog{
Format: accesslog.CommonFormat,
FilePath: "",
}
// default HealthCheckConfig
healthCheck := configuration.HealthCheckConfig{
Interval: flaeg.Duration(configuration.DefaultHealthCheckInterval),
}
// default RespondingTimeouts
respondingTimeouts := configuration.RespondingTimeouts{
IdleTimeout: flaeg.Duration(configuration.DefaultIdleTimeout),
}
// default ForwardingTimeouts
forwardingTimeouts := configuration.ForwardingTimeouts{
DialTimeout: flaeg.Duration(configuration.DefaultDialTimeout),
}
defaultConfiguration := configuration.GlobalConfiguration{
Docker: &defaultDocker,
File: &defaultFile,
Web: &defaultWeb,
Marathon: &defaultMarathon,
Consul: &defaultConsul,
ConsulCatalog: &defaultConsulCatalog,
Etcd: &defaultEtcd,
Zookeeper: &defaultZookeeper,
Boltdb: &defaultBoltDb,
Kubernetes: &defaultKubernetes,
Mesos: &defaultMesos,
ECS: &defaultECS,
Rancher: &defaultRancher,
Eureka: &defaultEureka,
DynamoDB: &defaultDynamoDB,
Retry: &configuration.Retry{},
HealthCheck: &healthCheck,
AccessLog: &defaultAccessLog,
RespondingTimeouts: &respondingTimeouts,
ForwardingTimeouts: &forwardingTimeouts,
}
return &TraefikConfiguration{
GlobalConfiguration: defaultConfiguration,
}
}
// NewTraefikConfiguration creates a TraefikConfiguration with default values
func NewTraefikConfiguration() *TraefikConfiguration {
return &TraefikConfiguration{
GlobalConfiguration: configuration.GlobalConfiguration{
GraceTimeOut: flaeg.Duration(10 * time.Second),
AccessLogsFile: "",
TraefikLogsFile: "",
LogLevel: "ERROR",
EntryPoints: map[string]*configuration.EntryPoint{},
Constraints: types.Constraints{},
DefaultEntryPoints: []string{},
ProvidersThrottleDuration: flaeg.Duration(2 * time.Second),
MaxIdleConnsPerHost: 200,
IdleTimeout: flaeg.Duration(0),
HealthCheck: &configuration.HealthCheckConfig{
Interval: flaeg.Duration(configuration.DefaultHealthCheckInterval),
},
CheckNewVersion: true,
},
ConfigFile: "",
}
}
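
The pointer defaults above are only meant to kick in when the matching flag or configuration section enables a provider; flaeg uses this default-pointers configuration to fill in the enabled provider. A hypothetical helper, assuming it sits in the same package as the constructors, that simply reads back two of those defaults:

package main

import "fmt"

// printSomeDefaults only reads values set by NewTraefikDefaultPointersConfiguration.
func printSomeDefaults() {
	cfg := NewTraefikDefaultPointersConfiguration()

	fmt.Println(cfg.Marathon.Endpoint)    // http://127.0.0.1:8080
	fmt.Println(cfg.ConsulCatalog.Prefix) // traefik
}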

View File

@@ -1,89 +0,0 @@
package main
import (
"io"
stdlog "log"
"os"
"strings"
"time"
"github.com/natefinch/lumberjack"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/sirupsen/logrus"
"github.com/traefik/traefik/v3/pkg/config/static"
"github.com/traefik/traefik/v3/pkg/logs"
)
func init() {
// hide the first logs before the setup of the logger.
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
}
func setupLogger(staticConfiguration *static.Configuration) {
// configure log format
w := getLogWriter(staticConfiguration)
// configure log level
logLevel := getLogLevel(staticConfiguration)
// create logger
logCtx := zerolog.New(w).With().Timestamp()
if logLevel <= zerolog.DebugLevel {
logCtx = logCtx.Caller()
}
log.Logger = logCtx.Logger().Level(logLevel)
zerolog.DefaultContextLogger = &log.Logger
zerolog.SetGlobalLevel(logLevel)
// Global logrus replacement (related to lib like go-rancher-metadata, docker, etc.)
logrus.StandardLogger().Out = logs.NoLevel(log.Logger, zerolog.DebugLevel)
// configure default standard log.
stdlog.SetFlags(stdlog.Lshortfile | stdlog.LstdFlags)
stdlog.SetOutput(logs.NoLevel(log.Logger, zerolog.DebugLevel))
}
func getLogWriter(staticConfiguration *static.Configuration) io.Writer {
var w io.Writer = os.Stderr
if staticConfiguration.Log != nil && len(staticConfiguration.Log.FilePath) > 0 {
_, _ = os.Create(staticConfiguration.Log.FilePath)
w = &lumberjack.Logger{
Filename: staticConfiguration.Log.FilePath,
MaxSize: staticConfiguration.Log.MaxSize,
MaxBackups: staticConfiguration.Log.MaxBackups,
MaxAge: staticConfiguration.Log.MaxAge,
Compress: true,
}
}
if staticConfiguration.Log == nil || staticConfiguration.Log.Format != "json" {
w = zerolog.ConsoleWriter{
Out: w,
TimeFormat: time.RFC3339,
NoColor: staticConfiguration.Log != nil && (staticConfiguration.Log.NoColor || len(staticConfiguration.Log.FilePath) > 0),
}
}
return w
}
func getLogLevel(staticConfiguration *static.Configuration) zerolog.Level {
levelStr := "error"
if staticConfiguration.Log != nil && staticConfiguration.Log.Level != "" {
levelStr = strings.ToLower(staticConfiguration.Log.Level)
}
logLevel, err := zerolog.ParseLevel(strings.ToLower(levelStr))
if err != nil {
log.Error().Err(err).
Str("logLevel", levelStr).
Msg("Unspecified or invalid log level, setting the level to default (ERROR)...")
logLevel = zerolog.ErrorLevel
}
return logLevel
}
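
getLogWriter layers three things: an optional rotating file (lumberjack), a console formatter unless JSON is requested, and colors disabled when writing to a file. A stripped-down, runnable sketch of that wiring, with hard-coded values standing in for the Log.* static configuration fields:

package main

import (
	"time"

	"github.com/natefinch/lumberjack"
	"github.com/rs/zerolog"
)

func main() {
	// Rotating file output, mirroring the lumberjack fields used above.
	file := &lumberjack.Logger{
		Filename:   "./traefik.log", // stand-in for Log.FilePath
		MaxSize:    100,             // megabytes
		MaxBackups: 3,
		MaxAge:     7, // days
		Compress:   true,
	}

	// Console (non-JSON) format; colors are disabled because output goes to a file.
	logger := zerolog.New(zerolog.ConsoleWriter{
		Out:        file,
		TimeFormat: time.RFC3339,
		NoColor:    true,
	}).With().Timestamp().Logger().Level(zerolog.InfoLevel)

	logger.Info().Str("entryPoint", "web").Msg("example log line")
}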

View File

@@ -1,83 +0,0 @@
package main
import (
"fmt"
"github.com/traefik/traefik/v3/pkg/config/static"
"github.com/traefik/traefik/v3/pkg/plugins"
)
const outputDir = "./plugins-storage/"
func createPluginBuilder(staticConfiguration *static.Configuration) (*plugins.Builder, error) {
client, plgs, localPlgs, err := initPlugins(staticConfiguration)
if err != nil {
return nil, err
}
return plugins.NewBuilder(client, plgs, localPlgs)
}
func initPlugins(staticCfg *static.Configuration) (*plugins.Client, map[string]plugins.Descriptor, map[string]plugins.LocalDescriptor, error) {
err := checkUniquePluginNames(staticCfg.Experimental)
if err != nil {
return nil, nil, nil, err
}
var client *plugins.Client
plgs := map[string]plugins.Descriptor{}
if hasPlugins(staticCfg) {
opts := plugins.ClientOptions{
Output: outputDir,
}
var err error
client, err = plugins.NewClient(opts)
if err != nil {
return nil, nil, nil, fmt.Errorf("unable to create plugins client: %w", err)
}
err = plugins.SetupRemotePlugins(client, staticCfg.Experimental.Plugins)
if err != nil {
return nil, nil, nil, fmt.Errorf("unable to set up plugins environment: %w", err)
}
plgs = staticCfg.Experimental.Plugins
}
localPlgs := map[string]plugins.LocalDescriptor{}
if hasLocalPlugins(staticCfg) {
err := plugins.SetupLocalPlugins(staticCfg.Experimental.LocalPlugins)
if err != nil {
return nil, nil, nil, err
}
localPlgs = staticCfg.Experimental.LocalPlugins
}
return client, plgs, localPlgs, nil
}
func checkUniquePluginNames(e *static.Experimental) error {
if e == nil {
return nil
}
for s := range e.LocalPlugins {
if _, ok := e.Plugins[s]; ok {
return fmt.Errorf("the plugin's name %q must be unique", s)
}
}
return nil
}
func hasPlugins(staticCfg *static.Configuration) bool {
return staticCfg.Experimental != nil && len(staticCfg.Experimental.Plugins) > 0
}
func hasLocalPlugins(staticCfg *static.Configuration) bool {
return staticCfg.Experimental != nil && len(staticCfg.Experimental.LocalPlugins) > 0
}
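
checkUniquePluginNames only rejects a name that appears in both the remote and the local plugin maps. A hypothetical helper, assuming it lives in this same package, showing that error path:

package main

import (
	"fmt"

	"github.com/traefik/traefik/v3/pkg/config/static"
	"github.com/traefik/traefik/v3/pkg/plugins"
)

// demoUniquePluginNames shares the name "example" between Plugins and
// LocalPlugins, so checkUniquePluginNames returns an error.
func demoUniquePluginNames() {
	exp := &static.Experimental{
		Plugins:      map[string]plugins.Descriptor{"example": {}},
		LocalPlugins: map[string]plugins.LocalDescriptor{"example": {}},
	}
	if err := checkUniquePluginNames(exp); err != nil {
		fmt.Println(err) // the plugin's name "example" must be unique
	}
}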

View File

@@ -1,621 +1,353 @@
package main
import (
"context"
"crypto/x509"
"crypto/tls"
"encoding/json"
"fmt"
"io"
stdlog "log"
fmtlog "log"
"net/http"
"os"
"os/signal"
"sort"
"path/filepath"
"reflect"
"strings"
"syscall"
"time"
"github.com/Sirupsen/logrus"
"github.com/cenk/backoff"
"github.com/containous/flaeg"
"github.com/containous/staert"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/job"
"github.com/containous/traefik/log"
"github.com/containous/traefik/provider/ecs"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/server"
"github.com/containous/traefik/server/uuid"
"github.com/containous/traefik/types"
"github.com/containous/traefik/version"
"github.com/coreos/go-systemd/daemon"
"github.com/go-acme/lego/v4/challenge"
gokitmetrics "github.com/go-kit/kit/metrics"
"github.com/rs/zerolog/log"
"github.com/sirupsen/logrus"
"github.com/spiffe/go-spiffe/v2/workloadapi"
"github.com/traefik/paerser/cli"
"github.com/traefik/traefik/v3/cmd"
"github.com/traefik/traefik/v3/cmd/healthcheck"
cmdVersion "github.com/traefik/traefik/v3/cmd/version"
tcli "github.com/traefik/traefik/v3/pkg/cli"
"github.com/traefik/traefik/v3/pkg/collector"
"github.com/traefik/traefik/v3/pkg/config/dynamic"
"github.com/traefik/traefik/v3/pkg/config/runtime"
"github.com/traefik/traefik/v3/pkg/config/static"
"github.com/traefik/traefik/v3/pkg/logs"
"github.com/traefik/traefik/v3/pkg/metrics"
"github.com/traefik/traefik/v3/pkg/middlewares/accesslog"
"github.com/traefik/traefik/v3/pkg/provider/acme"
"github.com/traefik/traefik/v3/pkg/provider/aggregator"
"github.com/traefik/traefik/v3/pkg/provider/tailscale"
"github.com/traefik/traefik/v3/pkg/provider/traefik"
"github.com/traefik/traefik/v3/pkg/safe"
"github.com/traefik/traefik/v3/pkg/server"
"github.com/traefik/traefik/v3/pkg/server/middleware"
"github.com/traefik/traefik/v3/pkg/server/service"
"github.com/traefik/traefik/v3/pkg/tcp"
traefiktls "github.com/traefik/traefik/v3/pkg/tls"
"github.com/traefik/traefik/v3/pkg/tracing"
"github.com/traefik/traefik/v3/pkg/types"
"github.com/traefik/traefik/v3/pkg/version"
"github.com/docker/libkv/store"
)
func main() {
// traefik config inits
tConfig := cmd.NewTraefikConfiguration()
loaders := []cli.ResourceLoader{&tcli.DeprecationLoader{}, &tcli.FileLoader{}, &tcli.FlagLoader{}, &tcli.EnvLoader{}}
cmdTraefik := &cli.Command{
//traefik config inits
traefikConfiguration := NewTraefikConfiguration()
traefikPointersConfiguration := NewTraefikDefaultPointersConfiguration()
//traefik Command init
traefikCmd := &flaeg.Command{
Name: "traefik",
Description: `Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
Description: `traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
Complete documentation is available at https://traefik.io`,
Configuration: tConfig,
Resources: loaders,
Run: func(_ []string) error {
return runCmd(&tConfig.Configuration)
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
globalConfiguration := traefikConfiguration.GlobalConfiguration
if globalConfiguration.File != nil && len(globalConfiguration.File.Filename) == 0 {
// no filename, setting to global config file
if len(traefikConfiguration.ConfigFile) != 0 {
globalConfiguration.File.Filename = traefikConfiguration.ConfigFile
} else {
log.Errorln("Error using file configuration backend, no filename defined")
}
}
if len(traefikConfiguration.ConfigFile) != 0 {
log.Infof("Using TOML configuration file %s", traefikConfiguration.ConfigFile)
}
run(&globalConfiguration)
return nil
},
}
err := cmdTraefik.AddCommand(healthcheck.NewCmd(&tConfig.Configuration, loaders))
if err != nil {
stdlog.Println(err)
os.Exit(1)
//storeconfig Command init
var kv *staert.KvSource
var err error
storeConfigCmd := &flaeg.Command{
Name: "storeconfig",
Description: `Store the static traefik configuration into a Key-value store. Traefik will not start.`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
if kv == nil {
return fmt.Errorf("Error using command storeconfig, no Key-value store defined")
}
jsonConf, err := json.Marshal(traefikConfiguration.GlobalConfiguration)
if err != nil {
return err
}
fmtlog.Printf("Storing configuration: %s\n", jsonConf)
err = kv.StoreConfig(traefikConfiguration.GlobalConfiguration)
if err != nil {
return err
}
if traefikConfiguration.GlobalConfiguration.ACME != nil && len(traefikConfiguration.GlobalConfiguration.ACME.StorageFile) > 0 {
// convert ACME json file to KV store
localStore := acme.NewLocalStore(traefikConfiguration.GlobalConfiguration.ACME.StorageFile)
object, err := localStore.Load()
if err != nil {
return err
}
meta := cluster.NewMetadata(object)
err = meta.Marshall()
if err != nil {
return err
}
source := staert.KvSource{
Store: kv,
Prefix: traefikConfiguration.GlobalConfiguration.ACME.Storage,
}
err = source.StoreConfig(meta)
if err != nil {
return err
}
}
return nil
},
Metadata: map[string]string{
"parseAllSources": "true",
},
}
err = cmdTraefik.AddCommand(cmdVersion.NewCmd())
if err != nil {
stdlog.Println(err)
os.Exit(1)
healthCheckCmd := &flaeg.Command{
Name: "healthcheck",
Description: `Calls traefik /ping to check health (web provider must be enabled)`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
traefikConfiguration.GlobalConfiguration.SetEffectiveConfiguration()
if traefikConfiguration.Web == nil {
fmt.Println("Please enable the web provider to use healtcheck.")
os.Exit(1)
}
client := &http.Client{Timeout: 5 * time.Second}
protocol := "http"
if len(traefikConfiguration.Web.CertFile) > 0 {
protocol = "https"
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client.Transport = tr
}
resp, err := client.Head(protocol + "://" + traefikConfiguration.Web.Address + traefikConfiguration.Web.Path + "ping")
if err != nil {
fmt.Printf("Error calling healthcheck: %s\n", err)
os.Exit(1)
}
if resp.StatusCode != http.StatusOK {
fmt.Printf("Bad healthcheck status: %s\n", resp.Status)
os.Exit(1)
}
fmt.Printf("OK: %s\n", resp.Request.URL)
os.Exit(0)
return nil
},
Metadata: map[string]string{
"parseAllSources": "true",
},
}
err = cli.Execute(cmdTraefik)
//init flaeg source
f := flaeg.New(traefikCmd, os.Args[1:])
//add custom parsers
f.AddParser(reflect.TypeOf(configuration.EntryPoints{}), &configuration.EntryPoints{})
f.AddParser(reflect.TypeOf(configuration.DefaultEntryPoints{}), &configuration.DefaultEntryPoints{})
f.AddParser(reflect.TypeOf(configuration.RootCAs{}), &configuration.RootCAs{})
f.AddParser(reflect.TypeOf(types.Constraints{}), &types.Constraints{})
f.AddParser(reflect.TypeOf(kubernetes.Namespaces{}), &kubernetes.Namespaces{})
f.AddParser(reflect.TypeOf(ecs.Clusters{}), &ecs.Clusters{})
f.AddParser(reflect.TypeOf([]acme.Domain{}), &acme.Domains{})
f.AddParser(reflect.TypeOf(types.Buckets{}), &types.Buckets{})
//add commands
f.AddCommand(newVersionCmd())
f.AddCommand(newBugCmd(traefikConfiguration, traefikPointersConfiguration))
f.AddCommand(storeConfigCmd)
f.AddCommand(healthCheckCmd)
usedCmd, err := f.GetCommand()
if err != nil {
log.Error().Err(err).Msg("Command error")
logrus.Exit(1)
fmtlog.Println(err)
os.Exit(-1)
}
logrus.Exit(0)
if _, err := f.Parse(usedCmd); err != nil {
fmtlog.Printf("Error parsing command: %s\n", err)
os.Exit(-1)
}
//staert init
s := staert.NewStaert(traefikCmd)
//init toml source
toml := staert.NewTomlSource("traefik", []string{traefikConfiguration.ConfigFile, "/etc/traefik/", "$HOME/.traefik/", "."})
//add sources to staert
s.AddSource(toml)
s.AddSource(f)
if _, err := s.LoadConfig(); err != nil {
fmtlog.Printf("Error reading TOML config file %s : %s\n", toml.ConfigFileUsed(), err)
os.Exit(-1)
}
traefikConfiguration.ConfigFile = toml.ConfigFileUsed()
kv, err = CreateKvSource(traefikConfiguration)
if err != nil {
fmtlog.Printf("Error creating kv store: %s\n", err)
os.Exit(-1)
}
// If a KV store is enabled and no sub-command is called in args
if kv != nil && usedCmd == traefikCmd {
if traefikConfiguration.Cluster == nil {
traefikConfiguration.Cluster = &types.Cluster{Node: uuid.Get()}
}
if traefikConfiguration.Cluster.Store == nil {
traefikConfiguration.Cluster.Store = &types.Store{Prefix: kv.Prefix, Store: kv.Store}
}
s.AddSource(kv)
operation := func() error {
_, err := s.LoadConfig()
return err
}
notify := func(err error, time time.Duration) {
log.Errorf("Load config error: %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
if err != nil {
fmtlog.Printf("Error loading configuration: %s\n", err)
os.Exit(-1)
}
}
if err := s.Run(); err != nil {
fmtlog.Printf("Error running traefik: %s\n", err)
os.Exit(-1)
}
os.Exit(0)
}
func runCmd(staticConfiguration *static.Configuration) error {
setupLogger(staticConfiguration)
func run(globalConfiguration *configuration.GlobalConfiguration) {
fmtlog.SetFlags(fmtlog.Lshortfile | fmtlog.LstdFlags)
http.DefaultTransport.(*http.Transport).Proxy = http.ProxyFromEnvironment
staticConfiguration.SetEffectiveConfiguration()
if err := staticConfiguration.ValidateConfiguration(); err != nil {
return err
}
globalConfiguration.SetEffectiveConfiguration()
log.Info().Str("version", version.Version).
Msgf("Traefik version %s built on %s", version.Version, version.BuildDate)
jsonConf, err := json.Marshal(staticConfiguration)
// logging
level, err := logrus.ParseLevel(strings.ToLower(globalConfiguration.LogLevel))
if err != nil {
log.Error().Err(err).Msg("Could not marshal static configuration")
log.Debug().Interface("staticConfiguration", staticConfiguration).Msg("Static configuration loaded [struct]")
} else {
log.Debug().RawJSON("staticConfiguration", jsonConf).Msg("Static configuration loaded [json]")
log.Error("Error getting level", err)
}
log.SetLevel(level)
if staticConfiguration.Global.CheckNewVersion {
if len(globalConfiguration.TraefikLogsFile) > 0 {
dir := filepath.Dir(globalConfiguration.TraefikLogsFile)
err := os.MkdirAll(dir, 0755)
if err != nil {
log.Errorf("Failed to create log path %s: %s", dir, err)
}
err = log.OpenFile(globalConfiguration.TraefikLogsFile)
defer func() {
if err := log.CloseFile(); err != nil {
log.Error("Error closing log", err)
}
}()
if err != nil {
log.Error("Error opening file", err)
} else {
log.SetFormatter(&logrus.TextFormatter{DisableColors: true, FullTimestamp: true, DisableSorting: true})
}
} else {
log.SetFormatter(&logrus.TextFormatter{FullTimestamp: true, DisableSorting: true})
}
jsonConf, _ := json.Marshal(globalConfiguration)
log.Infof("Traefik version %s built on %s", version.Version, version.BuildDate)
if globalConfiguration.CheckNewVersion {
checkNewVersion()
}
stats(staticConfiguration)
svr, err := setupServer(staticConfiguration)
if err != nil {
return err
}
ctx, _ := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
if staticConfiguration.Ping != nil {
staticConfiguration.Ping.WithContext(ctx)
}
svr.Start(ctx)
log.Debugf("Global configuration loaded %s", string(jsonConf))
svr := server.NewServer(*globalConfiguration)
svr.Start()
defer svr.Close()
sent, err := daemon.SdNotify(false, "READY=1")
if !sent && err != nil {
log.Error().Err(err).Msg("Failed to notify")
log.Error("Fail to notify", err)
}
t, err := daemon.SdWatchdogEnabled(false)
if err != nil {
log.Error().Err(err).Msg("Could not enable Watchdog")
log.Error("Problem with watchdog", err)
} else if t != 0 {
// Send a ping at half the given interval
t /= 2
log.Info().Msgf("Watchdog activated with timer duration %s", t)
t = t / 2
log.Info("Watchdog activated with timer each ", t)
safe.Go(func() {
tick := time.Tick(t)
for range tick {
resp, errHealthCheck := healthcheck.Do(*staticConfiguration)
if resp != nil {
_ = resp.Body.Close()
}
if staticConfiguration.Ping == nil || errHealthCheck == nil {
if ok, _ := daemon.SdNotify(false, "WATCHDOG=1"); !ok {
log.Error().Msg("Fail to tick watchdog")
}
} else {
log.Error().Err(errHealthCheck).Send()
if ok, _ := daemon.SdNotify(false, "WATCHDOG=1"); !ok {
log.Error("Fail to tick watchdog")
}
}
})
}
svr.Wait()
log.Info().Msg("Shutting down")
return nil
log.Info("Shutting down")
}
func setupServer(staticConfiguration *static.Configuration) (*server.Server, error) {
providerAggregator := aggregator.NewProviderAggregator(*staticConfiguration.Providers)
// CreateKvSource creates a KvSource.
// TLS support is enabled for the Consul and Etcd backends.
func CreateKvSource(traefikConfiguration *TraefikConfiguration) (*staert.KvSource, error) {
var kv *staert.KvSource
var kvStore store.Store
var err error
ctx := context.Background()
routinesPool := safe.NewPool(ctx)
// adds internal provider
err := providerAggregator.AddProvider(traefik.New(*staticConfiguration))
if err != nil {
return nil, err
}
// ACME
tlsManager := traefiktls.NewManager()
httpChallengeProvider := acme.NewChallengeHTTP()
tlsChallengeProvider := acme.NewChallengeTLSALPN()
err = providerAggregator.AddProvider(tlsChallengeProvider)
if err != nil {
return nil, err
}
acmeProviders := initACMEProvider(staticConfiguration, &providerAggregator, tlsManager, httpChallengeProvider, tlsChallengeProvider)
// Tailscale
tsProviders := initTailscaleProviders(staticConfiguration, &providerAggregator)
// Observability
metricRegistries := registerMetricClients(staticConfiguration.Metrics)
var semConvMetricRegistry *metrics.SemConvMetricsRegistry
if staticConfiguration.Metrics != nil && staticConfiguration.Metrics.OTLP != nil {
semConvMetricRegistry, err = metrics.NewSemConvMetricRegistry(ctx, staticConfiguration.Metrics.OTLP)
if err != nil {
return nil, fmt.Errorf("unable to create SemConv metric registry: %w", err)
switch {
case traefikConfiguration.Consul != nil:
kvStore, err = traefikConfiguration.Consul.CreateStore()
kv = &staert.KvSource{
Store: kvStore,
Prefix: traefikConfiguration.Consul.Prefix,
}
case traefikConfiguration.Etcd != nil:
kvStore, err = traefikConfiguration.Etcd.CreateStore()
kv = &staert.KvSource{
Store: kvStore,
Prefix: traefikConfiguration.Etcd.Prefix,
}
case traefikConfiguration.Zookeeper != nil:
kvStore, err = traefikConfiguration.Zookeeper.CreateStore()
kv = &staert.KvSource{
Store: kvStore,
Prefix: traefikConfiguration.Zookeeper.Prefix,
}
case traefikConfiguration.Boltdb != nil:
kvStore, err = traefikConfiguration.Boltdb.CreateStore()
kv = &staert.KvSource{
Store: kvStore,
Prefix: traefikConfiguration.Boltdb.Prefix,
}
}
metricsRegistry := metrics.NewMultiRegistry(metricRegistries)
accessLog := setupAccessLog(staticConfiguration.AccessLog)
tracer, tracerCloser := setupTracing(staticConfiguration.Tracing)
observabilityMgr := middleware.NewObservabilityMgr(*staticConfiguration, metricsRegistry, semConvMetricRegistry, accessLog, tracer, tracerCloser)
// Entrypoints
serverEntryPointsTCP, err := server.NewTCPEntryPoints(staticConfiguration.EntryPoints, staticConfiguration.HostResolver, metricsRegistry)
if err != nil {
return nil, err
}
serverEntryPointsUDP, err := server.NewUDPEntryPoints(staticConfiguration.EntryPoints)
if err != nil {
return nil, err
}
if staticConfiguration.API != nil {
version.DisableDashboardAd = staticConfiguration.API.DisableDashboardAd
}
// Plugins
pluginBuilder, err := createPluginBuilder(staticConfiguration)
if err != nil {
log.Error().Err(err).Msg("Plugins are disabled because an error has occurred.")
}
// Providers plugins
for name, conf := range staticConfiguration.Providers.Plugin {
if pluginBuilder == nil {
break
}
p, err := pluginBuilder.BuildProvider(name, conf)
if err != nil {
return nil, fmt.Errorf("plugin: failed to build provider: %w", err)
}
err = providerAggregator.AddProvider(p)
if err != nil {
return nil, fmt.Errorf("plugin: failed to add provider: %w", err)
}
}
// Service manager factory
var spiffeX509Source *workloadapi.X509Source
if staticConfiguration.Spiffe != nil && staticConfiguration.Spiffe.WorkloadAPIAddr != "" {
log.Info().Str("workloadAPIAddr", staticConfiguration.Spiffe.WorkloadAPIAddr).
Msg("Waiting on SPIFFE SVID delivery")
spiffeX509Source, err = workloadapi.NewX509Source(
ctx,
workloadapi.WithClientOptions(
workloadapi.WithAddr(
staticConfiguration.Spiffe.WorkloadAPIAddr,
),
),
)
if err != nil {
return nil, fmt.Errorf("unable to create SPIFFE x509 source: %w", err)
}
log.Info().Msg("Successfully obtained SPIFFE SVID.")
}
roundTripperManager := service.NewRoundTripperManager(spiffeX509Source)
dialerManager := tcp.NewDialerManager(spiffeX509Source)
acmeHTTPHandler := getHTTPChallengeHandler(acmeProviders, httpChallengeProvider)
managerFactory := service.NewManagerFactory(*staticConfiguration, routinesPool, observabilityMgr, roundTripperManager, acmeHTTPHandler)
// Router factory
routerFactory := server.NewRouterFactory(*staticConfiguration, managerFactory, tlsManager, observabilityMgr, pluginBuilder, dialerManager)
// Watcher
watcher := server.NewConfigurationWatcher(
routinesPool,
providerAggregator,
getDefaultsEntrypoints(staticConfiguration),
"internal",
)
// TLS
watcher.AddListener(func(conf dynamic.Configuration) {
ctx := context.Background()
tlsManager.UpdateConfigs(ctx, conf.TLS.Stores, conf.TLS.Options, conf.TLS.Certificates)
gauge := metricsRegistry.TLSCertsNotAfterTimestampGauge()
for _, certificate := range tlsManager.GetServerCertificates() {
appendCertMetric(gauge, certificate)
}
})
// Metrics
watcher.AddListener(func(_ dynamic.Configuration) {
metricsRegistry.ConfigReloadsCounter().Add(1)
metricsRegistry.LastConfigReloadSuccessGauge().Set(float64(time.Now().Unix()))
})
// Server Transports
watcher.AddListener(func(conf dynamic.Configuration) {
roundTripperManager.Update(conf.HTTP.ServersTransports)
dialerManager.Update(conf.TCP.ServersTransports)
})
// Switch router
watcher.AddListener(switchRouter(routerFactory, serverEntryPointsTCP, serverEntryPointsUDP))
// Metrics
if metricsRegistry.IsEpEnabled() || metricsRegistry.IsRouterEnabled() || metricsRegistry.IsSvcEnabled() {
var eps []string
for key := range serverEntryPointsTCP {
eps = append(eps, key)
}
watcher.AddListener(func(conf dynamic.Configuration) {
metrics.OnConfigurationUpdate(conf, eps)
})
}
// TLS challenge
watcher.AddListener(tlsChallengeProvider.ListenConfiguration)
// Certificate Resolvers
resolverNames := map[string]struct{}{}
// ACME
for _, p := range acmeProviders {
resolverNames[p.ResolverName] = struct{}{}
watcher.AddListener(p.ListenConfiguration)
}
// Tailscale
for _, p := range tsProviders {
resolverNames[p.ResolverName] = struct{}{}
watcher.AddListener(p.HandleConfigUpdate)
}
// Certificate resolver logs
watcher.AddListener(func(config dynamic.Configuration) {
for rtName, rt := range config.HTTP.Routers {
if rt.TLS == nil || rt.TLS.CertResolver == "" {
continue
}
if _, ok := resolverNames[rt.TLS.CertResolver]; !ok {
log.Error().Err(err).Str(logs.RouterName, rtName).Str("certificateResolver", rt.TLS.CertResolver).
Msg("Router uses a non-existent certificate resolver")
}
}
})
return server.NewServer(routinesPool, serverEntryPointsTCP, serverEntryPointsUDP, watcher, observabilityMgr), nil
}
func getHTTPChallengeHandler(acmeProviders []*acme.Provider, httpChallengeProvider http.Handler) http.Handler {
var acmeHTTPHandler http.Handler
for _, p := range acmeProviders {
if p != nil && p.HTTPChallenge != nil {
acmeHTTPHandler = httpChallengeProvider
break
}
}
return acmeHTTPHandler
}
func getDefaultsEntrypoints(staticConfiguration *static.Configuration) []string {
var defaultEntryPoints []string
// Determines if at least one EntryPoint is configured to be used by default.
var hasDefinedDefaults bool
for _, ep := range staticConfiguration.EntryPoints {
if ep.AsDefault {
hasDefinedDefaults = true
break
}
}
for name, cfg := range staticConfiguration.EntryPoints {
// By default all entrypoints are considered.
// If at least one is flagged, then only flagged entrypoints are included.
if hasDefinedDefaults && !cfg.AsDefault {
continue
}
protocol, err := cfg.GetProtocol()
if err != nil {
// Should never happen because Traefik should not start if protocol is invalid.
log.Error().Err(err).Msg("Invalid protocol")
}
if protocol != "udp" && name != static.DefaultInternalEntryPointName {
defaultEntryPoints = append(defaultEntryPoints, name)
}
}
sort.Strings(defaultEntryPoints)
return defaultEntryPoints
}
func switchRouter(routerFactory *server.RouterFactory, serverEntryPointsTCP server.TCPEntryPoints, serverEntryPointsUDP server.UDPEntryPoints) func(conf dynamic.Configuration) {
return func(conf dynamic.Configuration) {
rtConf := runtime.NewConfig(conf)
routers, udpRouters := routerFactory.CreateRouters(rtConf)
serverEntryPointsTCP.Switch(routers)
serverEntryPointsUDP.Switch(udpRouters)
}
}
// initACMEProvider creates and registers acme.Provider instances corresponding to the configured ACME certificate resolvers.
func initACMEProvider(c *static.Configuration, providerAggregator *aggregator.ProviderAggregator, tlsManager *traefiktls.Manager, httpChallengeProvider, tlsChallengeProvider challenge.Provider) []*acme.Provider {
localStores := map[string]*acme.LocalStore{}
var resolvers []*acme.Provider
for name, resolver := range c.CertificatesResolvers {
if resolver.ACME == nil {
continue
}
if localStores[resolver.ACME.Storage] == nil {
localStores[resolver.ACME.Storage] = acme.NewLocalStore(resolver.ACME.Storage)
}
p := &acme.Provider{
Configuration: resolver.ACME,
Store: localStores[resolver.ACME.Storage],
ResolverName: name,
HTTPChallengeProvider: httpChallengeProvider,
TLSChallengeProvider: tlsChallengeProvider,
}
if err := providerAggregator.AddProvider(p); err != nil {
log.Error().Err(err).Str("resolver", name).Msg("The ACME resolver is skipped from the resolvers list")
continue
}
p.SetTLSManager(tlsManager)
p.SetConfigListenerChan(make(chan dynamic.Configuration))
resolvers = append(resolvers, p)
}
return resolvers
}
// initTailscaleProviders creates and registers tailscale.Provider instances corresponding to the configured Tailscale certificate resolvers.
func initTailscaleProviders(cfg *static.Configuration, providerAggregator *aggregator.ProviderAggregator) []*tailscale.Provider {
var providers []*tailscale.Provider
for name, resolver := range cfg.CertificatesResolvers {
if resolver.Tailscale == nil {
continue
}
tsProvider := &tailscale.Provider{ResolverName: name}
if err := providerAggregator.AddProvider(tsProvider); err != nil {
log.Error().Err(err).Str(logs.ProviderName, name).Msg("Unable to create Tailscale provider")
continue
}
providers = append(providers, tsProvider)
}
return providers
}
func registerMetricClients(metricsConfig *types.Metrics) []metrics.Registry {
if metricsConfig == nil {
return nil
}
var registries []metrics.Registry
if metricsConfig.Prometheus != nil {
logger := log.With().Str(logs.MetricsProviderName, "prometheus").Logger()
prometheusRegister := metrics.RegisterPrometheus(logger.WithContext(context.Background()), metricsConfig.Prometheus)
if prometheusRegister != nil {
registries = append(registries, prometheusRegister)
logger.Debug().Msg("Configured Prometheus metrics")
}
}
if metricsConfig.Datadog != nil {
logger := log.With().Str(logs.MetricsProviderName, "datadog").Logger()
registries = append(registries, metrics.RegisterDatadog(logger.WithContext(context.Background()), metricsConfig.Datadog))
logger.Debug().
Str("address", metricsConfig.Datadog.Address).
Str("pushInterval", metricsConfig.Datadog.PushInterval.String()).
Msgf("Configured Datadog metrics")
}
if metricsConfig.StatsD != nil {
logger := log.With().Str(logs.MetricsProviderName, "statsd").Logger()
registries = append(registries, metrics.RegisterStatsd(logger.WithContext(context.Background()), metricsConfig.StatsD))
logger.Debug().
Str("address", metricsConfig.StatsD.Address).
Str("pushInterval", metricsConfig.StatsD.PushInterval.String()).
Msg("Configured StatsD metrics")
}
if metricsConfig.InfluxDB2 != nil {
logger := log.With().Str(logs.MetricsProviderName, "influxdb2").Logger()
influxDB2Register := metrics.RegisterInfluxDB2(logger.WithContext(context.Background()), metricsConfig.InfluxDB2)
if influxDB2Register != nil {
registries = append(registries, influxDB2Register)
logger.Debug().
Str("address", metricsConfig.InfluxDB2.Address).
Str("bucket", metricsConfig.InfluxDB2.Bucket).
Str("organization", metricsConfig.InfluxDB2.Org).
Str("pushInterval", metricsConfig.InfluxDB2.PushInterval.String()).
Msg("Configured InfluxDB v2 metrics")
}
}
if metricsConfig.OTLP != nil {
logger := log.With().Str(logs.MetricsProviderName, "openTelemetry").Logger()
openTelemetryRegistry := metrics.RegisterOpenTelemetry(logger.WithContext(context.Background()), metricsConfig.OTLP)
if openTelemetryRegistry != nil {
registries = append(registries, openTelemetryRegistry)
logger.Debug().
Str("pushInterval", metricsConfig.OTLP.PushInterval.String()).
Msg("Configured OpenTelemetry metrics")
}
}
return registries
}
func appendCertMetric(gauge gokitmetrics.Gauge, certificate *x509.Certificate) {
sort.Strings(certificate.DNSNames)
labels := []string{
"cn", certificate.Subject.CommonName,
"serial", certificate.SerialNumber.String(),
"sans", strings.Join(certificate.DNSNames, ","),
}
notAfter := float64(certificate.NotAfter.Unix())
gauge.With(labels...).Set(notAfter)
}
func setupAccessLog(conf *types.AccessLog) *accesslog.Handler {
if conf == nil {
return nil
}
accessLoggerMiddleware, err := accesslog.NewHandler(conf)
if err != nil {
log.Warn().Err(err).Msg("Unable to create access logger")
return nil
}
return accessLoggerMiddleware
}
func setupTracing(conf *static.Tracing) (*tracing.Tracer, io.Closer) {
if conf == nil {
return nil, nil
}
tracer, closer, err := tracing.NewTracing(conf)
if err != nil {
log.Warn().Err(err).Msg("Unable to create tracer")
return nil, nil
}
return tracer, closer
return kv, err
}
func checkNewVersion() {
ticker := time.Tick(24 * time.Hour)
ticker := time.NewTicker(24 * time.Hour)
safe.Go(func() {
for time.Sleep(10 * time.Minute); ; <-ticker {
version.CheckNewVersion()
}
})
}
func stats(staticConfiguration *static.Configuration) {
logger := log.With().Logger()
if staticConfiguration.Global.SendAnonymousUsage {
logger.Info().Msg(`Stats collection is enabled.`)
logger.Info().Msg(`Many thanks for contributing to Traefik's improvement by allowing us to receive anonymous information from your configuration.`)
logger.Info().Msg(`Help us improve Traefik by leaving this feature on :)`)
logger.Info().Msg(`More details on: https://doc.traefik.io/traefik/contributing/data-collection/`)
collect(staticConfiguration)
} else {
logger.Info().Msg(`
Stats collection is disabled.
Help us improve Traefik by turning this feature on :)
More details on: https://doc.traefik.io/traefik/contributing/data-collection/
`)
}
}
func collect(staticConfiguration *static.Configuration) {
ticker := time.Tick(24 * time.Hour)
safe.Go(func() {
for time.Sleep(10 * time.Minute); ; <-ticker {
if err := collector.Collect(staticConfiguration); err != nil {
log.Debug().Err(err).Send()
time.Sleep(10 * time.Minute)
version.CheckNewVersion()
for {
select {
case <-ticker.C:
version.CheckNewVersion()
}
}
})

View File

@@ -1,186 +0,0 @@
package main
import (
"crypto/x509"
"encoding/pem"
"strings"
"testing"
"github.com/go-kit/kit/metrics"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/traefik/traefik/v3/pkg/config/static"
)
// FooCert is a PEM-encoded TLS cert.
// generated from src/crypto/tls:
// go run generate_cert.go --rsa-bits 1024 --host foo.org,foo.com --ca --start-date "Jan 1 00:00:00 1970" --duration=1000000h
const fooCert = `-----BEGIN CERTIFICATE-----
MIICHzCCAYigAwIBAgIQXQFLeYRwc5X21t457t2xADANBgkqhkiG9w0BAQsFADAS
MRAwDgYDVQQKEwdBY21lIENvMCAXDTcwMDEwMTAwMDAwMFoYDzIwODQwMTI5MTYw
MDAwWjASMRAwDgYDVQQKEwdBY21lIENvMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
iQKBgQDCjn67GSs/khuGC4GNN+tVo1S+/eSHwr/hWzhfMqO7nYiXkFzmxi+u14CU
Pda6WOeps7T2/oQEFMxKKg7zYOqkLSbjbE0ZfosopaTvEsZm/AZHAAvoOrAsIJOn
SEiwy8h0tLA4z1SNR6rmIVQWyqBZEPAhBTQM1z7tFp48FakCFwIDAQABo3QwcjAO
BgNVHQ8BAf8EBAMCAqQwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDwYDVR0TAQH/BAUw
AwEB/zAdBgNVHQ4EFgQUDHG3ASzeUezElup9zbPpBn/vjogwGwYDVR0RBBQwEoIH
Zm9vLm9yZ4IHZm9vLmNvbTANBgkqhkiG9w0BAQsFAAOBgQBT+VLMbB9u27tBX8Aw
ZrGY3rbNdBGhXVTksrjiF+6ZtDpD3iI56GH9zLxnqvXkgn3u0+Ard5TqF/xmdwVw
NY0V/aWYfcL2G2auBCQrPvM03ozRnVUwVfP23eUzX2ORNHCYhd2ObQx4krrhs7cJ
SWxtKwFlstoXY3K2g9oRD9UxdQ==
-----END CERTIFICATE-----`
// BarCert is a PEM-encoded TLS cert.
// generated from src/crypto/tls:
// go run generate_cert.go --rsa-bits 1024 --host bar.org,bar.com --ca --start-date "Jan 1 00:00:00 1970" --duration=10000h
const barCert = `-----BEGIN CERTIFICATE-----
MIICHTCCAYagAwIBAgIQcuIcNEXzBHPoxna5S6wG4jANBgkqhkiG9w0BAQsFADAS
MRAwDgYDVQQKEwdBY21lIENvMB4XDTcwMDEwMTAwMDAwMFoXDTcxMDIyMTE2MDAw
MFowEjEQMA4GA1UEChMHQWNtZSBDbzCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
gYEAqtcrP+KA7D6NjyztGNIPMup9KiBMJ8QL+preog/YHR7SQLO3kGFhpS3WKMab
SzMypC3ZX1PZjBP5ZzwaV3PFbuwlCkPlyxR2lOWmullgI7mjY0TBeYLDIclIzGRp
mpSDDSpkW1ay2iJDSpXjlhmwZr84hrCU7BRTQJo91fdsRTsCAwEAAaN0MHIwDgYD
VR0PAQH/BAQDAgKkMBMGA1UdJQQMMAoGCCsGAQUFBwMBMA8GA1UdEwEB/wQFMAMB
Af8wHQYDVR0OBBYEFK8jnzFQvBAgWtfzOyXY4VSkwrTXMBsGA1UdEQQUMBKCB2Jh
ci5vcmeCB2Jhci5jb20wDQYJKoZIhvcNAQELBQADgYEAJz0ifAExisC/ZSRhWuHz
7qs1i6Nd4+YgEVR8dR71MChP+AMxucY1/ajVjb9xlLys3GPE90TWSdVppabEVjZY
Oq11nPKc50ItTt8dMku6t0JHBmzoGdkN0V4zJCBqdQJxhop8JpYJ0S9CW0eT93h3
ipYQSsmIINGtMXJ8VkP/MlM=
-----END CERTIFICATE-----`
type gaugeMock struct {
metrics map[string]float64
labels string
}
func (g gaugeMock) With(labelValues ...string) metrics.Gauge {
g.labels = strings.Join(labelValues, ",")
return g
}
func (g gaugeMock) Set(value float64) {
g.metrics[g.labels] = value
}
func (g gaugeMock) Add(delta float64) {
panic("implement me")
}
func TestAppendCertMetric(t *testing.T) {
testCases := []struct {
desc string
certs []string
expected map[string]float64
}{
{
desc: "No certs",
certs: []string{},
expected: map[string]float64{},
},
{
desc: "One cert",
certs: []string{fooCert},
expected: map[string]float64{
"cn,,serial,123624926713171615935660664614975025408,sans,foo.com,foo.org": 3.6e+09,
},
},
{
desc: "Two certs",
certs: []string{fooCert, barCert},
expected: map[string]float64{
"cn,,serial,123624926713171615935660664614975025408,sans,foo.com,foo.org": 3.6e+09,
"cn,,serial,152706022658490889223053211416725817058,sans,bar.com,bar.org": 3.6e+07,
},
},
}
for _, test := range testCases {
t.Run(test.desc, func(t *testing.T) {
t.Parallel()
gauge := &gaugeMock{
metrics: map[string]float64{},
}
for _, cert := range test.certs {
block, _ := pem.Decode([]byte(cert))
parsedCert, err := x509.ParseCertificate(block.Bytes)
require.NoError(t, err)
appendCertMetric(gauge, parsedCert)
}
assert.Equal(t, test.expected, gauge.metrics)
})
}
}
func TestGetDefaultsEntrypoints(t *testing.T) {
testCases := []struct {
desc string
entrypoints static.EntryPoints
expected []string
}{
{
desc: "Skips special names",
entrypoints: map[string]*static.EntryPoint{
"web": {
Address: ":80",
},
"traefik": {
Address: ":8080",
},
},
expected: []string{"web"},
},
{
desc: "Two EntryPoints not attachable",
entrypoints: map[string]*static.EntryPoint{
"web": {
Address: ":80",
},
"websecure": {
Address: ":443",
},
},
expected: []string{"web", "websecure"},
},
{
desc: "Two EntryPoints only one attachable",
entrypoints: map[string]*static.EntryPoint{
"web": {
Address: ":80",
},
"websecure": {
Address: ":443",
AsDefault: true,
},
},
expected: []string{"websecure"},
},
{
desc: "Two attachable EntryPoints",
entrypoints: map[string]*static.EntryPoint{
"web": {
Address: ":80",
AsDefault: true,
},
"websecure": {
Address: ":443",
AsDefault: true,
},
},
expected: []string{"web", "websecure"},
},
}
for _, test := range testCases {
t.Run(test.desc, func(t *testing.T) {
actual := getDefaultsEntrypoints(&static.Configuration{
EntryPoints: test.entrypoints,
})
assert.ElementsMatch(t, test.expected, actual)
})
}
}

63
cmd/traefik/version.go Normal file
View File

@@ -0,0 +1,63 @@
package main
import (
"fmt"
"io"
"os"
"runtime"
"text/template"
"github.com/containous/flaeg"
"github.com/containous/traefik/version"
)
var versionTemplate = `Version: {{.Version}}
Codename: {{.Codename}}
Go version: {{.GoVersion}}
Built: {{.BuildTime}}
OS/Arch: {{.Os}}/{{.Arch}}`
// newVersionCmd builds a new Version command
func newVersionCmd() *flaeg.Command {
//version Command init
return &flaeg.Command{
Name: "version",
Description: `Print version`,
Config: struct{}{},
DefaultPointersConfig: struct{}{},
Run: func() error {
if err := getVersionPrint(os.Stdout); err != nil {
return err
}
fmt.Print("\n")
return nil
},
}
}
func getVersionPrint(wr io.Writer) error {
tmpl, err := template.New("").Parse(versionTemplate)
if err != nil {
return err
}
v := struct {
Version string
Codename string
GoVersion string
BuildTime string
Os string
Arch string
}{
Version: version.Version,
Codename: version.Codename,
GoVersion: runtime.Version(),
BuildTime: version.BuildDate,
Os: runtime.GOOS,
Arch: runtime.GOARCH,
}
return tmpl.Execute(wr, v)
}

View File

@@ -1,60 +0,0 @@
package version
import (
"fmt"
"io"
"os"
"runtime"
"text/template"
"github.com/traefik/paerser/cli"
"github.com/traefik/traefik/v3/pkg/version"
)
var versionTemplate = `Version: {{.Version}}
Codename: {{.Codename}}
Go version: {{.GoVersion}}
Built: {{.BuildTime}}
OS/Arch: {{.Os}}/{{.Arch}}`
// NewCmd builds a new Version command.
func NewCmd() *cli.Command {
return &cli.Command{
Name: "version",
Description: `Shows the current Traefik version.`,
Configuration: nil,
Run: func(_ []string) error {
if err := GetPrint(os.Stdout); err != nil {
return err
}
fmt.Print("\n")
return nil
},
}
}
// GetPrint writes the printable version.
func GetPrint(wr io.Writer) error {
tmpl, err := template.New("").Parse(versionTemplate)
if err != nil {
return err
}
v := struct {
Version string
Codename string
GoVersion string
BuildTime string
Os string
Arch string
}{
Version: version.Version,
Codename: version.Codename,
GoVersion: runtime.Version(),
BuildTime: version.BuildDate,
Os: runtime.GOOS,
Arch: runtime.GOARCH,
}
return tmpl.Execute(wr, v)
}

View File

@@ -0,0 +1,537 @@
package configuration
import (
"crypto/tls"
"fmt"
"io/ioutil"
"os"
"strings"
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/log"
"github.com/containous/traefik/provider/boltdb"
"github.com/containous/traefik/provider/consul"
"github.com/containous/traefik/provider/docker"
"github.com/containous/traefik/provider/dynamodb"
"github.com/containous/traefik/provider/ecs"
"github.com/containous/traefik/provider/etcd"
"github.com/containous/traefik/provider/eureka"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/provider/marathon"
"github.com/containous/traefik/provider/mesos"
"github.com/containous/traefik/provider/rancher"
"github.com/containous/traefik/provider/web"
"github.com/containous/traefik/provider/zk"
"github.com/containous/traefik/types"
)
const (
// DefaultHealthCheckInterval is the default health check interval.
DefaultHealthCheckInterval = 30 * time.Second
// DefaultDialTimeout when connecting to a backend server.
DefaultDialTimeout = 30 * time.Second
// DefaultIdleTimeout before closing an idle connection.
DefaultIdleTimeout = 180 * time.Second
)
// GlobalConfiguration holds global configuration (with providers, etc.).
// It's populated from the traefik configuration file passed as an argument to the binary.
type GlobalConfiguration struct {
GraceTimeOut flaeg.Duration `short:"g" description:"Duration to give active requests a chance to finish before Traefik stops" export:"true"`
Debug bool `short:"d" description:"Enable debug mode" export:"true"`
CheckNewVersion bool `description:"Periodically check if a new version has been released" export:"true"`
AccessLogsFile string `description:"(Deprecated) Access logs file" export:"true"` // Deprecated
AccessLog *types.AccessLog `description:"Access log settings" export:"true"`
TraefikLogsFile string `description:"Traefik logs file. Stdout is used when omitted or empty" export:"true"`
LogLevel string `short:"l" description:"Log level" export:"true"`
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key;prod/traefik.crt,prod/traefik.key'" export:"true"`
Cluster *types.Cluster `description:"Enable clustering" export:"true"`
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags" export:"true"`
ACME *acme.ACME `description:"Enable ACME (Let's Encrypt): automatic SSL" export:"true"`
DefaultEntryPoints DefaultEntryPoints `description:"Entrypoints to be used by frontends that do not specify any entrypoint" export:"true"`
ProvidersThrottleDuration flaeg.Duration `description:"Backends throttle duration: minimum duration between 2 events from providers before applying a new configuration. It avoids unnecessary reloads if multiple events are sent in a short amount of time." export:"true"`
MaxIdleConnsPerHost int `description:"If non-zero, controls the maximum idle (keep-alive) connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used" export:"true"`
IdleTimeout flaeg.Duration `description:"(Deprecated) maximum amount of time an idle (keep-alive) connection will remain idle before closing itself." export:"true"` // Deprecated
InsecureSkipVerify bool `description:"Disable SSL certificate verification" export:"true"`
RootCAs RootCAs `description:"Add cert file for self-signed certificate"`
Retry *Retry `description:"Enable retry sending request if network error" export:"true"`
HealthCheck *HealthCheckConfig `description:"Health check parameters" export:"true"`
RespondingTimeouts *RespondingTimeouts `description:"Timeouts for incoming requests to the Traefik instance" export:"true"`
ForwardingTimeouts *ForwardingTimeouts `description:"Timeouts for requests forwarded to the backend servers" export:"true"`
Docker *docker.Provider `description:"Enable Docker backend with default settings" export:"true"`
File *file.Provider `description:"Enable File backend with default settings" export:"true"`
Web *web.Provider `description:"Enable Web backend with default settings" export:"true"`
Marathon *marathon.Provider `description:"Enable Marathon backend with default settings" export:"true"`
Consul *consul.Provider `description:"Enable Consul backend with default settings" export:"true"`
ConsulCatalog *consul.CatalogProvider `description:"Enable Consul catalog backend with default settings" export:"true"`
Etcd *etcd.Provider `description:"Enable Etcd backend with default settings" export:"true"`
Zookeeper *zk.Provider `description:"Enable Zookeeper backend with default settings" export:"true"`
Boltdb *boltdb.Provider `description:"Enable Boltdb backend with default settings" export:"true"`
Kubernetes *kubernetes.Provider `description:"Enable Kubernetes backend with default settings" export:"true"`
Mesos *mesos.Provider `description:"Enable Mesos backend with default settings" export:"true"`
Eureka *eureka.Provider `description:"Enable Eureka backend with default settings" export:"true"`
ECS *ecs.Provider `description:"Enable ECS backend with default settings" export:"true"`
Rancher *rancher.Provider `description:"Enable Rancher backend with default settings" export:"true"`
DynamoDB *dynamodb.Provider `description:"Enable DynamoDB backend with default settings" export:"true"`
}
// SetEffectiveConfiguration adds missing configuration parameters derived from existing ones.
// It also takes care of maintaining backwards compatibility.
func (gc *GlobalConfiguration) SetEffectiveConfiguration() {
if len(gc.EntryPoints) == 0 {
gc.EntryPoints = map[string]*EntryPoint{"http": {
Address: ":80",
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
}}
gc.DefaultEntryPoints = []string{"http"}
}
// ForwardedHeaders must be removed in the next breaking version
for entryPointName := range gc.EntryPoints {
entryPoint := gc.EntryPoints[entryPointName]
if entryPoint.ForwardedHeaders == nil {
entryPoint.ForwardedHeaders = &ForwardedHeaders{Insecure: true}
}
}
if gc.Rancher != nil {
// Ensure backwards compatibility for now
if len(gc.Rancher.AccessKey) > 0 ||
len(gc.Rancher.Endpoint) > 0 ||
len(gc.Rancher.SecretKey) > 0 {
if gc.Rancher.API == nil {
gc.Rancher.API = &rancher.APIConfiguration{
AccessKey: gc.Rancher.AccessKey,
SecretKey: gc.Rancher.SecretKey,
Endpoint: gc.Rancher.Endpoint,
}
}
log.Warn("Deprecated configuration found: rancher.[accesskey|secretkey|endpoint]. " +
"Please use rancher.api.[accesskey|secretkey|endpoint] instead.")
}
if gc.Rancher.Metadata != nil && len(gc.Rancher.Metadata.Prefix) == 0 {
gc.Rancher.Metadata.Prefix = "latest"
}
}
if gc.Debug {
gc.LogLevel = "DEBUG"
}
if gc.Web != nil && (gc.Web.Path == "" || !strings.HasSuffix(gc.Web.Path, "/")) {
gc.Web.Path += "/"
}
}
// DefaultEntryPoints holds default entry points
type DefaultEntryPoints []string
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (dep *DefaultEntryPoints) String() string {
return strings.Join(*dep, ",")
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (dep *DefaultEntryPoints) Set(value string) error {
entrypoints := strings.Split(value, ",")
if len(entrypoints) == 0 {
return fmt.Errorf("bad DefaultEntryPoints format: %s", value)
}
for _, entrypoint := range entrypoints {
*dep = append(*dep, entrypoint)
}
return nil
}
// Get returns the DefaultEntryPoints
func (dep *DefaultEntryPoints) Get() interface{} {
return DefaultEntryPoints(*dep)
}
// SetValue sets the DefaultEntryPoints with val
func (dep *DefaultEntryPoints) SetValue(val interface{}) {
*dep = DefaultEntryPoints(val.(DefaultEntryPoints))
}
// Type is the type of the struct
func (dep *DefaultEntryPoints) Type() string {
return "defaultentrypoints"
}
// RootCAs holds the CAs we want to have as roots of trust
type RootCAs []FileOrContent
// FileOrContent holds a file path or file content
type FileOrContent string
func (f FileOrContent) String() string {
return string(f)
}
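// Read returns the content: if the value is the path of an existing file, the file content is read;
// otherwise the raw value itself is returned as bytes.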
func (f FileOrContent) Read() ([]byte, error) {
var content []byte
if _, err := os.Stat(f.String()); err == nil {
content, err = ioutil.ReadFile(f.String())
if err != nil {
return nil, err
}
} else {
content = []byte(f)
}
return content, nil
}
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (r *RootCAs) String() string {
sliceOfString := make([]string, len([]FileOrContent(*r)))
for key, value := range *r {
sliceOfString[key] = value.String()
}
return strings.Join(sliceOfString, ",")
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (r *RootCAs) Set(value string) error {
rootCAs := strings.Split(value, ",")
if len(rootCAs) == 0 {
return fmt.Errorf("bad RootCAs format: %s", value)
}
for _, rootCA := range rootCAs {
*r = append(*r, FileOrContent(rootCA))
}
return nil
}
// Get returns the RootCAs
func (r *RootCAs) Get() interface{} {
return RootCAs(*r)
}
// SetValue sets the RootCAs with val
func (r *RootCAs) SetValue(val interface{}) {
*r = RootCAs(val.(RootCAs))
}
// Type is the type of the struct
func (r *RootCAs) Type() string {
return "rootcas"
}
// EntryPoints holds entry points configuration of the reverse proxy (ip, port, TLS...)
type EntryPoints map[string]*EntryPoint
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (ep *EntryPoints) String() string {
return fmt.Sprintf("%+v", *ep)
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// The value is a space-separated list of "Key:Value" pairs describing a single entry point,
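// e.g. "Name:https Address::443 TLS:tests/traefik.crt,tests/traefik.key Compress:true".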
func (ep *EntryPoints) Set(value string) error {
result := parseEntryPointsConfiguration(value)
var configTLS *TLS
if len(result["tls"]) > 0 {
certs := Certificates{}
if err := certs.Set(result["tls"]); err != nil {
return err
}
configTLS = &TLS{
Certificates: certs,
}
} else if len(result["tls_acme"]) > 0 {
configTLS = &TLS{
Certificates: Certificates{},
}
}
if len(result["ca"]) > 0 {
files := strings.Split(result["ca"], ",")
configTLS.ClientCAFiles = files
}
var redirect *Redirect
if len(result["redirect_entrypoint"]) > 0 || len(result["redirect_regex"]) > 0 || len(result["redirect_replacement"]) > 0 {
redirect = &Redirect{
EntryPoint: result["redirect_entrypoint"],
Regex: result["redirect_regex"],
Replacement: result["redirect_replacement"],
}
}
whiteListSourceRange := []string{}
if len(result["whitelistsourcerange"]) > 0 {
whiteListSourceRange = strings.Split(result["whitelistsourcerange"], ",")
}
compress := toBool(result, "compress")
var proxyProtocol *ProxyProtocol
ppTrustedIPs := result["proxyprotocol_trustedips"]
if len(result["proxyprotocol_insecure"]) > 0 || len(ppTrustedIPs) > 0 {
proxyProtocol = &ProxyProtocol{
Insecure: toBool(result, "proxyprotocol_insecure"),
}
if len(ppTrustedIPs) > 0 {
proxyProtocol.TrustedIPs = strings.Split(ppTrustedIPs, ",")
}
}
// TODO must be changed to false by default in the next breaking version.
forwardedHeaders := &ForwardedHeaders{Insecure: true}
if _, ok := result["forwardedheaders_insecure"]; ok {
forwardedHeaders.Insecure = toBool(result, "forwardedheaders_insecure")
}
fhTrustedIPs := result["forwardedheaders_trustedips"]
if len(fhTrustedIPs) > 0 {
// TODO must be removed in the next breaking version.
forwardedHeaders.Insecure = toBool(result, "forwardedheaders_insecure")
forwardedHeaders.TrustedIPs = strings.Split(fhTrustedIPs, ",")
}
if proxyProtocol != nil && proxyProtocol.Insecure {
log.Warn("ProxyProtocol.Insecure:true is dangerous. Please use 'ProxyProtocol.TrustedIPs:IPs' and remove 'ProxyProtocol.Insecure:true'")
}
(*ep)[result["name"]] = &EntryPoint{
Address: result["address"],
TLS: configTLS,
Redirect: redirect,
Compress: compress,
WhitelistSourceRange: whiteListSourceRange,
ProxyProtocol: proxyProtocol,
ForwardedHeaders: forwardedHeaders,
}
return nil
}
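// parseEntryPointsConfiguration parses a raw entry point definition made of space-separated "Key:Value" pairs
// into a map whose keys are lowercased with dots replaced by underscores
// (e.g. "ProxyProtocol.Insecure" becomes "proxyprotocol_insecure"); a bare "TLS" token (with no value) is stored under "tls_acme".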
func parseEntryPointsConfiguration(raw string) map[string]string {
sections := strings.Fields(raw)
config := make(map[string]string)
for _, part := range sections {
field := strings.SplitN(part, ":", 2)
name := strings.ToLower(strings.Replace(field[0], ".", "_", -1))
if len(field) > 1 {
config[name] = field[1]
} else {
if strings.EqualFold(name, "TLS") {
config["tls_acme"] = "TLS"
} else {
config[name] = ""
}
}
}
return config
}
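// toBool reports whether the value stored under key equals "true", "enable" or "on" (case-insensitive); missing keys are false.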
func toBool(conf map[string]string, key string) bool {
if val, ok := conf[key]; ok {
return strings.EqualFold(val, "true") ||
strings.EqualFold(val, "enable") ||
strings.EqualFold(val, "on")
}
return false
}
// Get returns the EntryPoints map
func (ep *EntryPoints) Get() interface{} {
return EntryPoints(*ep)
}
// SetValue sets the EntryPoints map with val
func (ep *EntryPoints) SetValue(val interface{}) {
*ep = EntryPoints(val.(EntryPoints))
}
// Type is the type of the struct
func (ep *EntryPoints) Type() string {
return "entrypoints"
}
// EntryPoint holds an entry point configuration of the reverse proxy (ip, port, TLS...)
type EntryPoint struct {
Network string
Address string
TLS *TLS `export:"true"`
Redirect *Redirect `export:"true"`
Auth *types.Auth `export:"true"`
WhitelistSourceRange []string
Compress bool `export:"true"`
ProxyProtocol *ProxyProtocol `export:"true"`
ForwardedHeaders *ForwardedHeaders `export:"true"`
}
// Redirect configures a redirection of an entry point to another, or to an URL
type Redirect struct {
EntryPoint string
Regex string
Replacement string
}
// TLS configures TLS for an entry point
type TLS struct {
MinVersion string `export:"true"`
CipherSuites []string
Certificates Certificates
ClientCAFiles []string
}
// MinVersion is a map of allowed TLS minimum versions
var MinVersion = map[string]uint16{
`VersionTLS10`: tls.VersionTLS10,
`VersionTLS11`: tls.VersionTLS11,
`VersionTLS12`: tls.VersionTLS12,
}
// CipherSuites is a map of TLS cipher suites from crypto/tls
// Available CipherSuites defined at https://golang.org/pkg/crypto/tls/#pkg-constants
var CipherSuites = map[string]uint16{
`TLS_RSA_WITH_RC4_128_SHA`: tls.TLS_RSA_WITH_RC4_128_SHA,
`TLS_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
`TLS_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_RSA_WITH_AES_128_CBC_SHA,
`TLS_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_RSA_WITH_AES_256_CBC_SHA,
`TLS_RSA_WITH_AES_128_CBC_SHA256`: tls.TLS_RSA_WITH_AES_128_CBC_SHA256,
`TLS_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
`TLS_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
`TLS_ECDHE_ECDSA_WITH_RC4_128_SHA`: tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
`TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA`: tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
`TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA`: tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
`TLS_ECDHE_RSA_WITH_RC4_128_SHA`: tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,
`TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
`TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`: tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`: tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
`TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`: tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
`TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`: tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
`TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305`: tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
`TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`: tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
}
// Certificates defines the traefik certificates type.
// Certs and Keys can be either a file path or the file content itself.
type Certificates []Certificate
// CreateTLSConfig creates a TLS config from Certificate structures
func (certs *Certificates) CreateTLSConfig() (*tls.Config, error) {
config := &tls.Config{}
config.Certificates = []tls.Certificate{}
certsSlice := []Certificate(*certs)
for _, v := range certsSlice {
var err error
certContent, err := v.CertFile.Read()
if err != nil {
return nil, err
}
keyContent, err := v.KeyFile.Read()
if err != nil {
return nil, err
}
cert, err := tls.X509KeyPair(certContent, keyContent)
if err != nil {
return nil, err
}
config.Certificates = append(config.Certificates, cert)
}
return config, nil
}
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (certs *Certificates) String() string {
if len(*certs) == 0 {
return ""
}
var result []string
for _, certificate := range *certs {
result = append(result, certificate.CertFile.String()+","+certificate.KeyFile.String())
}
return strings.Join(result, ";")
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a semicolon-separated list of comma-separated cert/key pairs, so we split it accordingly.
func (certs *Certificates) Set(value string) error {
certificates := strings.Split(value, ";")
for _, certificate := range certificates {
files := strings.Split(certificate, ",")
if len(files) != 2 {
return fmt.Errorf("bad certificates format: %s", value)
}
*certs = append(*certs, Certificate{
CertFile: FileOrContent(files[0]),
KeyFile: FileOrContent(files[1]),
})
}
return nil
}
// Type is the type of the struct
func (certs *Certificates) Type() string {
return "certificates"
}
// Certificate holds an SSL cert/key pair.
// Cert and Key can be either a file path or the file content itself.
type Certificate struct {
CertFile FileOrContent
KeyFile FileOrContent
}
// Retry contains request retry config
type Retry struct {
Attempts int `description:"Number of attempts" export:"true"`
}
// HealthCheckConfig contains health check configuration parameters.
type HealthCheckConfig struct {
Interval flaeg.Duration `description:"Default periodicity of enabled health checks" export:"true"`
}
// RespondingTimeouts contains timeout configurations for incoming requests to the Traefik instance.
type RespondingTimeouts struct {
ReadTimeout flaeg.Duration `description:"ReadTimeout is the maximum duration for reading the entire request, including the body. If zero, no timeout is set" export:"true"`
WriteTimeout flaeg.Duration `description:"WriteTimeout is the maximum duration before timing out writes of the response. If zero, no timeout is set" export:"true"`
IdleTimeout flaeg.Duration `description:"IdleTimeout is the maximum duration an idle (keep-alive) connection will remain idle before closing itself. Defaults to 180 seconds. If zero, no timeout is set" export:"true"`
}
// ForwardingTimeouts contains timeout configurations for forwarding requests to the backend servers.
type ForwardingTimeouts struct {
DialTimeout flaeg.Duration `description:"The amount of time to wait until a connection to a backend server can be established. Defaults to 30 seconds. If zero, no timeout exists" export:"true"`
ResponseHeaderTimeout flaeg.Duration `description:"The amount of time to wait for a server's response headers after fully writing the request (including its body, if any). If zero, no timeout exists" export:"true"`
}
// ProxyProtocol contains Proxy-Protocol configuration
type ProxyProtocol struct {
Insecure bool
TrustedIPs []string
}
// ForwardedHeaders holds the configuration for trusting client forwarded headers
type ForwardedHeaders struct {
Insecure bool
TrustedIPs []string
}


@@ -0,0 +1,293 @@
package configuration
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func Test_parseEntryPointsConfiguration(t *testing.T) {
testCases := []struct {
name string
value string
expectedResult map[string]string
}{
{
name: "all parameters",
value: "Name:foo TLS:goo TLS CA:car Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:WhiteListSourceRange ProxyProtocol.TrustedIPs:192.168.0.1 ProxyProtocol.Insecure:false Address::8000",
expectedResult: map[string]string{
"name": "foo",
"address": ":8000",
"ca": "car",
"tls": "goo",
"tls_acme": "TLS",
"redirect_entrypoint": "RedirectEntryPoint",
"redirect_regex": "RedirectRegex",
"redirect_replacement": "RedirectReplacement",
"whitelistsourcerange": "WhiteListSourceRange",
"proxyprotocol_trustedips": "192.168.0.1",
"proxyprotocol_insecure": "false",
"compress": "true",
},
},
{
name: "compress on",
value: "name:foo Compress:on",
expectedResult: map[string]string{
"name": "foo",
"compress": "on",
},
},
{
name: "TLS",
value: "Name:foo TLS:goo TLS",
expectedResult: map[string]string{
"name": "foo",
"tls": "goo",
"tls_acme": "TLS",
},
},
}
for _, test := range testCases {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
conf := parseEntryPointsConfiguration(test.value)
assert.Len(t, conf, len(test.expectedResult))
assert.Equal(t, test.expectedResult, conf)
})
}
}
func Test_toBool(t *testing.T) {
testCases := []struct {
name string
value string
key string
expectedBool bool
}{
{
name: "on",
value: "on",
key: "foo",
expectedBool: true,
},
{
name: "true",
value: "true",
key: "foo",
expectedBool: true,
},
{
name: "enable",
value: "enable",
key: "foo",
expectedBool: true,
},
{
name: "arbitrary string",
value: "bar",
key: "foo",
expectedBool: false,
},
{
name: "no existing entry",
value: "bar",
key: "fii",
expectedBool: false,
},
}
for _, test := range testCases {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
conf := map[string]string{
"foo": test.value,
}
result := toBool(conf, test.key)
assert.Equal(t, test.expectedBool, result)
})
}
}
func TestEntryPoints_Set(t *testing.T) {
testCases := []struct {
name string
expression string
expectedEntryPointName string
expectedEntryPoint *EntryPoint
}{
{
name: "all parameters camelcase",
expression: "Name:foo Address::8000 TLS:goo,gii TLS CA:car Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:Range ProxyProtocol.TrustedIPs:192.168.0.1 ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
Address: ":8000",
Redirect: &Redirect{
EntryPoint: "RedirectEntryPoint",
Regex: "RedirectRegex",
Replacement: "RedirectReplacement",
},
Compress: true,
ProxyProtocol: &ProxyProtocol{
TrustedIPs: []string{"192.168.0.1"},
},
ForwardedHeaders: &ForwardedHeaders{
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
},
WhitelistSourceRange: []string{"Range"},
TLS: &TLS{
ClientCAFiles: []string{"car"},
Certificates: Certificates{
{
CertFile: FileOrContent("goo"),
KeyFile: FileOrContent("gii"),
},
},
},
},
},
{
name: "all parameters lowercase",
expression: "name:foo address::8000 tls:goo,gii tls ca:car redirect.entryPoint:RedirectEntryPoint redirect.regex:RedirectRegex redirect.replacement:RedirectReplacement compress:true whiteListSourceRange:Range proxyProtocol.trustedIPs:192.168.0.1 forwardedHeaders.trustedIPs:10.0.0.3/24,20.0.0.3/24",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
Address: ":8000",
Redirect: &Redirect{
EntryPoint: "RedirectEntryPoint",
Regex: "RedirectRegex",
Replacement: "RedirectReplacement",
},
Compress: true,
ProxyProtocol: &ProxyProtocol{
TrustedIPs: []string{"192.168.0.1"},
},
ForwardedHeaders: &ForwardedHeaders{
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
},
WhitelistSourceRange: []string{"Range"},
TLS: &TLS{
ClientCAFiles: []string{"car"},
Certificates: Certificates{
{
CertFile: FileOrContent("goo"),
KeyFile: FileOrContent("gii"),
},
},
},
},
},
{
name: "default",
expression: "Name:foo",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
},
},
{
name: "ForwardedHeaders insecure true",
expression: "Name:foo ForwardedHeaders.Insecure:true",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
},
},
{
name: "ForwardedHeaders insecure false",
expression: "Name:foo ForwardedHeaders.Insecure:false",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: false},
},
},
{
name: "ForwardedHeaders TrustedIPs",
expression: "Name:foo ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
},
},
},
{
name: "ProxyProtocol insecure true",
expression: "Name:foo ProxyProtocol.Insecure:true",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
ProxyProtocol: &ProxyProtocol{Insecure: true},
},
},
{
name: "ProxyProtocol insecure false",
expression: "Name:foo ProxyProtocol.Insecure:false",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
ProxyProtocol: &ProxyProtocol{},
},
},
{
name: "ProxyProtocol TrustedIPs",
expression: "Name:foo ProxyProtocol.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
ProxyProtocol: &ProxyProtocol{
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
},
},
},
{
name: "compress on",
expression: "Name:foo Compress:on",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
Compress: true,
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
},
},
{
name: "compress true",
expression: "Name:foo Compress:true",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
Compress: true,
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
},
},
}
for _, test := range testCases {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
eps := EntryPoints{}
err := eps.Set(test.expression)
require.NoError(t, err)
ep := eps[test.expectedEntryPointName]
assert.EqualValues(t, test.expectedEntryPoint, ep)
})
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

contrib/scripts/dumpcerts.sh Executable file

@@ -0,0 +1,151 @@
#!/usr/bin/env bash
# Copyright (c) 2017 Brian 'redbeard' Harrington <redbeard@dead-city.org>
#
# dumpcerts.sh - A simple utility to explode a Traefik acme.json file into a
# directory of certificates and a private key
#
# Usage - dumpcerts.sh /etc/traefik/acme.json /etc/ssl/
#
# Dependencies -
# util-linux
# openssl
# jq
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# Exit codes:
# 1 - A component is missing or could not be read
# 2 - There was a problem reading acme.json
# 4 - The destination certificate directory does not exist
# 8 - Missing private key
set -o errexit
set -o pipefail
set -o nounset
USAGE="$(basename "$0") <path to acme> <destination cert directory>"
# Allow us to exit on a missing jq binary
exit_jq() {
echo "
You must have the binary 'jq' to use this.
jq is available at: https://stedolan.github.io/jq/download/
${USAGE}" >&2
exit 1
}
bad_acme() {
echo "
There was a problem parsing your acme.json file.
${USAGE}" >&2
exit 2
}
if [ $# -ne 2 ]; then
echo "
Insufficient number of parameters.
${USAGE}" >&2
exit 1
fi
readonly acmefile="${1}"
readonly certdir="${2%/}"
if [ ! -r "${acmefile}" ]; then
echo "
There was a problem reading from '${acmefile}'
We need to read this file to explode the JSON bundle... exiting.
${USAGE}" >&2
exit 2
fi
if [ ! -d "${certdir}" ]; then
echo "
Path ${certdir} does not seem to be a directory
We need a directory in which to explode the JSON bundle... exiting.
${USAGE}" >&2
exit 4
fi
jq=$(command -v jq) || exit_jq
priv=$(${jq} -e -r '.PrivateKey' "${acmefile}") || bad_acme
if [ -z "${priv}" ]; then
echo "
There didn't seem to be a private key in ${acmefile}.
Please ensure that there is a key in this file and try again." >&2
exit 8
fi
# If they do not exist, create the needed subdirectories for our assets
# and place each in a variable for later use, normalizing the path
mkdir -p "${certdir}"/{certs,private}
pdir="${certdir}/private/"
cdir="${certdir}/certs/"
# Save the existing umask, change the default mode to 600, then
# after writing the private key switch it back to the default
oldumask=$(umask)
umask 177
trap 'umask ${oldumask}' EXIT
# traefik stores the private key in stripped base64 format, but the certificates are
# bundled as a base64 object without stripping the headers. This normalizes the
# headers and formatting.
#
# In testing this out it was a balance between the following mechanisms:
# gawk:
# echo ${priv} | awk 'BEGIN {print "-----BEGIN RSA PRIVATE KEY-----"}
# {gsub(/.{64}/,"&\n")}1
# END {print "-----END RSA PRIVATE KEY-----"}' > "${pdir}/letsencrypt.key"
#
# openssl:
# echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
# | openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
#
# and sed:
# echo "-----BEGIN RSA PRIVATE KEY-----" > "${pdir}/letsencrypt.key"
# echo ${priv} | sed 's/(.{64})/\1\n/g' >> "${pdir}/letsencrypt.key"
# echo "-----END RSA PRIVATE KEY-----" > "${pdir}/letsencrypt.key"
#
# In the end, openssl was chosen because most users will need this script
# *because* of openssl combined with the fact that it will refuse to write the
# key if it does not parse out correctly. The other mechanisms were left as
# comments so that the user can choose the mechanism most appropriate to them.
echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
| openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
# Process the certificates for each of the domains in acme.json
for domain in $(jq -r '.DomainsCertificate.Certs[].Certificate.Domain' "${acmefile}"); do
# Traefik stores a cert bundle for each domain. Within this cert
# bundle there are both the certificate itself and the Let's Encrypt CA
echo "Extracting cert bundle for ${domain}"
cert=$(jq -e -r --arg domain "$domain" '.DomainsCertificate.Certs[].Certificate |
select (.Domain == $domain )| .Certificate' "${acmefile}") || bad_acme
echo "${cert}" | base64 --decode > "${cdir}/${domain}.pem"
done


@@ -1,41 +1,11 @@
[Unit]
Description=Traefik
Documentation=https://doc.traefik.io/traefik/
#After=network-online.target
#AssertFileIsExecutable=/usr/bin/traefik
#AssertPathExists=/etc/traefik/traefik.toml
[Service]
# Run traefik as its own user (create new user with: useradd -r -s /bin/false -U -M traefik)
#User=traefik
#AmbientCapabilities=CAP_NET_BIND_SERVICE
# configure service behavior
Type=notify
#ExecStart=/usr/bin/traefik --configFile=/etc/traefik/traefik.toml
ExecStart=/usr/bin/traefik --configFile=/etc/traefik.toml
Restart=always
WatchdogSec=1s
# lock down system access
# prohibit any operating system and configuration modification
#ProtectSystem=strict
# create separate, new (and empty) /tmp and /var/tmp filesystems
#PrivateTmp=true
# make /home directories inaccessible
#ProtectHome=true
# turns off access to physical devices (/dev/...)
#PrivateDevices=true
# make kernel settings (procfs and sysfs) read-only
#ProtectKernelTunables=true
# make cgroups /sys/fs/cgroup read-only
#ProtectControlGroups=true
# allow writing of acme.json
#ReadWritePaths=/etc/traefik/acme.json
# depending on log and entrypoint configuration, you may need to allow writing to other paths, too
# limit number of processes in this unit
#LimitNPROC=1
[Install]
WantedBy=multi-user.target

docs.Dockerfile Normal file

@@ -0,0 +1,10 @@
FROM alpine:3.7
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin
COPY requirements.txt /mkdocs/
WORKDIR /mkdocs
VOLUME /mkdocs
RUN apk --no-cache --no-progress add py-pip \
&& pip install --trusted-host pypi.python.org --user -r requirements.txt


@@ -1 +0,0 @@
site/


@@ -1,13 +0,0 @@
{
"no-hard-tabs": false,
"MD007": { "indent": 4 },
"MD009": false,
"MD013": false,
"MD024": false,
"MD025": false,
"MD026": false,
"MD033": false,
"MD034": false,
"MD036": false,
"MD046": false
}

docs/CNAME Normal file

@@ -0,0 +1 @@
docs.traefik.io


@@ -1,65 +0,0 @@
#######
# This Makefile contains all targets related to the documentation
#######
DOCS_VERIFY_SKIP ?= false
DOCS_LINT_SKIP ?= false
TRAEFIK_DOCS_BUILD_IMAGE ?= traefik-docs
TRAEFIK_DOCS_CHECK_IMAGE ?= $(TRAEFIK_DOCS_BUILD_IMAGE)-check
SITE_DIR := $(CURDIR)/site
DOCKER_RUN_DOC_PORT := 8000
DOCKER_RUN_DOC_MOUNTS := -v $(CURDIR):/mkdocs
DOCKER_RUN_DOC_OPTS := --rm $(DOCKER_RUN_DOC_MOUNTS) -p $(DOCKER_RUN_DOC_PORT):8000
# Default: generates the documentation into $(SITE_DIR)
.PHONY: docs
docs: docs-clean docs-image docs-lint docs-build docs-verify
# Writer Mode: build and serve docs on http://localhost:8000 with livereload
.PHONY: docs-serve
docs-serve: docs-image
docker run $(DOCKER_RUN_DOC_OPTS) $(TRAEFIK_DOCS_BUILD_IMAGE) mkdocs serve
## Pull image for doc building
.PHONY: docs-pull-images
docs-pull-images:
grep --no-filename -E '^FROM' ./*.Dockerfile \
| awk '{print $$2}' \
| sort \
| uniq \
| xargs -P 6 -n 1 docker pull
# Utilities Targets for each step
.PHONY: docs-image
docs-image:
docker build -t $(TRAEFIK_DOCS_BUILD_IMAGE) -f docs.Dockerfile ./
.PHONY: docs-build
docs-build: docs-image
docker run $(DOCKER_RUN_DOC_OPTS) $(TRAEFIK_DOCS_BUILD_IMAGE) sh -c "mkdocs build \
&& chown -R $(shell id -u):$(shell id -g) ./site"
.PHONY: docs-verify
docs-verify: docs-build
ifneq ("$(DOCS_VERIFY_SKIP)", "true")
docker build -t $(TRAEFIK_DOCS_CHECK_IMAGE) -f check.Dockerfile ./
docker run --rm -v $(CURDIR):/app $(TRAEFIK_DOCS_CHECK_IMAGE) /verify.sh
else
echo "DOCS_VERIFY_SKIP is true: no verification done."
endif
.PHONY: docs-lint
docs-lint:
ifneq ("$(DOCS_LINT_SKIP)", "true")
docker build -t $(TRAEFIK_DOCS_CHECK_IMAGE) -f check.Dockerfile ./
docker run --rm -v $(CURDIR):/app $(TRAEFIK_DOCS_CHECK_IMAGE) /lint.sh
else
echo "DOCS_LINT_SKIP is true: no linting done."
endif
.PHONY: docs-clean
docs-clean:
rm -rf $(SITE_DIR)

docs/archive.md Normal file

@@ -0,0 +1,23 @@
## Current versions documentation
- [Latest stable](https://docs.traefik.io)
- [Experimental](https://master--traefik-docs.netlify.com/)
## Future version documentation
- [v1.5 RC](http://v1-5.archive.docs.traefik.io/)
## Previous versions documentation
- [v1.4 aka Roquefort](http://v1-4.archive.docs.traefik.io/)
- [v1.3 aka Raclette](http://v1-3.archive.docs.traefik.io/)
- [v1.2 aka Morbier](http://v1-2.archive.docs.traefik.io/)
- [v1.1 aka Camembert](http://v1-1.archive.docs.traefik.io/)
## More
[Change log](https://github.com/containous/traefik/blob/master/CHANGELOG.md)

docs/basics.md Normal file

@@ -0,0 +1,578 @@
# Basics
## Concepts
Let's take our example from the [overview](/#overview) again:
> Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
> If you want your users to access some of your microservices from the Internet, you will have to use a reverse proxy and configure it using virtual hosts or prefix paths:
> - domain `api.domain.com` will point to the microservice `api` in your private network
> - path `domain.com/web` will point to the microservice `web` in your private network
> - domain `backoffice.domain.com` will point to the microservice `backoffice` in your private network, load-balancing between your multiple instances
> ![Architecture](img/architecture.png)
Let's zoom in on Træfik and have an overview of its internal architecture:
![Architecture](img/internal.png)
- Incoming requests end on [entrypoints](#entrypoints), which, as the name suggests, are the network entry points into Træfik (listening port, SSL, traffic redirection...).
- Traffic is then forwarded to a matching [frontend](#frontends). A frontend defines routes from [entrypoints](#entrypoints) to [backends](#backends).
Routes are created from request fields (`Host`, `Path`, `Headers`...) and may or may not match a request.
- The [frontend](#frontends) will then send the request to a [backend](#backends). A backend is composed of one or more [servers](#servers) and a load-balancing strategy.
- Finally, the [server](#servers) will forward the request to the corresponding microservice in the private network.
### Entrypoints
Entrypoints are the network entry points into Træfik.
They can be defined using:
- a port (80, 443...)
- SSL (Certificates, Keys, authentication with a client certificate signed by a trusted CA...)
- redirection to another entrypoint (redirect `HTTP` to `HTTPS`)
Here is an example of entrypoints definition:
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
```
- Two entrypoints are defined `http` and `https`.
- `http` listens on port `80` and `https` on port `443`.
- We enable SSL on `https` by giving a certificate and a key.
- We also redirect all the traffic from entrypoint `http` to `https`.
And here is another example with client certificate authentication:
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
clientCAFiles = ["tests/clientca1.crt", "tests/clientca2.crt"]
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
```
- We enable SSL on `https` by giving a certificate and a key.
- One or several files containing Certificate Authorities in PEM format are added.
- It is possible to have multiple CAs in the same file or to keep them in separate files.
### Frontends
A frontend consists of a set of rules that determine how incoming requests are forwarded from an entrypoint to a backend.
Rules may be classified in one of two groups: Modifiers and matchers.
#### Modifiers
Modifier rules only modify the request. They do not have any impact on routing decisions being made.
Following is the list of existing modifier rules:
- `AddPrefix: /products`: Add path prefix to the existing request path prior to forwarding the request to the backend.
- `ReplacePath: /serverless-path`: Replaces the path and adds the old path to the `X-Replaced-Path` header. Useful for mapping to AWS Lambda or Google Cloud Functions.
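As an illustrative sketch (the frontend, backend, and path names here are hypothetical), a modifier is typically combined with a matcher in a single rule, using the `;` separator:

```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.routes.route1]
# Requests matching the /products prefix are forwarded with /api prepended to the path.
rule = "PathPrefix:/products;AddPrefix:/api"
```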
#### Matchers
Matcher rules determine if a particular request should be forwarded to a backend.
Separate multiple rule values by `,` (comma) in order to enable ANY semantics (i.e., forward a request if any rule matches).
This does not work for `Headers` and `HeadersRegexp`.
Separate multiple rule values by `;` (semicolon) in order to enable ALL semantics (i.e., forward a request if all rules match).
Following is the list of existing matcher rules along with examples:
| Matcher | Description |
|------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `Headers: Content-Type, application/json` | Match HTTP header. It accepts a comma-separated key/value pair where both key and value must be literals. |
| `HeadersRegexp: Content-Type, application/(text/json)` | Match HTTP header. It accepts a comma-separated key/value pair where the key must be a literal and the value may be a literal or a regular expression. |
| `Host: traefik.io, www.traefik.io` | Match request host. It accepts a sequence of literal hosts. |
| `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io` | Match request host. It accepts a sequence of literal and regular expression hosts. |
| `Method: GET, POST, PUT` | Match request HTTP method. It accepts a sequence of HTTP methods. |
| `Path: /products/, /articles/{category}/{id:[0-9]+}` | Match exact request path. It accepts a sequence of literal and regular expression paths. |
| `PathStrip: /products/` | Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal paths. |
| `PathStripRegex: /articles/{category}/{id:[0-9]+}` | Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression paths. |
| `PathPrefix: /products/, /articles/{category}/{id:[0-9]+}` | Match request prefix path. It accepts a sequence of literal and regular expression prefix paths. |
| `PathPrefixStrip: /products/` | Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header. |
| `PathPrefixStripRegex: /articles/{category}/{id:[0-9]+}` | Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header. |
| `Query: foo=bar, bar=baz` | Match Query String parameters. It accepts a sequence of key=value pairs. |
In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces. Any pattern supported by [Go's regexp package](https://golang.org/pkg/regexp/) may be used (example: `/posts/{id:[0-9]+}`).
!!! note
The variable has no special meaning; however, it is required by the [gorilla/mux](https://github.com/gorilla/mux) dependency which embeds the regular expression and defines the syntax.
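For instance, a frontend rule using such named patterns might look like the following sketch (the domain and backend names are illustrative):

```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.routes.route1]
rule = "HostRegexp:{subdomain:[a-z]+}.example.com;Path:/posts/{id:[0-9]+}"
```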
You can optionally enable `passHostHeader` to forward client `Host` header to the backend.
You can also optionally enable `passTLSCert` to forward TLS Client certificates to the backend.
##### Path Matcher Usage Guidelines
This section explains when to use the various path matchers.
Use `Path` if your backend listens on the exact path only. For instance, `Path: /products` would match `/products` but not `/products/shoes`.
Use a `*Prefix*` matcher if your backend listens on a particular base path but also serves requests on sub-paths.
For instance, `PathPrefix: /products` would match `/products` but also `/products/shoes` and `/products/shirts`.
Since the path is forwarded as-is, your backend is expected to listen on `/products`.
Use a `*Strip` matcher if your backend listens on the root path (`/`) but should be routeable on a specific prefix.
For instance, `PathPrefixStrip: /products` would match `/products` but also `/products/shoes` and `/products/shirts`.
Since the path is stripped prior to forwarding, your backend is expected to listen on `/`.
If your backend is serving assets (e.g., images or JavaScript files), chances are it must return properly constructed relative URLs.
Continuing on the example, the backend should return `/products/shoes/image.png` (and not `/images.png` which Traefik would likely not be able to associate with the same backend).
The `X-Forwarded-Prefix` header (available since Traefik 1.3) can be queried to build such URLs dynamically.
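A side-by-side sketch of the three cases (the backend names and paths are illustrative):

```toml
[frontends]
[frontends.exact]
backend = "backend1"
[frontends.exact.routes.route1]
# Matches /products only; the path is forwarded as-is.
rule = "Path:/products"
[frontends.prefix]
backend = "backend1"
[frontends.prefix.routes.route1]
# Matches /products, /products/shoes, ...; the path is forwarded as-is.
rule = "PathPrefix:/products"
[frontends.strip]
backend = "backend2"
[frontends.strip.routes.route1]
# Matches the same requests, but the /products prefix is stripped before forwarding.
rule = "PathPrefixStrip:/products"
```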
Instead of distinguishing your backends by path only, you can add a Host matcher to the mix.
That way, namespacing of your backends happens on the basis of hosts in addition to paths.
#### Examples
Here is an example of frontends definition:
```toml
[frontends]
[frontends.frontend1]
backend = "backend2"
[frontends.frontend1.routes.test_1]
rule = "Host:test.localhost,test2.localhost"
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
passTLSCert = true
priority = 10
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "HostRegexp:localhost,{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost;Path:/test"
```
- Three frontends are defined: `frontend1`, `frontend2` and `frontend3`
- `frontend1` will forward the traffic to the `backend2` if the rule `Host:test.localhost,test2.localhost` is matched
- `frontend2` will forward the traffic to the `backend1` if the rule `Host:localhost,{subdomain:[a-z]+}.localhost` is matched (forwarding client `Host` header to the backend)
- `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched
#### Combining multiple rules
As seen in the previous example, you can combine multiple rules.
In TOML file, you can use multiple routes:
```toml
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost"
[frontends.frontend3.routes.test_2]
rule = "Path:/test"
```
Here `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched.
You can also use the notation using a `;` separator, same result:
```toml
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost;Path:/test"
```
Finally, you can create a rule to bind multiple domains or Path to a frontend, using the `,` separator:
```toml
[frontends.frontend2]
[frontends.frontend2.routes.test_1]
rule = "Host:test1.localhost,test2.localhost"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Path:/test1,/test2"
```
#### Rules Order
When combining `Modifier` rules with `Matcher` rules, it is important to remember that `Modifier` rules **ALWAYS** apply after the `Matcher` rules.
The following rules are both `Matchers` and `Modifiers`, so the `Matcher` portion of the rule will apply first, and the `Modifier` will apply later.
- `PathStrip`
- `PathStripRegex`
- `PathPrefixStrip`
- `PathPrefixStripRegex`
`Modifiers` will be applied in a pre-determined order regardless of their order in the `rule` configuration section.
1. `PathStrip`
2. `PathPrefixStrip`
3. `PathStripRegex`
4. `PathPrefixStripRegex`
5. `AddPrefix`
6. `ReplacePath`
#### Priorities
By default, routes will be sorted (in descending order) using rule length (to avoid path overlap):
`PathPrefix:/12345` will be matched before `PathPrefix:/1234` that will be matched before `PathPrefix:/1`.
You can customize priority by frontend:
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
priority = 10
passHostHeader = true
[frontends.frontend1.routes.test_1]
rule = "PathPrefix:/to"
[frontends.frontend2]
priority = 5
backend = "backend2"
passHostHeader = true
[frontends.frontend2.routes.test_1]
rule = "PathPrefix:/toto"
```
Here, `frontend1` will be matched before `frontend2` (`10 > 5`).
#### Custom headers
Custom headers can be configured through the frontends, to add headers to either requests or responses that match the frontend's rules.
This allows for setting headers such as `X-Script-Name` to be added to the request, or custom headers to be added to the response.
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.headers.customresponseheaders]
X-Custom-Response-Header = "True"
[frontends.frontend1.headers.customrequestheaders]
X-Script-Name = "test"
[frontends.frontend1.routes.test_1]
rule = "PathPrefixStrip:/cheese"
```
In this example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request, and the `X-Custom-Response-Header` added to the response.
#### Security headers
Security related headers (HSTS headers, SSL redirection, Browser XSS filter, etc) can be added and configured per frontend in a similar manner to the custom headers above.
This functionality allows for some easy security features to quickly be set.
An example of some of the security headers:
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.headers]
FrameDeny = true
[frontends.frontend1.routes.test_1]
rule = "PathPrefixStrip:/cheddar"
[frontends.frontend2]
backend = "backend2"
[frontends.frontend2.headers]
SSLRedirect = true
[frontends.frontend2.routes.test_1]
rule = "PathPrefixStrip:/stilton"
```
In this example, traffic routed through the first frontend will have the `X-Frame-Options` header set to `DENY`, and the second will only allow HTTPS request through, otherwise will return a 301 HTTPS redirect.
!!! note
The detailed documentation for those security headers can be found in [unrolled/secure](https://github.com/unrolled/secure#available-options).
### Backends
A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of HTTP servers.
Various methods of load-balancing are supported:
- `wrr`: Weighted Round Robin
- `drr`: Dynamic Round Robin: increases weights on servers that perform better than others.
It also rolls back to original weights if the servers have changed.
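A minimal sketch of selecting the strategy per backend (the backend name is illustrative):

```toml
[backends]
[backends.backend1]
[backends.backend1.loadbalancer]
method = "drr"
```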
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
The initial state is Standby: the circuit breaker (CB) observes the statistics and does not modify the request.
When the condition matches, the CB enters the Tripped state, where it responds with a predefined code or redirects to another frontend.
Once the Tripped timer expires, the CB enters the Recovering state and resets all stats.
When the condition no longer matches and the recovery timer expires, the CB returns to the Standby state.
It can be configured using:
- Methods: `LatencyAtQuantileMS`, `NetworkErrorRatio`, `ResponseCodeRatio`
- Operators: `AND`, `OR`, `EQ`, `NEQ`, `LT`, `LE`, `GT`, `GE`
For example:
- `NetworkErrorRatio() > 0.5`: watch error ratio over a 10-second sliding window for a frontend
- `LatencyAtQuantileMS(50.0) > 50`: watch latency at quantile in milliseconds.
- `ResponseCodeRatio(500, 600, 0, 600) > 0.5`: ratio of response codes in range [500-600) to [0-600)
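As a sketch, such an expression is attached to a backend like this (the backend name is illustrative):

```toml
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
```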
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can
also be applied to each backend.
Maximum connections can be configured by specifying an integer value for `maxconn.amount` and
`maxconn.extractorfunc` which is a strategy used to determine how to categorize requests in order to
evaluate the maximum connections.
For example:
```toml
[backends]
[backends.backend1]
[backends.backend1.maxconn]
amount = 10
extractorfunc = "request.host"
```
- `backend1` will return `HTTP code 429 Too Many Requests` if there are already 10 requests in progress for the same Host header.
- Another possible value for `extractorfunc` is `client.ip` which will categorize requests based on client source ip.
- Lastly `extractorfunc` can take the value of `request.header.ANY_HEADER` which will categorize requests based on `ANY_HEADER` that you provide.
### Sticky sessions
Sticky sessions are supported with both load balancers.
When sticky sessions are enabled, a cookie is set on the initial request.
The default cookie name is an abbreviation of a sha1 (ex: `_1d52e`).
On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy.
If not, a new backend will be assigned.
```toml
[backends]
[backends.backend1]
# Enable sticky session
[backends.backend1.loadbalancer.stickiness]
# Customize the cookie name
#
# Optional
# Default: a sha1 (6 chars)
#
# cookieName = "my_cookie"
```
The deprecated way:
```toml
[backends]
[backends.backend1]
[backends.backend1.loadbalancer]
sticky = true
```
### Health Check
A health check can be configured in order to remove a backend from LB rotation as long as it keeps returning HTTP status codes other than `200 OK` to HTTP GET requests periodically carried out by Traefik.
The check is defined by a path appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how often the health check should be executed (the default being 30 seconds).
Each backend must respond to the health check within 5 seconds.
By default, the port of the backend server is used, however, this may be overridden.
A recovering backend that returns 200 OK responses again is put back into the LB rotation pool.
For example:
```toml
[backends]
[backends.backend1]
[backends.backend1.healthcheck]
path = "/health"
interval = "10s"
```
To use a different port for the healthcheck:
```toml
[backends]
[backends.backend1]
[backends.backend1.healthcheck]
path = "/health"
interval = "10s"
port = 8080
```
### Servers
Servers are simply defined using a `url`. You can also apply a custom `weight` to each server (this will be used by load-balancing).
!!! note
Paths in `url` are ignored. Use `Modifier` to specify paths instead.
Here is an example of backends and servers definition:
```toml
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
[backends.backend2.servers.server2]
url = "http://172.17.0.5:80"
weight = 2
```
- Two backends are defined: `backend1` and `backend2`
- `backend1` will forward the traffic to two servers: `http://172.17.0.2:80"` with weight `10` and `http://172.17.0.3:80` with weight `1` using default `wrr` load-balancing strategy.
- `backend2` will forward the traffic to two servers: `http://172.17.0.4:80"` with weight `1` and `http://172.17.0.5:80` with weight `2` using `drr` load-balancing strategy.
- a circuit breaker is added on `backend1` using the expression `NetworkErrorRatio() > 0.5`: watch error ratio over a 10-second sliding window
## Configuration
Træfik's configuration has two parts:
- The [static Træfik configuration](/basics#static-trfk-configuration) which is loaded only at the beginning.
- The [dynamic Træfik configuration](/basics#dynamic-trfk-configuration) which can be hot-reloaded (no need to restart the process).
### Static Træfik configuration
The static configuration is the global configuration which is setting up connections to configuration backends and entrypoints.
Træfik can be configured using many configuration sources with the following precedence order.
Each item takes precedence over the item below it:
- [Key-value store](/basics/#key-value-stores)
- [Arguments](/basics/#arguments)
- [Configuration file](/basics/#configuration-file)
- Default
This means that arguments override the configuration file, and the key-value store overrides arguments.
!!! note
the provider-enabling argument parameters (e.g., `--docker`) set all default values for the specific provider.
They must not be used if a configuration source with lower precedence wants to set a non-default provider value.
#### Configuration file
By default, Træfik will try to find a `traefik.toml` in the following places:
- `/etc/traefik/`
- `$HOME/.traefik/`
- `.` _the working directory_
You can override this by setting a `configFile` argument:
```bash
traefik --configFile=foo/bar/myconfigfile.toml
```
Please refer to the [global configuration](/configuration/commons) section to get documentation on it.
#### Arguments
Each argument (and command) is described in the help section:
```bash
traefik --help
```
Note that all default values will be displayed as well.
#### Key-value stores
Træfik supports several Key-value stores:
- [Consul](https://consul.io)
- [etcd](https://coreos.com/etcd/)
- [ZooKeeper](https://zookeeper.apache.org/)
- [boltdb](https://github.com/boltdb/bolt)
Please refer to the [User Guide Key-value store configuration](/user-guide/kv-config/) section to get documentation on it.
### Dynamic Træfik configuration
The dynamic configuration concerns:
- [Frontends](/basics/#frontends)
- [Backends](/basics/#backends)
- [Servers](/basics/#servers)
Træfik can hot-reload those rules which could be provided by [multiple configuration backends](/configuration/commons).
We only need to enable the `watch` option to make Træfik watch configuration backend changes and generate its configuration automatically.
Routes to services will be created and updated instantly at any changes.
Please refer to the [configuration backends](/configuration/commons) section to get documentation on it.
## Commands
### traefik
Usage:
```bash
traefik [command] [--flag=flag_argument]
```
List of available Træfik commands with descriptions:
- `version` : Print version
- `storeconfig` : Store the static Traefik configuration into a key-value store. Please refer to the [Store Træfik configuration](/user-guide/kv-config/#store-trfk-configuration) section to get documentation on it.
- `bug`: The easiest way to submit a pre-filled issue.
- `healthcheck`: Calls Traefik `/ping` to check health.
Each command may have related flags.
All the related flags will be displayed with:
```bash
traefik [command] --help
```
Each command is described at the beginning of the help section:
```bash
traefik --help
```
### Command: bug
Here is the easiest way to submit a pre-filled issue on [Træfik GitHub](https://github.com/containous/traefik).
```bash
traefik bug
```
Watch [this demo](https://www.youtube.com/watch?v=Lyz62L8m93I).
### Command: healthcheck
This command allows you to check the health of Traefik. Its exit status is `0` if Traefik is healthy and `1` if it is unhealthy.
This can be used with Docker [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck) instruction or any other health check orchestration mechanism.
!!! note
The [`web` provider](/configuration/backends/web) must be enabled to allow `/ping` calls by the `healthcheck` command.
```bash
traefik healthcheck
```
```bash
OK: http://:8082/ping
```

docs/benchmarks.md Normal file

@@ -0,0 +1,214 @@
# Benchmarks
## Configuration
I would like to thank [vincentbernat](https://github.com/vincentbernat) from [exoscale.ch](https://www.exoscale.ch), who kindly provided the infrastructure needed for the benchmarks.
I used 4 VMs for the tests with the following configuration:
- 32 GB RAM
- 8 CPU Cores
- 10 GB SSD
- Ubuntu 14.04 LTS 64-bit
## Setup
1. One VM used to launch the benchmarking tool [wrk](https://github.com/wg/wrk)
2. One VM for Traefik (v1.0.0-beta.416) / nginx (v1.4.6)
3. Two VMs for two backend servers written in Go: [whoami](https://github.com/emilevauge/whoamI/)
Each VM has been tuned using the following limits:
```bash
sysctl -w fs.file-max="9999999"
sysctl -w fs.nr_open="9999999"
sysctl -w net.core.netdev_max_backlog="4096"
sysctl -w net.core.rmem_max="16777216"
sysctl -w net.core.somaxconn="65535"
sysctl -w net.core.wmem_max="16777216"
sysctl -w net.ipv4.ip_local_port_range="1025 65535"
sysctl -w net.ipv4.tcp_fin_timeout="30"
sysctl -w net.ipv4.tcp_keepalive_time="30"
sysctl -w net.ipv4.tcp_max_syn_backlog="20480"
sysctl -w net.ipv4.tcp_max_tw_buckets="400000"
sysctl -w net.ipv4.tcp_no_metrics_save="1"
sysctl -w net.ipv4.tcp_syn_retries="2"
sysctl -w net.ipv4.tcp_synack_retries="2"
sysctl -w net.ipv4.tcp_tw_recycle="1"
sysctl -w net.ipv4.tcp_tw_reuse="1"
sysctl -w vm.min_free_kbytes="65536"
sysctl -w vm.overcommit_memory="1"
ulimit -n 9999999
```
### Nginx
Here is the Nginx config file used, `/etc/nginx/nginx.conf`:
```
user www-data;
worker_processes auto;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;
events {
worker_connections 10000;
use epoll;
multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 300;
keepalive_requests 10000;
types_hash_max_size 2048;
open_file_cache max=200000 inactive=300s;
open_file_cache_valid 300s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
server_tokens off;
dav_methods off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log combined;
error_log /var/log/nginx/error.log warn;
gzip off;
gzip_vary off;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*.conf;
}
```
Here is the Nginx vhost file used:
```
upstream whoami {
server IP-whoami1:80;
server IP-whoami2:80;
keepalive 300;
}
server {
listen 8001;
server_name test.traefik;
access_log off;
error_log /dev/null crit;
if ($host != "test.traefik") {
return 404;
}
location / {
proxy_pass http://whoami;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Forwarded-Host $host;
}
}
```
### Traefik
Here is the `traefik.toml` file used:
```toml
MaxIdleConnsPerHost = 100000
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":8000"
[file]
[backends]
[backends.backend1]
[backends.backend1.servers.server1]
url = "http://IP-whoami1:80"
weight = 1
[backends.backend1.servers.server2]
url = "http://IP-whoami2:80"
weight = 1
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.routes.test_1]
rule = "Host: test.traefik"
```
## Results
### whoami:
```shell
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-whoami:80/bench
Running 1m test @ http://IP-whoami:80/bench
20 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 70.28ms 134.72ms 1.91s 89.94%
Req/Sec 2.92k 742.42 8.78k 68.80%
Latency Distribution
50% 10.63ms
75% 75.64ms
90% 205.65ms
99% 668.28ms
3476705 requests in 1.00m, 384.61MB read
Socket errors: connect 0, read 0, write 0, timeout 103
Requests/sec: 57894.35
Transfer/sec: 6.40MB
```
### nginx:
```shell
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-nginx:8001/bench
Running 1m test @ http://IP-nginx:8001/bench
20 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 101.25ms 180.09ms 1.99s 89.34%
Req/Sec 1.69k 567.69 9.39k 72.62%
Latency Distribution
50% 15.46ms
75% 129.11ms
90% 302.44ms
99% 846.59ms
2018427 requests in 1.00m, 298.36MB read
Socket errors: connect 0, read 0, write 0, timeout 90
Requests/sec: 33591.67
Transfer/sec: 4.97MB
```
### Traefik:
```shell
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-traefik:8000/bench
Running 1m test @ http://IP-traefik:8000/bench
20 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 91.72ms 150.43ms 2.00s 90.50%
Req/Sec 1.43k 266.37 2.97k 69.77%
Latency Distribution
50% 19.74ms
75% 121.98ms
90% 237.39ms
99% 687.49ms
1705073 requests in 1.00m, 188.63MB read
Socket errors: connect 0, read 0, write 0, timeout 7
Requests/sec: 28392.44
Transfer/sec: 3.14MB
```
## Conclusion
Traefik is obviously slower than Nginx, but not by much: Traefik can serve 28392 requests/sec and Nginx 33591 requests/sec, which gives a ratio of about 85%.
Not bad for a young project :) !
Some areas of possible improvement:
- Use a [GO_REUSEPORT](https://github.com/kavu/go_reuseport) listener
- Run a separate server instance per CPU core with `GOMAXPROCS=1` (during the benchmarks, Traefik showed many more context switches than Nginx)

@@ -1,43 +0,0 @@
FROM alpine:3.18 as alpine
RUN apk --no-cache --no-progress add \
build-base \
gcompat \
libcurl \
libxml2-dev \
libxslt-dev \
ruby \
ruby-bigdecimal \
ruby-dev \
ruby-etc \
ruby-ffi \
ruby-json \
zlib-dev
RUN gem install nokogiri --version 1.15.3 --no-document -- --use-system-libraries
RUN gem install html-proofer --version 5.0.7 --no-document -- --use-system-libraries
# After Ruby, some NodeJS YAY!
RUN apk --no-cache --no-progress add \
git \
nodejs \
npm
RUN npm install --global \
markdownlint@0.29.0 \
markdownlint-cli@0.35.0
# Finally the shell tools we need for later
# tini helps to terminate properly all the parallelized tasks when sending CTRL-C
RUN apk --no-cache --no-progress add \
ca-certificates \
curl \
tini
COPY ./scripts/verify.sh /verify.sh
COPY ./scripts/lint.sh /lint.sh
WORKDIR /app
VOLUME ["/tmp","/app"]
ENTRYPOINT ["/sbin/tini","-g","sh"]

docs/configuration/acme.md
@@ -0,0 +1,238 @@
# ACME (Let's Encrypt) configuration
See also [Let's Encrypt examples](/user-guide/examples/#lets-encrypt-support) and [Docker & Let's Encrypt user guide](/user-guide/docker-and-lets-encrypt).
## Configuration
```toml
# Sample entrypoint configuration when using ACME.
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
# Enable ACME (Let's Encrypt): automatic SSL.
[acme]
# Email address used for registration.
#
# Required
#
email = "test@traefik.io"
# File or key used for certificates storage.
#
# Required
#
storage = "acme.json"
# or `storage = "traefik/acme/account"` if using KV store.
# Entrypoint to proxy acme challenge/apply certificates to.
# WARNING, must point to an entrypoint on port 443
#
# Required
#
entryPoint = "https"
# Use a DNS based acme challenge rather than external HTTPS access
#
#
# Optional
#
# dnsProvider = "digitalocean"
# By default, the dnsProvider will verify the TXT DNS challenge record before letting ACME verify.
# If delayDontCheckDNS is greater than zero, avoid this & instead just wait so many seconds.
# Useful if internal networks block external DNS queries.
#
# Optional
#
# delayDontCheckDNS = 0
# If true, display debug log messages from the acme client library.
#
# Optional
#
# acmeLogging = true
# Enable on demand certificate.
#
# Optional
#
# onDemand = true
# Enable certificate generation on frontends Host rules.
#
# Optional
#
# onHostRule = true
# CA server to use.
# - Uncomment the line to run on the staging let's encrypt server.
# - Leave comment to go to prod.
#
# Optional
#
# caServer = "https://acme-staging.api.letsencrypt.org/directory"
# Domains list.
#
# [[acme.domains]]
# main = "local1.com"
# sans = ["test1.local1.com", "test2.local1.com"]
# [[acme.domains]]
# main = "local2.com"
# sans = ["test1.local2.com", "test2.local2.com"]
# [[acme.domains]]
# main = "local3.com"
# [[acme.domains]]
# main = "local4.com"
```
### `storage`
```toml
[acme]
# ...
storage = "acme.json"
# ...
```
File or key used for certificates storage.
**WARNING** If you use Traefik in Docker, you have 2 options:
- create a file on your host and mount it as a volume:
```toml
storage = "acme.json"
```
```bash
docker run -v "/my/host/acme.json:/acme.json" traefik
```
- mount the folder containing the file as a volume
```toml
storage = "/etc/traefik/acme/acme.json"
```
```bash
docker run -v "/my/host/acme:/etc/traefik/acme" traefik
```
### `dnsProvider`
```toml
[acme]
# ...
dnsProvider = "digitalocean"
# ...
```
Use a DNS based acme challenge rather than external HTTPS access, e.g. for a firewalled server.
Select the provider that matches the DNS domain that will host the challenge TXT record, and provide environment variables with access keys to enable setting it:
| Provider | Configuration |
|----------------------------------------------|-----------------------------------------------------------------------------------------------------------|
| [Cloudflare](https://www.cloudflare.com) | `CLOUDFLARE_EMAIL`, `CLOUDFLARE_API_KEY` |
| [DigitalOcean](https://www.digitalocean.com) | `DO_AUTH_TOKEN` |
| [DNSimple](https://dnsimple.com) | `DNSIMPLE_EMAIL`, `DNSIMPLE_OAUTH_TOKEN` |
| [DNS Made Easy](https://dnsmadeeasy.com) | `DNSMADEEASY_API_KEY`, `DNSMADEEASY_API_SECRET` |
| [Exoscale](https://www.exoscale.ch) | `EXOSCALE_API_KEY`, `EXOSCALE_API_SECRET` |
| [Gandi](https://www.gandi.net) | `GANDI_API_KEY` |
| [Linode](https://www.linode.com) | `LINODE_API_KEY` |
| manual | none, but run Traefik interactively & turn on `acmeLogging` to see instructions & press <kbd>Enter</kbd>. |
| [Namecheap](https://www.namecheap.com) | `NAMECHEAP_API_USER`, `NAMECHEAP_API_KEY` |
| RFC2136 | `RFC2136_TSIG_KEY`, `RFC2136_TSIG_SECRET`, `RFC2136_TSIG_ALGORITHM`, `RFC2136_NAMESERVER` |
| [Route 53](https://aws.amazon.com/route53/) | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, or configured user/instance IAM profile. |
| [dyn](https://dyn.com) | `DYN_CUSTOMER_NAME`, `DYN_USER_NAME`, `DYN_PASSWORD` |
| [VULTR](https://www.vultr.com) | `VULTR_API_KEY` |
| [OVH](https://www.ovh.com) | `OVH_ENDPOINT`, `OVH_APPLICATION_KEY`, `OVH_APPLICATION_SECRET`, `OVH_CONSUMER_KEY` |
| [pdns](https://www.powerdns.com) | `PDNS_API_KEY`, `PDNS_API_URL` |
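For example, with the DigitalOcean provider the access key only has to be present in Traefik's environment when it starts. A minimal sketch (the token value is a placeholder and `dnsProvider = "digitalocean"` is assumed to be set in `traefik.toml`):

```shell
# DO_AUTH_TOKEN is the variable listed above for DigitalOcean; the value is a placeholder.
export DO_AUTH_TOKEN="xxxxxxxxxxxxxxxxxxxx"
./traefik -c traefik.toml
```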
### `delayDontCheckDNS`
```toml
[acme]
# ...
delayDontCheckDNS = 0
# ...
```
By default, the dnsProvider verifies the TXT DNS challenge record before letting ACME do its own verification.
If `delayDontCheckDNS` is greater than zero, this check is skipped and Traefik simply waits the given number of seconds instead.
This is useful if internal networks block external DNS queries.
### `onDemand`
```toml
[acme]
# ...
onDemand = true
# ...
```
Enable on demand certificate.
This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate.
!!! warning
TLS handshakes will be slow when requesting a hostname certificate for the first time; this can lead to DoS attacks.
!!! warning
Note that Let's Encrypt has [rate limits](https://letsencrypt.org/docs/rate-limits).
### `onHostRule`
```toml
[acme]
# ...
onHostRule = true
# ...
```
Enable certificate generation on frontends Host rules.
This will request a certificate from Let's Encrypt for each frontend with a Host rule.
For example, a rule `Host:test1.traefik.io,test2.traefik.io` will request a certificate with main domain `test1.traefik.io` and SAN `test2.traefik.io`.
### `caServer`
```toml
[acme]
# ...
caServer = "https://acme-staging.api.letsencrypt.org/directory"
# ...
```
CA server to use.
- Uncomment the line to run on the staging Let's Encrypt server.
- Leave comment to go to prod.
### `domains`
```toml
[acme]
# ...
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
# ...
```
You can provide SANs (alternative domains) to each main domain.
All domains must have A/AAAA records pointing to Traefik.
!!! warning
Note that Let's Encrypt has [rate limits](https://letsencrypt.org/docs/rate-limits).
Each main domain with its SANs leads to a certificate request.

@@ -0,0 +1,59 @@
# BoltDB Backend
Træfik can be configured to use BoltDB as a backend configuration.
```toml
################################################################
# BoltDB configuration backend
################################################################
# Enable BoltDB configuration backend.
[boltdb]
# BoltDB file.
#
# Required
# Default: "127.0.0.1:4001"
#
endpoint = "/my.db"
# Enable watch BoltDB changes.
#
# Optional
# Default: true
#
watch = true
# Prefix used for KV store.
#
# Optional
# Default: "/traefik"
#
prefix = "/traefik"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
filename = "boltdb.tmpl"
# Use BoltDB user/pass authentication.
#
# Optional
#
# username = foo
# password = bar
# Enable BoltDB TLS connection.
#
# Optional
#
# [boltdb.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/boltdb.crt"
# key = "/etc/ssl/boltdb.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).

@@ -0,0 +1,136 @@
# Consul Backend
## Consul Key-Value backend
Træfik can be configured to use Consul as a backend configuration.
```toml
################################################################
# Consul KV configuration backend
################################################################
# Enable Consul KV configuration backend.
[consul]
# Consul server endpoint.
#
# Required
# Default: "127.0.0.1:8500"
#
endpoint = "127.0.0.1:8500"
# Enable watch Consul changes.
#
# Optional
# Default: true
#
watch = true
# Prefix used for KV store.
#
# Optional
# Default: traefik
#
prefix = "traefik"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "consul.tmpl"
# Use Consul user/pass authentication.
#
# Optional
#
# username = foo
# password = bar
# Enable Consul TLS connection.
#
# Optional
#
# [consul.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/consul.crt"
# key = "/etc/ssl/consul.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on Traefik KV structure.
## Consul Catalog backend
Træfik can be configured to use service discovery catalog of Consul as a backend configuration.
```toml
################################################################
# Consul Catalog configuration backend
################################################################
# Enable Consul Catalog configuration backend.
[consulCatalog]
# Consul server endpoint.
#
# Required
# Default: "127.0.0.1:8500"
#
endpoint = "127.0.0.1:8500"
# Expose Consul catalog services by default in Traefik.
#
# Optional
# Default: true
#
exposedByDefault = false
# Prefix for Consul catalog tags.
#
# Optional
# Default: "traefik"
#
prefix = "traefik"
# Default frontEnd Rule for Consul services.
#
# The format is a Go Template with:
# - ".ServiceName", ".Domain" and ".Attributes" available
# - "getTag(name, tags, defaultValue)", "hasTag(name, tags)" and "getAttribute(name, tags, defaultValue)" functions are available
# - "getAttribute(...)" function uses prefixed tag names based on "prefix" value
#
# Optional
# Default: "Host:{{.ServiceName}}.{{.Domain}}"
#
#frontEndRule = "Host:{{.ServiceName}}.{{.Domain}}"
```
This backend will create routes matching on hostname based on the service name used in Consul.
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
### Tags
Additional settings can be defined using Consul Catalog tags.
| Tag | Description |
|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `traefik.enable=false` | Disable this container in Træfik |
| `traefik.protocol=https` | Override the default `http` protocol |
| `traefik.backend.weight=10` | Assign this weight to the container |
| `traefik.backend.circuitbreaker=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend, ex: `NetworkErrorRatio() > 0.` |
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{{.ServiceName}}.{{.Domain}}`). |
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | Override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
| `traefik.backend.loadbalancer=drr` | override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
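As an illustration, a service can be registered against the local Consul agent with Traefik tags taken from the table above. This is only a sketch; the service name, address and frontend rule are examples:

```shell
# Register a "whoami" service with Traefik tags via the Consul agent HTTP API.
curl -X PUT http://127.0.0.1:8500/v1/agent/service/register \
  --data '{
    "Name": "whoami",
    "Address": "10.0.0.10",
    "Port": 80,
    "Tags": ["traefik.enable=true", "traefik.frontend.rule=Host:whoami.example.com"]
  }'
```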

@@ -0,0 +1,199 @@
# Docker Backend
Træfik can be configured to use Docker as a backend configuration.
## Docker
```toml
################################################################
# Docker configuration backend
################################################################
# Enable Docker configuration backend.
[docker]
# Docker server endpoint. Can be a tcp or a unix socket endpoint.
#
# Required
#
endpoint = "unix:///var/run/docker.sock"
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on a container.
#
# Required
#
domain = "docker.localhost"
# Enable watch docker changes.
#
# Optional
#
watch = true
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "docker.tmpl"
# Expose containers by default in Traefik.
# If set to false, containers that don't have `traefik.enable=true` will be ignored.
#
# Optional
# Default: true
#
exposedbydefault = true
# Use the IP address from the bound port instead of the inner network one.
# For specific use-case :)
#
# Optional
# Default: false
#
usebindportip = true
# Use Swarm Mode services as data provider.
#
# Optional
# Default: false
#
swarmmode = false
# Enable docker TLS connection.
#
# Optional
#
# [docker.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/docker.crt"
# key = "/etc/ssl/docker.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
## Docker Swarm Mode
```toml
################################################################
# Docker Swarmmode configuration backend
################################################################
# Enable Docker configuration backend.
[docker]
# Docker server endpoint.
# Can be a tcp or a unix socket endpoint.
#
# Required
# Default: "unix:///var/run/docker.sock"
#
endpoint = "tcp://127.0.0.1:2375"
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on a service.
#
# Optional
# Default: ""
#
domain = "docker.localhost"
# Enable watch docker changes.
#
# Optional
# Default: true
#
watch = true
# Use Docker Swarm Mode as data provider.
#
# Optional
# Default: false
#
swarmmode = true
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "docker.tmpl"
# Expose services by default in Traefik.
#
# Optional
# Default: true
#
exposedbydefault = false
# Enable docker TLS connection.
#
# Optional
#
# [docker.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/docker.crt"
# key = "/etc/ssl/docker.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
## Labels: overriding default behaviour
### On Containers
Labels can be used on containers to override default behaviour.
| Label | Description |
|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `traefik.backend=foo` | Give the name `foo` to the generated backend for this container. |
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions (DEPRECATED) |
| `traefik.backend.loadbalancer.swarm=true` | Use Swarm's inbuilt load balancer (only relevant under Swarm Mode). |
| `traefik.backend.circuitbreaker.expression=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
| `traefik.port=80`                                          | Register this port. Useful when the container exposes multiple ports.                                                                                                                                                                                                                                                                                                                                                             |
| `traefik.protocol=https` | Override the default `http` protocol |
| `traefik.weight=10` | Assign this weight to the container |
| `traefik.enable=false` | Disable this container in Træfik |
| `traefik.frontend.rule=EXPR` | Override the default frontend rule. Default: `Host:{containerName}.{domain}` or `Host:{service}.{project_name}.{domain}` if you are using `docker-compose`. |
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | Override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints` |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
| `traefik.frontend.whitelistSourceRange:RANGE` | List of IP-Ranges which are allowed to access. An unset or empty list allows all Source-IPs to access. If one of the Net-Specifications are invalid, the whole list is invalid and allows all Source-IPs to access. |
| `traefik.docker.network` | Set the docker network to use for connections to this container. If a container is linked to several networks, be sure to set the proper network name (you can check with `docker inspect <container_id>`) otherwise it will randomly pick one (depending on how docker is returning them). For instance when deploying docker `stack` from compose files, the compose defined networks will be prefixed with the `stack` name. |
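For instance, the sketch below starts a container with a few of the labels above; the image and hostname are examples, not part of this documentation:

```shell
# Run a whoami container and let Traefik route Host:whoami.docker.localhost to it on port 80.
docker run -d --name whoami \
  --label traefik.backend=whoami \
  --label traefik.port=80 \
  --label "traefik.frontend.rule=Host:whoami.docker.localhost" \
  emilevauge/whoami
```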
### On Service
Services labels can be used for overriding default behaviour
| Label | Description |
|---------------------------------------------------|--------------------------------------------------------------------------------------------------|
| `traefik.<service-name>.port=PORT` | Overrides `traefik.port`. If several ports need to be exposed, the service labels could be used. |
| `traefik.<service-name>.protocol` | Overrides `traefik.protocol`. |
| `traefik.<service-name>.weight` | Assign this service weight. Overrides `traefik.weight`. |
| `traefik.<service-name>.frontend.backend=BACKEND` | Assign this service frontend to `BACKEND`. Default is to assign to the service backend. |
| `traefik.<service-name>.frontend.entryPoints` | Overrides `traefik.frontend.entrypoints` |
| `traefik.<service-name>.frontend.auth.basic` | Sets a Basic Auth for that frontend |
| `traefik.<service-name>.frontend.passHostHeader` | Overrides `traefik.frontend.passHostHeader`. |
| `traefik.<service-name>.frontend.priority` | Overrides `traefik.frontend.priority`. |
| `traefik.<service-name>.frontend.rule` | Overrides `traefik.frontend.rule`. |
!!! note
if a label is defined both as a `container label` and a `service label` (for example `traefik.<service-name>.port=PORT` and `traefik.port=PORT`), the `service label` is used to define the `<service-name>` property (`port` in the example).
It's possible to mix `container labels` and `service labels`; in this case, `container labels` are used as default values for missing `service labels`, but no frontends are created from the `container labels` alone.
More details in this [example](/user-guide/docker-and-lets-encrypt/#labels).
!!! warning
when running inside a container, Træfik will need network access through:
`docker network connect <network> <traefik-container>`

@@ -0,0 +1,71 @@
# DynamoDB Backend
Træfik can be configured to use Amazon DynamoDB as a backend configuration.
## Configuration
```toml
################################################################
# DynamoDB configuration backend
################################################################
# Enable DynamoDB configuration backend.
[dynamodb]
# Region to use when connecting to AWS.
#
# Required
#
region = "us-west-1"
# DynamoDB Table Name.
#
# Optional
# Default: "traefik"
#
tableName = "traefik"
# Enable watch DynamoDB changes.
#
# Optional
# Default: true
#
watch = true
# Polling interval (in seconds).
#
# Optional
# Default: 15
#
refreshSeconds = 15
# AccessKeyID to use when connecting to AWS.
#
# Optional
#
accessKeyID = "abc"
# SecretAccessKey to use when connecting to AWS.
#
# Optional
#
secretAccessKey = "123"
# Endpoint of a local DynamoDB instance, useful for testing.
#
# Optional
#
endpoint = "http://localhost:8080"
```
## Table Items
Items in the `dynamodb` table must have three attributes:
- `id` (string): The id is the primary key.
- `name`(string): The name is used as the name of the frontend or backend.
- `frontend` or `backend` (map): This attribute's structure matches exactly the structure of a Frontend or Backend type in Traefik.
See `types/types.go` for details.
The presence or absence of this attribute determines its type.
So an item should never have both a `frontend` and a `backend` attribute.
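As a rough sketch, such an item could be created with the AWS CLI; the attribute names inside `backend` mirror the JSON shown in the Web backend API examples and should be checked against `types/types.go`:

```shell
# Insert a backend item into the "traefik" table; the URL and weight are placeholders.
aws dynamodb put-item --region us-west-1 --table-name traefik --item '{
  "id": {"S": "whoami-backend"},
  "name": {"S": "whoami"},
  "backend": {"M": {
    "servers": {"M": {
      "server1": {"M": {"url": {"S": "http://10.0.0.10:80"}, "weight": {"N": "1"}}}
    }}
  }}
}'
```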

@@ -0,0 +1,140 @@
# ECS Backend
Træfik can be configured to use Amazon ECS as a backend configuration.
## Configuration
```toml
################################################################
# ECS configuration backend
################################################################
# Enable ECS configuration backend.
[ecs]
# ECS Cluster Name.
#
# DEPRECATED - Please use `clusters`.
#
cluster = "default"
# ECS Clusters Name.
#
# Optional
# Default: ["default"]
#
clusters = ["default"]
# Enable watch ECS changes.
#
# Optional
# Default: true
#
watch = true
# Default domain used.
#
# Optional
# Default: ""
#
domain = "ecs.localhost"
# Enable auto discover ECS clusters.
#
# Optional
# Default: false
#
autoDiscoverClusters = false
# Polling interval (in seconds).
#
# Optional
# Default: 15
#
refreshSeconds = 15
# Expose ECS services by default in Traefik.
#
# Optional
# Default: true
#
exposedByDefault = false
# Region to use when connecting to AWS.
#
# Optional
#
region = "us-east-1"
# AccessKeyID to use when connecting to AWS.
#
# Optional
#
accessKeyID = "abc"
# SecretAccessKey to use when connecting to AWS.
#
# Optional
#
secretAccessKey = "123"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "ecs.tmpl"
```
If `AccessKeyID`/`SecretAccessKey` are not given, credentials will be resolved in the following order (see the example after the list):
- From environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`.
- Shared credentials, determined by `AWS_PROFILE` and `AWS_SHARED_CREDENTIALS_FILE`, defaults to `default` and `~/.aws/credentials`.
- EC2 instance role or ECS task role
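For example, the first option amounts to exporting the standard AWS variables before starting Traefik (the values below are the canonical AWS placeholder credentials):

```shell
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
./traefik -c traefik.toml
```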
## Policy
Træfik needs the following policy to read ECS information:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TraefikECSReadAccess",
"Effect": "Allow",
"Action": [
"ecs:ListClusters",
"ecs:DescribeClusters",
"ecs:ListTasks",
"ecs:DescribeTasks",
"ecs:DescribeContainerInstances",
"ecs:DescribeTaskDefinition",
"ec2:DescribeInstances"
],
"Resource": [
"*"
]
}
]
}
```
## Labels: overriding default behaviour
Labels can be used on task containers to override default behaviour:
| Label | Description |
|-----------------------------------------------------------|------------------------------------------------------------------------------------------|
| `traefik.protocol=https` | override the default `http` protocol |
| `traefik.weight=10` | assign this weight to the container |
| `traefik.enable=false` | disable this container in Træfik |
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |

@@ -0,0 +1,61 @@
# Etcd Backend
Træfik can be configured to use Etcd as a backend configuration.
```toml
################################################################
# Etcd configuration backend
################################################################
# Enable Etcd configuration backend.
[etcd]
# Etcd server endpoint.
#
# Required
# Default: "127.0.0.1:2379"
#
endpoint = "127.0.0.1:2379"
# Enable watch Etcd changes.
#
# Optional
# Default: true
#
watch = true
# Prefix used for KV store.
#
# Optional
# Default: "/traefik"
#
prefix = "/traefik"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "etcd.tmpl"
# Use etcd user/pass authentication.
#
# Optional
#
# username = foo
# password = bar
# Enable etcd TLS connection.
#
# Optional
#
# [etcd.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/etcd.crt"
# key = "/etc/ssl/etcd.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on Traefik KV structure.

@@ -0,0 +1,32 @@
# Eureka Backend
Træfik can be configured to use Eureka as a backend configuration.
```toml
################################################################
# Eureka configuration backend
################################################################
# Enable Eureka configuration backend.
[eureka]
# Eureka server endpoint.
#
# Required
#
endpoint = "http://my.eureka.server/eureka"
# Override default configuration time between refresh.
#
# Optional
# Default: 30s
#
delay = "1m"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "eureka.tmpl"
```

@@ -0,0 +1,170 @@
# File Backends
Like any other reverse proxy, Træfik can be configured with a file.
You have three choices:
- [Simple](/configuration/backends/file/#simple)
- [Rules in a Separate File](/configuration/backends/file/#rules-in-a-separate-file)
- [Multiple `.toml` Files](/configuration/backends/file/#multiple-toml-files)
To enable the file backend, you must either pass the `--file` option to the Træfik binary or put the `[file]` section (with or without inner settings) in the configuration file.
## Simple
Add your configuration at the end of the global configuration file `traefik.toml`:
```toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.com.cert"
KeyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.org.cert"
KeyFile = "integration/fixtures/https/snitest.org.key"
[file]
# rules
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
[backends.backend2.maxconn]
amount = 10
extractorfunc = "request.host"
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
[backends.backend2.servers.server2]
url = "http://172.17.0.5:80"
weight = 2
[frontends]
[frontends.frontend1]
backend = "backend2"
[frontends.frontend1.routes.test_1]
rule = "Host:test.localhost"
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
priority = 10
# restrict access to this frontend to the specified list of IPv4/IPv6 CIDR Nets
# an unset or empty list allows all Source-IPs to access
# if one of the Net-Specifications are invalid, the whole list is invalid
# and allows all Source-IPs to access.
whitelistSourceRange = ["10.42.0.0/16", "152.89.1.33/32", "afed:be44::/16"]
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "Host:{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
entrypoints = ["http", "https"] # overrides defaultEntryPoints
backend = "backend2"
rule = "Path:/test"
```
## Rules in a Separate File
Put your rules in a separate file, for example `rules.toml`:
```toml
# traefik.toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.com.cert"
KeyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.org.cert"
KeyFile = "integration/fixtures/https/snitest.org.key"
[file]
filename = "rules.toml"
```
```toml
# rules.toml
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
[backends.backend2.maxconn]
amount = 10
extractorfunc = "request.host"
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
[backends.backend2.servers.server2]
url = "http://172.17.0.5:80"
weight = 2
[frontends]
[frontends.frontend1]
backend = "backend2"
[frontends.frontend1.routes.test_1]
rule = "Host:test.localhost"
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
priority = 10
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "Host:{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
entrypoints = ["http", "https"] # overrides defaultEntryPoints
backend = "backend2"
rule = "Path:/test"
```
## Multiple `.toml` Files
You could have multiple `.toml` files in a directory:
```toml
[file]
directory = "/path/to/config/"
```
If you want Træfik to watch file changes automatically, just add:
```toml
[file]
watch = true
```

@@ -0,0 +1,133 @@
# Kubernetes Ingress Backend
Træfik can be configured to use Kubernetes Ingress as a backend configuration.
See also [Kubernetes user guide](/user-guide/kubernetes).
## Configuration
```toml
################################################################
# Kubernetes Ingress configuration backend
################################################################
# Enable Kubernetes Ingress configuration backend.
[kubernetes]
# Kubernetes server endpoint.
#
# Optional for in-cluster configuration, required otherwise.
# Default: empty
#
# endpoint = "http://localhost:8080"
# Bearer token used for the Kubernetes client configuration.
#
# Optional
# Default: empty
#
# token = "my token"
# Path to the certificate authority file.
# Used for the Kubernetes client configuration.
#
# Optional
# Default: empty
#
# certAuthFilePath = "/my/ca.crt"
# Array of namespaces to watch.
#
# Optional
# Default: all namespaces (empty array).
#
# namespaces = ["default", "production"]
# Ingress label selector to identify Ingress objects that should be processed.
#
# Optional
# Default: empty (process all Ingresses)
#
# labelselector = "A and not B"
# Disable PassHost Headers.
#
# Optional
# Default: false
#
# disablePassHostHeaders = true
```
### `endpoint`
The Kubernetes server endpoint.
When deployed as a replication controller in Kubernetes, Traefik will use the environment variables `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` to construct the endpoint.
Secure token will be found in `/var/run/secrets/kubernetes.io/serviceaccount/token` and SSL CA cert in `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`
The endpoint may be given to override the environment variable values.
When the environment variables are not found, Traefik will try to connect to the Kubernetes API server with an external-cluster client.
In this case, the endpoint is required.
Specifically, it may be set to the URL used by `kubectl proxy` to connect to a Kubernetes cluster from localhost.
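A minimal sketch of that last setup, assuming `kubectl` is already configured for the target cluster and the backend is enabled from the command line:

```shell
# kubectl proxy listens on localhost:8001 by default.
kubectl proxy --port=8001 &
./traefik --kubernetes --kubernetes.endpoint="http://localhost:8001"
```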
### `labelselector`
Ingress label selector to identify Ingress objects that should be processed.
See [label-selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors) for details.
## Annotations
Annotations can be used on the Ingress object to override default behaviour for the whole Ingress resource:
- `traefik.frontend.rule.type: PathPrefixStrip`
Override the default frontend rule type. Default: `PathPrefix`.
- `traefik.frontend.priority: "3"`
Override the default frontend rule priority.
Annotations can be used on the Kubernetes service to override default behaviour:
- `traefik.backend.loadbalancer.method=drr`
Override the default `wrr` load balancer algorithm
- `traefik.backend.loadbalancer.stickiness=true`
Enable backend sticky sessions
- `traefik.backend.loadbalancer.stickiness.cookieName=NAME`
Manually set the cookie name for sticky sessions
- `traefik.backend.loadbalancer.sticky=true`
Enable backend sticky sessions (DEPRECATED)
Here you can find an example [ingress](https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-ingress.yaml) and [replication controller](https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik.yaml).
Additionally, an annotation can be used on Kubernetes services to set the [circuit breaker expression](/basics/#backends) for a backend.
- `traefik.backend.circuitbreaker: <expression>`
Set the circuit breaker expression for the backend. Default: `nil`.
As with the nginx Kubernetes Ingress controller, a list of IP ranges that are allowed access can be configured by using an Ingress annotation:
- `ingress.kubernetes.io/whitelist-source-range: "1.2.3.0/24, fe80::/16"`
An unset or empty list allows all source IPs to access.
If one of the net specifications is invalid, the whole list is considered invalid and all source IPs are allowed to access.
### Authentication
It is possible to add additional authentication annotations to the Ingress rule.
The source of the authentication is a secret that contains usernames and passwords under the key `auth`.
- `ingress.kubernetes.io/auth-type`: `basic`
- `ingress.kubernetes.io/auth-secret`: `mysecret`
Contains the usernames and passwords with access to the paths defined in the Ingress Rule.
The secret must be created in the same namespace as the Ingress rule.
Limitations:
- Basic authentication only.
- Realm not configurable; only the `traefik` default is used.
- The secret must contain only a single file.
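A sketch of creating such a secret; the user and secret names are examples:

```shell
# Generate an htpasswd file; stored with --from-file, its key will be "auth" as required.
htpasswd -c ./auth myuser
kubectl create secret generic mysecret --from-file auth --namespace=default
```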

@@ -0,0 +1,188 @@
# Marathon Backend
Træfik can be configured to use Marathon as a backend configuration.
See also [Marathon user guide](/user-guide/marathon).
## Configuration
```toml
################################################################
# Mesos/Marathon configuration backend
################################################################
# Enable Marathon configuration backend.
[marathon]
# Marathon server endpoint.
# You can also specify multiple endpoints for Marathon:
# endpoint = "http://10.241.1.71:8080,10.241.1.72:8080,10.241.1.73:8080"
#
# Required
# Default: "http://127.0.0.1:8080"
#
endpoint = "http://127.0.0.1:8080"
# Enable watch Marathon changes.
#
# Optional
# Default: true
#
watch = true
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on an application.
#
# Required
#
domain = "marathon.localhost"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "marathon.tmpl"
# Expose Marathon apps by default in Traefik.
#
# Optional
# Default: true
#
# exposedByDefault = false
# Convert Marathon groups to subdomains.
# Default behavior: /foo/bar/myapp => foo-bar-myapp.{defaultDomain}
# with groupsAsSubDomains enabled: /foo/bar/myapp => myapp.bar.foo.{defaultDomain}
#
# Optional
# Default: false
#
# groupsAsSubDomains = true
# Enable compatibility with marathon-lb labels.
#
# Optional
# Default: false
#
# marathonLBCompatibility = true
# Enable Marathon basic authentication.
#
# Optional
#
# [marathon.basic]
# httpBasicAuthUser = "foo"
# httpBasicPassword = "bar"
# TLS client configuration. https://golang.org/pkg/crypto/tls/#Config
#
# Optional
#
# [marathon.TLS]
# CA = "/etc/ssl/ca.crt"
# Cert = "/etc/ssl/marathon.cert"
# Key = "/etc/ssl/marathon.key"
# InsecureSkipVerify = true
# DCOSToken for DCOS environment.
# This will override the Authorization header.
#
# Optional
#
# dcosToken = "xxxxxx"
# Override DialerTimeout.
# Amount of time to allow the Marathon provider to wait to open a TCP connection
# to a Marathon master.
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
# values (digits).
# If no units are provided, the value is parsed assuming seconds.
#
# Optional
# Default: "60s"
#
# dialerTimeout = "60s"
# Set the TCP Keep Alive interval for the Marathon HTTP Client.
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
# values (digits).
# If no units are provided, the value is parsed assuming seconds.
#
# Optional
# Default: "10s"
#
# keepAlive = "10s"
# By default, a task's IP address (as returned by the Marathon API) is used as
# backend server if an IP-per-task configuration can be found; otherwise, the
# name of the host running the task is used.
# The latter behavior can be enforced by enabling this switch.
#
# Optional
# Default: false
#
# forceTaskHostname = true
# Applications may define readiness checks which are probed by Marathon during
# deployments periodically and the results exposed via the API.
# Enabling the following parameter causes Traefik to filter out tasks
# whose readiness checks have not succeeded.
# Note that the checks are only valid at deployment times.
# See the Marathon guide for details.
#
# Optional
# Default: false
#
# respectReadinessChecks = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
## Labels: overriding default behaviour
### On Containers
Labels can be used on containers to override default behaviour:
| Label | Description |
|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `traefik.backend=foo` | assign the application to `foo` backend |
| `traefik.backend.maxconn.amount=10` | set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
| `traefik.backend.maxconn.extractorfunc=client.ip` | set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5` | create a [circuit breaker](/basics/#backends) to be used against the backend |
| `traefik.backend.healthcheck.path=/health` | set the Traefik health check path [default: no health checks] |
| `traefik.backend.healthcheck.interval=5s` | sets a custom health check interval in Go-parseable (`time.ParseDuration`) format [default: 30s] |
| `traefik.portIndex=1` | register port by index in the application's ports array. Useful when the application exposes multiple ports. |
| `traefik.port=80` | register the explicit application port value. Cannot be used alongside `traefik.portIndex`. |
| `traefik.protocol=https` | override the default `http` protocol |
| `traefik.weight=10` | assign this weight to the application |
| `traefik.enable=false` | disable this application in Træfik |
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`. |
### On Services
If several ports need to be exposed from a container, the services labels can be used:
| Label | Description |
|--------------------------------------------------------|------------------------------------------------------------------------------------------------------|
| `traefik.<service-name>.port=443` | create a service binding with frontend/backend using this port. Overrides `traefik.port`. |
| `traefik.<service-name>.portIndex=1` | create a service binding with frontend/backend using this port index. Overrides `traefik.portIndex`. |
| `traefik.<service-name>.protocol=https` | assign `https` protocol. Overrides `traefik.protocol`. |
| `traefik.<service-name>.weight=10` | assign this service weight. Overrides `traefik.weight`. |
| `traefik.<service-name>.frontend.backend=fooBackend` | assign this service frontend to `foobackend`. Default is to assign to the service backend. |
| `traefik.<service-name>.frontend.entryPoints=http` | assign this service entrypoints. Overrides `traefik.frontend.entrypoints`. |
| `traefik.<service-name>.frontend.auth.basic=test:EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
| `traefik.<service-name>.frontend.passHostHeader=true` | Forward client `Host` header to the backend. Overrides `traefik.frontend.passHostHeader`. |
| `traefik.<service-name>.frontend.priority=10` | assign the service frontend priority. Overrides `traefik.frontend.priority`. |
| `traefik.<service-name>.frontend.rule=Path:/foo` | assign the service frontend rule. Overrides `traefik.frontend.rule`. |

@@ -0,0 +1,93 @@
# Mesos Generic Backend
Træfik can be configured to use Mesos as a backend configuration.
```toml
################################################################
# Mesos configuration backend
################################################################
# Enable Mesos configuration backend.
[mesos]
# Mesos server endpoint.
# You can also specify multiple endpoints for Mesos:
# endpoint = "192.168.35.40:5050,192.168.35.41:5050,192.168.35.42:5050"
# endpoint = "zk://192.168.35.20:2181,192.168.35.21:2181,192.168.35.22:2181/mesos"
#
# Required
# Default: "http://127.0.0.1:5050"
#
endpoint = "http://127.0.0.1:8080"
# Enable watch Mesos changes.
#
# Optional
# Default: true
#
watch = true
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on an application.
#
# Required
#
domain = "mesos.localhost"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "mesos.tmpl"
# Expose Mesos apps by default in Traefik.
#
# Optional
# Default: true
#
# ExposedByDefault = false
# TLS client configuration. https://golang.org/pkg/crypto/tls/#Config
#
# Optional
#
# [mesos.TLS]
# InsecureSkipVerify = true
# Zookeeper timeout (in seconds).
#
# Optional
# Default: 30
#
# ZkDetectionTimeout = 30
# Polling interval (in seconds).
#
# Optional
# Default: 30
#
# RefreshSeconds = 30
# IP sources (e.g. host, docker, mesos, rkt).
#
# Optional
#
# IPSources = "host"
# HTTP Timeout (in seconds).
#
# Optional
# Default: 30
#
# StateTimeoutSecond = "30"
# Convert groups to subdomains.
# Default behavior: /foo/bar/myapp => foo-bar-myapp.{defaultDomain}
# with groupsAsSubDomains enabled: /foo/bar/myapp => myapp.bar.foo.{defaultDomain}
#
# Optional
# Default: false
#
# groupsAsSubDomains = true
```

@@ -0,0 +1,131 @@
# Rancher Backend
Træfik can be configured to use Rancher as a backend configuration.
## Global Configuration
```toml
################################################################
# Rancher configuration backend
################################################################
# Enable Rancher configuration backend.
[rancher]
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on a service.
#
# Required
#
domain = "rancher.localhost"
# Enable watch Rancher changes.
#
# Optional
# Default: true
#
watch = true
# Polling interval (in seconds).
#
# Optional
# Default: 15
#
refreshSeconds = 15
# Expose Rancher services by default in Traefik.
#
# Optional
# Default: true
#
exposedByDefault = false
# Filter services with unhealthy states and inactive states.
#
# Optional
# Default: false
#
enableServiceHealthFilter = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
## Rancher Metadata Service
```toml
# Enable Rancher metadata service configuration backend instead of the API
# configuration backend.
#
# Optional
# Default: false
#
[rancher.metadata]
# Poll the Rancher metadata service for changes every `rancher.RefreshSeconds`.
# NOTE: this is less accurate than the default long polling technique which
# will provide near instantaneous updates to Traefik
#
# Optional
# Default: false
#
intervalPoll = true
# Prefix used for accessing the Rancher metadata service.
#
# Optional
# Default: "/latest"
#
prefix = "/2016-07-29"
```
## Rancher API
```toml
# Enable Rancher API configuration backend.
#
# Optional
# Default: true
#
[rancher.api]
# Endpoint to use when connecting to the Rancher API.
#
# Required
endpoint = "http://rancherserver.example.com/v1"
# AccessKey to use when connecting to the Rancher API.
#
# Required
accessKey = "XXXXXXXXXXXXXXXXXXXX"
# SecretKey to use when connecting to the Rancher API.
#
# Required
secretKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
!!! note
If Traefik needs access to the Rancher API, you need to set the `endpoint`, `accesskey` and `secretkey` parameters.
To enable Traefik to fetch information only about the Environment it is deployed in, you need to create an `Environment API Key`.
This can be found within the API Key advanced options.
## Labels: overriding default behaviour
Labels can be used on task containers to override default behaviour:
| Label | Description |
|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------|
| `traefik.protocol=https` | Override the default `http` protocol |
| `traefik.weight=10` | Assign this weight to the container |
| `traefik.enable=false` | Disable this container in Træfik |
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | Override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`. |
| `traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions (DEPRECATED) |

@@ -0,0 +1,349 @@
# Web Backend
Træfik can be configured:
- to use a RESTful API.
- to use a monitoring system (like Prometheus, DataDog or StatsD, ...).
- to expose a Web Dashboard.
## Configuration
```toml
# Enable web backend.
[web]
# Web administration port.
#
# Required
# Default: ":8080"
#
address = ":8080"
# SSL certificate and key used.
#
# Optional
#
# certFile = "traefik.crt"
# keyFile = "traefik.key"
# Set REST API to read-only mode.
#
# Optional
# Default: false
#
readOnly = true
```
## Web UI
![Web UI Providers](/img/web.frontend.png)
![Web UI Health](/img/traefik-health.png)
### Authentication
!!! note
The `/ping` path of the api is excluded from authentication (since 1.4).
#### Basic Authentication
Passwords can be encoded in MD5, SHA1 or BCrypt: you can use `htpasswd` to generate them.
Users can be specified directly in the toml file, or indirectly by referencing an external file;
if both are provided, the two are merged, with the external file contents taking precedence.
```toml
[web]
# ...
# To enable basic auth on the webui with 2 user/pass: test:test and test2:test2
[web.auth.basic]
users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"]
usersFile = "/path/to/.htpasswd"
# ...
```
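For example, an entry suitable for the `users` list can be generated with `htpasswd` (the user and password below are placeholders):

```shell
# -n prints the result instead of writing a file, -b takes the password from the command line.
htpasswd -nb someuser somepassword
```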
#### Digest Authentication
You can use `htdigest` to generate them.
Users can be specified directly in the toml file, or indirectly by referencing an external file;
if both are provided, the two are merged, with the external file contents taking precedence.
```toml
[web]
# ...
# To enable digest auth on the webui with 2 user/realm/pass: test:traefik:test and test2:traefik:test2
[web.auth.digest]
users = ["test:traefik:a2688e031edb4be6a3797f3882655c05 ", "test2:traefik:518845800f9e2bfb1f1f740ec24f074e"]
usersFile = "/path/to/.htdigest"
# ...
```
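Similarly, a digest entry for the `traefik` realm can be generated with `htdigest`; it prompts for the password, and the file and user names below are examples:

```shell
# -c creates the file; "traefik" must match the realm used in the configuration above.
htdigest -c ./.htdigest traefik someuser
```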
## Metrics
You can enable Traefik to export internal metrics to different monitoring systems.
### Prometheus
```toml
[web]
# ...
# To enable Traefik to export internal metrics to Prometheus
[web.metrics.prometheus]
# Buckets for latency metrics
#
# Optional
# Default: [0.1, 0.3, 1.2, 5]
buckets=[0.1,0.3,1.2,5.0]
# ...
```
### DataDog
```toml
[web]
# ...
# DataDog metrics exporter type
[web.metrics.datadog]
# DataDog's address.
#
# Required
# Default: "localhost:8125"
#
address = "localhost:8125"
# DataDog push interval
#
# Optional
# Default: "10s"
#
pushinterval = "10s"
# ...
```
### StatsD
```toml
[web]
# ...
# StatsD metrics exporter type
[web.metrics.statsd]
# StatsD's address.
#
# Required
# Default: "localhost:8125"
#
address = "localhost:8125"
# StatsD push interval
#
# Optional
# Default: "10s"
#
pushinterval = "10s"
# ...
```
## Statistics
```toml
[web]
# ...
# Enable more detailed statistics.
[web.statistics]
# Number of recent errors logged.
#
# Default: 10
#
recentErrors = 10
# ...
```
## API
| Path | Method | Description |
|-----------------------------------------------------------------|:-------------:|----------------------------------------------------------------------------------------------------|
| `/` | `GET` | Provides a simple HTML frontend of Træfik |
| `/ping`                                                          | `GET`, `HEAD` | A simple endpoint to check for Træfik process liveness. Returns a code `200` with the content: `OK`  |
| `/health` | `GET` | json health metrics |
| `/api` | `GET` | Configuration for all providers |
| `/api/providers` | `GET` | Providers |
| `/api/providers/{provider}` | `GET`, `PUT` | Get or update provider |
| `/api/providers/{provider}/backends` | `GET` | List backends |
| `/api/providers/{provider}/backends/{backend}` | `GET` | Get backend |
| `/api/providers/{provider}/backends/{backend}/servers` | `GET` | List servers in backend |
| `/api/providers/{provider}/backends/{backend}/servers/{server}` | `GET` | Get a server in a backend |
| `/api/providers/{provider}/frontends` | `GET` | List frontends |
| `/api/providers/{provider}/frontends/{frontend}` | `GET` | Get a frontend |
| `/api/providers/{provider}/frontends/{frontend}/routes` | `GET` | List routes in a frontend |
| `/api/providers/{provider}/frontends/{frontend}/routes/{route}` | `GET` | Get a route in a frontend |
| `/metrics` | `GET` | Export internal metrics |
### Example
#### Ping
```shell
curl -sv "http://localhost:8080/ping"
```
```shell
* Trying ::1...
* Connected to localhost (::1) port 8080 (#0)
> GET /ping HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 25 Aug 2016 01:35:36 GMT
< Content-Length: 2
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host localhost left intact
OK
```
#### Health
```shell
curl -s "http://localhost:8080/health" | jq .
```
```json
{
// Træfik PID
"pid": 2458,
// Træfik server uptime (formatted time)
"uptime": "39m6.885931127s",
// Træfik server uptime in seconds
"uptime_sec": 2346.885931127,
// current server date
"time": "2015-10-07 18:32:24.362238909 +0200 CEST",
// current server date in seconds
"unixtime": 1444235544,
// count HTTP response status code in realtime
"status_code_count": {
"502": 1
},
// count HTTP response status code since Træfik started
"total_status_code_count": {
"200": 7,
"404": 21,
"502": 13
},
// count HTTP response
"count": 1,
// count HTTP response
"total_count": 41,
// sum of all response times (formatted time)
"total_response_time": "35.456865605s",
// sum of all response times in seconds
"total_response_time_sec": 35.456865605,
// average response time (formatted time)
"average_response_time": "864.8016ms",
// average response time in seconds
"average_response_time_sec": 0.8648016000000001,
// request statistics [requires --web.statistics to be set]
// ten most recent requests with 4xx and 5xx status codes
"recent_errors": [
{
// status code
"status_code": 500,
// description of status code
"status": "Internal Server Error",
// request HTTP method
"method": "GET",
// request hostname
"host": "localhost",
// request path
"path": "/path",
// RFC 3339 formatted date/time
"time": "2016-10-21T16:59:15.418495872-07:00"
}
]
}
```
#### Provider configurations
```shell
curl -s "http://localhost:8080/api" | jq .
```
```json
{
"file": {
"frontends": {
"frontend2": {
"routes": {
"test_2": {
"rule": "Path:/test"
}
},
"backend": "backend1"
},
"frontend1": {
"routes": {
"test_1": {
"rule": "Host:test.localhost"
}
},
"backend": "backend2"
}
},
"backends": {
"backend2": {
"loadBalancer": {
"method": "drr"
},
"servers": {
"server2": {
"weight": 2,
"URL": "http://172.17.0.5:80"
},
"server1": {
"weight": 1,
"url": "http://172.17.0.4:80"
}
}
},
"backend1": {
"loadBalancer": {
"method": "wrr"
},
"circuitBreaker": {
"expression": "NetworkErrorRatio() > 0.5"
},
"servers": {
"server2": {
"weight": 1,
"url": "http://172.17.0.3:80"
},
"server1": {
"weight": 10,
"url": "http://172.17.0.2:80"
}
}
}
}
}
}
```
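Individual elements of the configuration can also be fetched through the more specific paths listed in the table above. As a sketch, assuming a provider named `file` and a backend named `backend1` (both names are illustrative), a single backend definition could be retrieved with:

```shell
# Query one backend of the "file" provider (provider and backend names are examples)
curl -s "http://localhost:8080/api/providers/file/backends/backend1" | jq .
```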

# Zookeeper Backend
Træfik can be configured to use Zookeeper as a configuration backend.
```toml
################################################################
# Zookeeper configuration backend
################################################################
# Enable Zookeeper configuration backend.
[zookeeper]
# Zookeeper server endpoint.
#
# Required
# Default: "127.0.0.1:2181"
#
endpoint = "127.0.0.1:2181"
# Enable watch Zookeeper changes.
#
# Optional
# Default: true
#
watch = true
# Prefix used for KV store.
#
# Optional
# Default: "/traefik"
#
prefix = "/traefik"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "zookeeper.tmpl"
# Use Zookeeper user/pass authentication.
#
# Optional
#
# username = foo
# password = bar
# Enable Zookeeper TLS connection.
#
# Optional
#
# [zookeeper.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/zookeeper.crt"
# key = "/etc/ssl/zookeeper.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section for documentation on the Traefik KV structure.
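As a rough illustration only (the exact keys are described in the KV guide linked above), a backend server URL could be stored with the Zookeeper CLI under the default `/traefik` prefix. Note that the parent znodes must already exist, since `create` is not recursive:

```shell
# Hypothetical key following the Traefik KV layout; parent znodes must be created beforehand
zkCli.sh -server 127.0.0.1:2181 create /traefik/backends/backend1/servers/server1/url "http://172.17.0.2:80"
```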

# Global Configuration
## Main Section
```toml
# Duration to give active requests a chance to finish before Traefik stops.
#
# Optional
# Default: "10s"
#
# graceTimeOut = "10s"
# Enable debug mode.
#
# Optional
# Default: false
#
# debug = true
# Periodically check if a new version has been released.
#
# Optional
# Default: true
#
# checkNewVersion = false
# Backends throttle duration.
#
# Optional
# Default: "2s"
#
# ProvidersThrottleDuration = "2s"
# Controls the maximum idle (keep-alive) connections to keep per-host.
#
# Optional
# Default: 200
#
# MaxIdleConnsPerHost = 200
# If set to true invalid SSL certificates are accepted for backends.
# This disables detection of man-in-the-middle attacks so should only be used on secure backend networks.
#
# Optional
# Default: false
#
# InsecureSkipVerify = true
# Register Certificates in the RootCA.
#
# Optional
# Default: []
#
# RootCAs = [ "/mycert.cert" ]
# Entrypoints to be used by frontends that do not specify any entrypoint.
# Each frontend can specify its own entrypoints.
#
# Optional
# Default: ["http"]
#
# defaultEntryPoints = ["http", "https"]
```
- `graceTimeOut`: Duration to give active requests a chance to finish before Traefik stops.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
**Note:** in this time frame no new requests are accepted.
- `ProvidersThrottleDuration`: Backends throttle duration: minimum duration in seconds between 2 events from providers before applying a new configuration.
It avoids unnecessary reloads if multiple events are sent in a short amount of time.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
- `MaxIdleConnsPerHost`: Controls the maximum idle (keep-alive) connections to keep per-host.
If zero, `DefaultMaxIdleConnsPerHost` from the Go standard library net/http module is used.
If you encounter 'too many open files' errors, you can either increase this value or change the `ulimit`.
- `InsecureSkipVerify`: If set to true, invalid SSL certificates are accepted for backends.
**Note:** This disables detection of man-in-the-middle attacks so should only be used on secure backend networks.
- `RootCAs`: Register certificates in the RootCA. These certificates will be used for backend calls.
**Note:** You can use a file path or the certificate content directly.
- `defaultEntryPoints`: Entrypoints to be used by frontends that do not specify any entrypoint.
Each frontend can specify its own entrypoints.
## Constraints
In a micro-service architecture, with a central service discovery, setting constraints limits Træfik scope to a smaller number of routes.
Træfik filters services according to service attributes/tags set in your configuration backends.
Supported filters:
- `tag`
### Simple
```toml
# Simple matching constraint
constraints = ["tag==api"]
# Simple mismatching constraint
constraints = ["tag!=api"]
# Globbing
constraints = ["tag==us-*"]
```
### Multiple
```toml
# Multiple constraints
# - "tag==" must match with at least one tag
# - "tag!=" must match with none of tags
constraints = ["tag!=us-*", "tag!=asia-*"]
```
### Backend-specific
Supported backends:
- Docker
- Consul K/V
- BoltDB
- Zookeeper
- Etcd
- Consul Catalog
- Rancher
- Marathon
- Kubernetes (using a provider-specific mechanism based on label selectors)
```toml
# Backend-specific constraint
[consulCatalog]
# ...
constraints = ["tag==api"]
# Backend-specific constraint
[marathon]
# ...
constraints = ["tag==api", "tag!=v*-beta"]
```
## Logs Definition
### Traefik logs
```toml
# Traefik logs file
# If not defined, logs to stdout
traefikLogsFile = "log/traefik.log"
# Log level
#
# Optional
# Default: "ERROR"
#
# Accepted values, in order of severity: "DEBUG", "INFO", "WARN", "ERROR", "FATAL", "PANIC"
# Messages at and above the selected level will be logged.
#
logLevel = "ERROR"
```
### Access Logs
Access logs are written when `[accessLog]` is defined.
By default it will write to stdout and produce logs in the textual Common Log Format (CLF), extended with additional fields.
To enable access logs using the default settings just add the `[accessLog]` entry.
```toml
[accessLog]
```
To write the logs into a logfile specify the `filePath`.
```toml
[accessLog]
filePath = "/path/to/access.log"
```
To write JSON format logs, specify `json` as the format:
```toml
[accessLog]
filePath = "/path/to/access.log"
format = "json"
```
Deprecated way (before 1.4):
```toml
# Access logs file
#
# DEPRECATED - see [accessLog] lower down
#
accessLogsFile = "log/access.log"
```
### Log Rotation
Traefik will close and reopen its log files, assuming they're configured, on receipt of a USR1 signal.
This allows the logs to be rotated and processed by an external program, such as `logrotate`.
!!! note
This does not work on Windows due to the lack of USR signals.
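For example, a `logrotate` `postrotate` hook might send the signal as follows (assuming the Traefik process can be resolved with `pidof`):

```shell
# Ask Traefik to close and reopen its log files after rotation
kill -USR1 "$(pidof traefik)"
```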
## Custom Error pages
Custom error pages can be returned, in lieu of the default, according to frontend-configured ranges of HTTP Status codes.
In the example below, if a 503 status is returned from the frontend "website", the custom error page at http://2.3.4.5/503.html is returned with the actual status code set in the HTTP header.
!!! note
The `503.html` page itself is not hosted on Traefik, but some other infrastructure.
```toml
[frontends]
[frontends.website]
backend = "website"
[frontends.website.errors]
[frontends.website.errors.network]
status = ["500-599"]
backend = "error"
query = "/{status}.html"
[frontends.website.routes.website]
rule = "Host: website.mydomain.com"
[backends]
[backends.website]
[backends.website.servers.website]
url = "https://1.2.3.4"
[backends.error]
[backends.error.servers.error]
url = "http://2.3.4.5"
```
In the above example, the error page rendered was based on the status code.
Instead, the query parameter can also be set to some generic error page like so: `query = "/500s.html"`
Now the `500s.html` error page is returned for the configured code range.
The configured status code ranges are inclusive; that is, in the above example, the `500s.html` page will be returned for status codes `500` through, and including, `599`.
Custom error pages are easiest to implement using the file provider.
For dynamic providers, the corresponding template file needs to be customized accordingly and referenced in the Traefik configuration.
## Retry Configuration
```toml
# Enable retry sending request if network error
[retry]
# Number of attempts
#
# Optional
# Default: (number of servers in backend) - 1
#
# attempts = 3
```
## Health Check Configuration
```toml
# Enable custom health check options.
[healthcheck]
# Set the default health check interval.
#
# Optional
# Default: "30s"
#
# interval = "30s"
```
- `interval` sets the default health check interval.
It is only effective if health check paths are defined.
Given provider-specific support, the value may be overridden on a per-backend basis.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
## Timeouts
### Responding Timeouts
`respondingTimeouts` are timeouts for incoming requests to the Traefik instance.
```toml
[respondingTimeouts]
# readTimeout is the maximum duration for reading the entire request, including the body.
#
# Optional
# Default: "0s"
#
# readTimeout = "5s"
# writeTimeout is the maximum duration before timing out writes of the response.
#
# Optional
# Default: "0s"
#
# writeTimeout = "5s"
# idleTimeout is the maximum duration an idle (keep-alive) connection will remain idle before closing itself.
#
# Optional
# Default: "180s"
#
# idleTimeout = "360s"
```
- `readTimeout` is the maximum duration for reading the entire request, including the body.
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
- `writeTimeout` is the maximum duration before timing out writes of the response.
It covers the time from the end of the request header read to the end of the response write.
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
- `idleTimeout` is the maximum duration an idle (keep-alive) connection will remain idle before closing itself.
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
### Forwarding Timeouts
`forwardingTimeouts` are timeouts for requests forwarded to the backend servers.
```toml
[forwardingTimeouts]
# dialTimeout is the amount of time to wait until a connection to a backend server can be established.
#
# Optional
# Default: "30s"
#
# dialTimeout = "30s"
# responseHeaderTimeout is the amount of time to wait for a server's response headers after fully writing the request (including its body, if any).
#
# Optional
# Default: "0s"
#
# responseHeaderTimeout = "0s"
```
- `dialTimeout` is the amount of time to wait until a connection to a backend server can be established.
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
- `responseHeaderTimeout` is the amount of time to wait for a server's response headers after fully writing the request (including its body, if any).
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
### Idle Timeout (deprecated)
Use [respondingTimeouts](/configuration/commons/#responding-timeouts) instead of `IdleTimeout`.
In the case both settings are configured, the deprecated option will be overwritten.
`IdleTimeout` is the maximum amount of time an idle (keep-alive) connection will remain idle before closing itself.
This is set to enforce closing of stale client connections.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
```toml
# IdleTimeout
#
# DEPRECATED - see [respondingTimeouts] section.
#
# Optional
# Default: "180s"
#
IdleTimeout = "360s"
```
## Override Default Configuration Template
!!! warning
For advanced users only.
Supported by all backends except: File backend, Web backend and DynamoDB backend.
```toml
[backend_name]
# Override default configuration template. For advanced users :)
#
# Optional
# Default: ""
#
filename = "custom_config_template.tpml"
# Enable debug logging of generated configuration template.
#
# Optional
# Default: false
#
debugLogGeneratedTemplate = true
```
Example:
```toml
[marathon]
filename = "my_custom_config_template.tpml"
```
The template files can be written using functions provided by:
- [go template](https://golang.org/pkg/text/template/)
- [sprig library](https://masterminds.github.io/sprig/)
Example:
```tmpl
[backends]
[backends.backend1]
url = "http://firstserver"
[backends.backend2]
url = "http://secondserver"
{{$frontends := dict "frontend1" "backend1" "frontend2" "backend2"}}
[frontends]
{{range $frontend, $backend := $frontends}}
[frontends.{{$frontend}}]
backend = "{{$backend}}"
{{end}}
```

# Entry Points Definition
```toml
# Entrypoints definition
#
# Default:
# [entryPoints]
# [entryPoints.http]
# address = ":80"
#
[entryPoints]
[entryPoints.http]
address = ":80"
```
## Redirect HTTP to HTTPS
To redirect an HTTP entrypoint to an HTTPS entrypoint (with SNI support):
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.com.cert"
KeyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.org.cert"
KeyFile = "integration/fixtures/https/snitest.org.key"
```
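With such a configuration, the redirection can be verified from the command line; the hostname below is only an example:

```shell
# Expect a 3xx response with a Location header pointing to the https entrypoint
curl -sI "http://snitest.com/" | grep -iE "^(HTTP|Location)"
```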
## Rewriting URL
To redirect an entrypoint, rewriting the URL:
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
regex = "^http://localhost/(.*)"
replacement = "http://mydomain/$1"
```
## TLS Mutual Authentication
Only accept clients that present a certificate signed by a specified Certificate Authority (CA).
`ClientCAFiles` can be configured with multiple CAs in the same file, or with multiple files each containing one or several CAs.
The CAs have to be in PEM format.
All clients will be required to present a valid certificate.
The requirement applies to all server certificates in the entrypoint.
In the example below, both `snitest.com` and `snitest.org` will require client certificates.
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
ClientCAFiles = ["tests/clientca1.crt", "tests/clientca2.crt"]
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.com.cert"
KeyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.org.cert"
KeyFile = "integration/fixtures/https/snitest.org.key"
```
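A quick way to check the behaviour is to call the entrypoint with and without a client certificate; the file names below are placeholders for a certificate/key pair signed by one of the configured CAs:

```shell
# Without a client certificate the TLS handshake should be rejected
curl -k "https://snitest.com/"

# With a valid client certificate the request should go through
curl -k --cert client.crt --key client.key "https://snitest.com/"
```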
## Authentication
### Basic Authentication
Passwords can be encoded in MD5, SHA1, or BCrypt; you can use `htpasswd` to generate them.
Users can be specified directly in the toml file, or indirectly by referencing an external file;
if both are provided, the two are merged, with external file contents having precedence.
```toml
# To enable basic auth on an entrypoint with 2 user/pass: test:test and test2:test2
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.auth.basic]
users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"]
usersFile = "/path/to/.htpasswd"
```
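Entries for the `users` list (or for the external file) can be generated with `htpasswd`; the user name and password below are examples:

```shell
# MD5 (APR1) is the htpasswd default and is accepted by Traefik
htpasswd -nb test test

# BCrypt variant
htpasswd -nbB test test
```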
### Digest Authentication
You can use `htdigest` to generate them.
Users can be specified directly in the toml file, or indirectly by referencing an external file;
if both are provided, the two are merged, with external file contents having precedence.
```toml
# To enable digest auth on an entrypoint with 2 user/realm/pass: test:traefik:test and test2:traefik:test2
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.auth.digest]
users = ["test:traefik:a2688e031edb4be6a3797f3882655c05 ", "test2:traefik:518845800f9e2bfb1f1f740ec24f074e"]
usersFile = "/path/to/.htdigest"
```
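The entries can be produced with `htdigest`; here the realm is `traefik` and the user name is an example:

```shell
# Creates (or overwrites, with -c) the digest file with one entry for realm "traefik";
# htdigest prompts for the password interactively
htdigest -c /path/to/.htdigest traefik test
```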
### Forward Authentication
This configuration will first forward the request to `http://authserver.com/auth`.
If the response code is 2XX, access is granted and the original request is performed.
Otherwise, the response from the auth server is returned.
```toml
[entryPoints]
[entryPoints.http]
# ...
# To enable forward auth on an entrypoint
[entryPoints.http.auth.forward]
address = "https://authserver.com/auth"
# Trust existing X-Forwarded-* headers.
# Useful with another reverse proxy in front of Traefik.
#
# Optional
# Default: false
#
trustForwardHeader = true
# Enable forward auth TLS connection.
#
# Optional
#
[entryPoints.http.auth.forward.tls]
cert = "authserver.crt"
key = "authserver.key"
```
## Specify Minimum TLS Version
To specify an HTTPS entry point with a minimum TLS version and an array of cipher suites (from crypto/tls):
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
minVersion = "VersionTLS12"
cipherSuites = ["TLS_RSA_WITH_AES_256_GCM_SHA384"]
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.org.cert"
keyFile = "integration/fixtures/https/snitest.org.key"
```
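The negotiated protocol version can then be checked with `openssl s_client`; with the configuration above, a TLS 1.1 handshake should be refused while TLS 1.2 succeeds (the address is an example):

```shell
# Should fail: below the configured minimum version
openssl s_client -connect snitest.com:443 -tls1_1 < /dev/null

# Should succeed and report the configured cipher suite
openssl s_client -connect snitest.com:443 -tls1_2 < /dev/null
```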
## Compression
To enable compression support using the gzip format:
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
```
Responses are compressed when:
* The response body is larger than `512` bytes
* And the `Accept-Encoding` request header contains `gzip`
* And the response is not already compressed, i.e. the `Content-Encoding` response header is not already set.
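Whether a given response was compressed can be checked by inspecting the `Content-Encoding` response header; the URL is an example:

```shell
# A gzip-encoded response should include "Content-Encoding: gzip"
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" "http://localhost/" | grep -i "content-encoding"
```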
## Whitelisting
To enable IP whitelisting at the entrypoint level:
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
whiteListSourceRange = ["127.0.0.1/32", "192.168.1.7"]
```
## ProxyProtocol
To enable [ProxyProtocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) support.
Only IPs in `trustedIPs` will lead to remote client address replacement: you should declare your load-balancer IP or CIDR range here (in a testing environment, you can trust everyone using `insecure = true`).
!!! danger
When putting Træfik behind another load balancer, be sure to carefully configure Proxy Protocol on both sides.
Otherwise, it could introduce a security risk in your system by forging requests.
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
# Enable ProxyProtocol
[entryPoints.http.proxyProtocol]
# List of trusted IPs
#
# Required
# Default: []
#
trustedIPs = ["127.0.0.1/32", "192.168.1.7"]
# Insecure mode FOR TESTING ENVIRONMENT ONLY
#
# Optional
# Default: false
#
# insecure = true
```
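When `insecure = true` is set for testing, recent versions of `curl` can emit a PROXY protocol header themselves, which makes it easy to verify that the forwarded client address is picked up:

```shell
# --haproxy-protocol (curl >= 7.60.0) prepends a PROXY protocol v1 header to the request
curl --haproxy-protocol "http://localhost/"
```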
## Forwarded Header
Only IPs in `trustedIPs` will be authorized to trust the client forwarded headers (`X-Forwarded-*`).
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
# Enable Forwarded Headers
[entryPoints.http.forwardedHeaders]
# List of trusted IPs
#
# Required
# Default: []
#
trustedIPs = ["127.0.0.1/32", "192.168.1.7"]
```
