Mirror of https://github.com/containous/traefik.git, synced 2025-09-06 05:44:21 +03:00

Compare commits


596 Commits

Author SHA1 Message Date
Ludovic Fernandez
422109b82f Prepare release v1.5.3 2018-02-27 12:28:03 +01:00
NicoMen
c864a7297b Add DEBUG log when no provided certificate can check a domain 2018-02-27 11:10:03 +01:00
SALLEYRON Julien
8da038041d Default value for lifecycle 2018-02-27 10:24:03 +01:00
Ludovic Fernandez
dd954f3c0a Fix Duration JSON unmarshal 2018-02-26 22:14:03 +01:00
NicoMen
db483e9d34 Check all the C/N and SANs of provided certificates before to generat… 2018-02-26 11:38:03 +01:00
Ludovic Fernandez
700b7a1b51 Add a CLI help command for Docker. 2018-02-26 10:00:05 +01:00
Ludovic Fernandez
ed65d00574 Infinite entry point redirection. 2018-02-26 09:34:03 +01:00
NicoMen
f460c1990e Starting Træfik even if TLS certificates are in error 2018-02-22 14:38:04 +01:00
Pierre Carru
83381e99cf it's -> its 2018-02-21 17:18:05 +01:00
Michael
31550fd2c9 Replace nginx by whoami in integration tests 2018-02-21 16:28:03 +01:00
Emile Vauge
ba046b4d3a Fix doc cipher suites 2018-02-21 08:00:03 +01:00
Ludovic Fernandez
d675d46930 Multiple issue and pull request templates. 2018-02-20 10:44:03 +01:00
Michael
7ea76929d4 Empty ip address when endpoint mode dnsrr 2018-02-20 08:12:02 +01:00
Ludovic Fernandez
f98c537ec2 Smooth dashboard refresh. 2018-02-16 16:02:03 +01:00
Emile Vauge
083bde64ee Fix traffic pronounce dead link 2018-02-16 13:22:02 +01:00
SALLEYRON Julien
45fe218ee2 Isolate backend with same name on different provider 2018-02-16 11:04:04 +01:00
SALLEYRON Julien
d54777236c Update documentation on onHostRule, ping examples, and web deprecation 2018-02-16 10:32:03 +01:00
Ludovic Fernandez
4f3b06472b Check ping configuration. 2018-02-13 23:42:03 +01:00
Michael
52bad03c8d Prepare release v1.5.2 2018-02-12 11:46:03 +01:00
Ludovic Fernandez
2fde3e8679 Continue refresh the configuration after a failure. 2018-02-12 09:28:03 +01:00
Michael
1e71f52b72 Explain how to write entrypoints definition in a compose file 2018-02-09 18:16:04 +01:00
NicoMen
2b1d2853cd Compress ACME certificates in KV stores. 2018-02-09 10:38:03 +01:00
SALLEYRON Julien
f07e8f58e6 Fix goroutine leaks in websocket 2018-02-08 08:24:03 +01:00
Ludovic Fernandez
7b19cb5631 Migrate to dep 0.4 2018-02-07 23:30:05 +01:00
djeeg
dbd173b4e4 Docs: regex+replacement hints for URL rewriting 2018-02-07 13:42:04 +01:00
Sune Keller
85cfd87c44 Clarify how setting a frontend priority works 2018-02-07 13:00:04 +01:00
Ludovic Fernandez
c867f48f11 Change go-bindata 2018-02-07 12:40:03 +01:00
Timo Reimann
514f9a7215 Reduce oxy round trip logs to debug. 2018-02-07 11:32:03 +01:00
Wilhelm Uschtrin
0b0380b690 Fix typo 2018-02-06 14:30:04 +01:00
Sonu Kumar
4d0c8c189a Fixed typo. 2018-02-06 14:04:03 +01:00
SALLEYRON Julien
afe4c307f9 Traefik still start when Let's encrypt is down 2018-02-05 18:20:04 +01:00
Michael
ce3a0fdd46 Fix dnsrr endpoint mode excluded when not using swarm LB 2018-02-05 11:34:03 +01:00
Ludovic Fernandez
203a5c5c48 Hide the pflag error when displaying help. 2018-02-05 09:12:03 +01:00
Ludovic Fernandez
be4aeaacde Add documentation about entry points definition with CLI. 2018-02-05 08:54:03 +01:00
Ludovic Fernandez
26dc2f4d61 doc: option not available in 1.5. 2018-01-30 17:16:03 +01:00
Alexandre Guédon
6aac78fc36 typo in "i"ngress annotations. 2018-01-29 16:48:05 +01:00
Ludovic Fernandez
f6c53f0450 Rebuild experimental image 2018-01-29 16:08:03 +01:00
NicoMen
54e09b98c7 Prepare release v1.5.1 2018-01-29 15:04:03 +01:00
Ludovic Fernandez
4eebaa1a80 Enhance file provider documentation. 2018-01-29 14:36:03 +01:00
NicoMen
cb9bf3ce68 Fix domain names in dynamic TLS configuration 2018-01-29 10:48:03 +01:00
SALLEYRON Julien
49a8cb76f5 Add note on redirect for ACME http challenge 2018-01-26 09:22:03 +01:00
SALLEYRON Julien
bf12306f17 Change gzipwriter receiver to implement CloseNotifier 2018-01-25 21:46:04 +01:00
SALLEYRON Julien
323b8237a0 Handle undefined entrypoint on ACME config and frontend config 2018-01-25 12:02:04 +01:00
Michael
039ccaf4f1 Fix tar gz source only on tags on travis 2018-01-24 16:10:04 +01:00
Michael
4afb39778a Fix add src.tar.gz in Træfik release 2018-01-24 10:40:04 +01:00
Ludovic Fernandez
751781a3b7 Increase integration tests timeout. 2018-01-24 09:14:02 +01:00
Ludovic Fernandez
f5d150c3b4 Fix the k8s redirection template. 2018-01-24 08:12:03 +01:00
Ludovic Fernandez
ae9342208e Prepare release v1.5.0 2018-01-23 17:34:04 +01:00
Michael
3040d9df0d Build cross binary only on tags in travis 2018-01-23 17:00:06 +01:00
Ludovic Fernandez
00e0571811 Rename TLSConfigurations to TLS. 2018-01-23 16:30:07 +01:00
Ludovic Fernandez
bfb07746fe Deploy pages on all branches. 2018-01-23 14:48:04 +01:00
Ludovic Fernandez
171cda6186 New multi version documentation mechanism 2018-01-23 14:18:03 +01:00
Timo Reimann
4cc17e112f Fix goroutine leak in throttler logic. 2018-01-23 12:44:03 +01:00
Ludovic Fernandez
b6af61fa6e ACME and corporate proxy. 2018-01-23 09:52:03 +01:00
Emile Vauge
4e07d92190 Fix doc dynamic certificates 2018-01-23 09:12:03 +01:00
Roman Pridybailo
fc00e1c228 Don't reload configuration when rancher server is down 2018-01-22 11:00:07 +01:00
Eldon
ae34486b57 Fix some doc links 2018-01-22 10:26:03 +01:00
SALLEYRON Julien
d7b513e9aa Disable websocket compression 2018-01-19 17:34:03 +01:00
SALLEYRON Julien
d8297a055a Fix breaking change in web metrics 2018-01-19 14:30:04 +01:00
SALLEYRON Julien
ced5aa5dc6 Challenge HTTP must ignore deprecated web.path option 2018-01-17 18:46:03 +01:00
Martijn Heemels
adfa3f795c Fix typo in anonymous usage log message. 2018-01-17 12:20:04 +01:00
Michael
fe426f6fb2 Prepare release v1.5.0-rc5 2018-01-15 16:48:03 +01:00
SALLEYRON Julien
3e439cc39b Add Let's Encrypt HTTP Challenge 2018-01-15 16:04:05 +01:00
Blake Mesdag
56c0634918 Return errors from Docker client.Events 2018-01-15 14:26:03 +01:00
Tristan Colgate-McFarlane
bcadd68904 Fix data races. 2018-01-15 11:46:04 +01:00
Timo Reimann
9790aa91fe Apply various contentual and stylish improvements to the k8s docs. 2018-01-15 09:40:05 +01:00
Michael
5316b412d2 Fix concurrent map writes on digest auth 2018-01-12 20:00:05 +01:00
SALLEYRON Julien
b5ee5c34f2 Add compression and better error handling 2018-01-12 17:52:03 +01:00
Ludovic Fernandez
8239e04a19 fix: typo in Docker template. 2018-01-11 15:20:06 +01:00
SALLEYRON Julien
e2c5f3712f Fix redirect problem on dashboard + docs/tests on [web] 2018-01-11 09:46:03 +01:00
NicoMen
d0f3ad6024 Modify DEBUG messages to get ACME certificates 2018-01-10 15:20:03 +01:00
Ludovic Fernandez
044d87d96d Switch to golang/dep. 2018-01-09 21:46:04 +01:00
Ludovic Fernandez
d88554fa92 fix: list entries parsing. 2018-01-09 12:40:04 +01:00
Timo Reimann
e74a20de24 Document rewrite-target annotation. 2018-01-09 11:56:02 +01:00
Ludovic Fernandez
7c227392fa fix: glide files. 2018-01-09 11:24:03 +01:00
Ludovic Fernandez
8a697f7a39 Fix: timeout integration test 2018-01-09 10:08:03 +01:00
Julien Maitrehenry
60fd26e0b7 Add a clustering example with Docker Swarm 2018-01-07 15:54:03 +01:00
SALLEYRON Julien
acd0c1bcd5 GzipResponse must implement CloseNotifier if ResponseWriter implement it 2018-01-05 02:26:03 +01:00
SALLEYRON Julien
22bdbd2498 Prepare release 1.5.0-rc4 2018-01-04 15:22:03 +01:00
Ludovic Fernandez
287fb78654 Split Consul and Consul Catalog documentation 2018-01-04 14:48:03 +01:00
SALLEYRON Julien
5b24403c8e Don't panic if ResponseWriter does not implement CloseNotify 2018-01-04 11:18:03 +01:00
Julien Maitrehenry
e83599dd08 Add a note on how to add label to a docker compose file 2018-01-04 10:34:03 +01:00
SALLEYRON Julien
f30ad20c9b Use gorilla readMessage and writeMessage instead of just an io.Copy 2018-01-03 15:32:03 +01:00
Timo Reimann
01e17b6c3e k8s guide: Leave note about assumed DaemonSet usage. 2018-01-03 09:12:03 +01:00
SALLEYRON Julien
3e13ebec93 We need to flush the end of the body when retry is streamed 2018-01-02 16:02:03 +01:00
Fernandez Ludovic
23c1a9ca8e Merge branch 'v1.4' into v1.5 2018-01-02 13:10:11 +01:00
Michael
741c739ef1 Prepare release v1.4.6 2018-01-02 12:54:03 +01:00
SALLEYRON Julien
52f16e11a8 Use gorilla readMessage and writeMessage instead of just an io.Copy 2018-01-02 12:30:05 +01:00
Michael
0ee6973e2f Upgrade docs dependencies and adapt configuration 2018-01-02 11:28:02 +01:00
Timo Reimann
4819974a1c Improve Marathon service label documentation. 2018-01-02 11:08:02 +01:00
Michael
e8e8b41eed Normalize serviceName added to the service backend names 2018-01-02 10:52:03 +01:00
Krzysztof Pędrys
7d23d3c0a4 Typo in docker.endpoint TCP port. 2018-01-02 10:38:03 +01:00
Ludovic Fernandez
718fc7a79d Fix bug report command 2018-01-02 10:14:03 +01:00
Ludovic Fernandez
bfd142b13b Fix custom headers template 2018-01-02 10:10:04 +01:00
Ludovic Fernandez
75533b2beb Use prefix for sticky and stickiness tags. 2018-01-02 09:44:02 +01:00
NicoMen
9a7821b8fa Send empty configuration from file provider 2017-12-21 21:24:03 +01:00
lishaoxiong
e8333883df Add tests for TLS dynamic configuration in ETCD3 2017-12-21 18:02:04 +01:00
NicoMen
1e44e339ad Allow deleting dynamically all TLS certificates from an entryPoint 2017-12-21 14:16:03 +01:00
Ludovic Fernandez
89a79d0f1b Prepare release 1.5.0-rc3 2017-12-20 15:10:06 +01:00
NicoMen
9e41485ff1 Modify ACME configuration migration into KV store 2017-12-20 14:40:07 +01:00
Nimi Wariboko Jr
3c7c6c4d9f Mesos: Use slave.PID.Host as task SlaveIP. 2017-12-20 12:12:03 +01:00
Ludovic Fernandez
cd1b3904da Add missing entrypoints template. 2017-12-20 10:26:03 +01:00
Emile Vauge
b23b2611b3 Add non regex pathPrefix 2017-12-19 17:00:12 +01:00
Timo Reimann
877770f7cf Update go-marathon 2017-12-19 16:00:09 +01:00
lishaoxiong
3142a4f4b3 Fix stickiness bug due to template syntax error 2017-12-19 14:08:03 +01:00
Ludovic Fernandez
b4dc96527d Move rate limit documentation. 2017-12-19 09:48:03 +01:00
Ludovic Fernandez
35b5ca4c63 fix isHealthy logic. 2017-12-18 10:30:08 +01:00
Ludovic Fernandez
daf3023b02 Change Zookeeper default prefix. 2017-12-18 09:22:03 +01:00
Michael
b17d5b80b8 Reload configuration when port change for one service 2017-12-15 20:52:03 +01:00
Michael
48b4eb5c0d Fix bad Træfik update on Consul Catalog 2017-12-15 16:00:14 +01:00
Ludovic Fernandez
7ecd6d20ba Support regex redirect by frontend 2017-12-15 11:48:03 +01:00
Kevin Risden
bddad57a7b Fix RawPath handling in addPrefix 2017-12-15 03:50:07 +01:00
Ludovic Fernandez
799136a714 fix: backend name for Stateful services. (Service Fabric) 2017-12-15 01:22:03 +01:00
Timo Reimann
350d61b4a6 Fix github.com/containous/traefik-extra-service-fabric dep to v1.0.1. 2017-12-14 16:06:03 +01:00
Gérald Croës
b6f5a66fab Grammar 2017-12-13 18:22:05 +01:00
Ludovic Fernandez
b0c12e2422 Fix: frontend redirect 2017-12-13 17:02:04 +01:00
Michael MATUR
623a7dc7e6 Fix small missing property in documentation for consul catalog 2017-12-13 11:56:02 +01:00
Michael MATUR
709c7e5707 Improve documentation for Cloudflare API key 2017-12-13 11:56:02 +01:00
Mikhail Vasin
ee04f52a16 Fix broken links and improve ResponseCodeRatio() description 2017-12-08 16:12:04 +01:00
Ludovic Fernandez
7d98c1c4e0 Prepare release v1.5.0-rc2 2017-12-06 15:58:03 +01:00
Timo Reimann
4387cf38d7 Close ring buffer used in throttling function. 2017-12-06 14:54:03 +01:00
Michael MATUR
a9d38570ab Merge tag 'v1.4.5' into v1.5 2017-12-06 13:05:08 +01:00
SALLEYRON Julien
0e619369fd fix healthcheck when web is not specified 2017-12-06 11:20:03 +01:00
Michael
cda09c843a Prepare release v1.4.5 2017-12-06 10:44:03 +01:00
NicoMen
6333bfe6e8 Modify the ACME renewing logs level 2017-12-05 15:42:03 +01:00
Timo Reimann
41d8863d2f Fix pprof route order. 2017-12-05 10:50:03 +01:00
Jan Mara
523b7f96f8 Add note to Kubernetes RBAC docs about RoleBindings and namespaces 2017-12-05 02:46:03 +01:00
Mikhail Vasin
ab1a930705 Emphasize the necessity of enabling file backend 2017-12-05 02:30:02 +01:00
Ludovic Fernandez
3a99c86cb3 Change custom headers separator 2017-12-04 11:40:03 +01:00
Michael
d6ad7e2e64 Fix empty IP for backend when dnsrr in Docker swarm mode 2017-12-01 14:34:03 +01:00
Ludovic Fernandez
aaf120f263 Reduce logs with new Kubernetes security annotations 2017-12-01 14:00:04 +01:00
Ludovic Fernandez
c228e73b26 fix Docker labels documentation render. 2017-12-01 09:36:02 +01:00
SALLEYRON Julien
e27e65eb76 Fix wrong defaultentrypoint and unexisting entrypoint issue 2017-11-30 16:10:02 +01:00
SALLEYRON Julien
1c8acf3929 Doesn't ignore web params when web.metrics.prometheus is set 2017-11-30 14:12:04 +01:00
SALLEYRON Julien
40b3c17703 Fix metrics problem on multiple entrypoints 2017-11-30 12:18:03 +01:00
Daniel Tomcej
313357a6b3 quote template strings 2017-11-30 10:42:02 +01:00
Michael
37a1aaad64 Improve consul documentation 2017-11-30 10:12:03 +01:00
Ludovic Fernandez
f084d2a28b Fix Labels/annotation logs and values. 2017-11-30 09:26:03 +01:00
Michael
077b39d7c6 Add option -s to gofmt for autogen 2017-11-30 08:52:03 +01:00
Ludovic Fernandez
7081f3df58 Sync vendor and glide. 2017-11-29 13:26:03 +01:00
Ludovic Fernandez
9fe6a0a894 Prepare release v1.5.0-rc1 2017-11-28 14:50:06 +01:00
Fernandez Ludovic
3d452fd5b9 Merge branch 'v1.4' into master 2017-11-28 14:03:55 +01:00
Michael
47a5cfbd3e Fix empty ip when container is stopped 2017-11-28 13:58:04 +01:00
Daniel Tomcej
4cb6241e93 Kubernetes security header annotations 2017-11-28 13:36:03 +01:00
Ludovic Fernandez
b572879691 Add link to futur 1.5 documentation. 2017-11-28 13:06:03 +01:00
Ludovic Fernandez
ad07a6ab2b fix: Service Fabric 'expose' as boolean. 2017-11-28 12:02:02 +01:00
Ludovic Fernandez
4bdeb33ac1 Docker labels 2017-11-28 11:16:03 +01:00
Ludovic Fernandez
101a4d0d8d Describe 'refreshSecond' configuration. 2017-11-27 17:02:05 +01:00
Ludovic Fernandez
89e07d0c55 Add link to crypto/tls godoc. 2017-11-27 15:24:03 +01:00
Lawrence Gripper
39c1cc1b3c Add Service Fabric Provider 2017-11-27 14:26:04 +01:00
Fernandez Ludovic
9f6f637527 Merge branch 'v1.4' into master 2017-11-27 11:40:50 +01:00
Kwok-kuen Cheung
0f09551a76 Fix kubernetes path prefix rule with rewrite-target 2017-11-27 11:22:03 +01:00
Marco Jantke
8cd72cfc1b remove obsolete links in k8s docs 2017-11-27 10:04:02 +01:00
Timo Reimann
7a141c8616 Document filename parameter for Kubernetes. 2017-11-26 01:02:03 +01:00
Ludovic Fernandez
0ca65f955d Stats collection. 2017-11-25 13:36:03 +01:00
Ludovic Fernandez
011b748a55 Change server receiver name. 2017-11-24 19:18:03 +01:00
Michael
f6181ef3e2 Fix custom headers replacement 2017-11-23 17:40:03 +01:00
Guilhem Lettron
24368747ab Use healthcheck for systemd watchdog 2017-11-23 16:10:04 +01:00
Fernandez Ludovic
66591cf216 Merge tag 'v1.4.4' into master 2017-11-23 15:21:47 +01:00
lishaoxiong
1feeeb2eec Manage certificates dynamically in kv store 2017-11-23 11:50:03 +01:00
SALLEYRON Julien
419d46c958 Prepare release v1.4.4 2017-11-23 11:48:03 +01:00
Daniel Tomcej
7063da1c7d Add docker security headers via labels 2017-11-22 19:40:04 +01:00
SALLEYRON Julien
bee8ebb00b Resync oxy with original repository 2017-11-22 18:20:03 +01:00
SALLEYRON Julien
da5e4a13bf add entrypoint in prometheus doc and remove web on influxdb doc 2017-11-22 16:28:03 +01:00
Ludovic Fernandez
5dc1ec68a3 Uncompress generated files. 2017-11-22 12:00:04 +01:00
lishaoxiong
3d2e5ebe39 Fix typo in examples 2017-11-22 10:16:03 +01:00
Ludovic Fernandez
f5130db6b0 gofmt generated file. 2017-11-21 21:30:03 +01:00
Marco Jantke
676b79db42 Fix raw path handling in strip prefix 2017-11-21 14:28:03 +01:00
Tait Clarridge
6d2f4a0813 Add health check label to ECS 2017-11-21 11:06:03 +01:00
Alex Antonov
4b91204686 Marathon constraints filtering 2017-11-21 10:48:04 +01:00
Emile Vauge
7ddefcef72 Add file to storeconfig 2017-11-21 10:24:03 +01:00
Ludovic Fernandez
0f3e42d463 autogen file mode 2017-11-21 08:20:04 +01:00
Ludovic Fernandez
c9129b8ecf Remove GzipHandler Fork 2017-11-20 18:32:03 +01:00
Ludovic Fernandez
a6955ecf59 Vendor generated file from template 2017-11-20 15:26:03 +01:00
NicoMen
6619a787a3 Fix problems about duplicated and missing Docker backends/frontends. 2017-11-20 15:16:03 +01:00
Raúl Sánchez
aae17c817b Fix issue with label traefik.backend.loadbalancer.stickiness.cookieName 2017-11-20 11:42:03 +01:00
Ludovic Fernandez
ab87bad952 Run Rancher tests cases in parallel. 2017-11-20 11:40:04 +01:00
Timo Reimann
be306d651e Register pprof handlers. 2017-11-20 11:04:03 +01:00
Ludovic Fernandez
8fe5c22075 Exclude RC from doc publication. 2017-11-20 09:42:02 +01:00
Ludovic Fernandez
05a9350e57 Use contants from http package. 2017-11-20 09:40:03 +01:00
ryarnyah
7ed4ae2f8c Add labels for traefik.frontend.entryPoints & PassTLSCert to Kubernetes 2017-11-20 02:12:03 +01:00
Manuel Zapf
5d6384e101 redirect to another entryPoint per frontend 2017-11-18 13:50:03 +01:00
Ludovic Fernandez
1a4564d998 http.Server log goes to Debug level. 2017-11-18 01:10:03 +01:00
NicoMen
66e489addb Update libkv dependency 2017-11-17 17:22:03 +01:00
Marco Jantke
cdab6b1796 fix concurrent provider config reloads 2017-11-17 10:26:03 +01:00
Ludovic Fernandez
722f299306 Support template as raw string. 2017-11-17 10:12:03 +01:00
Ludovic Fernandez
66be04f39e Documentation archive 2017-11-16 09:20:03 +01:00
Fernandez Ludovic
8719f2836e Merge 'v1.4.3' into master
Release v1.4.3
2017-11-15 23:01:08 +01:00
Ludovic Fernandez
0c702b0b6b Revert "Merge v1.4.2 into master" 2017-11-15 18:18:03 +01:00
Ludovic Fernandez
6fcab72ec7 Merge v1.4.2 into master 2017-11-14 16:48:03 +01:00
NicoMen
77b111702b Prepare release v1.4.3 2017-11-14 12:06:03 +01:00
NicoMen
96a7cc483f Add Traefik prefix to the KV key 2017-11-14 11:38:03 +01:00
Ludovic Fernandez
1e3506848a Flush and errorcode 2017-11-14 11:16:03 +01:00
Michael
5ee2cae85c Fix Traefik reload if Consul Catalog tags change 2017-11-13 12:14:02 +01:00
Ludovic Fernandez
5c119fe2d6 Exclude GRPC from compress 2017-11-10 14:12:02 +01:00
ferhat elmas
d55115844a Fix typos in changelog 2017-11-10 11:12:02 +01:00
NicoMen
4f4491c247 Allow adding optional Client CA files 2017-11-10 10:30:04 +01:00
Ludovic Fernandez
1691f586d7 fix: flaky test influxdb. 2017-11-09 17:22:03 +01:00
Ludovic Fernandez
04dfe0de84 Put subcommand in dedicated files. 2017-11-09 17:08:03 +01:00
SALLEYRON Julien
27d1b46835 Split Web into API/Dashboard, ping, metric and Rest Provider 2017-11-09 16:12:04 +01:00
Ivan Rogov
2f62ec3632 Link corrected 2017-11-09 15:54:04 +01:00
Timo Reimann
384488ac02 Remove unused lightMarathonClient. 2017-11-09 12:40:02 +01:00
NicoMen
c469e669fd Make the TLS certificates management dynamic. 2017-11-09 12:16:03 +01:00
Levi Blaney
56affb90ae Add secret creation to docs for kubernetes backend 2017-11-09 10:52:03 +01:00
SALLEYRON Julien
f6aa147c78 Add tests for websocket headers 2017-11-09 10:04:03 +01:00
SALLEYRON Julien
9bd0fff319 Keep status when stream mode and compress 2017-11-09 00:48:03 +01:00
Aditya C S
00d7c5972f Add InfluxDB support for traefik metrics 2017-11-08 15:14:03 +01:00
Jan Collijs
58a438167b Minor fix for docker volume vs created directory 2017-11-08 15:12:03 +01:00
Michael
e3131481e9 chore: sort imports 2017-11-08 11:40:04 +01:00
Tom Saleeba
bc8d68bd31 docs: fix some typos 2017-11-07 11:50:03 +01:00
Raúl Sánchez
07c6e33598 Update Rancher API integration to go-rancher client v2. 2017-11-05 13:02:03 +01:00
Bernhard Millauer
70812c70fc Postfix windows binaries with .exe 2017-11-03 17:02:14 +01:00
Nico Mandery
d89b234cad Fix typo in frontend.headers.customresponseheaders label 2017-11-03 14:32:03 +01:00
Fernandez Ludovic
2070aa9443 Merge 'v1.4.2' into master 2017-11-03 13:51:24 +01:00
Nils Knappmeier
91ff94ea56 dumpcerts.sh: Fix call to "base64" for Alpine 2017-11-02 15:52:04 +01:00
Ludovic Fernandez
0347537f43 Freeze version of mkdocs-material. 2017-11-02 14:38:03 +01:00
Ludovic Fernandez
db9b18f121 Prepare release v1.4.2 2017-11-02 12:28:03 +01:00
Michael MATUR
ee70001be3 [doc] - update documentation to add AWS_HOSTED_ZONE_ID 2017-11-02 11:44:04 +01:00
Michael MATUR
972eea97fe [ecs] - fix import order 2017-11-02 11:44:04 +01:00
Kendrick Erickson
2b4d33e919 Pass through certain forward auth negative response headers 2017-11-02 11:06:03 +01:00
Jim Hribar
fc4d670c88 Minor grammar change 2017-11-02 10:38:03 +01:00
Alex Antonov
02035d4942 Missing Backend key in configuration when application has no tasks 2017-11-01 11:26:03 +01:00
Félix P
93a46089ce Support Host NetworkMode for ECS provider 2017-10-31 11:44:03 +01:00
Tait Clarridge
e8d63b2a3b Update github.com/xenolf/lego to 0.4.1 2017-10-31 10:42:03 +01:00
Ludovic Fernandez
d3c7681bc5 New PR template 2017-10-30 16:38:03 +01:00
NicoMen
dc66db4abe Make the traefik.port label optional when using service labels in Docker containers. 2017-10-30 15:10:05 +01:00
NicoMen
a0e1cf8376 Fix IP address when Docker container network mode is container 2017-10-30 14:36:04 +01:00
Daniel König
5292b84f4f fixed dead link in kubernetes backend config docs 2017-10-30 14:04:03 +01:00
burningTyger
b27455a36f entrypoints -> entryPoints 2017-10-30 13:20:03 +01:00
Tiscs Sun
5042c5bf40 Added ReplacePathRegex middleware 2017-10-30 12:54:03 +01:00
NicoMen
da7b6f0baf Make frontend names differents for similar routes 2017-10-30 12:06:03 +01:00
Simon Elsbrock
9b5845f1cb Fix datastore corruption on reload due to shrinking config size 2017-10-30 11:22:04 +01:00
Emile Vauge
e8633d17e8 Add proxy protocol tests 2017-10-30 10:02:03 +01:00
Blake Mesdag
d1d8b01dfb Use Node IP in Swarm Standalone with "host" NetworkMode 2017-10-25 20:20:03 +02:00
Tait Clarridge
7c4353a0ac Add missing functions for ECS template 2017-10-25 17:18:03 +02:00
Erwin de Keijzer
1b2cb53d4f Fix the k8s docs example deployment yaml 2017-10-25 16:58:04 +02:00
Ludovic Fernandez
3158e51c62 Remove hardcoded runtime.GOMAXPROCS. 2017-10-25 16:16:02 +02:00
Fernandez Ludovic
a0c72cdf00 Merge v1.4.1 into master 2017-10-25 11:36:14 +02:00
NicoMen
f0371da838 Add unique ID to Docker services replicas 2017-10-25 10:00:03 +02:00
NicoMen
44b82e6231 Fix mkdocs version 2017-10-24 18:06:03 +02:00
Michael
04f0bf3070 Prepare release v1.4.1 2017-10-24 15:52:04 +02:00
SALLEYRON Julien
7400c39511 Stream mode when http2 2017-10-24 14:38:02 +02:00
Emile Vauge
008a5af6d6 Add mmatur to maintainers 2017-10-24 13:18:03 +02:00
Ludovic Fernandez
35ca40c3de Enhance Trust Forwarded Headers 2017-10-23 16:12:03 +02:00
Emile Vauge
de821fc305 fix healthcheck path 2017-10-23 15:48:03 +02:00
Fernandez Ludovic
e3cac7d0e5 fix(docker): Network filter. 2017-10-23 14:20:03 +02:00
Ludovic Fernandez
81f7aa9df2 Regex capturing group. 2017-10-23 10:20:02 +02:00
NicoMen
6bce298d90 Add a note about redirection rule to precise how regex/replacement work. 2017-10-22 09:44:03 +02:00
SALLEYRON Julien
afbad56012 Force http/1.1 for websocket 2017-10-20 17:38:04 +02:00
Daniel Tomcej
d973096464 Add Custom header parsing to Docker Provider 2017-10-20 17:14:03 +02:00
Fernandez Ludovic
7192aa86b5 Merge 'v1.4.0' into master 2017-10-16 23:10:44 +02:00
Ludovic Fernandez
9c8df8b9ce Fix 1.4.0 release date 2017-10-16 19:44:02 +02:00
Ludovic Fernandez
ff4c7b82bc Prepare release v1.4.0 2017-10-16 18:42:03 +02:00
Emile Vauge
47ff51e640 add retry backoff to staert config loading 2017-10-16 18:06:04 +02:00
Ludovic Fernandez
08503655d9 Backward compatibility for sticky 2017-10-16 17:38:03 +02:00
Michael
3afd6024b5 Fix consul catalog retry 2017-10-16 16:58:03 +02:00
Ludovic Fernandez
aa308b7a3a Add TrustForwardHeader options. 2017-10-16 12:46:03 +02:00
Ludovic Fernandez
9598f646f5 New entry point parser. 2017-10-13 15:04:02 +02:00
Sergey Kirillov
8af39bdaf7 Changed Docker network filter to allow any swarm network 2017-10-13 12:00:03 +02:00
Timo Reimann
914f3d1fa3 Do not run integration tests by default. 2017-10-13 11:08:03 +02:00
Ludovic Fernandez
8cb3f0835a Stickiness cookie name. 2017-10-12 17:50:03 +02:00
Manuel Zapf
cba0898e4f fix seconds to really be seconds 2017-10-12 16:26:03 +02:00
Timo Reimann
8d158402f3 Continue processing on invalid auth-realm annotation. 2017-10-12 15:48:03 +02:00
SALLEYRON Julien
7f2582e3b6 Nil body retries 2017-10-12 15:10:04 +02:00
Emile Vauge
dbc796359f Fix Proxy Protocol documentation 2017-10-12 11:10:03 +02:00
Thibault Coupin
4d1285d8e5 Add docker things for documentation 2017-10-11 14:46:03 +02:00
Marco Jantke
871d097b30 Fix traefik logs to behave like configured 2017-10-11 10:38:03 +02:00
Timo Reimann
1532033a7f Create dummy main() function in generate.go. 2017-10-10 18:20:02 +02:00
Fernandez Ludovic
9faae7387e Merge tag 'v1.4.0-rc5' into master 2017-10-10 17:17:44 +02:00
Timo Reimann
a5c644e719 Only listen to configured k8s namespaces. 2017-10-10 16:26:03 +02:00
Ludovic Fernandez
7a2ce59563 Prepare release v1.4.0-rc5 2017-10-10 15:50:03 +02:00
Ludovic Fernandez
14cec7e610 Stickiness documentation 2017-10-10 15:24:03 +02:00
Emile Vauge
6287a3dd53 Add trusted whitelist proxy protocol 2017-10-10 14:50:03 +02:00
SALLEYRON Julien
93a1db77c5 Move http2 configure transport 2017-10-10 12:14:03 +02:00
Ludovic Fernandez
a9d4b09bdb Stickiness cookie name 2017-10-10 11:10:02 +02:00
Timo Reimann
ed2eb7b5a6 Quote priority values in annotation examples. 2017-10-09 14:16:03 +02:00
Timo Reimann
18d8537d29 Document ways to partition Ingresses in the k8s guide. 2017-10-09 12:36:03 +02:00
Timo Reimann
72f3b1ed39 Remove pod from RBAC rules. 2017-10-09 12:12:03 +02:00
Marco Jantke
fd70e6edb1 enable prefix matching within slash boundaries 2017-10-06 11:34:03 +02:00
Shane Smith-Sahnow
5a578c5375 Updating make run-dev 2017-10-06 10:44:03 +02:00
Marco Jantke
9db8773055 fix flakiness in log rotation test 2017-10-06 09:20:13 +02:00
Timo Reimann
8a67434380 Sanitize cookie names. 2017-10-05 12:14:03 +02:00
Emile Vauge
c94e5f3589 Delay first version check 2017-10-05 08:42:02 +02:00
vermishelle
adef7200f6 Fix grammar 2017-10-03 10:22:03 +02:00
Fernandez Ludovic
cf508b6d48 Merge 'v1.4.0-rc4' into master 2017-10-02 17:18:24 +02:00
NicoMen
f8d36fda28 Prepare release v1.4.0-rc4 2017-10-02 16:00:03 +02:00
SALLEYRON Julien
4fe9cc7730 Add tests for urlencoded part in url 2017-10-02 15:36:02 +02:00
Chris Aumann
758b7f875b Fix grammar mistake in the kv-config docs 2017-10-02 14:58:04 +02:00
Ludovic Fernandez
0b97a67cfa CI: speed up pull images. 2017-10-02 14:22:03 +02:00
Julien Senon
ec5976bbc9 Update gRPC example 2017-10-02 11:34:03 +02:00
Ludovic Fernandez
5cc49e2931 bug command. 2017-10-02 10:32:02 +02:00
SALLEYRON Julien
b6752a2c02 Forward upgrade error from backend 2017-09-29 21:04:03 +02:00
jeffreykoetsier
d41e28fc36 Handle empty ECS Clusters properly 2017-09-29 16:56:03 +02:00
SALLEYRON Julien
64c52a6921 Consul catalog remove service failed 2017-09-29 16:30:03 +02:00
Ed Robinson
691a678b19 Improve compression documentation 2017-09-29 10:34:03 +02:00
Timo Reimann
1ba7fd91ff grep to-be-pulled-images directly to avoid newline issue. 2017-09-26 14:44:03 +02:00
Timo Reimann
1c98a9ad3e Add request accepting grace period delaying graceful shutdown. 2017-09-26 10:22:03 +02:00
Jiri Tyr
dd23ceeead Updating Docker output and curl for sticky sessions 2017-09-22 17:22:03 +02:00
Ludovic Fernandez
058fa1367b CI: speed up pull images. 2017-09-22 16:46:03 +02:00
Philippe M. Chiasson
9db12374ea Be certain to clear our marshalled representation before reloading it 2017-09-22 16:14:03 +02:00
Sami Jawhar
fc550ac1fc Dumpcerts.sh: fixed sed, extracted domain keys 2017-09-22 15:18:03 +02:00
Fernandez Ludovic
d6ef8ec3d1 Merge branch 'v1.4' into master 2017-09-21 11:37:33 +02:00
Marco Jantke
837db9a2d9 add json format support for traefik logs 2017-09-21 10:42:02 +02:00
SALLEYRON Julien
a941739f8a Change pull image command in Makefile 2017-09-20 20:02:02 +02:00
SALLEYRON Julien
795a346006 Flaky tests and refresh problem in consul catalog 2017-09-20 19:08:02 +02:00
Marco Jantke
9d00da7285 fix SSE subscriptions when retries are enabled 2017-09-20 18:40:03 +02:00
Marco Jantke
52c1909f24 Fix deprecated IdleTimeout config 2017-09-20 18:14:03 +02:00
Fernandez Ludovic
2cbf9cae71 Merge tag 'v1.4.0-rc3' into master 2017-09-18 21:52:44 +02:00
SALLEYRON Julien
f9225c54ff Prepare release v1.4.0-rc3 2017-09-18 18:20:03 +02:00
Ludovic Fernandez
cb05f36976 Manage Headers for the Authentication forwarding. 2017-09-18 17:48:07 +02:00
Frédéric Logier
49e0e20ce2 fix healthcheck port 2017-09-18 15:50:03 +02:00
Ludovic Fernandez
7c35337999 Remove GZIPHandler fork. 2017-09-18 11:04:03 +02:00
Fernandez Ludovic
2296aab5a8 refactor: unflaky access log. 2017-09-18 09:44:03 +02:00
Fernandez Ludovic
ce3b255f1a chore: Use go-check fork. 2017-09-18 09:44:03 +02:00
SALLEYRON Julien
3942f3366d User guide gRPC 2017-09-16 10:56:02 +02:00
Ludovic Fernandez
df76cc33a5 Fixes entry points configuration. 2017-09-15 20:56:04 +02:00
Marco Jantke
cf387d5a6d Enable loss less rotation of log files 2017-09-15 15:02:03 +02:00
Martin Proks
0a0cf87625 Fix rancher host IP address 2017-09-15 12:30:03 +02:00
Ludovic Fernandez
1a2544610d Enhance web backend documentation 2017-09-15 09:18:03 +02:00
Ludovic Fernandez
5229b7cfba Add forward auth documentation. 2017-09-14 21:26:02 +02:00
Timo Reimann
243b45881d Document custom error page restrictions. 2017-09-14 08:50:02 +02:00
Avi Deitcher
883028d981 Add examples of proxying ping 2017-09-13 15:24:03 +02:00
Ludovic Fernandez
bdeb7bfb9f Display Traefik logs in integration test 2017-09-13 10:34:04 +02:00
Ludovic Fernandez
808ffb0491 Explains new bot features. 2017-09-12 21:04:03 +02:00
Timo Reimann
5305a16350 Add guide section on production advice, esp. CPU. 2017-09-12 19:56:04 +02:00
Manuel Zapf
63b581935d Add stack name to backend name generation to fix rancher metadata backend 2017-09-12 15:06:04 +02:00
Ludovic Fernandez
c7c9349b00 Enhance documentation readability. 2017-09-11 19:10:04 +02:00
Ben Parli
d54417acfe Rate limiting for frontends 2017-09-09 13:36:03 +02:00
Fernandez Ludovic
9fba37b409 Merge v1.4.0-rc2 into master 2017-09-09 01:00:48 +02:00
Ludovic Fernandez
6d28c52f59 Prepare release v1.4.0-rc2 2017-09-08 21:28:02 +02:00
SALLEYRON Julien
f80a6ef2a6 Fix consul catalog refresh problems 2017-09-08 20:50:04 +02:00
SALLEYRON Julien
ecf31097ea Upgrade oxy for websocket bug 2017-09-08 16:14:03 +02:00
Ludovic Fernandez
16fc3675db Force GOARM to v6. 2017-09-08 14:50:04 +02:00
Ludovic Fernandez
651d993d9c prometheus, HTTP method and utf8 2017-09-08 11:22:03 +02:00
Ludovic Fernandez
03eb5139a2 Anonymize contributing doc 2017-09-08 10:28:03 +02:00
Ludovic Fernandez
286d882f1e Remove old glide elements for integration tests. 2017-09-08 10:26:03 +02:00
Emile Vauge
3b6afdf80c Fix error in prepareServer 2017-09-07 20:14:03 +02:00
Michael
c19cce69fa Add basic auth for ecs 2017-09-07 17:34:03 +02:00
SALLEYRON Julien
5c4931e235 Update oxy for websocket bug 2017-09-07 16:06:04 +02:00
Michael
b705e64a8a Add Basic auth for consul catalog 2017-09-07 15:28:02 +02:00
NicoMen
7fd1eb3780 Upgrade libkermit/compose version 2017-09-07 15:14:03 +02:00
Chulki Lee
8c5514612f Fix whitespaces 2017-09-07 12:02:03 +02:00
Chulki Lee
924e82ab0c doc: add notes on server urls with path 2017-09-07 11:40:03 +02:00
Keith Bremner
adcb99d330 Update cluster.md 2017-09-07 11:16:03 +02:00
Ludovic Fernandez
8339139400 Access log default values 2017-09-07 10:54:03 +02:00
Charlie O'Leary
a43cf8d2b8 Fix IAM policy sid. 2017-09-07 10:08:04 +02:00
NicoMen
2b863d9bc2 Upgrade libkermit/compose version 2017-09-06 15:02:03 +02:00
Michael
9ce4f94818 ECS provider refactoring 2017-09-06 12:10:05 +02:00
Marco Jantke
5157a6ad47 Send traefik logs to stdout 2017-09-06 11:58:03 +02:00
Manuel Zapf
cd6c58a372 fix rancher api environment get 2017-09-06 10:50:04 +02:00
SALLEYRON Julien
03ba8396f3 Add test for SSL TERMINATION in Websocket 2017-09-06 09:36:02 +02:00
Ludovic Fernandez
b0a0e16136 Enhance documentation. 2017-09-05 15:58:03 +02:00
Kyle Bai
732d73dd43 [Docs] Fix invalid service yaml example 2017-09-05 11:42:03 +02:00
Fernandez Ludovic
e075dfe911 refactor: re-organize doc. 2017-09-01 20:38:03 +02:00
Fernandez Ludovic
425b53585a doc: fix error pages configuration. 2017-09-01 20:38:03 +02:00
Ludovic Fernandez
d5bbb103d4 HTTPS for images, video and links in docs. 2017-09-01 19:44:03 +02:00
Ludovic Fernandez
5c2849ea07 Enhance security headers doc. 2017-09-01 17:44:03 +02:00
Ludovic Fernandez
723418e2cc fix: documentation Mesos. 2017-08-30 14:52:03 +02:00
Emile Vauge
45e2e8baec Update traefik SSH key take 2 (#2023) 2017-08-29 09:37:47 +02:00
Ludovic Fernandez
b0ae6bc049 Prepare release v1.4.0-rc1 2017-08-29 02:10:03 +02:00
Fernandez Ludovic
ffb53c07b8 refactor: basic configuration. 2017-08-28 23:02:04 +02:00
Fernandez Ludovic
f329b3b51d chore: change CODEOWNERS file. 2017-08-28 23:02:04 +02:00
Fernandez Ludovic
5b27aba3e1 doc: Material Theme. 2017-08-28 23:02:04 +02:00
Fernandez Ludovic
7c2ba62b56 doc: structural review
- user-guide review.
- add DataDog and StatD configuration.
- sync sample.toml and doc.
- split entry points doc.
- Deprecated.
2017-08-28 23:02:04 +02:00
Julien Maitrehenry
24862402e5 Refactor doc pages 2017-08-28 23:02:04 +02:00
ArikaChen
d568d2f55a Update golang version in contributing guide 2017-08-28 15:20:03 +02:00
Marco Jantke
dae7e7a80a add RetryAttempts to AccessLog in JSON format 2017-08-28 12:50:02 +02:00
Emile Vauge
23cdb37165 Update Traefiker SSH key 2017-08-28 11:48:03 +02:00
Fernandez Ludovic
2c82dfd444 Merge v1.3.7 2017-08-25 22:58:49 +02:00
Emile Vauge
c8c31aea62 Add proxy protocol 2017-08-25 21:32:03 +02:00
NicoMen
89b0037ec1 Improve Let's Encrypt documentation 2017-08-25 21:10:03 +02:00
Emile Vauge
b75fb23887 Update documentation for 1.4 release 2017-08-25 20:40:03 +02:00
Daniel Rampelt
52b69fbcb8 Add forward authentication option 2017-08-25 18:22:03 +02:00
Michael
f16219f90a Exposed by default feature in Consul Catalog 2017-08-25 17:32:03 +02:00
Ludovic Fernandez
7b0cef0fac Prepare release v1.3.7 2017-08-25 17:08:02 +02:00
SALLEYRON Julien
e0af17a17a Refactor globalConfiguration / WebProvider 2017-08-25 16:10:03 +02:00
mildis
92fb86b66f log X-Forwarded-For as ClientHost if present 2017-08-25 13:00:03 +02:00
Ludovic Fernandez
919295cffc Only forward X-Fowarded-Port. 2017-08-25 12:14:03 +02:00
Michael
086a85d2f0 Enable loadbalancer.sticky for ECS 2017-08-25 11:42:03 +02:00
Fernandez Ludovic
8235cd3645 refactor: minor changes. 2017-08-25 11:08:02 +02:00
Fernandez Ludovic
f1a257abf8 refactor: enhance bug report command. 2017-08-25 11:08:02 +02:00
Alex Antonov
98dfd2ba0e Added a check to ensure clientTLS configuration contains either a cert or a key 2017-08-25 10:26:02 +02:00
Ludovic Fernandez
87e6285cf6 Update certificates. 2017-08-25 09:20:03 +02:00
Luís Duarte
0d56a98836 Add support for Query String filtering 2017-08-24 20:28:03 +02:00
Nicolas Bonneval
8105f1c379 Enable loadbalancer.sticky for Consul Catalog 2017-08-24 18:38:05 +02:00
Marco Jantke
e6c2040ea8 Extract metrics to own package and refactor implementations 2017-08-23 20:46:03 +02:00
Julien Maitrehenry
c1b5b740ff toml page - replace li by table 2017-08-23 19:46:03 +02:00
Timo Reimann
1d2d0cefaa Fix documentation glitches. 2017-08-23 09:22:03 +02:00
Fernandez Ludovic
04e65958ee Merge 'v1.3.6' 2017-08-22 16:23:18 +02:00
Michael
8765494cbd Add support for several ECS backends 2017-08-22 11:46:03 +02:00
Julien Maitrehenry
05665f4eec Add more visibility to docker stack deploy label issue 2017-08-22 10:56:03 +02:00
Ludovic Fernandez
78544f7fa2 Prepare release v1.3.6 2017-08-22 09:52:02 +02:00
Emile Vauge
396449c07f Add healthcheck command 2017-08-21 23:18:02 +02:00
Emile Vauge
eda679776e Add Marco Jantke to maintainers 2017-08-21 22:22:04 +02:00
Max van der Stam
69d57d602f Add guide for Docker, Traefik & Letsencrypt 2017-08-21 21:20:04 +02:00
Ludovic Fernandez
32b2736efd Bump gorilla/mux version. 2017-08-21 20:40:02 +02:00
Ludovic Fernandez
3f650bbd11 Support X-Forwarded-Port. 2017-08-21 17:54:02 +02:00
Ludovic Fernandez
5313922bb7 compress: preserve status code 2017-08-21 11:10:03 +02:00
Alex Antonov
ec3e2c08b8 Support multi-port service routing for containers running on Marathon 2017-08-21 10:46:03 +02:00
Ludovic Fernandez
40e18db838 Websocket parameters and protocol. 2017-08-20 19:02:02 +02:00
Timo Reimann
0367034f93 Fix docs about default namespaces. 2017-08-18 19:18:02 +02:00
Timo Reimann
b80ecd51a7 Use default frontend priority of zero. 2017-08-18 16:14:03 +02:00
Marco Jantke
14a0d66410 Add configurable timeouts and curate default timeout settings 2017-08-18 15:34:04 +02:00
Fernandez Ludovic
d84ccbc52a doc: add bots documentation.
- move contributing guide
- move maintainer guide
2017-08-18 10:24:03 +02:00
Fernandez Ludovic
1190768f4b chore: remove all PR scripts. 2017-08-18 10:24:03 +02:00
Timo Reimann
ea3510d1f3 Add support for readiness checks. 2017-08-18 03:08:03 +02:00
Timo Reimann
3f76f73e8c Mark Marathon and Kubernetes as constraint-supporting. 2017-08-18 02:40:03 +02:00
Ludovic Fernandez
759c269dee Code cleaning. 2017-08-18 02:18:02 +02:00
Boran Car
c360395afc examples/k8s: fix ui ingress port out of sync with deployment 2017-08-18 01:40:03 +02:00
Marco Paga
60a35c8aba Setting the Cookie Path explicitly to root 2017-08-13 11:34:34 +02:00
Emile Vauge
50dd2b8cff Change Traefik intro video 2017-08-11 15:19:36 +02:00
Richard Shepherd
4e5fcac9cb Add log file close and reopen on receipt of SIGUSR1 2017-08-11 12:04:58 +02:00
Timo Reimann
64b8fc52c3 [marathon] Fix and extend integration tests.
- Update compose file.
- Add integration test for Marathon application deployment.
2017-08-10 21:58:08 +02:00
Timo Reimann
19a5ba3264 Update github.com/docker/libcompose
Update github.com/docker/libcompose in glide.* files.
Vendor github.com/docker/libcompose update.
2017-08-10 21:58:08 +02:00
thomasbach76
7ff6c32452 Add the sprig functions in the template engine 2017-08-10 20:42:39 +02:00
Christoph Glaubitz
ff11467022 Bind healthcheck to backend by entryPointName 2017-08-10 18:00:31 +02:00
Ludovic Fernandez
7d3878214a Update documentation 2017-08-10 17:29:32 +02:00
Ludovic Fernandez
984817d3a0 Add more files to CODEOWNERS. 2017-08-10 16:47:11 +02:00
Alex Antonov
6b133e24b9 Added RetryMetrics to DataDog and StatsD providers 2017-08-09 02:54:35 +02:00
SALLEYRON Julien
990ee89650 Add helloworld tests with gRPC 2017-08-06 11:55:42 +02:00
Mark Dastmalchi-Round
8071f31721 Moved namespace to correct place 2017-08-03 10:25:05 +02:00
Fernandez Ludovic
d456c2ce6a Merge 'v1.3.5' 2017-08-01 19:32:44 +02:00
Ludovic Fernandez
413ed62933 Prepare release v1.3.5 2017-08-01 17:43:37 +02:00
SALLEYRON Julien
1b4dc3783c Oxy with fixes on websocket + integration tests 2017-08-01 15:24:08 +02:00
Kirill Orlov
94f922cd28 Added ability to override frontend priority for k8s ingress router 2017-07-29 18:35:23 +02:00
Sascha Grunert
29390a3c4a Update the documentation to use DaemonSet or Deployment (#1735) 2017-07-29 12:50:04 +02:00
Julien Salleyron
1db9482a8e Prepare release v1.3.4 2017-07-27 17:24:19 +02:00
Julien Salleyron
888e6dcbc8 Oxy with gorilla for websocket(+integration tests) 2017-07-27 15:43:12 +02:00
Timo Reimann
765c44d77f [kubernetes] Add secrets resource to in-line RBAC spec.
Previously only existed in the Github-hosted example file.
2017-07-27 10:02:02 +02:00
Fernandez Ludovic
64ee68763b feat: match doc also. 2017-07-24 10:06:22 +02:00
Fernandez Ludovic
4122aef12e chore: fix CODEOWERS file. 2017-07-24 10:06:22 +02:00
Timo Reimann
8cb44598c0 [marathon] Use test builder.
This change introduces the builder pattern to the Marathon unit tests in
order to simplify and reduce the amount of testing boilerplate.

Additional changes:

- Add missing unit tests.
- Make all tests look consistent.
- Use dedicated type for task states for increased type safety.
- Remove obsoleted getApplication function.
2017-07-21 17:15:29 +02:00
Alex Antonov
69c628b626 DataDog and StatsD Metrics Support
* Added support for DataDog and StatsD monitoring
* Added documentation
2017-07-21 00:26:43 +02:00
Marco Jantke
cd28e7b24f fix GraceTimeOut description
Documentation stated that GraceTimeOut describes the timeout between
hot-reloads, which is not the case. GraceTimeOut describes the timeout
Traefik uses to finish serving active requests before stopping only.
2017-07-20 23:42:43 +02:00
Fernandez Ludovic
40d9058bb6 refactor: migration Negroni from codegangsta to urfave 2017-07-20 15:19:15 +02:00
Fernandez Ludovic
c36e0b3b06 refactor: add Safe everywhere is needing. 2017-07-20 14:59:54 +02:00
Timo Reimann
3174fb8861 [marathon] Assign filtered tasks to apps contained in slice.
We previously assigned them to a copy of each application, which
effectively disabled all filtering.

Fixes a bug introduced along commit 779eeba.
2017-07-20 10:39:27 +02:00
Marco Jantke
074b31b5e9 respond with 503 on empty backend 2017-07-19 19:28:24 +02:00
Pierre Ugaz
16609cd485 Update docs for dnsimple env vars.
* Lego library uses DNS_OAUTH_TOKEN instead of DNSIMPLE_OAUTH_TOKEN
2017-07-19 18:01:24 +02:00
dedalusj
a09a8b1235 Fix replace path rule
* Fix replace path rule
* test: add RequestURI tests.
2017-07-19 10:27:52 +02:00
bitsofinfo
70ab34cfb8 doc change regarding consul SSL
document change to clarify consul ssl, vs consul ssl client certificate security
2017-07-18 17:22:08 +02:00
Fernandez Ludovic
36ee69609e fix: double compression. 2017-07-18 11:27:24 +02:00
Fernandez Ludovic
c53be185f4 chore(glide): change nergoni git url. 2017-07-12 10:22:39 +02:00
Timo Reimann
779eeba650 [marathon] Use single API call to fetch Marathon resources.
Change Marathon provider to make just one API call instead of two per
configuration update by means of specifying embedded resources, which
enable retrieving multiple response types from the API at once. Apart
from the obvious savings in API calls, we primarily gain a consistent
view on both applications and tasks that allows us to drop a lot of
correlation logic.  Additionally, it will serve as the basis for the
introduction of readiness checks which require application/task
consistency for correct leverage on the proxy end.

Additional changes:

marathon.go:
- Filter on tasks now embedded inside the applications.
- Reduce/simplify signature on multiple template functions as we do not
  need to check for proper application/task correlation anymore.
- Remove getFrontendBackend in favor of just getBackend.
- Move filtering on enabled/exposed applications from `taskFilter` to
  `applicationFilter`. (The task filter just reached out to the
  applications anyway, so it never made sense to locate it with the
  tasks where the filter was called once for every task even though the
  result would never change.)
- Remove duplicate constraints filter in tasks, where it neither made
  sense to keep as it operates on the application level only.
- Add context to rendering error.

marathon_test.go:
- Simplify and reduce numerous tests.
- Convert tests with high number of cases into parallelized sub-tests.
- Improve readability/structure for several tests.
- Add missing test for enabled/exposed applications.
- Simplify the mocked Marathon server.

marathon.tmpl:
- Update application/task iteration.
- Replace `getFrontendBackend` by `getBackend`.
2017-07-11 14:35:01 +02:00
Marco Jantke
58ffea6627 extract lb configuration steps into method 2017-07-10 19:18:31 +02:00
Fernandez Ludovic
a2d68ed881 chore: GitHub Code Owners. 2017-07-10 17:45:58 +02:00
Ludovic Fernandez
d653a348b1 Factorize labels
* refactor(accesslog): factorize file name.
* traefik.frontend.rule
* traefik.frontend.value
* traefik.backend.circuitbreaker.expression
* traefik.enable
* traefik.backend.loadbalancer.method
* traefik.backend.loadbalancer.sticky
* traefik.backend.maxconn.amount
* traefik.backend.maxconn.extractorfunc
* traefik.port
* traefik.tags
* traefik.backend
* traefik.weight
* traefik.domain
* traefik.protocol
* traefik.frontend.passHostHeader
* traefik.frontend.whitelistSourceRange
* traefik.frontend.priority
* traefik.frontend.entryPoints
* traefik.frontend.auth.basic
* traefik.backend.id
* traefik.backend.circuitbreaker
* traefik.frontend.rule.type
* traefik.portIndex
* refactor(docker): specific labels
* refactor(rancher): specific labels
* traefik.backend.healthcheck.*
* refactor(providers): factorize labels.
2017-07-10 16:58:12 +02:00
Ludovic Fernandez
2e84b1e556 Enhance integration tests
* refactor: remove unused code.
* refactor: factorize Traefik cmd start.
* refactor(whitelist): minor change.
* refactor(accesslog): better use of checker.
* refactor(errorpages): factorize containers IP variables.
* refactor(integration): refactor cmdTraefikWithConfigFile.
2017-07-10 14:58:31 +02:00
Fernandez Ludovic
bbb133d94c doc: remove glide integration. 2017-07-10 11:33:05 +02:00
Timo Reimann
d90fa5ab3e [kubernetes] Improve documentation.
- Add details to the labelselector parameter.
- Add section on ExternalNames in the guide.
2017-07-08 12:59:12 +02:00
Christophe Robin
759a19bc4f Add whitelist configuration option for entrypoints
* Add whitelist configuration option for entrypoints
* Add whitelist support to --entrypoint flag
2017-07-08 12:21:14 +02:00
Fernandez Ludovic
a7ec785994 refactor(dynamodb): Use Traefik Logger. 2017-07-08 00:05:53 +02:00
Fernandez Ludovic
46faa7a745 refactor(ecs): Use Traefik Logger. 2017-07-08 00:05:53 +02:00
Fernandez Ludovic
54e3f08833 refactor(marathon): Use Traefik Logger. 2017-07-08 00:05:53 +02:00
Fernandez Ludovic
b365836c57 feat: Add Trace in Base Provider. 2017-07-08 00:05:53 +02:00
Fernandez Ludovic
242f1b9c3c feat(logger): Expose Logrus writer.
- Hack logrus scanner buffer size.
- dedicate method for large scanner buffer.
2017-07-08 00:05:53 +02:00
Matt Christiansen
4dfbb6d489 Add marathon label to configure basic auth, similar to docker and rancher providers 2017-07-07 23:36:04 +02:00
James Sturtevant
c31b4c55c2 Update contributing guide build steps 2017-07-07 23:13:23 +02:00
Salvatore Pinto
ca5bbab20a traefik controller access to secrets
The traefik controller shall have access to secrets for the k8s basic authentication (#1488) to work
2017-07-07 22:35:03 +02:00
Michael Laccetti
41dd124a4b kubernetes ingress rewrite-target implementation
* Adding support for `ingress.kubernetes.io/rewrite-target`

We create a rule using the `PathPrefixStrip` to trim out the bit in the rewrite rule.
2017-07-07 21:27:54 +02:00
Marco Jantke
dbf6161fa1 always rebuild webui on 'make image'
and introduce a new make target image-dirty that is used for the Traefik
deployment.
2017-07-07 17:56:48 +02:00
Marcos Nils
7aabd6e385 Update README.md 2017-07-07 14:34:25 +02:00
NicoMen
cb203f8e7e Make the ACME developements testing easier
* ADD docker-compose and shell script to allow developers to get ACME environment easily
2017-07-07 11:36:32 +02:00
Fernandez Ludovic
8f845bac74 Merge tag 'v1.3.3' 2017-07-06 19:37:12 +02:00
Fernandez Ludovic
98b52d1f54 Prepare release v1.3.3 2017-07-06 17:53:35 +02:00
Timo Reimann
4892b2b0da [kubernetes] Undo the Secrets controller sync wait.
When Secrets permissions have not been granted (which is likely to be
the case for users not needing the basic auth feature), the watch on the
Secrets API will never yield a response, thereby causing the controller
to never sync successfully, and in turn causing the check for all
controller synchronizations to fail consistently. Thus, no event will
ever be handled.
2017-07-06 17:12:25 +02:00
Timo Reimann
a89eb122a0 Clarify that provider-enabling argument parameters set all defaults. 2017-07-06 17:00:44 +02:00
Vincent Demeester
b7daa2f3a4 Update traefik dependencies (docker/docker and related) (#1823)
Update traefik dependencies (docker/docker and related)

- Update dependencies
- Fix compilation problems
- Remove vdemeester/docker-events (in docker api now)
- Remove `integration/vendor`
- Use `testImport`
- update some deps.
- regenerate the lock from scratch (after a `glide cc`)
2017-07-06 16:28:13 +02:00
Timo Reimann
91ce78da46 [k8s] Tell glog to log everything into STDERR.
Logging errors into a file inside a minimalistic container might not be
possible, and glog bails out with an exit code > 0 if it fails.
2017-07-04 17:11:50 +02:00
Vincent Demeester
7d178f49b4 Update docker version to 17.03.2…
… and also update the url to get static binaries.

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2017-07-03 16:21:28 +02:00
Fernandez Ludovic
85f4f26942 doc: release cycle. 2017-07-03 14:57:19 +02:00
Fernandez Ludovic
eee8ba8a53 doc: Traefik bug command. 2017-07-03 12:42:06 +02:00
Ludovic Fernandez
22aceec426 Re-think integration vendoring
- remove docker/docker from  Traefik vendor (unused)
- use `ignore` for all Traefik vendor in integration glide.
- defined only integration specific version of the dependencies.
2017-07-03 11:53:31 +02:00
Ben Parli
121c057b90 Custom Error Pages (#1675)
* custom error pages
2017-07-01 01:04:18 +02:00
Marco Jantke
2c976227dd remove confusing go-marathon log message
Log message produced by go-marathon was:
time="2017-06-28T09:08:19Z" level=debug msg="listenToSSE(): failed to
handle event: failed to decode the event type, content: , error: EOF"

The fix for this was done in the upstream project of go-marathon
donovanhide/eventsource.

Background is that Marathon periodically sends a \n over the SSE
subscription, in order to keep the connection alive. This was parsed as
empty event by the eventsource and published. go-marathon in turn was
not able to do something with this empty event was producing the log
message above. By getting rid of publishing empty events in the
downstream library, we also get rid of this log message.
2017-06-30 22:14:57 +02:00
Julien Salleyron
81d011e57d Handle RootCAs Certificate 2017-06-30 14:56:55 +02:00
Fernandez Ludovic
3776e58041 Merge branch 'v1.3' 2017-06-30 00:04:04 +02:00
Fernandez Ludovic
f06e256934 Prepare release v1.3.2 2017-06-29 17:40:11 +02:00
Fernandez Ludovic
4699d6be18 Fix proxying of unannounced trailers 2017-06-29 17:03:29 +02:00
Timo Reimann
6473002021 Continue Ingress processing on auth retrieval failure. 2017-06-29 16:13:53 +02:00
Timo Reimann
4d89ff7e18 Improve basic auth handling.
- Enrich logging.
- Move error closer to producer.
2017-06-29 16:13:53 +02:00
Timo Reimann
c5c63071ca Wait for secret controller to finish synchronizing.
Prevents a race on closing the events channel, possibly leading to a
double-close.
2017-06-29 16:13:53 +02:00
Timo Reimann
9fbe21c534 Upgrade go-marathon to dd6cbd4.
Fixes a problem with UnreachableStrategy being available now in two
type-incompatible formats (object and string).

We also upgrade the transitive dependency
github.com/donovanhide/eventsource.
2017-06-29 09:59:20 +02:00
Fernandez Ludovic
36c88111de Merge branch 'v1.3' 2017-06-27 23:27:00 +02:00
Fernandez Ludovic
7a34303593 chore: Bump Docker version to 17.03 2017-06-27 23:22:43 +02:00
Fernandez Ludovic
2201dcd505 doc: Manuel Laufenberg become Manuel Zapf. 2017-06-27 22:02:23 +02:00
Emile Vauge
7a7cafcbaa Add Nicolas Mengin to maintainers 2017-06-27 22:02:23 +02:00
Emile Vauge
efb671401d Add Julien Salleyron to maintainers 2017-06-27 21:35:47 +02:00
Richard Shepherd
4128c1ac8d Allow file provider to load config from files in a directory. 2017-06-27 16:58:04 +02:00
Fernandez Ludovic
73e10c96cc Merge branch 'v1.3' 2017-06-27 14:42:12 +02:00
Fernandez Ludovic
fdb24c64e4 chore(semaphoreci): update Docker version. 2017-06-27 14:05:44 +02:00
nmengin
631079a12f feature: Add provided certificates check before to generate ACME certificate when OnHostRule is activated
- ADD TI to check the new behaviour with onHostRule and provided certificates
- ADD TU on the getProvidedCertificate method
2017-06-26 18:32:55 +02:00
Marco Jantke
0055965295 add status code to request duration metric 2017-06-26 18:21:28 +02:00
Fernandez Ludovic
f99f3b987e fix: websocket when the connection upgrade failed. 2017-06-26 18:00:03 +02:00
Emile Vauge
34e60a8404 Change to a more flexible PR review process
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-06-26 11:04:12 +02:00
Timo Reimann
ceec81011b Address review comments. 2017-06-24 12:32:05 +02:00
Brian 'Redbeard' Harrington
927003329e contrib: Dump keys/certs from acme.json to files
In the event that a user needs to explode their acme.json file into
a set of directories and relevant files for troubleshooting or use
with other programs this script will parse them into the components
in the following path structure:

```
certdir
├── certs
│   ├── domain-1.example.com
│   ├── domain-2.example.com
│   └── domain-n.example.com
└── private
    └── letsencrypt.key
```
2017-06-24 12:32:05 +02:00
Fernandez Ludovic
01bb0a80ab doc: update Rancher documentation. 2017-06-21 14:54:36 +02:00
vholovko
db1baf80a9 Speeding up health change detection by separating it from catalog services check. 2017-06-20 20:27:04 +02:00
Martin Baillie
9cb07d026f Refactor into dual Rancher API/Metadata providers
Introduces Rancher's metadata service as an optional provider source for
Traefik, enabled by setting `rancher.MetadataService`.

The provider uses a long polling technique to watch the metadata service and
obtain near instantaneous updates. Alternatively it can be configured to poll
the metadata service every `rancher.RefreshSeconds` by setting
`rancher.MetadataPoll`.

The refactor splits API and metadata service code into separate source
files respectively, and specific configuration is deferred to
sub-structs.

Incorporates bugfix #1414
2017-06-20 19:08:53 +02:00
tanyadegurechaff
984ea1040f Fix error handling for docker swarm mode 2017-06-20 18:10:21 +02:00
Martin Baillie
447109e868 Add HTTP HEAD handling to /ping endpoint
Also updates documentation to reflect new method.
2017-06-20 11:40:14 +02:00
Marco Jantke
f79317a435 retry only on real network errors
Now retries only happen when actual network errors occur and not only
anymore based on the HTTP status code. This is because the backend could
also send this status codes as their normal interface and in that case
we don't want to retry.
2017-06-19 20:13:46 +02:00
Fernandez Ludovic
131d8dd765 Merge tag 'v1.3.1' 2017-06-16 16:52:53 +02:00
bitsofinfo
b452695c20 added consul acl token note 2017-06-16 16:31:03 +02:00
Mat Byczkowski
f17785c3ab doc: fix typo in maintainer.md 2017-06-16 14:00:24 +02:00
Fernandez Ludovic
2a578748fd Merge branch 'v1.3' 2017-06-14 22:26:35 +02:00
Marco Jantke
2ddae2e856 update go-marathon to 441a03a
in order to get the latest fixes regarding SSE subscription failover.
2017-06-14 10:03:49 +02:00
Marco Jantke
885b9f371c enable logging to stdout for access logs 2017-06-13 23:43:38 +02:00
Daniel Tomcej
f275e4ad3c Create Header Middleware 2017-06-13 12:34:17 +02:00
Fernandez Ludovic
aea7bc0c07 chore: update Glide hash. 2017-06-12 22:15:33 +02:00
Fernandez Ludovic
a457392ec3 refactor: clean coreos/etcd dependency. 2017-06-12 22:15:33 +02:00
Fernandez Ludovic
37ec7d0505 refactor: subpackage for x/oauth2. 2017-06-12 22:15:33 +02:00
Fernandez Ludovic
8f6404ab3a fix: sirupsen/logrus version
State:
- Current version: 10f801ebc38b33738c9d17d50860f484a0988ff5
- Glide suggest: f7f79f729e0fbe2fcc061db48a9ba0263f588252

https://github.com/sirupsen/logrus/commits/master?after=85b1699d505667d13f8ac4478c1debbf85d6c5de+34
10f801ebc3 (17 Mar 2017)
f7f79f729e (19 Jan 2016)
2017-06-12 22:15:33 +02:00
Fernandez Ludovic
1538b16b21 fix: golang/protobuf version
`github.com/golang/protobuf`:
- `github.com/prometheus/client_golang` (no version)
- `github.com/gogo/protobuf` (no version)
- `google.golang.org/appengine` (no version)
- `github.com/matttproud/golang_protobuf_extensions` (no version)

State:
- Current version: 2bba0603135d7d7f5cb73b2125beeda19c09f4ef
- Glide suggest: 8616e8ee5e20a1704615e6c8d7afcdac06087a67

Force to keep the current version.

Refs
- 2bba060313 (Mar 31, 2017) next commit the Apr 27, 2017.
- 8616e8ee5e (8 Jun 2016)
2017-06-12 22:15:33 +02:00
Fernandez Ludovic
a6477fbd95 fix: Prometheus dependency version: matttproud/golang_protobuf_extensions
`matttproud/golang_protobuf_extensions` is used by:
- `github.com/prometheus/client_golang`
- `github.com/prometheus/common`

Force to the latest version.

Refs:
- https://github.com/matttproud/golang_protobuf_extensions/commits/master (no dependencies manager)
- 24 Apr 2016, c12348ce28 (master, HEAD)
- 6 Apr 2015, fc2b8d3a73
2017-06-12 22:15:33 +02:00
Fernandez Ludovic
e802dcd189 fix: Mesos/k8s dependency version: golang/glog
`golang/glog` is used by:
- `github.com/mesos/mesos-go` (no version)
- `k8s.io/client-go` (`44145f04b68cf362d9c4df2182967c2275eaefed`)

In #353 (add Mesos provider, 20 Jul 2016), the `golang/glog` hash is `fca8c8854093a154ff1eb580aae10276ad6b1b5f`.

The problem appear in #836 (use k8s client, 1 Dec 2016).

Refs:
- Traefik:
  - https://github.com/containous/traefik/pull/836
  - 131f581f77
- Glog
  - https://github.com/golang/glog/commits/master
  - https://github.com/golang/glog/pull/13
  - 44145f04b6
  - fca8c88540
- k8s
  - e121606b0d/Godeps/Godeps.json
  - https://github.com/kubernetes/client-go/blob/master/Godeps/Godeps.json
2017-06-12 22:15:33 +02:00
Fernandez Ludovic
931dc02c09 fix: Vulcand dependency version : vulcand/predicate
`vulcand/predicate` is used by:
- `github.com/vulcand/oxy` (no dependencies manager)
- `github.com/vulcand/route` (used by `github.com/vulcand/vulcand`)

`github.com/vulcand/vulcand` (Godeps) requires an old version, `cb0bff91a7ab7cf7571e661ff883fc997bc554a3`.

`19b9dde14240d94c804ae5736ad0e1de10bf8fe6` is the only commit before `cb0bff91a7ab7cf7571e661ff883fc997bc554a3`.

refs:
- 42492a3a85/Godeps/Godeps.json
- https://github.com/vulcand/predicate/commits/master
- 19b9dde142
2017-06-12 22:15:33 +02:00
Fernandez Ludovic
7017cdcf49 fix: oxy dependency version: mailgun/timetools. 2017-06-12 22:15:33 +02:00
Fernandez Ludovic
5aa017d9b5 fix: k8s dependency version: emicklei/go-restful
`emicklei/go-restful` is used by:
- `k8s.io/client-go`  (Godeps)

Refs:
- e121606b0d/Godeps/Godeps.json
2017-06-12 22:15:33 +02:00
Fernandez Ludovic
a7297b49a4 fix: Prometheus dependencies
Prometheus is related to `go-kit/kit`.
`go-kit/kit` doesn't have a dependency manager.

We use `go-kit/kit` v0.3.0 (15 Nov 2016).

We must explicitly declare the Prometheus dependencies.
Prometheus doesn't have a dependency manager either.
Use the commit dates to pin all the hashes.

refs:
- go-kit
  - https://github.com/go-kit/kit/tree/v0.3.0 (15 Nov 2016)
- Prometheus
  - https://github.com/prometheus/client_golang/commits/master
  - 08fd2e1237 (Apr 1, 2017)
  - https://github.com/prometheus/common/commits/master
  - 49fee292b2 (Feb 20, 2017)
  - https://github.com/prometheus/client_model/commits/master
  - 6f38060186 (Feb 16, 2017, master, HEAD)
  - https://github.com/prometheus/procfs/commits/master
  - a1dba9ce8b (Feb 16, 2017)
2017-06-12 22:15:33 +02:00
Zachary Seguin
3eaeb81831 Adds definitions to backend kv template for health checking 2017-06-12 21:54:08 +02:00
Alex Antonov
7d6c778211 Enhanced flexibility in Consul Catalog configuration 2017-06-12 21:18:55 +02:00
Fernandez Ludovic
9c27a98821 refactor: move Marathon client mock.
refactor: remove old Marathon mock.
refactor: generate new Marathon mock.

mockery -recursive -dir=vendor/github.com/gambol99/ -name=Marathon -output=provider/marathon/mocks
2017-06-12 20:27:54 +02:00
djalal
ad54c5a278 drop "slave" wording for "worker"
Traefik should follow modern IT trends, and use manager/leader/worker/agent, etc. instead of "master/slave".

e.g jenkinsci/jenkins#2007 (https://issues.jenkins-ci.org/browse/JENKINS-27268)

NB: of course, it can only apply where possible, since backends like Mesos should retain their own concepts, and not add more confusion.
2017-06-12 20:07:39 +02:00
Fernandez Ludovic
96939e2990 chore: Enhance GitHub issue template. 2017-06-12 19:29:23 +02:00
Fernandez Ludovic
5268db47a1 fix: glide go-marathon 2017-06-11 21:44:36 +02:00
Drew Wells
3048509807 enable TLS client forwarding
Copies the incoming TLS client certificate to the outgoing
request. The backend can then use this certificate for
client authentication, e.g. Kubernetes client certificate authentication.
2017-06-11 15:24:29 +02:00
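A rough sketch of the idea behind this change, using a plain `httputil.ReverseProxy`; the header name and backend URL are illustrative assumptions, not Traefik's exact implementation:

```go
package main

import (
	"encoding/base64"
	"encoding/pem"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, _ := url.Parse("https://backend.internal:8443") // hypothetical backend
	proxy := httputil.NewSingleHostReverseProxy(backend)

	baseDirector := proxy.Director
	proxy.Director = func(req *http.Request) {
		baseDirector(req)
		// If the incoming connection presented a client certificate, forward it
		// to the backend as a header (header name is illustrative).
		if req.TLS != nil && len(req.TLS.PeerCertificates) > 0 {
			block := pem.EncodeToMemory(&pem.Block{
				Type:  "CERTIFICATE",
				Bytes: req.TLS.PeerCertificates[0].Raw,
			})
			req.Header.Set("X-Forwarded-Tls-Client-Cert", base64.StdEncoding.EncodeToString(block))
		}
	}

	// Serving with TLS and ClientAuth enabled is omitted here for brevity.
	http.ListenAndServe(":8080", proxy)
}
```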
Fernandez Ludovic
7399a83c74 refactor: Use Statefull interface in access log. 2017-06-09 23:55:49 +02:00
Fernandez Ludovic
18c3d8dc62 test: add AddPrefix test. 2017-06-09 23:55:49 +02:00
Fernandez Ludovic
2d1ddcf28b test: HealthCheck review. 2017-06-09 23:55:49 +02:00
Fernandez Ludovic
a1a0420314 test: use MustNewRequest. 2017-06-09 23:55:49 +02:00
Fernandez Ludovic
2223587fc0 refactor: ordering imports. 2017-06-09 23:55:49 +02:00
Fernandez Ludovic
63f9bccf9f refactor: fix typos.
refactor: typo in whitelister file name.
2017-06-09 23:55:49 +02:00
Fernandez Ludovic
18d11e02d0 test: simplify stripPrefix* tests. 2017-06-09 23:55:49 +02:00
Richard Quintin
a71d69cc3c make the cookie name unique to the backend being served 2017-06-07 20:18:16 +02:00
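A hedged sketch of the idea: derive the sticky-session cookie name from the backend name so that two backends behind the same domain never share a cookie. The prefix and hashing scheme are illustrative, not Traefik's exact code:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// stickyCookieName builds a cookie name that is unique per backend by hashing
// the backend name (hypothetical naming scheme).
func stickyCookieName(backendName string) string {
	sum := sha1.Sum([]byte(backendName))
	return fmt.Sprintf("_TRAEFIK_BACKEND_%x", sum[:8])
}

func main() {
	fmt.Println(stickyCookieName("backend-app1"))
	fmt.Println(stickyCookieName("backend-app2")) // different backend, different cookie
}
```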
Marco Jantke
e007bb7546 add metrics for backend_retries_total 2017-06-07 08:56:50 +02:00
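For context, a minimal sketch of such a counter using the Prometheus Go client; the metric and label names follow the commit subject but are assumptions, not copied from Traefik:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// backendRetries counts retries, partitioned by backend name.
var backendRetries = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "backend_retries_total",
		Help: "How many request retries happened in total, partitioned by backend.",
	},
	[]string{"backend"},
)

func main() {
	prometheus.MustRegister(backendRetries)

	// Wherever a retry is performed, increment the counter for that backend.
	backendRetries.WithLabelValues("backend-app1").Inc()

	// Expose the metrics endpoint so Prometheus can scrape the counter.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8081", nil)
}
```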
Mihai Todor
7874ffd506 Minor Health UI fixes
- format the y-axis ticks as integers on the Total Status Code
Count chart
- prevent the Average Response Time chart from showing negative
values on the y-axis
- remove the deprecated transitionDuration field
- set the transition duration to 0 on the Average Response Time
chart to avoid triggering an NVD3 marker placement bug
2017-06-06 22:40:39 +02:00
Richard Shepherd
a9216e24f5 Add JSON as access logging format 2017-06-06 16:26:22 +02:00
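A small sketch of what JSON-formatted access logging looks like with logrus's `JSONFormatter`; the field names are illustrative, not Traefik's exact access-log schema:

```go
package main

import (
	"os"
	"time"

	"github.com/sirupsen/logrus"
)

func main() {
	accessLog := logrus.New()
	accessLog.Out = os.Stdout
	accessLog.Formatter = &logrus.JSONFormatter{} // one JSON object per log entry

	// One access-log entry per request; each field becomes a JSON key.
	accessLog.WithFields(logrus.Fields{
		"ClientHost":       "192.0.2.10",
		"RequestMethod":    "GET",
		"RequestPath":      "/api/health",
		"DownstreamStatus": 200,
		"Duration":         (42 * time.Millisecond).Nanoseconds(),
	}).Info("")
}
```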
Alex Antonov
39388a2199 Exported getSubDomain function from Marathon provider to be able to use it in custom templates 2017-06-06 14:31:30 +02:00
Fernandez Ludovic
71111708d4 Merge branch 'v1.3' into master 2017-06-02 19:56:15 +02:00
Fernandez Ludovic
d5efc99876 doc: Enhance GitHub issue template. 2017-06-01 21:53:05 -07:00
Fernandez Ludovic
1e84e77a67 Merge branch 'v1.3' into master 2017-06-01 20:53:02 +02:00
Fernandez Ludovic
d6b448f430 Merge branch 'v1.3' into master 2017-05-31 23:29:23 +02:00
Fernandez Ludovic
e426b27581 refactor: valid Git branch name must work. 2017-05-31 10:34:00 +02:00
Fernandez Ludovic
b6c5c14447 refactor: Enhance rules tests.
- refactor: change incorrect package.
- refactor: test readability.
2017-05-31 10:34:00 +02:00
Fernandez Ludovic
cbccdd51c5 refactor: Logs & errors review.
- log & error: remove format if not necessary, add if necessary.
- add constants for k8s annotations.
- fix typos
2017-05-30 23:33:27 +02:00
Fernandez Ludovic
994e135368 refactor: typo in misspelling. 2017-05-26 16:42:26 -07:00
Timo Reimann
87e5cda506 Update CONTRIBUTING.md.
- Go 1.8 is the current minimum requirement.
- The main binary moved to cmd/traefik.
- Remove obsolete gox example.
2017-05-25 00:18:22 +02:00
Fernandez Ludovic
2833d68f15 Merge branch 'v1.3' into merge-back-1_3_0-rc3 2017-05-24 20:39:38 +02:00
Richard Shepherd
64e8b31d49 Switch access logging to logrus 2017-05-24 14:20:42 +02:00
Igor
2643271053 Use more inclusive language in README.md {guys => folks}
While usage of the word "guys" can be considered gender neutral depending on location and context, it is widely considered to be gendered -- and more inclusive options are readily available. 💜

References:

* [When is "guys" gender neutral? I did a survey! -- Julia Evans](https://jvns.ca/blog/2013/12/27/guys-guys-guys/)
2017-05-22 21:14:43 +02:00
Ludovic Fernandez
5b36b274a3 doc(maintainer): add contributor/needs-resolve-conflicts
Replace `contributor/needs-rebase` by `contributor/needs-resolve-conflicts`.
2017-05-22 20:05:10 +02:00
Fernandez Ludovic
8ad31d6eb4 Merge remote-tracking branch 'upstream/v1.3' into merge-v1_3 2017-05-22 11:38:28 +02:00
Brian Akins
13e8a875cf Allow overriding port for backend healthchecks 2017-05-19 17:48:16 +02:00
Ed Robinson
c7281df230 Update usage of .local with .minikube in k8s docs
Fixes #1521
2017-05-19 17:02:39 +02:00
MaZderMind
5f0b215e90 IP Whitelists for Frontend (with Docker- & Kubernetes-Provider Support) 2017-05-19 15:19:29 +02:00
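A minimal sketch of a frontend IP whitelist as an HTTP middleware; the CIDR handling below is illustrative, not Traefik's actual implementation:

```go
package main

import (
	"net"
	"net/http"
)

// ipWhitelist allows a request through only if its source IP falls inside
// one of the given CIDR ranges.
func ipWhitelist(allowedCIDRs []string, next http.Handler) (http.Handler, error) {
	var nets []*net.IPNet
	for _, cidr := range allowedCIDRs {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		nets = append(nets, ipnet)
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			host = r.RemoteAddr // no port present
		}
		ip := net.ParseIP(host)
		for _, n := range nets {
			if ip != nil && n.Contains(ip) {
				next.ServeHTTP(w, r)
				return
			}
		}
		http.Error(w, "Forbidden", http.StatusForbidden)
	}), nil
}

func main() {
	handler, err := ipWhitelist([]string{"10.0.0.0/8", "192.168.0.0/16"}, http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) },
	))
	if err != nil {
		panic(err)
	}
	http.ListenAndServe(":8080", handler)
}
```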
Timo Reimann
55f610422a Install github.com/stretchr/testify/require. 2017-05-19 15:19:29 +02:00
Timo Reimann
a04ef15bcd Issue template: Emphasize SO and Slack for support questions.
- Be more explicit in the purpose of the issue tracker.
- Move SO before Slack since it seems preferable.
- Refer to SO and Slack on first question again.
2017-05-19 10:39:05 +02:00
Ludovic Fernandez
81754840ff Update README.md 2017-05-18 23:17:16 +02:00
Fernandez Ludovic
2610023131 refactor: Deflake and Try package
- feat: add CI multiplier
- refactor: readability
- feat: custom Sleep function
- refactor(integration): use custom Sleep
- feat: show Try progress
- feat(try): try response with status code
- refactor(try): use a dedicate package.
- refactor(integration): Try everywhere
- feat(CI): pass CI env var to Integration Tests.
- refactor(acme): increase timeout.
- feat(acme): show Traefik logs
- refactor(integration): use `http.StatusXXX`
- refactor: remove Sleep
2017-05-18 22:34:15 +02:00
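A rough sketch of the kind of polling helper such a Try package provides: retry an HTTP GET until the wanted status code appears or a timeout expires. The function name and intervals are assumptions, not the package's real API:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// tryGetStatus polls url until it answers with the wanted status or the
// timeout elapses, which helps deflake integration tests.
func tryGetStatus(url string, timeout time.Duration, want int) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == want {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // a CI multiplier could scale this interval
	}
	return fmt.Errorf("%s did not return %d within %s", url, want, timeout)
}

func main() {
	if err := tryGetStatus("http://127.0.0.1:8080/ping", 10*time.Second, http.StatusOK); err != nil {
		fmt.Println(err)
	}
}
```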
Ludovic Fernandez
ff3481f06b Merge pull request #1613 from containous/merge-v1.3.0-rc2-master
Merge v1.3.0-rc2 master
2017-05-17 12:41:26 +02:00
Emile Vauge
f8ea19d29c Merge branch 'v1.3' into merge-v1.3.0-rc2-master 2017-05-17 11:44:53 +02:00
Ludovic Fernandez
3b8ebf7d33 Merge pull request #1603 from antoine-aumjaud/patch-1
Small toml documentation update
2017-05-17 10:03:57 +02:00
Antoine Aumjaud
5e14f20786 Update documentation
fix some "errors"
2017-05-17 09:45:36 +02:00
Thomas Recloux
96b19deac5 Merge pull request #1616 from containous/remove-trecloux-maintainers
Remove Thomas Recloux from maintainers
2017-05-16 23:42:16 +02:00
Emile Vauge
a6aff7c85c Remove Thomas Recloux from maintainers 2017-05-16 23:20:29 +02:00
Emile Vauge
1310347395 Remove Russell from maintainers (#1614)
It's been a pleasure
2017-05-16 18:10:28 +01:00
Ludovic Fernandez
40c94d80d7 Merge pull request #1582 from ldez/doc/maintainer-labels
doc: add labels documentation.
2017-05-16 17:59:30 +02:00
Fernandez Ludovic
921a704c24 doc: add labels documentation. 2017-05-16 14:21:26 +02:00
Emile Vauge
3f490f95c6 Merge pull request #1589 from containous/add-ldez-maintainers
Add @ldez to maintainers
2017-05-16 11:26:54 +02:00
Emile Vauge
24d80b1909 Add @ldez to maintainers
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-05-16 11:06:59 +02:00
Ludovic Fernandez
78b2fba033 Merge pull request #1595 from ldez/refactor/remove-legacy-ci-data
chore(CI): remove old CI data.
2017-05-12 20:32:29 +02:00
Fernandez Ludovic
218b76275c chore(CI): remove old CI data. 2017-05-12 19:16:36 +02:00
Ludovic Fernandez
cf5b6d837f Merge pull request #1594 from ldez/doc/semaphoreci-badge
doc(CI): Add SemaphoreCI badge.
2017-05-12 19:04:00 +02:00
Fernandez Ludovic
0babc7bb64 doc(CI): Add SemaphoreCI badge. 2017-05-12 18:46:02 +02:00
Ludovic Fernandez
8a551d91fd Merge pull request #1573 from atbore-phx/ci-retry-tests
[CI] retry function
2017-05-12 17:41:21 +02:00
Attilio Borello
eeed035ef0 added retry function to validate script 2017-05-12 17:18:47 +02:00
Attilio Borello
33404a7772 added retry function to tests script 2017-05-12 17:18:47 +02:00
Ludovic Fernandez
bd90745528 Merge pull request #1593 from atbore-phx/ci-switch
[CI] removed unit and integration tests from travis
2017-05-12 17:17:00 +02:00
Attilio Borello
ede1212cb0 removed unit and integration tests from travis 2017-05-12 16:19:35 +02:00
Ludovic Fernandez
2dcbc01e51 Merge pull request #1544 from maxwo/proxy-dev-mode
Proxy in dev mode
2017-05-12 10:11:58 +02:00
Maxime Wojtczak
61ba50fac9 feat(Dev proxy) : Add proxy to localhost:8080 in dev mode. 2017-05-12 09:41:22 +02:00
Ludovic Fernandez
b24b5e20b4 Merge pull request #1548 from timoreimann/kubernetes-ignore-missing-pass-host-header-annotation
Merge v1.3 branch into master [2017-05-11]
2017-05-12 00:36:52 +02:00
Timo Reimann
3112432480 Merge remote-tracking branch 'upstream/v1.3' into HEAD 2017-05-11 21:10:20 +02:00
Ludovic Fernandez
94f5b0d9ff Merge pull request #1571 from containous/restore-access-logger
Restore: First stage of access logging middleware.
2017-05-11 17:24:26 +02:00
Fernandez Ludovic
d2c8824902 refactor: restore "First stage of access logging middleware."
This reverts commit 82651985c4.
2017-05-11 16:27:13 +02:00
Ludovic Fernandez
db09007dbc Merge pull request #1558 from Stibbons/yarnpkg
prefer yarnpkg over yarn
2017-05-10 18:26:05 +02:00
Gaetan Semet
5b2e8990f1 prefer yarnpkg over yarn
to avoid a conflict with the Hadoop Yarn CLI.

I don't know the best practice, but I do
have Apache Yarn installed on my machine, so
I get this conflict. Of course this conflict does
not arise when building within Docker.

https://github.com/yarnpkg/yarn/issues/2337
Signed-off-by: Gaetan Semet <gaetan@xeberon.net>
2017-05-10 17:35:17 +02:00
Ludovic Fernandez
2f6068decc Merge pull request #1580 from atbore-phx/docker-light
[CI] Reduce size of Docker Images
2017-05-10 17:23:37 +02:00
Attilio Borello
1e591dd188 clean up apt-cache in webui/Dockerfile 2017-05-10 11:24:19 +02:00
Attilio Borello
6838a81e50 replaced docker images with alpine if available (nginx, rabbitmq) 2017-05-10 11:24:19 +02:00
Ludovic Fernandez
ceef5e39b7 Merge pull request #1572 from atbore-phx/ci-docker-version
[CI] set Docker version
2017-05-09 16:04:08 +02:00
Attilio Borello
ef339af623 added DOCKER_VERSION variable 2017-05-09 11:25:25 +02:00
Ludovic Fernandez
acc7865542 Merge pull request #1554 from ldez/feat-push-force-pr
feat(github): push force PR branch.
2017-05-05 17:19:30 +02:00
Fernandez Ludovic
c00c240c14 feat(github): push force contributor branch. 2017-05-05 16:19:23 +02:00
4346 changed files with 200108 additions and 791969 deletions

24
.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,24 @@
provider/kubernetes/** @containous/kubernetes
provider/rancher/** @containous/rancher
provider/marathon/** @containous/marathon
provider/docker/** @containous/docker
docs/user-guide/kubernetes.md @containous/kubernetes
docs/user-guide/marathon.md @containous/marathon
docs/user-guide/swarm.md @containous/docker
docs/user-guide/swarm-mode.md @containous/docker
docs/configuration/backends/docker.md @containous/docker
docs/configuration/backends/kubernetes.md @containous/kubernetes
docs/configuration/backends/marathon.md @containous/marathon
docs/configuration/backends/rancher.md @containous/rancher
examples/k8s/ @containous/kubernetes
examples/compose-k8s.yaml @containous/kubernetes
examples/k8s.namespace.yaml @containous/kubernetes
examples/compose-rancher.yml @containous/rancher
examples/compose-marathon.yml @containous/marathon
vendor/github.com/gambol99/go-marathon @containous/marathon
vendor/github.com/rancher @containous/rancher
vendor/k8s.io/ @containous/kubernetes


@@ -1,150 +0,0 @@
# Contributing
### Building
You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` (Method 2) in order to build traefik. For changes to its dependencies, the `glide` dependency management tool and `glide-vc` plugin are required.
#### Method 1: Using `Docker` and `Makefile`
You need to run the `binary` target. This will create binaries for Linux platform in the `dist` folder.
```bash
$ make binary
docker build -t "traefik-dev:no-more-godep-ever" -f build.Dockerfile .
Sending build context to Docker daemon 295.3 MB
Step 0 : FROM golang:1.7
---> 8c6473912976
Step 1 : RUN go get github.com/Masterminds/glide
[...]
docker run --rm -v "/var/run/docker.sock:/var/run/docker.sock" -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/emile/dev/go/src/github.com/containous/traefik/"dist":/go/src/github.com/containous/traefik/"dist"" "traefik-dev:no-more-godep-ever" ./script/make.sh generate binary
---> Making bundle: generate (in .)
removed 'gen.go'
---> Making bundle: binary (in .)
$ ls dist/
traefik*
```
#### Method 2: Using `go`
###### Setting up your `go` environment
- You need `go` v1.7+
- It is recommended you clone Træfik into a directory like `~/go/src/github.com/containous/traefik` (This is the official golang workspace hierarchy, and will allow dependencies to resolve properly)
- This will allow your `GOPATH` and `PATH` variable to be set to `~/go` via:
```bash
$ export GOPATH=~/go
$ export PATH=$PATH:$GOPATH/bin
```
This can be verified via `$ go env`
- You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`
- You need `go-bindata` to be able to use `go generate` command (needed to build) : `$ go get github.com/jteeuwen/go-bindata/...` (Please note, the ellipses are required)
#### Setting up `glide` and `glide-vc` for dependency management
- Glide is not required for building; however, it is necessary to modify dependencies (i.e., add, update, or remove third-party packages)
- Glide can be installed either via homebrew: `$ brew install glide` or via the official glide script: `$ curl https://glide.sh/get | sh`
- The glide plugin `glide-vc` must be installed from source: `go get github.com/sgotti/glide-vc`
If you want to add a dependency, use `$ glide get` to have glide put it into the vendor folder and update the glide manifest/lock files (`glide.yaml` and `glide.lock`, respectively). A following `glide-vc` run should be triggered to trim down the size of the vendor folder. The final result must be committed into VCS.
Dependencies for the integration tests in the `integration` folder are managed in a separate `integration/glide.yaml` file using the same toolset.
Care must be taken to choose the right arguments to `glide` when dealing with either main or integration test dependencies, or otherwise risk ending up with a broken build. For that reason, the helper script `script/glide.sh` encapsulates the gory details and conveniently calls `glide-vc` as well. Call it without parameters for basic usage instructions.
Here's a full example:
```bash
# install the new main dependency github.com/foo/bar and minimize vendor size
$ ./script/glide.sh get github.com/foo/bar
# install another dependency, this time for the integration tests
$ ( cd integration && ../script/glide.sh get github.com/baz/quuz )
# generate (Only required to integrate other components such as web dashboard)
$ go generate
# Standard go build
$ go build
# Using gox to build multiple platform
$ gox "linux darwin" "386 amd64 arm" \
-output="dist/traefik_{{.OS}}-{{.Arch}}" \
./cmd/traefik
# run other commands like tests
```
### Tests
##### Method 1: `Docker` and `make`
You can run unit tests using the `test-unit` target and the
integration test using the `test-integration` target.
```bash
$ make test-unit
docker build -t "traefik-dev:your-feature-branch" -f build.Dockerfile .
# […]
docker run --rm -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/vincent/src/github/vdemeester/traefik/dist:/go/src/github.com/containous/traefik/dist" "traefik-dev:your-feature-branch" ./script/make.sh generate test-unit
---> Making bundle: generate (in .)
removed 'gen.go'
---> Making bundle: test-unit (in .)
+ go test -cover -coverprofile=cover.out .
ok github.com/containous/traefik 0.005s coverage: 4.1% of statements
Test success
```
For development purposes, you can specify which tests to run by using:
```bash
# Run every tests in the MyTest suite
TESTFLAGS="-check.f MyTestSuite" make test-integration
# Run the test "MyTest" in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.MyTest" make test-integration
# Run every tests starting with "My", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.My" make test-integration
# Run every tests ending with "Test", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
```
More: https://labix.org/gocheck
##### Method 2: `go`
- Tests can be run from the cloned directory, by `$ go test ./...` which should return `ok` similar to:
```
ok _/home/vincent/src/github/vdemeester/traefik 0.004s
```
### Documentation
The [documentation site](http://docs.traefik.io/) is built with [mkdocs](http://mkdocs.org/)
First make sure you have python and pip installed
```shell
$ python --version
Python 2.7.2
$ pip --version
pip 1.5.2
```
Then install mkdocs with pip
```shell
$ pip install mkdocs
```
To test documentation locally run `mkdocs serve` in the root directory, this should start a server locally to preview your changes.
```shell
$ mkdocs serve
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
[I 160505 22:31:24 server:281] Serving on http://127.0.0.1:8000
[I 160505 22:31:24 handlers:59] Start watching changes
[I 160505 22:31:24 handlers:61] Start detecting changes
```


@@ -1,16 +1,29 @@
<!--
PLEASE READ THIS MESSAGE.
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
For other type of questions, consider using one of:
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com
- StackOverflow: https://stackoverflow.com/questions/tagged/traefik
-->
### Do you want to request a *feature* or report a *bug*?
<!--
If you intend to ask a support question: DO NOT FILE AN ISSUE.
-->
### What did you do?
<!--
HOW TO WRITE A GOOD ISSUE?
- if it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- Respect the issue template as much as possible.
- If it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
@@ -19,14 +32,6 @@ HOW TO WRITE A GOOD ISSUE?
-->
### Do you want to request a *feature* or report a *bug*?
### What did you do?
### What did you expect to see?
@@ -37,6 +42,12 @@ HOW TO WRITE A GOOD ISSUE?
### Output of `traefik version`: (_What version of Traefik are you using?_)
<!--
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
-->
```
(paste your output here)
```

68
.github/ISSUE_TEMPLATE/bugs.md vendored Normal file

@@ -0,0 +1,68 @@
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com
-->
### Do you want to request a *feature* or report a *bug*?
Bug
### What did you do?
<!--
HOW TO WRITE A GOOD ISSUE?
- Respect the issue template as much as possible.
- If it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->
### What did you expect to see?
### What did you see instead?
### Output of `traefik version`: (_What version of Traefik are you using?_)
<!--
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
-->
```
(paste your output here)
```
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
```toml
# (paste your configuration here)
```
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output in debug mode (`--debug` switch)
```
(paste your output here)
```

32
.github/ISSUE_TEMPLATE/features.md vendored Normal file

@@ -0,0 +1,32 @@
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com
-->
### Do you want to request a *feature* or report a *bug*?
Feature
### What did you expect to see?
<!--
HOW TO WRITE A GOOD ISSUE?
- Respect the issue template as much as possible.
- If it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->


@@ -16,8 +16,21 @@ HOW TO WRITE A GOOD PULL REQUEST?
-->
### Description
### What does this PR do?
<!--
Briefly describe the pull request in a few paragraphs.
-->
<!-- A brief description of the change being made with this pull request. -->
### Motivation
<!-- What inspired you to submit this pull request? -->
### More
- [ ] Added/updated tests
- [ ] Added/updated documentation
### Additional Notes
<!-- Anything else we should know when reviewing? -->


@@ -0,0 +1,7 @@
### What does this PR do?
Merge v{{.Version}} into master
### Motivation
Be sync.


@@ -0,0 +1,7 @@
### What does this PR do?
Prepare release v{{.Version}}.
### Motivation
Create a new release.

26
.github/cpr.sh vendored

@@ -1,26 +0,0 @@
#!/bin/sh
#
# git config --global alias.cpr '!sh .github/cpr.sh'
set -e # stop on error
usage="$(basename "$0") pr -- Checkout a Pull Request locally"
if [ "$#" -ne 1 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
git remote add $remote git@github.com:$remote/traefik.git
git fetch $remote $branch
git checkout -t -b "$pr--$branch" $remote/$branch

27
.github/rmpr.sh vendored

@@ -1,27 +0,0 @@
#!/bin/sh
#
# git config --global alias.rmpr '!sh .github/rmpr.sh'
set -e # stop on error
usage="$(basename "$0") pr -- remove a Pull Request local branch & remote"
if [ "$#" -ne 1 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
# clean
git checkout $initial
git branch -D "$pr--$branch"
git remote remove $remote

36
.github/rpr.sh vendored

@@ -1,36 +0,0 @@
#!/bin/sh
#
# git config --global alias.rpr '!sh .github/rpr.sh'
set -e # stop on error
usage="$(basename "$0") pr remote/branch -- rebase a Pull Request against a remote branch"
if [ "$#" -ne 2 ]; then
echo "Illegal number of parameters"
echo "$usage" >&2
exit 1
fi
command -v jq >/dev/null 2>&1 || { echo "I require jq but it's not installed. Aborting." >&2; exit 1; }
set -x # echo on
initial=$(git rev-parse --abbrev-ref HEAD)
pr=$1
base=$2
remote=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.repo.owner.login)
branch=$(curl -s https://api.github.com/repos/containous/traefik/pulls/$pr | jq -r .head.ref)
clean ()
{
git checkout $initial
.github/rmpr.sh $pr
}
trap clean EXIT
.github/cpr.sh $pr
git rebase $base
git push --force-with-lease $remote "$pr--$branch"

7
.gitignore vendored

@@ -1,7 +1,7 @@
/dist
/autogen/gen.go
.idea
.intellij
/autogen/genstatic/gen.go
.idea/
.intellij/
*.iml
/traefik
/traefik.toml
@@ -11,3 +11,4 @@
*.log
*.exe
.DS_Store
/examples/acme/acme.json


@@ -2,7 +2,7 @@
set -e
sudo -E apt-get -yq update
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-engine=${DOCKER_VERSION}*
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*
docker version
pip install --user -r requirements.txt


@@ -1,12 +1,8 @@
#!/usr/bin/env bash
set -e
export secure='btt4r13t09gQlHb6gYrvGC2yGCMMHfnp1Mz1RQedc4Mpf/FfT8aE6xmK2a2i9CCvskjrP0t/BFaS4yxIURjnFRn+ugQIEa0pLspB9UJArW/vgOSpIWM9/OQ/fg8z5XuMxN6Md4DL1/iLypMNSageA1x0TRdt89+D1N1dALpg5XRCXLFbC84TLi0gjlFuib9ibPKzEhLT+anCRJ6iZMzeupDSoaCVbAtJMoDvXw4+4AcRZ1+k4MybBLyCib5boaEOt4pTT88mz4Kk0YaMwPVJyg9Qv36VqyUcPS09Yd95LuyVQ4+tZt8Y1ccbIzULsK+sLM3hLCzxlmlpN3dQBlZJiiRtQde0mgGAKyC0P0A1XjuDTywcsa5edB+fTk1Dsewz9xZ9V0NmMz8t+UNZnaSsAPga9i86jULbXUUwMVSzVRc+Xgx02liB/8qI1xYC9FM6ilStt7rn7mF0k3KbiWhcptgeXjO6Lah9FjEKd5w4MXsdUSTi/86rQaLo+kj+XdaTrXCTulKHyRyQEUj+8V1w0oVz7pcGjePHd7y5oU9ByifVQy6sytuFBfRZvugM5bKHo+i0pcWvixrZS42DrzwxZJsspANOvqSe5ifVbvOkfUppQdCBIwptxV5N1b49XPKU3W/w34QJ8xGmKp3TFA7WwVCztriFHjPgiRpB3EG99Bg='
export REPO='containous/traefik'
export DOCKER_VERSION=1.12.6
if VERSION=$(git describe --exact-match --abbrev=0 --tags);
then
export VERSION
@@ -14,7 +10,7 @@ else
export VERSION=''
fi
export CODENAME=raclette
export CODENAME=cancoillotte
export N_MAKE_JOBS=2


@@ -1,17 +1,18 @@
sudo: required
dist: trusty
git:
depth: false
services:
- docker
env:
global:
- secure: btt4r13t09gQlHb6gYrvGC2yGCMMHfnp1Mz1RQedc4Mpf/FfT8aE6xmK2a2i9CCvskjrP0t/BFaS4yxIURjnFRn+ugQIEa0pLspB9UJArW/vgOSpIWM9/OQ/fg8z5XuMxN6Md4DL1/iLypMNSageA1x0TRdt89+D1N1dALpg5XRCXLFbC84TLi0gjlFuib9ibPKzEhLT+anCRJ6iZMzeupDSoaCVbAtJMoDvXw4+4AcRZ1+k4MybBLyCib5boaEOt4pTT88mz4Kk0YaMwPVJyg9Qv36VqyUcPS09Yd95LuyVQ4+tZt8Y1ccbIzULsK+sLM3hLCzxlmlpN3dQBlZJiiRtQde0mgGAKyC0P0A1XjuDTywcsa5edB+fTk1Dsewz9xZ9V0NmMz8t+UNZnaSsAPga9i86jULbXUUwMVSzVRc+Xgx02liB/8qI1xYC9FM6ilStt7rn7mF0k3KbiWhcptgeXjO6Lah9FjEKd5w4MXsdUSTi/86rQaLo+kj+XdaTrXCTulKHyRyQEUj+8V1w0oVz7pcGjePHd7y5oU9ByifVQy6sytuFBfRZvugM5bKHo+i0pcWvixrZS42DrzwxZJsspANOvqSe5ifVbvOkfUppQdCBIwptxV5N1b49XPKU3W/w34QJ8xGmKp3TFA7WwVCztriFHjPgiRpB3EG99Bg=
- REPO: $TRAVIS_REPO_SLUG
- VERSION: $TRAVIS_TAG
- CODENAME: raclette
- CODENAME: cancoillotte
- N_MAKE_JOBS: 2
- DOCKER_VERSION: 1.12.6
script:
- echo "Skipping tests... (Tests are executed on SemaphoreCI)"
@@ -21,23 +22,18 @@ before_deploy:
if ! [ "$BEFORE_DEPLOY_RUN" ]; then
export BEFORE_DEPLOY_RUN=1;
sudo -E apt-get -yq update;
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-engine=${DOCKER_VERSION}*;
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*;
docker version;
pip install --user -r requirements.txt;
make -j${N_MAKE_JOBS} crossbinary-parallel;
make image;
mkdocs build --clean;
tar cfz dist/traefik-${VERSION}.src.tar.gz --exclude-vcs --exclude dist .;
if [ "$TRAVIS_TAG" ]; then
make -j${N_MAKE_JOBS} crossbinary-parallel;
tar cfz dist/traefik-${VERSION}.src.tar.gz --exclude-vcs --exclude dist .;
fi;
curl -sI https://github.com/containous/structor/releases/latest | grep -Fi Location | tr -d '\r' | sed "s/tag/download/g" | awk -F " " '{ print $2 "/structor_linux-amd64"}' | wget --output-document=$GOPATH/bin/structor -i -;
chmod +x $GOPATH/bin/structor;
structor -o containous -r traefik --dockerfile-url="https://raw.githubusercontent.com/containous/traefik/master/docs.Dockerfile" --menu.js-url="https://raw.githubusercontent.com/containous/structor/master/traefik-menu.js.gotmpl" --exp-branch=master --debug;
fi
deploy:
- provider: pages
edge: true
github_token: ${GITHUB_TOKEN}
local_dir: site
skip_cleanup: true
on:
repo: containous/traefik
tags: true
- provider: releases
api_key: ${GITHUB_TOKEN}
file: dist/traefik*
@@ -57,3 +53,11 @@ deploy:
skip_cleanup: true
on:
repo: containous/traefik
- provider: pages
edge: true
github_token: ${GITHUB_TOKEN}
local_dir: site
skip_cleanup: true
on:
repo: containous/traefik
all_branches: true

Binary file not shown.

BIN
.travis/traefiker_rsa.enc Normal file

Binary file not shown.

File diff suppressed because it is too large

260
CONTRIBUTING.md Normal file

@@ -0,0 +1,260 @@
# Contributing
## Building
You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` (Method 2) in order to build Traefik.
For changes to its dependencies, the `dep` dependency management tool is required.
### Method 1: Using `Docker` and `Makefile`
You need to run the `binary` target. This will create binaries for Linux platform in the `dist` folder.
```bash
$ make binary
docker build -t "traefik-dev:no-more-godep-ever" -f build.Dockerfile .
Sending build context to Docker daemon 295.3 MB
Step 0 : FROM golang:1.9-alpine
---> 8c6473912976
Step 1 : RUN go get github.com/golang/dep/cmd/dep
[...]
docker run --rm -v "/var/run/docker.sock:/var/run/docker.sock" -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/user/go/src/github.com/containous/traefik/"dist":/go/src/github.com/containous/traefik/"dist"" "traefik-dev:no-more-godep-ever" ./script/make.sh generate binary
---> Making bundle: generate (in .)
removed 'gen.go'
---> Making bundle: binary (in .)
$ ls dist/
traefik*
```
### Method 2: Using `go`
##### Setting up your `go` environment
- You need `go` v1.9+
- It is recommended you clone Træfik into a directory like `~/go/src/github.com/containous/traefik` (This is the official golang workspace hierarchy, and will allow dependencies to resolve properly)
- Set your `GOPATH` and `PATH` variables to point at `~/go` via:
```bash
export GOPATH=~/go
export PATH=$PATH:$GOPATH/bin
```
> Note: You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`
- Verify your environment is set up properly by running `$ go env`. Depending on your OS and environment you should see output similar to:
```bash
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/<yourusername>/go"
GORACE=""
## more go env's will be listed
```
##### Build Træfik
Once your environment is set up and the Træfik repository is cloned, you can build Træfik. You need to get `go-bindata` once to be able to use the `go generate` command as part of the build. The steps to build are:
```bash
cd ~/go/src/github.com/containous/traefik
# Get go-bindata. Please note, the ellipses are required
go get github.com/containous/go-bindata/...
# Start build
# generate
# (required to merge non-code components into the final binary, such as the web dashboard and provider's Go templates)
go generate
# Standard go build
go build ./cmd/traefik
# run other commands like tests
```
You will find the Træfik executable in the `~/go/src/github.com/containous/traefik` folder as `traefik`.
### Updating the templates
If you happen to update the provider templates (in `/templates`), you need to run `go generate` to update the `autogen` package.
### Setting up dependency management
[dep](https://github.com/golang/dep) is not required for building; however, it is necessary to modify dependencies (i.e., add, update, or remove third-party packages)
You need to use [dep](https://github.com/golang/dep) >= 0.4.1.
If you want to add a dependency, use `dep ensure -add` to have [dep](https://github.com/golang/dep) put it into the vendor folder and update the dep manifest/lock files (`Gopkg.toml` and `Gopkg.lock`, respectively).
Afterwards, run `make dep-prune` to trim down the size of the vendor folder.
The final result must be committed into VCS.
Here's a full example using dep to add a new dependency:
```bash
# install the new main dependency github.com/foo/bar and minimize vendor size
$ dep ensure -add github.com/foo/bar
# generate (Only required to integrate other components such as web dashboard)
$ go generate
# Standard go build
$ go build ./cmd/traefik
# run other commands like tests
```
### Tests
#### Method 1: `Docker` and `make`
You can run unit tests using the `test-unit` target and the
integration test using the `test-integration` target.
```bash
$ make test-unit
docker build -t "traefik-dev:your-feature-branch" -f build.Dockerfile .
# […]
docker run --rm -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/user/go/src/github/containous/traefik/dist:/go/src/github.com/containous/traefik/dist" "traefik-dev:your-feature-branch" ./script/make.sh generate test-unit
---> Making bundle: generate (in .)
removed 'gen.go'
---> Making bundle: test-unit (in .)
+ go test -cover -coverprofile=cover.out .
ok github.com/containous/traefik 0.005s coverage: 4.1% of statements
Test success
```
For development purposes, you can specify which tests to run by using:
```bash
# Run every test in the MyTest suite
TESTFLAGS="-check.f MyTestSuite" make test-integration
# Run the test "MyTest" in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.MyTest" make test-integration
# Run every test starting with "My", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.My" make test-integration
# Run every test ending with "Test", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
```
More: https://labix.org/gocheck
#### Method 2: `go`
Unit tests can be run from the cloned directory with `$ go test ./...`, which should return `ok`, similar to:
```
ok _/home/user/go/src/github/containous/traefik 0.004s
```
Integration tests must be run from the `integration/` directory and require the `-integration` switch to be passed like this: `$ cd integration && go test -integration ./...`.
## Documentation
The [documentation site](http://docs.traefik.io/) is built with [mkdocs](http://mkdocs.org/)
### Method 1: `Docker` and `make`
You can test documentation using the `docs` target.
```bash
$ make docs
docker build -t traefik-docs -f docs.Dockerfile .
# […]
docker run --rm -v /home/user/go/github/containous/traefik:/mkdocs -p 8000:8000 traefik-docs mkdocs serve
# […]
[I 170828 20:47:48 server:283] Serving on http://0.0.0.0:8000
[I 170828 20:47:48 handlers:60] Start watching changes
[I 170828 20:47:48 handlers:62] Start detecting changes
```
And go to [http://127.0.0.1:8000](http://127.0.0.1:8000).
### Method 2: `mkdocs`
First make sure you have python and pip installed
```shell
$ python --version
Python 2.7.2
$ pip --version
pip 1.5.2
```
Then install mkdocs with pip
```shell
pip install --user -r requirements.txt
```
To test the documentation locally, run `mkdocs serve` in the root directory; this starts a local server where you can preview your changes.
```shell
$ mkdocs serve
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
[I 160505 22:31:24 server:281] Serving on http://127.0.0.1:8000
[I 160505 22:31:24 handlers:59] Start watching changes
[I 160505 22:31:24 handlers:61] Start detecting changes
```
## How to Write a Good Issue
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
For end-user related support questions, refer to one of the following:
- the Traefik community Slack channel: [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
- [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)
### Title
The title must be short and descriptive. (~60 characters)
### Description
- Respect the issue template as much as possible. [template](.github/ISSUE_TEMPLATE.md)
- If it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use [Markdown syntax](https://help.github.com/articles/github-flavored-markdown)
## How to Write a Good Pull Request
### Title
The title must be short and descriptive. (~60 characters)
### Description
- Respect the pull request template as much as possible. [template](.github/PULL_REQUEST_TEMPLATE.md)
- Explain the conditions which led you to write this PR: the context.
- The context should lead to something, an idea or a problem that you're facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use [Markdown syntax](https://help.github.com/articles/github-flavored-markdown)
### Content
- Make it small.
- Do only one thing.
- Write useful descriptions and titles.
- Avoid re-formatting.
- Make sure the code builds.
- Make sure all tests pass.
- Add tests.
- Address review comments in terms of additional commits.
- Do not amend/squash existing ones unless the PR is trivial.
- If a PR involves changes to third-party dependencies, the commits pertaining to the vendor folder and the manifest/lock file(s) should be committed separately.
Read [10 tips for better pull requests](http://blog.ploeh.dk/2015/01/15/10-tips-for-better-pull-requests/).

1395
Gopkg.lock generated Normal file

File diff suppressed because it is too large

197
Gopkg.toml Normal file

@@ -0,0 +1,197 @@
# Gopkg.toml example
#
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
ignored = ["github.com/sirupsen/logrus"]
[[constraint]]
branch = "master"
name = "github.com/ArthurHlt/go-eureka-client"
[[constraint]]
branch = "master"
name = "github.com/BurntSushi/toml"
[[constraint]]
branch = "master"
name = "github.com/BurntSushi/ty"
[[constraint]]
branch = "master"
name = "github.com/NYTimes/gziphandler"
[[constraint]]
branch = "containous-fork"
name = "github.com/abbot/go-http-auth"
source = "github.com/containous/go-http-auth"
[[constraint]]
branch = "master"
name = "github.com/armon/go-proxyproto"
[[constraint]]
name = "github.com/aws/aws-sdk-go"
version = "1.6.18"
[[constraint]]
branch = "master"
name = "github.com/cenk/backoff"
[[constraint]]
name = "github.com/containous/flaeg"
version = "1.0.1"
[[constraint]]
branch = "master"
name = "github.com/containous/mux"
[[constraint]]
name = "github.com/containous/staert"
version = "2.1.0"
[[constraint]]
name = "github.com/containous/traefik-extra-service-fabric"
version = "1.0.5"
[[constraint]]
name = "github.com/coreos/go-systemd"
version = "14.0.0"
[[constraint]]
branch = "master"
name = "github.com/docker/leadership"
source = "github.com/containous/leadership"
[[constraint]]
name = "github.com/docker/libkv"
source = "github.com/abronan/libkv"
[[constraint]]
name = "github.com/eapache/channels"
version = "1.1.0"
[[constraint]]
branch = "master"
name = "github.com/elazarl/go-bindata-assetfs"
[[constraint]]
name = "github.com/go-check/check"
source = "github.com/containous/check"
[[constraint]]
name = "github.com/go-kit/kit"
version = "0.3.0"
[[constraint]]
name = "github.com/influxdata/influxdb"
version = "1.3.7"
[[constraint]]
branch = "master"
name = "github.com/jjcollinge/servicefabric"
[[constraint]]
name = "github.com/mattn/go-shellwords"
version = "1.0.3"
[[constraint]]
name = "github.com/mesosphere/mesos-dns"
source = "https://github.com/containous/mesos-dns.git"
[[constraint]]
branch = "master"
name = "github.com/mitchellh/copystructure"
[[constraint]]
branch = "master"
name = "github.com/mitchellh/hashstructure"
[[constraint]]
branch = "master"
name = "github.com/mitchellh/mapstructure"
[[constraint]]
branch = "master"
name = "github.com/rancher/go-rancher-metadata"
[[constraint]]
branch = "master"
name = "github.com/ryanuber/go-glob"
[[constraint]]
name = "github.com/satori/go.uuid"
version = "1.1.0"
[[constraint]]
branch = "master"
name = "github.com/stvp/go-udp-testing"
[[constraint]]
name = "github.com/vdemeester/shakers"
version = "0.1.0"
[[constraint]]
branch = "containous-fork"
name = "github.com/vulcand/oxy"
source = "https://github.com/containous/oxy.git"
[[constraint]]
name = "github.com/xenolf/lego"
version = "0.4.1"
[[constraint]]
name = "google.golang.org/grpc"
version = "1.5.2"
[[constraint]]
name = "gopkg.in/fsnotify.v1"
version = "1.4.2"
[[constraint]]
name = "k8s.io/client-go"
version = "2.0.0"
[[override]]
name = "github.com/Nvveen/Gotty"
revision = "6018b68f96b839edfbe3fb48668853f5dbad88a3"
source = "github.com/ijc25/Gotty"
[[override]]
name = "github.com/gorilla/websocket"
revision = "a69d9f6de432e2c6b296a947d8a5ee88f68522cf"
[[override]]
# always keep this override
name = "github.com/mailgun/timetools"
revision = "7e6055773c5137efbeb3bd2410d705fe10ab6bfd"
[[override]]
name = "github.com/vulcand/predicate"
revision = "19b9dde14240d94c804ae5736ad0e1de10bf8fe6"
[[override]]
# remove override on master
name = "github.com/coreos/bbolt"
revision = "32c383e75ce054674c53b5a07e55de85332aee14"
[prune]
non-go = true
go-tests = true
unused-packages = true


@@ -1,6 +1,6 @@
The MIT License (MIT)
Copyright (c) 2016-2017 Containous SAS
Copyright (c) 2016-2018 Containous SAS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

154
MAINTAINER.md Normal file

@@ -0,0 +1,154 @@
# Maintainers
## The team
* Emile Vauge [@emilevauge](https://github.com/emilevauge)
* Vincent Demeester [@vdemeester](https://github.com/vdemeester)
* Ed Robinson [@errm](https://github.com/errm)
* Daniel Tomcej [@dtomcej](https://github.com/dtomcej)
* Manuel Zapf [@SantoDE](https://github.com/SantoDE)
* Timo Reimann [@timoreimann](https://github.com/timoreimann)
* Ludovic Fernandez [@ldez](https://github.com/ldez)
* Julien Salleyron [@juliens](https://github.com/juliens)
* Nicolas Mengin [@nmengin](https://github.com/nmengin)
* Marco Jantke [@marco-jantke](https://github.com/marco-jantke)
* Michaël Matur [@mmatur](https://github.com/mmatur)
## PR review process:
* The status `needs-design-review` is only used in complex/heavy/tricky PRs.
* From `1` to `2`: 1 design LGTM in a comment, by a senior maintainer, if needed.
* From `2` to `3`: 3 LGTMs, by any maintainers.
* If needed, a specific maintainer familiar with a particular domain can be requested for the review.
We use [PRM](https://github.com/ldez/prm) to manage pull requests locally.
## Bots
### [Myrmica Lobicornis](https://github.com/containous/lobicornis/)
**Update and Merge Pull Request**
The maintainer giving the final LGTM must add the `status/3-needs-merge` label to trigger the merge bot.
By default, a squash-rebase merge will be carried out.
If you want to preserve commits you must add `bot/merge-method-rebase` before `status/3-needs-merge`.
The status `status/4-merge-in-progress` is only for the bot.
If the bot is not able to perform the merge, the label `bot/need-human-merge` is added.
In this case you must resolve the conflicts/CI/... and then you only need to remove `bot/need-human-merge`.
A maintainer can add `bot/no-merge` on a PR if they want to (temporarily) prevent a merge by the bot.
`bot/light-review` can be used to decrease the required LGTMs from 3 to 1 for:
- vendor updates from previously reviewed PRs
- merging branches into master
- preparing a release
### [Myrmica Bibikoffi](https://github.com/containous/bibikoffi/)
* closes stale issues [cron]
* uses criteria such as the number of days since creation, last update, labels, ...
### [Myrmica Aloba](https://github.com/containous/aloba)
**Manage GitHub labels**
* Add labels on new PR [GitHub WebHook]
* Add milestone to a new PR based on a branch version (1.4, 1.3, ...) [GitHub WebHook]
* Add and remove `contributor/waiting-for-corrections` label when a review request changes [GitHub WebHook]
* Weekly report of PR status on Slack (CaptainPR) [cron]
## Labels
When we open or triage an issue/PR, we must add a `kind/*`, an `area/*` and a `status/*` label.
### Contributor
* `contributor/need-more-information`: we need more information from the contributor in order to analyze a problem.
* `contributor/waiting-for-feedback`: we need the contributor to give us feedback.
* `contributor/waiting-for-corrections`: we need the contributor to take actions in order to move forward with a PR. **(only for PR)** _[bot, humans]_
* `contributor/needs-resolve-conflicts`: use it only when there is some conflicts (and an automatic rebase is not possible). **(only for PR)** _[bot, humans]_
### Kind
* `kind/enhancement`: a new or improved feature.
* `kind/question`: It's a question. **(only for issue)**
* `kind/proposal`: proposal PR/issues need a public debate.
* _Proposal issues_ are design proposal that need to be refined with multiple contributors.
* _Proposal PRs_ are technical prototypes that need to be refined with multiple contributors.
* `kind/bug/possible`: needs analysis to determine whether it's a bug or not. **(only for issues)**
* `kind/bug/confirmed`: we are sure, it's a bug. **(only for issues)**
* `kind/bug/fix`: it's a bug fix. **(only for PR)**
### Resolution
* `resolution/duplicate`: it's a duplicate issue/PR.
* `resolution/declined`: Rule #1 of open-source: no is temporary, yes is forever.
* `WIP`: Work In Progress. **(only for PR)**
### Platform
* `platform/windows`: Windows related.
### Area
* `area/acme`: ACME related.
* `area/api`: Traefik API related.
* `area/authentication`: Authentication related.
* `area/cluster`: Traefik clustering related.
* `area/documentation`: regards improving/adding documentation.
* `area/infrastructure`: related to CI or Traefik building scripts.
* `area/healthcheck`: Health-check related.
* `area/logs`: Traefik logs related.
* `area/middleware`: Middleware related.
* `area/middleware/metrics`: Metrics related. (Prometheus, StatsD, ...)
* `area/oxy`: Oxy related.
* `area/provider`: related to all providers.
* `area/provider/boltdb`: BoltDB related.
* `area/provider/consul`: Consul related.
* `area/provider/docker`: Docker and Swarm related.
* `area/provider/ecs`: ECS related.
* `area/provider/etcd`: Etcd related.
* `area/provider/eureka`: Eureka related.
* `area/provider/file`: file provider related.
* `area/provider/k8s`: Kubernetes related.
* `area/provider/marathon`: Marathon related.
* `area/provider/mesos`: Mesos related.
* `area/provider/rancher`: Rancher related.
* `area/provider/zk`: ZooKeeper related.
* `area/sticky-session`: Sticky session related.
* `area/tls`: TLS related.
* `area/websocket`: WebSocket related.
* `area/webui`: Web UI related.
### Priority
* `priority/P0`: needs hot fix. **(only for issue)**
* `priority/P1`: need to be fixed in next release. **(only for issue)**
* `priority/P2`: need to be fixed in the future. **(only for issue)**
* `priority/P3`: maybe. **(only for issue)**
### PR size
* `size/S`: small PR. **(only for PR)** _[bot only]_
* `size/M`: medium PR. **(only for PR)** _[bot only]_
* `size/L`: Large PR. **(only for PR)** _[bot only]_
### Status - Workflow
The `status/*` labels represent the desired state in the workflow.
* `status/0-needs-triage`: all new issue or PR have this status. _[bot only]_
* `status/1-needs-design-review`: need a design review. **(only for PR)**
* `status/2-needs-review`: need a code/documentation review. **(only for PR)**
* `status/3-needs-merge`: ready to merge. **(only for PR)**
* `status/4-merge-in-progress`: merge in progress. _[bot only]_


@@ -7,23 +7,29 @@ TRAEFIK_ENVS := \
-e VERBOSE \
-e VERSION \
-e CODENAME \
-e TESTDIRS
-e TESTDIRS \
-e CI \
-e CONTAINER=DOCKER # Indicator for integration tests that we are running inside a container.
SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/' | grep -v '^integration/vendor/')
SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/')
BIND_DIR := "dist"
TRAEFIK_MOUNT := -v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/containous/traefik/$(BIND_DIR)"
GIT_BRANCH := $(subst heads/,,$(shell git rev-parse --abbrev-ref HEAD 2>/dev/null))
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(subst /,-,$(GIT_BRANCH)))
REPONAME := $(shell echo $(REPO) | tr '[:upper:]' '[:lower:]')
TRAEFIK_IMAGE := $(if $(REPONAME),$(REPONAME),"containous/traefik")
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -v "/var/run/docker.sock:/var/run/docker.sock")
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -e "TEST_CONTAINER=1" -v "/var/run/docker.sock:/var/run/docker.sock")
TRAEFIK_DOC_IMAGE := traefik-docs
DOCKER_BUILD_ARGS := $(if $(DOCKER_VERSION), "--build-arg=DOCKER_VERSION=$(DOCKER_VERSION)",)
DOCKER_RUN_OPTS := $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(DOCKER_RUN_OPTS)
DOCKER_RUN_TRAEFIK_NOTTY := docker run $(INTEGRATION_OPTS) -i $(DOCKER_RUN_OPTS)
DOCKER_RUN_DOC_PORT := 8000
DOCKER_RUN_DOC_MOUNT := -v $(CURDIR):/mkdocs
DOCKER_RUN_DOC_OPTS := --rm $(DOCKER_RUN_DOC_MOUNT) -p $(DOCKER_RUN_DOC_PORT):8000
print-%: ; @echo $*=$($*)
@@ -65,9 +71,10 @@ test-unit: build ## run the unit tests
test-integration: build ## run the integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary test-integration
TEST_HOST=1 ./script/make.sh test-integration
validate: build ## validate gofmt, golint and go vet
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-glide validate-gofmt validate-govet validate-golint validate-misspell validate-vendor
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-gofmt validate-govet validate-golint validate-misspell validate-vendor validate-autogen
build: dist
docker build $(DOCKER_BUILD_ARGS) -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
@@ -81,15 +88,27 @@ build-no-cache: dist
shell: build ## start a shell inside the build env
$(DOCKER_RUN_TRAEFIK) /bin/bash
image: binary ## build a docker traefik image
image-dirty: binary ## build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .
image: clear-static binary ## clean up static directory and build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .
docs: docs-image
docker run $(DOCKER_RUN_DOC_OPTS) $(TRAEFIK_DOC_IMAGE) mkdocs serve
docs-image:
docker build -t $(TRAEFIK_DOC_IMAGE) -f docs.Dockerfile .
clear-static:
rm -rf static
dist:
mkdir dist
run-dev:
go generate
go build
go build ./cmd/traefik
./traefik
generate-webui: build-webui
@@ -106,9 +125,14 @@ fmt:
gofmt -s -l -w $(SRCS)
pull-images:
for f in $(shell find ./integration/resources/compose/ -type f); do \
docker-compose -f $$f pull; \
done
grep --no-filename -E '^\s+image:' ./integration/resources/compose/*.yml | awk '{print $$2}' | sort | uniq | xargs -P 6 -n 1 docker pull
dep-ensure:
dep ensure -v
./script/prune-dep.sh
dep-prune:
./script/prune-dep.sh
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)

149
README.md

@@ -3,7 +3,7 @@
<img src="docs/img/traefik.logo.png" alt="Træfik" title="Træfik" />
</p>
[![Build Status](https://travis-ci.org/containous/traefik.svg?branch=master)](https://travis-ci.org/containous/traefik)
[![Build Status SemaphoreCI](https://semaphoreci.com/api/v1/containous/traefik/branches/master/shields_badge.svg)](https://semaphoreci.com/containous/traefik)
[![Docs](https://img.shields.io/badge/docs-current-brightgreen.svg)](https://docs.traefik.io)
[![Go Report Card](https://goreportcard.com/badge/containous/traefik)](http://goreportcard.com/report/containous/traefik)
[![](https://images.microbadger.com/badges/image/traefik.svg)](https://microbadger.com/images/traefik)
@@ -12,8 +12,27 @@
[![Twitter](https://img.shields.io/twitter/follow/traefikproxy.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefikproxy)
Træfik (pronounced like [traffic](https://speak-ipa.bearbin.net/speak.cgi?speak=%CB%88tr%C3%A6f%C9%AAk)) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Kubernetes](http://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Mesos](https://github.com/apache/mesos), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), [Eureka](https://github.com/Netflix/eureka), [Amazon DynamoDB](https://aws.amazon.com/dynamodb/), Rest API, file...) to manage its configuration automatically and dynamically.
Træfik (pronounced like _traffic_) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), and a lot more) to manage its configuration automatically and dynamically.
---
. **[Overview](#overview)** .
**[Features](#features)** .
**[Supported backends](#supported-backends)** .
**[Quickstart](#quickstart)** .
**[Web UI](#web-ui)** .
**[Test it](#test-it)** .
**[Documentation](#documentation)** .
. **[Support](#support)** .
**[Release cycle](#release-cycle)** .
**[Contributing](#contributing)** .
**[Maintainers](#maintainers)** .
**[Plumbing](#plumbing)** .
**[Credits](#credits)** .
---
## Overview
@@ -24,7 +43,7 @@ If you want your users to access some of your microservices from the Internet, y
- path `domain.com/web` will point the microservice `web` in your private network
- domain `backoffice.domain.com` will point the microservices `backoffice` in your private network, load-balancing between your multiple instances
But a microservices architecture is dynamic... Services are added, removed, killed or upgraded often, eventually several times a day.
Microservices are often deployed in dynamic environments where services are added, removed, killed, upgraded or scaled many times a day.
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.
@@ -36,45 +55,52 @@ Træfik can listen to your service registry/orchestrator API, and knows each tim
Routes to your services will be created instantly.
Run it and forget it!
## Features
- [It's fast](http://docs.traefik.io/benchmarks)
- [It's fast](https://docs.traefik.io/benchmarks)
- No dependency hell, single binary made with go
- [Tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image
- Rest API
- Multiple backends supported: Docker, Swarm, Kubernetes, Marathon, Mesos, Consul, Etcd, and more to come
- Watchers for backends, can listen for changes in backends to apply a new configuration automatically
- Hot-reloading of configuration. No need to restart the process
- Graceful shutdown http connections
- Circuit breakers on backends
- Circuit breakers, retry
- Round Robin, rebalancer load-balancers
- Rest Metrics
- [Tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image included
- SSL backends support
- SSL frontend support (with SNI)
- Metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB)
- Clean AngularJS Web UI
- Websocket support
- HTTP/2 support
- Retry request if network error
- Websocket, HTTP/2, GRPC ready
- Access Logs (JSON, CLF)
- [Let's Encrypt](https://letsencrypt.org) support (Automatic HTTPS with renewal)
- High Availability with cluster mode
- [Proxy Protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) support
- High Availability with cluster mode (beta)
## Supported backends
- [Docker](https://www.docker.com/) / [Swarm mode](https://docs.docker.com/engine/swarm/)
- [Kubernetes](https://kubernetes.io)
- [Mesos](https://github.com/apache/mesos) / [Marathon](https://mesosphere.github.io/marathon/)
- [Rancher](https://rancher.com) (API, Metadata)
- [Consul](https://www.consul.io/) / [Etcd](https://coreos.com/etcd/) / [Zookeeper](https://zookeeper.apache.org) / [BoltDB](https://github.com/boltdb/bolt)
- [Eureka](https://github.com/Netflix/eureka)
- [Amazon ECS](https://aws.amazon.com/ecs)
- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
- File
- Rest API
## Quickstart
You can have a quick look at Træfik in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.
You can have a quick look at Træfik in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers. If you are looking for a more comprehensive and real use-case example, you can also check [Play-With-Docker](http://training.play-with-docker.com/traefik-load-balancing/) to see how to load balance between multiple nodes.
Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at [GopherCon 2017](https://gophercon.com/).
You will learn Træfik basics in less than 10 minutes.
[![Traefik GopherCon 2017](https://img.youtube.com/vi/RgudiksfL-k/0.jpg)](https://www.youtube.com/watch?v=RgudiksfL-k)
Here is a talk given by [Ed Robinson](https://github.com/errm) at [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfik features and see some demos with Kubernetes.
[![Traefik ContainerCamp UK](http://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
[![Traefik ContainerCamp UK](https://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
Here is a talk (in French) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
You will learn fundamental Træfik features and see some demos with Docker, Mesos/Marathon and Let's Encrypt.
[![Traefik Devoxx France](http://img.youtube.com/vi/QvAz9mVx5TI/0.jpg)](http://www.youtube.com/watch?v=QvAz9mVx5TI)
## Web UI
@@ -83,12 +109,6 @@ You can access the simple HTML frontend of Træfik.
![Web UI Providers](docs/img/web.frontend.png)
![Web UI Health](docs/img/traefik-health.png)
## Plumbing
- [Oxy](https://github.com/vulcand/oxy): an awesome proxy library made by Mailgun guys
- [Gorilla mux](https://github.com/gorilla/mux): famous request router
- [Negroni](https://github.com/codegangsta/negroni): web middlewares made simple
- [Lego](https://github.com/xenolf/lego): the best [Let's Encrypt](https://letsencrypt.org) library in go
## Test it
@@ -98,7 +118,7 @@ You can access the simple HTML frontend of Træfik.
./traefik --configFile=traefik.toml
```
- Use the tiny Docker image:
- Use the tiny Docker image and just run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):
```shell
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
@@ -110,33 +130,60 @@ docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.to
git clone https://github.com/containous/traefik
```
## Documentation
You can find the complete documentation [here](https://docs.traefik.io).
You can find the complete documentation at [https://docs.traefik.io](https://docs.traefik.io).
A collection of contributions around Træfik can be found at [https://awesome.traefik.io](https://awesome.traefik.io).
## Contributing
Please refer to [this section](.github/CONTRIBUTING.md).
## Code Of Conduct
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.
## Support
You can join [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com) to get basic support.
To get basic support, you can:
- join the Træfik community Slack channel: [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
- use [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)
If you prefer commercial support, please contact [containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
## Release cycle
- Release: We try to release a new version every 2 months
- i.e.: 1.3.0, 1.4.0, 1.5.0
- Release candidate: we do RC (1.**x**.0-rc**y**) before the final release (1.**x**.0)
- i.e.: 1.1.0-rc1 -> 1.1.0-rc2 -> 1.1.0-rc3 -> 1.1.0-rc4 -> 1.1.0
- Bug-fixes: For each version we release bug fixes
- i.e.: 1.1.1, 1.1.2, 1.1.3
- those versions contain only bug-fixes
- no additional features are delivered in those versions
- Each version is supported until the next one is released
- i.e.: 1.1.x will be supported until 1.2.0 is out
- We use [Semantic Versioning](http://semver.org/)
## Contributing
Please refer to [contributing documentation](CONTRIBUTING.md).
### Code of Conduct
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md).
By participating in this project you agree to abide by its terms.
## Maintainers
- Emile Vauge [@emilevauge](https://github.com/emilevauge)
- Vincent Demeester [@vdemeester](https://github.com/vdemeester)
- Russell Clare [@Russell-IO](https://github.com/Russell-IO)
- Ed Robinson [@errm](https://github.com/errm)
- Daniel Tomcej [@dtomcej](https://github.com/dtomcej)
- Manuel Laufenberg [@SantoDE](https://github.com/SantoDE)
- Thomas Recloux [@trecloux](https://github.com/trecloux)
- Timo Reimann [@timoreimann](https://github.com/timoreimann)
[Information about process and maintainers](MAINTAINER.md)
## Plumbing
- [Oxy](https://github.com/vulcand/oxy): an awesome proxy library made by Mailgun folks
- [Gorilla mux](https://github.com/gorilla/mux): famous request router
- [Negroni](https://github.com/urfave/negroni): web middlewares made simple
- [Lego](https://github.com/xenolf/lego): the best [Let's Encrypt](https://letsencrypt.org) library in go
## Credits
@@ -144,4 +191,4 @@ Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on t
Traefik's logo is licensed under the Creative Commons 3.0 Attribution license.
Traefik's logo was inspired by the gopher stickers made by Takuya Ueda (https://twitter.com/tenntenn).
The original Go gopher was designed by Renee French (http://reneefrench.blogspot.com/).


@@ -6,7 +6,7 @@ import (
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"reflect"
"sort"
"strings"
@@ -24,6 +24,7 @@ type Account struct {
PrivateKey []byte
DomainsCertificate DomainsCertificates
ChallengeCerts map[string]*ChallengeCert
HTTPChallenge map[string]map[string][]byte
}
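The new `HTTPChallenge` field keeps pending HTTP-01 key authorizations, keyed first by challenge token and then by domain. A minimal, self-contained sketch of that shape (not part of this change; the token, domain, and key-authorization values below are hypothetical):

```go
package main

import "fmt"

func main() {
	// Hypothetical values; in Træfik they come from lego's HTTP-01 challenge.
	token, domain, keyAuth := "abc123", "example.com", "abc123.keyAuthorization"

	// Same shape as Account.HTTPChallenge: token -> domain -> key authorization bytes.
	httpChallenge := map[string]map[string][]byte{}
	if httpChallenge[token] == nil {
		httpChallenge[token] = map[string][]byte{}
	}
	httpChallenge[token][domain] = []byte(keyAuth)

	// Lookup, as the HTTP-01 handler does before writing the response body.
	fmt.Printf("%s\n", httpChallenge[token][domain])
}
```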
// ChallengeCert stores a challenge certificate
@@ -33,7 +34,7 @@ type ChallengeCert struct {
certificate *tls.Certificate
}
// Init inits acccount struct
// Init inits account struct
func (a *Account) Init() error {
err := a.DomainsCertificate.Init()
if err != nil {
@@ -178,7 +179,7 @@ func (dc *DomainsCertificates) renewCertificates(acmeCert *Certificate, domain D
return nil
}
}
return errors.New("Certificate to renew not found for domain " + domain.Main)
return fmt.Errorf("certificate to renew not found for domain %s", domain.Main)
}
func (dc *DomainsCertificates) addCertificateForDomains(acmeCert *Certificate, domain Domain) (*DomainsCertificate, error) {
@@ -221,6 +222,24 @@ func (dc *DomainsCertificates) exists(domainToFind Domain) (*DomainsCertificate,
return nil, false
}
func (dc *DomainsCertificates) toDomainsMap() map[string]*tls.Certificate {
domainsCertificatesMap := make(map[string]*tls.Certificate)
for _, domainCertificate := range dc.Certs {
certKey := domainCertificate.Domains.Main
if domainCertificate.Domains.SANs != nil {
sort.Strings(domainCertificate.Domains.SANs)
for _, dnsName := range domainCertificate.Domains.SANs {
if dnsName != domainCertificate.Domains.Main {
certKey += fmt.Sprintf(",%s", dnsName)
}
}
}
domainsCertificatesMap[certKey] = domainCertificate.tlsCert
}
return domainsCertificatesMap
}
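`toDomainsMap` keys each certificate by the main domain followed by its sorted SANs, joined with commas. A small standalone illustration of that key format (the domains are hypothetical):

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Hypothetical certificate domains mirroring the Domain struct fields.
	mainDomain := "local1.com"
	sans := []string{"test2.local1.com", "test1.local1.com"}

	// Same key construction as toDomainsMap: main domain first, then sorted SANs.
	sort.Strings(sans)
	certKey := mainDomain
	for _, dnsName := range sans {
		if dnsName != mainDomain {
			certKey += fmt.Sprintf(",%s", dnsName)
		}
	}
	fmt.Println(certKey) // local1.com,test1.local1.com,test2.local1.com
}
```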
// DomainsCertificate contains a certificate for multiple domains
type DomainsCertificate struct {
Domains Domain


@@ -7,6 +7,8 @@ import (
"fmt"
"io/ioutil"
fmtlog "log"
"net"
"net/http"
"os"
"regexp"
"strings"
@@ -14,10 +16,14 @@ import (
"github.com/BurntSushi/ty/fun"
"github.com/cenk/backoff"
"github.com/containous/flaeg"
"github.com/containous/mux"
"github.com/containous/staert"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
traefikTls "github.com/containous/traefik/tls"
"github.com/containous/traefik/tls/generate"
"github.com/containous/traefik/types"
"github.com/eapache/channels"
"github.com/xenolf/lego/acme"
@@ -31,24 +37,39 @@ var (
// ACME allows to connect to lets encrypt and retrieve certs
type ACME struct {
Email string `description:"Email address used for registration"`
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
Storage string `description:"File or key used for certificates storage."`
StorageFile string // deprecated
OnDemand bool `description:"Enable on demand certificate. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."`
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
CAServer string `description:"CA server to use."`
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
DNSProvider string `description:"Use a DNS based challenge provider rather than HTTPS."`
DelayDontCheckDNS int `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."`
ACMELogging bool `description:"Enable debug logging of ACME actions."`
client *acme.Client
defaultCertificate *tls.Certificate
store cluster.Store
challengeProvider *challengeProvider
checkOnDemandDomain func(domain string) bool
jobs *channels.InfiniteChannel
TLSConfig *tls.Config `description:"TLS config in case wildcard certs are used"`
Email string `description:"Email address used for registration"`
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
Storage string `description:"File or key used for certificates storage."`
StorageFile string // deprecated
OnDemand bool `description:"Enable on demand certificate generation. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."` //deprecated
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
CAServer string `description:"CA server to use."`
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
DNSChallenge *DNSChallenge `description:"Activate DNS-01 Challenge"`
HTTPChallenge *HTTPChallenge `description:"Activate HTTP-01 Challenge"`
DNSProvider string `description:"Use a DNS-01 acme challenge rather than TLS-SNI-01 challenge."` // deprecated
DelayDontCheckDNS flaeg.Duration `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."` // deprecated
ACMELogging bool `description:"Enable debug logging of ACME actions."`
client *acme.Client
defaultCertificate *tls.Certificate
store cluster.Store
challengeTLSProvider *challengeTLSProvider
challengeHTTPProvider *challengeHTTPProvider
checkOnDemandDomain func(domain string) bool
jobs *channels.InfiniteChannel
TLSConfig *tls.Config `description:"TLS config in case wildcard certs are used"`
dynamicCerts *safe.Safe
}
// DNSChallenge contains DNS challenge Configuration
type DNSChallenge struct {
Provider string `description:"Use a DNS-01 based challenge provider rather than HTTPS."`
DelayBeforeCheck flaeg.Duration `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."`
}
// HTTPChallenge contains HTTP challenge Configuration
type HTTPChallenge struct {
EntryPoint string `description:"HTTP challenge EntryPoint"`
}
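The deprecated `DNSProvider`/`DelayDontCheckDNS` flags are superseded by these dedicated sub-structs. A hedged sketch of populating them programmatically (the struct definitions are repeated locally so the snippet compiles on its own; the provider name and entry point are illustrative values, not ones mandated by this change):

```go
package main

import (
	"time"

	"github.com/containous/flaeg"
)

// Local stand-ins for the DNSChallenge and HTTPChallenge structs shown above.
type DNSChallenge struct {
	Provider         string
	DelayBeforeCheck flaeg.Duration
}

type HTTPChallenge struct {
	EntryPoint string
}

func main() {
	// DNS-01: delegate validation to a DNS provider, optionally replacing the
	// propagation check with a fixed delay.
	_ = &DNSChallenge{
		Provider:         "manual",
		DelayBeforeCheck: flaeg.Duration(10 * time.Second),
	}

	// HTTP-01: answer challenges on an existing entry point.
	_ = &HTTPChallenge{EntryPoint: "http"}
}
```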
//Domains parse []Domain
@@ -93,28 +114,66 @@ type Domain struct {
}
func (a *ACME) init() error {
// FIXME temporary fix, waiting for https://github.com/xenolf/lego/pull/478
acme.HTTPClient = http.Client{
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
TLSHandshakeTimeout: 15 * time.Second,
ResponseHeaderTimeout: 15 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
},
}
if a.ACMELogging {
acme.Logger = fmtlog.New(os.Stderr, "legolog: ", fmtlog.LstdFlags)
} else {
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
}
// no certificates in TLS config, so we add a default one
cert, err := generateDefaultCertificate()
cert, err := generate.DefaultCertificate()
if err != nil {
return err
}
a.defaultCertificate = cert
// TODO: to remove in the future
if len(a.StorageFile) > 0 && len(a.Storage) == 0 {
log.Warnf("ACME.StorageFile is deprecated, use ACME.Storage instead")
a.Storage = a.StorageFile
}
a.jobs = channels.NewInfiniteChannel()
return nil
}
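`init` now also allocates the `jobs` queue, an unbounded channel of closures that the certificate routines push work onto. A self-contained sketch of that producer/consumer pattern with the same library (the consumer loop stands in for `runJobs`, which is not shown in this diff):

```go
package main

import (
	"fmt"

	"github.com/eapache/channels"
)

func main() {
	// Same pattern as a.jobs: an unbounded queue of func() closures.
	jobs := channels.NewInfiniteChannel()

	// Producer side, as used by retrieveCertificates and renewCertificates.
	jobs.In() <- func() { fmt.Println("retrieving certificates...") }
	jobs.In() <- func() { fmt.Println("renewing certificates...") }
	jobs.Close()

	// Consumer side, simplified to a plain loop for the sketch.
	for job := range jobs.Out() {
		job.(func())()
	}
}
```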
// AddRoutes add routes on internal router
func (a *ACME) AddRoutes(router *mux.Router) {
router.Methods(http.MethodGet).
Path(acme.HTTP01ChallengePath("{token}")).
Handler(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
if a.challengeHTTPProvider == nil {
rw.WriteHeader(http.StatusNotFound)
return
}
vars := mux.Vars(req)
if token, ok := vars["token"]; ok {
domain, _, err := net.SplitHostPort(req.Host)
if err != nil {
log.Debugf("Unable to split host and port: %v. Fallback to request host.", err)
domain = req.Host
}
tokenValue := a.challengeHTTPProvider.getTokenValue(token, domain)
if len(tokenValue) > 0 {
rw.WriteHeader(http.StatusOK)
rw.Write(tokenValue)
return
}
}
rw.WriteHeader(http.StatusNotFound)
}))
}
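`AddRoutes` mounts the HTTP-01 responder at lego's well-known challenge path on an internal router. A minimal wiring sketch (not Træfik's actual bootstrap; the listen address is illustrative and the ACME value would normally come from the parsed configuration):

```go
package main

import (
	"log"
	"net/http"

	"github.com/containous/mux"
	"github.com/containous/traefik/acme"
)

func main() {
	a := &acme.ACME{} // placeholder; built from configuration in Træfik itself

	// Register GET /.well-known/acme-challenge/{token} on the router.
	router := mux.NewRouter()
	a.AddRoutes(router)

	// Serve the router; with no pending challenge the handler answers 404.
	log.Fatal(http.ListenAndServe(":80", router))
}
```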
// CreateClusterConfig creates a tls.config using ACME configuration in cluster mode
func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tls.Config, certs *safe.Safe, checkOnDemandDomain func(domain string) bool) error {
err := a.init()
if err != nil {
return err
@@ -123,6 +182,7 @@ func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tl
return errors.New("Empty Store, please provide a key for certs storage")
}
a.checkOnDemandDomain = checkOnDemandDomain
a.dynamicCerts = certs
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
tlsConfig.GetCertificate = a.getCertificate
a.TLSConfig = tlsConfig
@@ -151,12 +211,12 @@ func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tl
}
a.store = datastore
a.challengeProvider = &challengeProvider{store: a.store}
a.challengeTLSProvider = &challengeTLSProvider{store: a.store}
ticker := time.NewTicker(24 * time.Hour)
leadership.Pool.AddGoCtx(func(ctx context.Context) {
log.Infof("Starting ACME renew job...")
defer log.Infof("Stopped ACME renew job...")
log.Info("Starting ACME renew job...")
defer log.Info("Stopped ACME renew job...")
for {
select {
case <-ctx.Done():
@@ -167,74 +227,75 @@ func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tl
}
})
leadership.AddListener(func(elected bool) error {
if elected {
object, err := a.store.Load()
leadership.AddListener(a.leadershipListener)
return nil
}
func (a *ACME) leadershipListener(elected bool) error {
if elected {
_, err := a.store.Load()
if err != nil {
return err
}
transaction, object, err := a.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
account.Init()
var needRegister bool
if account == nil || len(account.Email) == 0 {
account, err = NewAccount(a.Email)
if err != nil {
return err
}
transaction, object, err := a.store.Begin()
needRegister = true
}
a.client, err = a.buildACMEClient(account)
if err != nil {
return err
}
if needRegister {
// New users will need to register; be sure to save it
log.Debug("Register...")
reg, err := a.client.Register()
if err != nil {
return err
}
account := object.(*Account)
account.Init()
var needRegister bool
if account == nil || len(account.Email) == 0 {
account, err = NewAccount(a.Email)
if err != nil {
return err
}
needRegister = true
}
account.Registration = reg
}
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debug("AgreeToTOS...")
err = a.client.AgreeToTOS()
if err != nil {
log.Debug(err)
// Let's Encrypt Subscriber Agreement renew ?
reg, err := a.client.QueryRegistration()
if err != nil {
return err
}
a.client, err = a.buildACMEClient(account)
if err != nil {
return err
}
if needRegister {
// New users will need to register; be sure to save it
log.Debugf("Register...")
reg, err := a.client.Register()
if err != nil {
return err
}
account.Registration = reg
}
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debugf("AgreeToTOS...")
account.Registration = reg
err = a.client.AgreeToTOS()
if err != nil {
// Let's Encrypt Subscriber Agreement renew ?
reg, err := a.client.QueryRegistration()
if err != nil {
return err
}
account.Registration = reg
err = a.client.AgreeToTOS()
if err != nil {
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
}
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
}
err = transaction.Commit(account)
if err != nil {
return err
}
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
}
return nil
})
err = transaction.Commit(account)
if err != nil {
return err
}
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
}
return nil
}
// CreateLocalConfig creates a tls.config using local ACME configuration
func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, certs *safe.Safe, checkOnDemandDomain func(domain string) bool) error {
defer a.runJobs()
err := a.init()
if err != nil {
return err
@@ -243,18 +304,19 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func
return errors.New("Empty Store, please provide a filename for certs storage")
}
a.checkOnDemandDomain = checkOnDemandDomain
a.dynamicCerts = certs
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
tlsConfig.GetCertificate = a.getCertificate
a.TLSConfig = tlsConfig
localStore := NewLocalStore(a.Storage)
a.store = localStore
a.challengeProvider = &challengeProvider{store: a.store}
a.challengeTLSProvider = &challengeTLSProvider{store: a.store}
var needRegister bool
var account *Account
if fileInfo, fileErr := os.Stat(a.Storage); fileErr == nil && fileInfo.Size() != 0 {
log.Infof("Loading ACME Account...")
log.Info("Loading ACME Account...")
// load account
object, err := localStore.Load()
if err != nil {
@@ -262,7 +324,7 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func
}
account = object.(*Account)
} else {
log.Infof("Generating ACME Account...")
log.Info("Generating ACME Account...")
account, err = NewAccount(a.Email)
if err != nil {
return err
@@ -272,12 +334,14 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func
a.client, err = a.buildACMEClient(account)
if err != nil {
return err
log.Errorf(`Failed to build ACME client: %s
Let's Encrypt functionality will be limited until traefik is restarted.`, err)
return nil
}
if needRegister {
// New users will need to register; be sure to save it
log.Infof("Register...")
log.Info("Register...")
reg, err := a.client.Register()
if err != nil {
return err
@@ -287,7 +351,7 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func
// The client has a URL to the current Let's Encrypt Subscriber
// Agreement. The user will need to agree to it.
log.Debugf("AgreeToTOS...")
log.Debug("AgreeToTOS...")
err = a.client.AgreeToTOS()
if err != nil {
// Let's Encrypt Subscriber Agreement renew ?
@@ -313,14 +377,12 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func
a.retrieveCertificates()
a.renewCertificates()
a.runJobs()
ticker := time.NewTicker(24 * time.Hour)
safe.Go(func() {
for range ticker.C {
a.renewCertificates()
}
})
return nil
}
@@ -328,15 +390,12 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func
func (a *ACME) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
domain := types.CanonicalDomain(clientHello.ServerName)
account := a.store.Get().(*Account)
//use regex to test for wildcard certs that might have been added into TLSConfig
for k := range a.TLSConfig.NameToCertificate {
selector := "^" + strings.Replace(k, "*.", "[^\\.]*\\.?", -1) + "$"
match, _ := regexp.MatchString(selector, domain)
if match {
return a.TLSConfig.NameToCertificate[k], nil
}
if providedCertificate := a.getProvidedCertificate(domain); providedCertificate != nil {
return providedCertificate, nil
}
if challengeCert, ok := a.challengeProvider.getCertificate(domain); ok {
if challengeCert, ok := a.challengeTLSProvider.getCertificate(domain); ok {
log.Debugf("ACME got challenge %s", domain)
return challengeCert, nil
}
@@ -350,13 +409,13 @@ func (a *ACME) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificat
}
return a.loadCertificateOnDemand(clientHello)
}
log.Debugf("ACME got nothing %s", domain)
log.Debugf("No certificate found or generated for %s", domain)
return nil, nil
}
func (a *ACME) retrieveCertificates() {
a.jobs.In() <- func() {
log.Infof("Retrieving ACME certificates...")
log.Info("Retrieving ACME certificates...")
for _, domain := range a.Domains {
// check if cert isn't already loaded
account := a.store.Get().(*Account)
@@ -387,50 +446,33 @@ func (a *ACME) retrieveCertificates() {
}
}
}
log.Infof("Retrieved ACME certificates")
log.Info("Retrieved ACME certificates")
}
}
func (a *ACME) renewCertificates() {
a.jobs.In() <- func() {
log.Debugf("Testing certificate renew...")
log.Info("Testing certificate renew...")
account := a.store.Get().(*Account)
for _, certificateResource := range account.DomainsCertificate.Certs {
if certificateResource.needRenew() {
log.Debugf("Renewing certificate %+v", certificateResource.Domains)
renewedCert, err := a.client.RenewCertificate(acme.CertificateResource{
Domain: certificateResource.Certificate.Domain,
CertURL: certificateResource.Certificate.CertURL,
CertStableURL: certificateResource.Certificate.CertStableURL,
PrivateKey: certificateResource.Certificate.PrivateKey,
Certificate: certificateResource.Certificate.Certificate,
}, true, OSCPMustStaple)
log.Infof("Renewing certificate from LE : %+v", certificateResource.Domains)
renewedACMECert, err := a.renewACMECertificate(certificateResource)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
log.Errorf("Error renewing certificate from LE: %v", err)
continue
}
log.Debugf("Renewed certificate %+v", certificateResource.Domains)
renewedACMECert := &Certificate{
Domain: renewedCert.Domain,
CertURL: renewedCert.CertURL,
CertStableURL: renewedCert.CertStableURL,
PrivateKey: renewedCert.PrivateKey,
Certificate: renewedCert.Certificate,
operation := func() error {
return a.storeRenewedCertificate(account, certificateResource, renewedACMECert)
}
transaction, object, err := a.store.Begin()
notify := func(err error, time time.Duration) {
log.Warnf("Renewed certificate storage error: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err = backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
account = object.(*Account)
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
if err != nil {
log.Errorf("Error renewing certificate: %v", err)
continue
}
if err = transaction.Commit(account); err != nil {
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
log.Errorf("Datastore cannot sync: %v", err)
continue
}
}
@@ -438,22 +480,72 @@ func (a *ACME) renewCertificates() {
}
}
func dnsOverrideDelay(delay int) error {
func (a *ACME) renewACMECertificate(certificateResource *DomainsCertificate) (*Certificate, error) {
renewedCert, err := a.client.RenewCertificate(acme.CertificateResource{
Domain: certificateResource.Certificate.Domain,
CertURL: certificateResource.Certificate.CertURL,
CertStableURL: certificateResource.Certificate.CertStableURL,
PrivateKey: certificateResource.Certificate.PrivateKey,
Certificate: certificateResource.Certificate.Certificate,
}, true, OSCPMustStaple)
if err != nil {
return nil, err
}
log.Infof("Renewed certificate from LE: %+v", certificateResource.Domains)
return &Certificate{
Domain: renewedCert.Domain,
CertURL: renewedCert.CertURL,
CertStableURL: renewedCert.CertStableURL,
PrivateKey: renewedCert.PrivateKey,
Certificate: renewedCert.Certificate,
}, nil
}
func (a *ACME) storeRenewedCertificate(account *Account, certificateResource *DomainsCertificate, renewedACMECert *Certificate) error {
transaction, object, err := a.store.Begin()
if err != nil {
return fmt.Errorf("error during transaction initialization for renewing certificate: %v", err)
}
log.Infof("Renewing certificate in data store : %+v ", certificateResource.Domains)
account = object.(*Account)
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
if err != nil {
return fmt.Errorf("error renewing certificate in datastore: %v ", err)
}
log.Infof("Commit certificate renewed in data store : %+v", certificateResource.Domains)
if err = transaction.Commit(account); err != nil {
return fmt.Errorf("error saving ACME account %+v: %v", account, err)
}
oldAccount := a.store.Get().(*Account)
for _, oldCertificateResource := range oldAccount.DomainsCertificate.Certs {
if oldCertificateResource.Domains.Main == certificateResource.Domains.Main && strings.Join(oldCertificateResource.Domains.SANs, ",") == strings.Join(certificateResource.Domains.SANs, ",") && certificateResource.Certificate != renewedACMECert {
return fmt.Errorf("renewed certificate not stored: %+v", certificateResource.Domains)
}
}
log.Infof("Certificate successfully renewed in data store: %+v", certificateResource.Domains)
return nil
}
func dnsOverrideDelay(delay flaeg.Duration) error {
var err error
if delay > 0 {
log.Debugf("Delaying %d seconds rather than validating DNS propagation", delay)
log.Debugf("Delaying %d rather than validating DNS propagation", delay)
acme.PreCheckDNS = func(_, _ string) (bool, error) {
time.Sleep(time.Duration(delay) * time.Second)
time.Sleep(time.Duration(delay))
return true, nil
}
} else if delay < 0 {
err = fmt.Errorf("Invalid negative DelayDontCheckDNS: %d", delay)
err = fmt.Errorf("invalid negative DelayBeforeCheck: %d", delay)
}
return err
}
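With the delay typed as `flaeg.Duration`, the installed `acme.PreCheckDNS` override simply sleeps for that duration and reports success instead of querying nameservers. A small behavioural sketch of the override (values are illustrative and the closure is a local stand-in, not lego's own):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 2 * time.Second // illustrative DelayBeforeCheck value

	// Equivalent of the closure installed on acme.PreCheckDNS.
	preCheck := func(fqdn, value string) (bool, error) {
		time.Sleep(delay)
		return true, nil
	}

	ok, err := preCheck("_acme-challenge.example.com.", "token-value")
	fmt.Println(ok, err) // true <nil>, after the fixed delay
}
```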
func (a *ACME) buildACMEClient(account *Account) (*acme.Client, error) {
log.Debugf("Building ACME client...")
log.Debug("Building ACME client...")
caServer := "https://acme-v01.api.letsencrypt.org/directory"
if len(a.CAServer) > 0 {
caServer = a.CAServer
@@ -463,25 +555,29 @@ func (a *ACME) buildACMEClient(account *Account) (*acme.Client, error) {
return nil, err
}
if len(a.DNSProvider) > 0 {
log.Debugf("Using DNS Challenge provider: %s", a.DNSProvider)
if a.DNSChallenge != nil && len(a.DNSChallenge.Provider) > 0 {
log.Debugf("Using DNS Challenge provider: %s", a.DNSChallenge.Provider)
err = dnsOverrideDelay(a.DelayDontCheckDNS)
err = dnsOverrideDelay(a.DNSChallenge.DelayBeforeCheck)
if err != nil {
return nil, err
}
var provider acme.ChallengeProvider
provider, err = dns.NewDNSChallengeProviderByName(a.DNSProvider)
provider, err = dns.NewDNSChallengeProviderByName(a.DNSChallenge.Provider)
if err != nil {
return nil, err
}
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.TLSSNI01})
err = client.SetChallengeProvider(acme.DNS01, provider)
} else if a.HTTPChallenge != nil && len(a.HTTPChallenge.EntryPoint) > 0 {
client.ExcludeChallenges([]acme.Challenge{acme.DNS01, acme.TLSSNI01})
a.challengeHTTPProvider = &challengeHTTPProvider{store: a.store}
err = client.SetChallengeProvider(acme.HTTP01, a.challengeHTTPProvider)
} else {
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.DNS01})
err = client.SetChallengeProvider(acme.TLSSNI01, a.challengeProvider)
err = client.SetChallengeProvider(acme.TLSSNI01, a.challengeTLSProvider)
}
if err != nil {
@@ -520,11 +616,18 @@ func (a *ACME) loadCertificateOnDemand(clientHello *tls.ClientHelloInfo) (*tls.C
// LoadCertificateForDomains loads certificates from ACME for given domains
func (a *ACME) LoadCertificateForDomains(domains []string) {
a.jobs.In() <- func() {
log.Debugf("LoadCertificateForDomains %s...", domains)
log.Debugf("LoadCertificateForDomains %v...", domains)
if len(domains) == 0 {
// no domain
return
}
domains = fun.Map(types.CanonicalDomain, domains).([]string)
operation := func() error {
if a.client == nil {
return fmt.Errorf("ACME client still not built")
return errors.New("ACME client still not built")
}
return nil
}
@@ -539,36 +642,34 @@ func (a *ACME) LoadCertificateForDomains(domains []string) {
return
}
account := a.store.Get().(*Account)
var domain Domain
if len(domains) == 0 {
// no domain
return
} else if len(domains) > 1 {
domain = Domain{Main: domains[0], SANs: domains[1:]}
} else {
domain = Domain{Main: domains[0]}
}
if _, exists := account.DomainsCertificate.exists(domain); exists {
// domain already exists
// Check provided certificates
uncheckedDomains := a.getUncheckedDomains(domains, account)
if len(uncheckedDomains) == 0 {
return
}
certificate, err := a.getDomainsCertificates(domains)
certificate, err := a.getDomainsCertificates(uncheckedDomains)
if err != nil {
log.Errorf("Error getting ACME certificates %+v : %v", domains, err)
log.Errorf("Error getting ACME certificates %+v : %v", uncheckedDomains, err)
return
}
log.Debugf("Got certificate for domains %+v", domains)
log.Debugf("Got certificate for domains %+v", uncheckedDomains)
transaction, object, err := a.store.Begin()
if err != nil {
log.Errorf("Error creating transaction %+v : %v", domains, err)
log.Errorf("Error creating transaction %+v : %v", uncheckedDomains, err)
return
}
var domain Domain
if len(uncheckedDomains) > 1 {
domain = Domain{Main: uncheckedDomains[0], SANs: uncheckedDomains[1:]}
} else {
domain = Domain{Main: uncheckedDomains[0]}
}
account = object.(*Account)
_, err = account.DomainsCertificate.addCertificateForDomains(certificate, domain)
if err != nil {
log.Errorf("Error adding ACME certificates %+v : %v", domains, err)
log.Errorf("Error adding ACME certificates %+v : %v", uncheckedDomains, err)
return
}
if err = transaction.Commit(account); err != nil {
@@ -578,6 +679,97 @@ func (a *ACME) LoadCertificateForDomains(domains []string) {
}
}
// Get the provided certificate which checks a domain list (Main and SANs)
// from static and dynamic provided certificates
func (a *ACME) getProvidedCertificate(domains string) *tls.Certificate {
log.Debugf("Looking for provided certificate to validate %s...", domains)
cert := searchProvidedCertificateForDomains(domains, a.TLSConfig.NameToCertificate)
if cert == nil && a.dynamicCerts != nil && a.dynamicCerts.Get() != nil {
cert = searchProvidedCertificateForDomains(domains, a.dynamicCerts.Get().(*traefikTls.DomainsCertificates).Get().(map[string]*tls.Certificate))
}
if cert == nil {
log.Debugf("No provided certificate found for domains %s, get ACME certificate.", domains)
}
return cert
}
func searchProvidedCertificateForDomains(domain string, certs map[string]*tls.Certificate) *tls.Certificate {
// Use regex to test for provided certs that might have been added into TLSConfig
for certDomains := range certs {
domainCheck := false
for _, certDomain := range strings.Split(certDomains, ",") {
selector := "^" + strings.Replace(certDomain, "*.", "[^\\.]*\\.?", -1) + "$"
domainCheck, _ = regexp.MatchString(selector, domain)
if domainCheck {
break
}
}
if domainCheck {
log.Debugf("Domain %q checked by provided certificate %q", domain, certDomains)
return certs[certDomains]
}
}
return nil
}
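The matching rule turns each `*.` wildcard into a regular expression fragment, so `*.containo.us` covers both subdomains and the apex. A standalone sketch of that translation (the certificate key and candidate domains are hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Hypothetical certificate key and domains to check.
	certDomain := "*.containo.us"
	candidates := []string{"traefik.containo.us", "containo.us", "trae.acme.io"}

	// Same wildcard-to-regexp translation as searchProvidedCertificateForDomains.
	selector := "^" + strings.Replace(certDomain, "*.", "[^\\.]*\\.?", -1) + "$"
	for _, domain := range candidates {
		match, _ := regexp.MatchString(selector, domain)
		fmt.Printf("%-22s matched by %s: %v\n", domain, certDomain, match)
	}
	// Prints true for the first two candidates and false for the last one.
}
```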
// Get the domains which are not yet covered by a provided (static or dynamic)
// certificate or by an existing ACME certificate
func (a *ACME) getUncheckedDomains(domains []string, account *Account) []string {
log.Debugf("Looking for provided certificate to validate %s...", domains)
allCerts := make(map[string]*tls.Certificate)
// Get static certificates
for domains, certificate := range a.TLSConfig.NameToCertificate {
allCerts[domains] = certificate
}
// Get dynamic certificates
if a.dynamicCerts != nil && a.dynamicCerts.Get() != nil {
for domains, certificate := range a.dynamicCerts.Get().(*traefikTls.DomainsCertificates).Get().(map[string]*tls.Certificate) {
allCerts[domains] = certificate
}
}
// Get ACME certificates
if account != nil {
for domains, certificate := range account.DomainsCertificate.toDomainsMap() {
allCerts[domains] = certificate
}
}
return searchUncheckedDomains(domains, allCerts)
}
func searchUncheckedDomains(domains []string, certs map[string]*tls.Certificate) []string {
uncheckedDomains := []string{}
for _, domainToCheck := range domains {
domainCheck := false
for certDomains := range certs {
domainCheck = false
for _, certDomain := range strings.Split(certDomains, ",") {
// Use regex to test for provided certs that might have been added into TLSConfig
selector := "^" + strings.Replace(certDomain, "*.", "[^\\.]*\\.?", -1) + "$"
domainCheck, _ = regexp.MatchString(selector, domainToCheck)
if domainCheck {
break
}
}
if domainCheck {
break
}
}
if !domainCheck {
uncheckedDomains = append(uncheckedDomains, domainToCheck)
}
}
if len(uncheckedDomains) == 0 {
log.Debugf("No ACME certificate to generate for domains %q.", domains)
} else {
log.Debugf("Domains %q need ACME certificates generation for domains %q.", domains, strings.Join(uncheckedDomains, ","))
}
return uncheckedDomains
}
func (a *ACME) getDomainsCertificates(domains []string) (*Certificate, error) {
domains = fun.Map(types.CanonicalDomain, domains).([]string)
log.Debugf("Loading ACME certificates %s...", domains)
@@ -585,7 +777,7 @@ func (a *ACME) getDomainsCertificates(domains []string) (*Certificate, error) {
certificate, failures := a.client.ObtainCertificate(domains, bundle, nil, OSCPMustStaple)
if len(failures) > 0 {
log.Error(failures)
return nil, fmt.Errorf("Cannot obtain certificates %s+v", failures)
return nil, fmt.Errorf("cannot obtain certificates %+v", failures)
}
log.Debugf("Loaded ACME certificates %s", domains)
return &Certificate{

acme/acme_example.json (new file)

@@ -0,0 +1,43 @@
{
"Email": "test@traefik.io",
"Registration": {
"body": {
"resource": "reg",
"id": 3,
"key": {
"kty": "RSA",
"n": "y5a71suIqvEtovDmDVQ3SSNagk5IVCFI_TvqWpEXSrdbcDE2C-PTEtEUJuLkYwygcpiWYbPmXgdS628vQCw5Uo4DeDyHiuysJOWBLaWow3p9goOdhnPbGBq0liIR9xXyRoctdipVk8UiO9scWsu4jMBM3sMr7_yBWPfYYiLEQmZGFO3iE7Oqr55h_kncHIj5lUQY1j_jkftqxlxUB5_0quyJ7l915j5QY--eY7h4GEhRvx0TlUpi-CnRtRblGeDDDilXZD6bQN2962WdKecsmRaYx-ttLz6jCPXz2VDJRWNcIS501ne2Zh3hzw_DS6IRd2GIia1Wg4sisi9epC9sumXPHi6xzR6-_i_nsFjdtTkUcV8HmorOYoc820KQVZaLScxa8e7-ixpOd6mr6AIbEf7dBAkb9f_iK3GwpqKD8yNcaj1EQgNSyJSjnKSulXI_GwkGnuXe00Qpb1a8ha5Z8yWg7XmZZnJyAZrmK60RfwRNQ1rO5ioerNUBJ2KYTYNzVjBdob9Ug6Cjh4bEKNNjqcbjQ50_Z97Vw40xzpDQ_fYllc6n92eSuv6olxFJTmK7EhHuanDzITngaqei3zL9RwQ7P-1jfEZ03qmGrQYYqXcsS46PQ8cE-frzY2mKp16pRNCG7-03gKVGV0JHyW1aYbevNUk7OumCAXhC2YOigBk",
"e": "AQAB"
},
"contact": [
"mailto:test@traefik.io"
],
"agreement": "http://boulder:4000/terms/v1"
},
"uri": "http://127.0.0.1:4000/acme/reg/3",
"new_authzr_uri": "http://127.0.0.1:4000/acme/new-authz",
"terms_of_service": "http://boulder:4000/terms/v1"
},
"PrivateKey": "MIIJJwIBAAKCAgEAy5a71suIqvEtovDmDVQ3SSNagk5IVCFI/TvqWpEXSrdbcDE2C+PTEtEUJuLkYwygcpiWYbPmXgdS628vQCw5Uo4DeDyHiuysJOWBLaWow3p9goOdhnPbGBq0liIR9xXyRoctdipVk8UiO9scWsu4jMBM3sMr7/yBWPfYYiLEQmZGFO3iE7Oqr55h/kncHIj5lUQY1j/jkftqxlxUB5/0quyJ7l915j5QY++eY7h4GEhRvx0TlUpi+CnRtRblGeDDDilXZD6bQN2962WdKecsmRaYx+ttLz6jCPXz2VDJRWNcIS501ne2Zh3hzw/DS6IRd2GIia1Wg4sisi9epC9sumXPHi6xzR6+/i/nsFjdtTkUcV8HmorOYoc820KQVZaLScxa8e7+ixpOd6mr6AIbEf7dBAkb9f/iK3GwpqKD8yNcaj1EQgNSyJSjnKSulXI/GwkGnuXe00Qpb1a8ha5Z8yWg7XmZZnJyAZrmK60RfwRNQ1rO5ioerNUBJ2KYTYNzVjBdob9Ug6Cjh4bEKNNjqcbjQ50/Z97Vw40xzpDQ/fYllc6n92eSuv6olxFJTmK7EhHuanDzITngaqei3zL9RwQ7P+1jfEZ03qmGrQYYqXcsS46PQ8cE+frzY2mKp16pRNCG7+03gKVGV0JHyW1aYbevNUk7OumCAXhC2YOigBkCAwEAAQKCAgA8XW1EuwTC6tAFSDhuK1JZNUpY6K05hMUHkQRj5jFpzgQmt/C2hc7H/YZkIVJmrA/G6sdsINNlffZwKH9yH6q/d6w/snLeFl7UcdhjmIL5sxAT6sKCY0fLVd/FxERfZvp3Pw2Tw+mr7v+/j7BQm6cU1M/2HRiiB9SydIqMTpKyvXB6NC6ceOFbQTL9GxlQvKyEPbS/kiH/3vRB7I5d1GfPZmNfcp6ark9X0my8VK4HRSo36H8t/OhrfLrZXvh/O82aHVf0OTv/d8AgU/jNu+XVXoXegUfWglQFDChJf1KuaE+g5w1tqgFDNgkGRD475soXA6xgZi0Iw/B9tN3zALzT4IiAW1q72feeTgKOMA2zGtKXxQZZSOV+DuWFZNz/tT7XqGQThqxM09CHv2WGOe80vobtegXYTUt90hysrqIZmBW5XYdzQlJh1KWTtfCaTrWd47kbGvhkEPc8aA3Ji4/AqfkVXiqwaLu+MSlgzPpRj7U7UAIDqnpZjgttgLp74Ujnk3bTaUzdyyNqYDBG3IFGr/Sv+2GQDAUn/PYRJKWr0BteqOzX9zvW3zY8g9CYVXfK/AW3RMWLV8ly6vH/gWqa9gEuzRNRlzjUU6/HCVbUx3UT8RMWH2TQ0uuQZr5JX1iTwjeeT0dEIly1NnRQC92wcrE4UUTBEF3krGVpDBf0AQKCAQEA4jB8w+2fwzbF8X+gCODcY7sTeJRunzGy+jbdaLkcThuylga+6W3ZgWx0BD30ql9K2mouCVu86fCTnBeXXEC3QoTdgw/EzJ83+4JU3QSDdzs9Ta9vLHyvrpUkQfZ8UZpeLLmFsmsBMbBbnfw0S1TzXDsgrAc+G4tia8nO/Iqu75kEMGzmHQAvmN3iSqc1aTS4qumbB19g+v+csq9NEht4F9jt39KotG+OD3MxCxtMu7vxAkJRjFFcgcbb2Rtqe/kQEKA1vLEAJg27lV4k8XibCSerVUR6IzT8WZHrNiXmpRguTLl2k8uFUdCOOx6aLGyRVJ6+8SgIsMR540vnxwQzEQKCAQEA5mu2wtWT19mvXopC3easPsXIPzc5oaRkqfWZYT1KHcVQ7NIXsE3vCjcf/3igZ8l/FVQ4G4fpk/GoTqlpV5Aq/JHCpVOR2O69uB+W4kWgliejpHvF9gszzAYnC8lIXqDbWiinBhmm3ii8sDGAoBaSDw5NMUq3mI+nd8zZ+jx1bLBczDafmQ0YKr8k0YaROxIgoBgDOQDdSqG387lwzpza2DKI5Al3HfS42zjT0RmBahPiuT2aEoUZmIYuvFY0fEjfkpbdvLyexHfZCILRUGlG1nAwASFg86lp+mFSBJ3E3cvbP0CpbFGxon5u4Ao3/7htoOh6huh7MQ91h41fv1hsiQKCAQAe7WRR4e7jYVzlbX7zV9Oqq0y5QwpxJ/mB7viNNiphn7Xmf5uhDU0dPjgK0HHgzdDNVpFe5DVLg4KbaDpg+dRU+xfSsNhG5kpgUGzMH67eIbJ7Kc64tX/MDkZ74nkTK1lPIjrer3TlV2jfjDmWR1JTPR51hzP9ziwx8tEjhM7woeqJuIoqUvkvHL+xV3WdIgFSFUkGVAtNpp/FauTN4gWktRupbAN3UH2LLUP6ccwnK0aD+Y9u8T0F3av33qDLvL1umIlgeI89pMkOXmYMwmHoeY0axpcwszECCkqwB7SmxEyoXv+Qq9ZZ3ntkKAYKpvmkKWSQUtoFWYgVBS727mMRAoIBABLdwusU/bPwuPEutObiWjwRiaHTbb6UbUGVQGe70vO5EjUxxorC9s2JUe9i+w9EakleyfFHIZLheHxoVp26yio/7QYIX6q5cYM/4uTH+qwQts9i6wSISkdsQYovguNsnEk3huVy+Dy8bSaoBvYUowTkkOF2Uq4FJRskBLz+ckbh8dcuqcaoUdA+Mk+NixqhE1bIYIssTPItZ5hnGJtyMGD/UkIJnF0ximk4r+8w/W2oDypHpvPZPg1E/1KgZE/Az7166NDpSL6haX3O6ECDPi+Uo/mTuBJ7TpgXm9WQ7WuTo3H8Y2LhFYBOhdmGPKuNeDxyjIW7R0rvDxp4MtzB6rECggEAJIl7/qp1lxUQPQJRTsEYBkOtdRw0IGG1Rcj0emhHaBN05c9opCy+Osb7mVeU5ZiULe5kD02phL+36pEumprz7QzN46Y5pZc8AQ2W/QkeL4Wo9U9QzczvQQzc1EqrBkzvQTZtBhn4DRzz0IuTn1beVyHtBZeNpBFgMQFv9VYQuUNwFoTOkkQrBRnYbXH6KEnhF3c/1Hzi4KHVdHdfZ3LH7KFQJ34xio0q2tWQSQYeybmwOXdd9sxpz/Y4KBS9fqm7UrwnPK8yuOc05HLEaws+1iam5YyJprlQo3mGKe0wRztwn44HDeQr70LlFm0lzigVAv0hSiWO1Q5hJL7nDu8m/Q==",
"DomainsCertificate": {
"Certs": [
{
"Domains": {
"Main": "local1.com",
"SANs": [
"test1.local1.com",
"test2.local1.com"
]
},
"Certificate": {
"Domain": "local1.com",
"CertURL": "http://127.0.0.1:4000/acme/cert/ffc4f3f14def9ee6ec6a0522b5c0baa3379d",
"CertStableURL": "",
"PrivateKey": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS1FJQkFBS0NBZ0VBdVNoTTR4enF6cE5YcFNaNnAvZnQrRmt5VmgyK1BSZXJUelV0OERRSng2UkVjQS9FCnN2RnNIVmNOSkZMS2twYTNlOEd3SUZBakJQNnJPK3hoR1JjWlJrdENON1gyOW5LZFhGbHZkYzJxd0hyTFF5WWkKTTB3ODhTck41VERiNi96TWU2dTB0dERiYWtDbDd6ZEJKUXJ6a1h5ZU1MeVkzTUs3aVkrMHpwL2JqMVhvbk5DdQpaQStkZ3hsMVNrV01DVUYvQk9HNWFyT1hwb0x4S0dQWGdzV3hOTVNLVmJKSHczL3ZqNTViZU92Um5lT3BNWlhvCmMwOWpZT3VBakNka1Z5czBSWHJLNWNCRDRMbVRXdnN4MFdTK2VMVHlGTTdQTHVZM3lEWkNNWEhjVmlqRHhnbFMKYjB1ZVRQcGFUWEQwYkxqZ0RNOUVEdE15ZEJzMUNPWlpPWG9ickN5Q2I1eWxTOFdVd1NzVXM1UldxZnlVbnAvcgpSNGx2c2RZOWRVZjRPdkNMVnJvWWk5NWFGc1Zxa0xLOExuL0Eyc3kxYWlDTnR4RmpKOXRXbWU0V0NhdzRoU0YvCkR4NWVNNWNYR2JSYXduVlZJQlZXeHhzNTBPMFJlUWRvbXBQZEFNS1RDWk9SRmxYaDdOWTdxQVdWRGtpdzhyam8Kekd3Ni9XdjlOR3hTNTliKzc0YVAxcjBxOTZ2RS9Rdi8zTCtjbjhiN0lBLytPYmFKdzhIT3RGbXc4RjBxQkN3MAprYWVVSloxb1JueGFYQUo4RHhHREpFOVdNUzh0QmJtVm16YkxoRkMzeDdVc0xGeTBrSzh1SFBFT3dQb2NKNUFUCkE1UHBvclNEMmFleHA0Z3VqYVp5c1JManpmY0dnaTdva0JFNlZVNWVqRE1iYS9lNERQNEJQUVg5VmtVQ0F3RUEKQVFLQ0FnQmZjMWdYcUp1ZmZMT3REcVlpbXh4UmIrSVVKT2NpWldaSndmZDVvY244NGtEcHFDZFZ2RUZvNnF4NgpzamQ5MURhb2xOUHdCSC9aSGxRMTR3aTNQNEluQzdzS0wwTXVEeTN5SXFUa0RPOWVwSzdPWWdVMWZyTFgvS0lCCjZlc2x2Ny9HYldFTzhhSjdKdktqM0U4NEFtcEg4UDgzenJIYTlJUnJTT3NEcmNNcEpEZHpSOXp1OW1IVDZMYmYKWC9UdC9KYTNkSW42YUxUZ0FSYkRKSjAvN0J3TFFOcXpqT0dUOWdzUWRhbGdMK2x5eEo4L1ViRndhRmVwNmgzdApvbzBHcHQ0ZWgwdTdueDhlNVd3Q2RnWmJsTnpnS3grMC9Gd3dLRHhQZVRFc2ZpOEJONmlkR2NjbVdzd3prTWdtCnJmbERaeGNSWTNRSlZIVHBCL0dTTWZXRFBPQ3dRdGltQk1WN3kxM2hPMTdPWXpSNDBMZnpUalJBbmtna2V2eWYKcFowb3dLR3o4QS9haHhRWWJmYVQ5VEhXV0wrYUpYeUhFanBKckp5aTg3UExVbzhsOFVydU56MDRWNXpLOFJPbgo2cG9EWmVtbm1EYWRlU09pK3hZRWlGT1NwSXNWbzlpcm9jUGFKN2YzYWpiNUU4RHpuN1o1MmhzL2R6akpLcFZJCm5mVDFkUU9SZEowSXRUNlRlQ2RTL0dpS25IS1RtNjR2T21IbmlJcm8rUGRhUmFjV0IrTUJ0VytRd0cyUStyRGkKc3g4NlpQbHRpTVpLMDZ5TVlyVHZUdGk2aFVGaUY5cWh4b3RGazdNQkNrZlIwYUVhaUREQUpKNm1jb1lpRUQ2QgpBVGJhVmpVaGNaUiswYkRST25PN0ozRk5rZmx3K2dMaVhvcXFRRW9pU2ZWb2h5SWY3UUtDQVFFQThjYTM5K0g4CjN3L2Qrcm0yUGNhM0RMQnBYaWU4Z3ZYcGpjazVYSkpvSGVmbnJjZWQrcFpXaTZEYncwYld0MEdtYkxmVjJNSlAKV2I1aTZzSXhmdkN3YlFqbHY0UnExMVA5ZEswT3poMnVpKzZ6cXVBMG5YTVcrN0lJS0cvdDhmS2NJZGRRNnRGcwpFclFVTFBDak56ODA2cHBiSlhPRmVvMW1BK293TGhHNlA3dDhCdlZHSk1NaTNxejNlSUNuVVE2eDNFY01ITXNuClhrM21DUzI1WUZaNk96cytFK254cGVraTAzZmQwblp3UE1jdElHZys1c3hleE9zREsrTHlvb2FqQnc5N0oyUzIKcUNNWXFtT0tLcmxEQ3Y1WmQ4dlZLN3hXVmpKRVhGTTNMZ2pieHBRcCtuVXNVVWxwS01LOVlGS0lRREl0RU9aMApWcWExTXJaOElzN1l5d0tDQVFFQXhBemZIa2pIVGlvTHdZbG5EcEk0MWlOTDh5Y0ZBallrTC94dWhPU2tlVkE4CjdRWDZPZUpDekR3Z0FUYXVqOWR6Y0wwby9yTndWV0xWcnQ3OXk3YnJvVDdFREZKWVNTY25GRXNMTlVWSXRncGkKckNSUXJTL1F2TkVGTmE5K0pRc1dmYkdBNHdIUTFaSjI4MFp1cWMvNlEyUi9kZVh3cUZBQVBHN2NIcEhHWlR6ZQoyRmFRUHFLRkV4WlEyZkpvRys0SVBRNHVQVERybXlGMmVUWXk2T3BaaDBHbWJRYlVTa1dFWDlQRmF1cHJIWVdGCk8wK25DaVVPNVRaMFZoaGR2dUNKMWdPclZHYzhBUlJtUVZ1aUNEWTZCaGlvVTU0ZmZsSXlDTXZ5a3MwcmRXZ3MKWVJ2TmN4TXNlRGJpTDRKSURkMHhiN1d4VUdmVjRVNHZPMks5Vms1N0x3S0NBUUVBMkd1eE1jcXd1RnRUc0tPYwpaaUFDcXZFZTRKRmhSVGtySHlnSW1MelZSaS9ZU3M1c3MycnZmWDA0T3N5bVZ0UUZUVHdoeUMzbktjWXFkVW52ClZGblBFMHJyblV2Qzk0elBUQ205SHZPaTBzK1JORndOdlFMUWgrME5NR1ZBOFZyaU44aXRQZ1RJWU5XaFdianQKNFA1TE45V0QwVHBmT1J4cFBRZmNxT0JsZjdjcmhtNzNvdUNwemZtMmE3OStCaWpKUFF5NzR1cFhDeXRmeHNlUApNSlU0Uk56NjdJaDFMclpKM2xGbDFvYitZT2xKazhDOHpZd1RLT0hWck9zeGxobyt4SXN2Q2t3MDFMelZ6Mi9hCnRmT3Y5NTlHSnQzbXE0ZWpJUFZPQy9iUlpmdTMvMEdSY2dpQTZ5SnpaM0VxWTVaOU1EbTU3VzdjcE5RRlRxZmEKNXEyUmtRS0NBUUErNGhZSzQ3TXg2aUNkTWxKaEJSdS82OUJucktOWm96NFdPalRFNFlXejk3MmpGU0Mrd2tsRQpzeUJjNDBvNGp4WFRHb2wwc04rZU03WndnY3dNTko3OXVHRXZ4cFhVMlA4YTdqc3BHaEVKZXVsTlo5U015R0orCnZkaWE4TEJZZDJiK2FCbjhOay9pd1Rq
d0xTNC92NXI1Vk5uaFdpRElDK2tYZVVPWGRwQ1pWbDN3TEV2V0cxRHQKMzJHTmxzZzM5VENsVE5BZUJudjc1VTdYOEQrQ0gvRVpoa0E0aGxFL2hXN0JRZTczclRzd1creHhLc3BjWWFpVwpjdEg3NzVMYUw3Rm1lUVRTYk01OVZpcTZXZ2J0OVY3Rko5R09DSkQzZHF2ZjBITDlEVndjSzQ3WWt3OWlFc3RYCnY5cnEvREhhYUpGNzBGNlFlTTNNbDhSa212WTZJYkEzQW9JQkFRRGt6RmZLeG9HQ3dWUDlua3k4NmFQSjFvd2kKc2FDZEx6RjRWTENRZzkrUXJITzEyY0p5MFFQUnJ2cUQyMGp1cDFlOWJhWVZzbkdYc1FZTFg2NVR6UzJSSCtlSAp6S0NPTTdnMVE3djMxNWpjMDMvN1lQck4rb3RrV0VBOUkyaDZjUE1vY3c0aERTNk02OFlxQVlKTS9RclVhenZhCnhBTFJaZEVkQW1xWDA4VHhuY1hRUEVxYkk0ZnlSZ2pVM1BYR3RRaFFFbERpR2kwbThjQTJNTXdsR1RmbTdOSXgKaENjZ2ZkL296TEp2VUhiMkxLRi82cXEySmJVRHlOMkVoK0xSZUJjdnp6Y1grZE5MdGQxY0Uvcm1SM2hMbWxmNgo3KzRpTVMxK0t1eWV3VlJVUEE1c1F1aUYyVUVoeEs1MUpZK1FpOG9HbERKdGRrOXB3QlZNN1F0WW9KVEwKLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K",
"Certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZvakNDQklxZ0F3SUJBZ0lUQVAvRTgvRk43NTdtN0dvRklyWEF1cU0zblRBTkJna3Foa2lHOXcwQkFRc0YKQURBZk1SMHdHd1lEVlFRRERCUm9NbkJ3ZVNCb01tTnJaWElnWm1GclpTQkRRVEFlRncweE9EQXhNVFV3TnpJNQpNREJhRncweE9EQTBNVFV3TnpJNU1EQmFNRVF4RXpBUkJnTlZCQU1UQ214dlkyRnNNUzVqYjIweExUQXJCZ05WCkJBVVRKR1ptWXpSbU0yWXhOR1JsWmpsbFpUWmxZelpoTURVeU1tSTFZekJpWVdFek16YzVaRENDQWlJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnSVBBRENDQWdvQ2dnSUJBTGtvVE9NYzZzNlRWNlVtZXFmMzdmaFpNbFlkdmowWApxMDgxTGZBMENjZWtSSEFQeExMeGJCMVhEU1JTeXBLV3QzdkJzQ0JRSXdUK3F6dnNZUmtYR1VaTFFqZTE5dlp5Cm5WeFpiM1hOcXNCNnkwTW1Jak5NUFBFcXplVXcyK3Y4ekh1cnRMYlEyMnBBcGU4M1FTVUs4NUY4bmpDOG1OekMKdTRtUHRNNmYyNDlWNkp6UXJtUVBuWU1aZFVwRmpBbEJmd1RodVdxemw2YUM4U2hqMTRMRnNUVEVpbFd5UjhOLwo3NCtlVzNqcjBaM2pxVEdWNkhOUFkyRHJnSXduWkZjck5FVjZ5dVhBUStDNWsxcjdNZEZrdm5pMDhoVE96eTdtCk44ZzJRakZ4M0ZZb3c4WUpVbTlMbmt6NldrMXc5R3k0NEF6UFJBN1RNblFiTlFqbVdUbDZHNndzZ20rY3BVdkYKbE1FckZMT1VWcW44bEo2ZjYwZUpiN0hXUFhWSCtEcndpMWE2R0l2ZVdoYkZhcEN5dkM1L3dOck10V29namJjUgpZeWZiVnBudUZnbXNPSVVoZnc4ZVhqT1hGeG0wV3NKMVZTQVZWc2NiT2REdEVYa0hhSnFUM1FEQ2t3bVRrUlpWCjRleldPNmdGbFE1SXNQSzQ2TXhzT3Yxci9UUnNVdWZXL3UrR2o5YTlLdmVyeFAwTC85eS9uSi9HK3lBUC9qbTIKaWNQQnpyUlpzUEJkS2dRc05KR25sQ1dkYUVaOFdsd0NmQThSZ3lSUFZqRXZMUVc1bFpzMnk0UlF0OGUxTEN4Ywp0SkN2TGh6eERzRDZIQ2VRRXdPVDZhSzBnOW1uc2FlSUxvMm1jckVTNDgzM0JvSXU2SkFST2xWT1hvd3pHMnYzCnVBeitBVDBGL1ZaRkFnTUJBQUdqZ2dHd01JSUJyREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdIUVlEVlIwbEJCWXcKRkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SFFZRFZSME9CQllFRk5LZQpBVUZYc2Z2N2lML0lYVVBXdzY2ZU5jQnhNQjhHQTFVZEl3UVlNQmFBRlB0NFR4TDVZQldETEo4WGZ6UVpzeTQyCjZrR0pNR1lHQ0NzR0FRVUZCd0VCQkZvd1dEQWlCZ2dyQmdFRkJRY3dBWVlXYUhSMGNEb3ZMekV5Tnk0d0xqQXUKTVRvME1EQXlMekF5QmdnckJnRUZCUWN3QW9ZbWFIUjBjRG92THpFeU55NHdMakF1TVRvME1EQXdMMkZqYldVdgphWE56ZFdWeUxXTmxjblF3T1FZRFZSMFJCREl3TUlJS2JHOWpZV3d4TG1OdmJZSVFkR1Z6ZERFdWJHOWpZV3d4CkxtTnZiWUlRZEdWemRESXViRzlqWVd3eExtTnZiVEFuQmdOVkhSOEVJREFlTUJ5Z0dxQVloaFpvZEhSd09pOHYKWlhoaGJYQnNaUzVqYjIwdlkzSnNNR0VHQTFVZElBUmFNRmd3Q0FZR1o0RU1BUUlCTUV3R0F5b0RCREJGTUNJRwpDQ3NHQVFVRkJ3SUJGaFpvZEhSd09pOHZaWGhoYlhCc1pTNWpiMjB2WTNCek1COEdDQ3NHQVFVRkJ3SUNNQk1NCkVVUnZJRmRvWVhRZ1ZHaHZkU0JYYVd4ME1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3A0Q2FxZlR4THNQTzQKS2JueDJZdEc4bTN3MC9keTVVR1VRNjZHbGxPVTk0L2I0MmNhbTRuNUZrTWlpZ01IaUx4c2JZVXh0cDZKQ3R5cQpLKzFNcDFWWEtSTTVKbFBTNWRIaWhxdHk1U3BrTUhjampwQSs3U2YyVWtoNmpKRWYxTUVJY2JnWnpJRk5IT0hYClVUUUppVFhKcno3blJDZnlQWFZtbWErUGtIRlU4R0VEVzJGOVptU1kzVFBiQWhiWkV2UkZubjUrR1lxbkZuancKWWw3Y0I2MXYwRzVpOGQwbnVvbTB4a2hiNTU3Y3BiZHhLblhsaFU4N2RZSTR5SUdPdUFGUWpYcXFXN2NIZCtXUQpWSDB2dFA3cEgrRmt2YnY4WkkxMHMrNU5ZcCtzZjFQZGQxekJsRmdNSGF3dnFFYUg3SU9sejdkajlCdmtVc0dpClhxQWVqQnFPCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVpakNDQTNLZ0F3SUJBZ0lDRWswd0RRWUpLb1pJaHZjTkFRRUxCUUF3S3pFcE1DY0dBMVVFQXd3Z1kyRmoKYTJ4cGJtY2dZM0o1Y0hSdlozSmhjR2hsY2lCbVlXdGxJRkpQVDFRd0hoY05NVFV4TURJeE1qQXhNVFV5V2hjTgpNakF4TURFNU1qQXhNVFV5V2pBZk1SMHdHd1lEVlFRREV4Um9ZWEJ3ZVNCb1lXTnJaWElnWm1GclpTQkRRVENDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUlLUjNtYUJjVVNzbmNYWXpRVDEzRDUKTnIrWjNtTHhNTWgzVFVkdDZzQUNtcWJKMGJ0UmxnWGZNdE5MTTJPVTFJNmEzSnUrdElaU2RuMnYyMUpCd3Z4VQp6cFpRNHp5MmNpbUlpTVFEWkNRSEp3ekM5R1puOEhhVzA5MWl6OUgwR28zQTdXRFh3WU5tc2RMTlJpMDBvMTRVCmpvYVZxYVBzWXJaV3ZSS2FJUnFhVTBoSG1TMEFXd1FTdk4vOTNpTUlYdXlpd3l3bWt3S2JXbm54Q1EvZ3NjdEsKRlV0Y05yd0V4OVdnajZLbGh3RFR5STFRV1NCYnhWWU55VWdQRnpLeHJTbXdNTzB5TmZmN2hvK1FUOXg1K1kvNwpYRTU5UzRNYzRaWHhjWEtldy9nU2xOOVU1bXZUK0QyQmhEdGtDdXBkZnNaTkNRV3AyN0ErYi9EbXJGSTlOcXNDCkF3RUFBYU9DQWNJd2dnRytNQklHQTFVZEV3RUI
vd1FJTUFZQkFmOENBUUF3UXdZRFZSMGVCRHd3T3FFNE1BYUMKQkM1dGFXd3dDb2NJQUFBQUFBQUFBQUF3SW9jZ0FBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQQpBQUFBQUFBd0RnWURWUjBQQVFIL0JBUURBZ0dHTUg4R0NDc0dBUVVGQndFQkJITXdjVEF5QmdnckJnRUZCUWN3CkFZWW1hSFIwY0RvdkwybHpjbWN1ZEhKMWMzUnBaQzV2WTNOd0xtbGtaVzUwY25WemRDNWpiMjB3T3dZSUt3WUIKQlFVSE1BS0dMMmgwZEhBNkx5OWhjSEJ6TG1sa1pXNTBjblZ6ZEM1amIyMHZjbTl2ZEhNdlpITjBjbTl2ZEdOaAplRE11Y0Rkak1COEdBMVVkSXdRWU1CYUFGT21rUCs2ZXBlYnkxZGQ1WUR5VHBpNGtqcGVxTUZRR0ExVWRJQVJOCk1Fc3dDQVlHWjRFTUFRSUJNRDhHQ3lzR0FRUUJndDhUQVFFQk1EQXdMZ1lJS3dZQkJRVUhBZ0VXSW1oMGRIQTYKTHk5amNITXVjbTl2ZEMxNE1TNXNaWFJ6Wlc1amNubHdkQzV2Y21jd1BBWURWUjBmQkRVd016QXhvQytnTFlZcgphSFIwY0RvdkwyTnliQzVwWkdWdWRISjFjM1F1WTI5dEwwUlRWRkpQVDFSRFFWZ3pRMUpNTG1OeWJEQWRCZ05WCkhRNEVGZ1FVKzNoUEV2bGdGWU1zbnhkL05CbXpMamJxUVlrd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBMFkKQWVMWE9rbHg0aGhDaWtVVWwrQmRuRmZuMWcwVzVBaVFMVk5JT0w2UG5xWHUwd2puaE55aHFkd25maFlNbm95NAppZFJoNGxCNnB6OEdmOXBubExkL0RuV1NWM2dTKy9JL21BbDFkQ2tLYnk2SDJWNzkwZTZJSG1JSzJLWW0zam0rClUrK0ZJZEdwQmRzUVRTZG1pWC9yQXl1eE1ETTBhZE1rTkJ3VGZRbVpRQ3o2bkdIdzFRY1NQWk12WnBzQzhTa3YKZWt6eHNqRjFvdE9yTVVQTlBRdnRUV3JWeDhHbFIycWZ4LzR4YlFhMXYyZnJOdkZCQ21PNTlnb3oram5XdmZUdApqMk5qd0RaN3ZsTUJzUG0xNmRiS1lDODQwdXZSb1pqeHFzZGMzQ2hDWmpxaW1GcWxORy94b1BBOCtkVGljWnpDClhFOWlqUEljdlc2eTFhYTNiR3c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
}
}
]
},
"ChallengeCerts": {}
}


@@ -1,6 +1,7 @@
package acme
import (
"crypto/tls"
"encoding/base64"
"net/http"
"net/http/httptest"
@@ -9,6 +10,8 @@ import (
"testing"
"time"
"github.com/containous/traefik/tls/generate"
"github.com/stretchr/testify/assert"
"github.com/xenolf/lego/acme"
)
@@ -68,8 +71,8 @@ func TestDomainsSetAppend(t *testing.T) {
}
func TestCertificatesRenew(t *testing.T) {
foo1Cert, foo1Key, _ := generateKeyPair("foo1.com", time.Now())
foo2Cert, foo2Key, _ := generateKeyPair("foo2.com", time.Now())
foo1Cert, foo1Key, _ := generate.KeyPair("foo1.com", time.Now())
foo2Cert, foo2Key, _ := generate.KeyPair("foo2.com", time.Now())
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
@@ -99,7 +102,7 @@ func TestCertificatesRenew(t *testing.T) {
},
},
}
foo1Cert, foo1Key, _ = generateKeyPair("foo1.com", time.Now())
foo1Cert, foo1Key, _ = generate.KeyPair("foo1.com", time.Now())
newCertificate := &Certificate{
Domain: "foo1.com",
CertURL: "url",
@@ -126,10 +129,10 @@ func TestCertificatesRenew(t *testing.T) {
func TestRemoveDuplicates(t *testing.T) {
now := time.Now()
fooCert, fooKey, _ := generateKeyPair("foo.com", now)
foo24Cert, foo24Key, _ := generateKeyPair("foo.com", now.Add(24*time.Hour))
foo48Cert, foo48Key, _ := generateKeyPair("foo.com", now.Add(48*time.Hour))
barCert, barKey, _ := generateKeyPair("bar.com", now)
fooCert, fooKey, _ := generate.KeyPair("foo.com", now)
foo24Cert, foo24Key, _ := generate.KeyPair("foo.com", now.Add(24*time.Hour))
foo48Cert, foo48Key, _ := generate.KeyPair("foo.com", now.Add(48*time.Hour))
barCert, barKey, _ := generate.KeyPair("bar.com", now)
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
@@ -222,14 +225,14 @@ func TestNoPreCheckOverride(t *testing.T) {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS != nil {
t.Errorf("Unexpected change to acme.PreCheckDNS when leaving DNS verification as is.")
t.Error("Unexpected change to acme.PreCheckDNS when leaving DNS verification as is.")
}
}
func TestSillyPreCheckOverride(t *testing.T) {
err := dnsOverrideDelay(-5)
if err == nil {
t.Errorf("Missing expected error in dnsOverrideDelay!")
t.Error("Missing expected error in dnsOverrideDelay!")
}
}
@@ -240,7 +243,7 @@ func TestPreCheckOverride(t *testing.T) {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS == nil {
t.Errorf("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
t.Error("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}
@@ -264,16 +267,58 @@ cijFkALeQp/qyeXdFld2v9gUN3eCgljgcl0QweRoIc=---`)
}`))
}))
defer ts.Close()
a := ACME{DNSProvider: "manual", DelayDontCheckDNS: 10, CAServer: ts.URL}
a := ACME{DNSChallenge: &DNSChallenge{Provider: "manual", DelayBeforeCheck: 10}, CAServer: ts.URL}
client, err := a.buildACMEClient(account)
if err != nil {
t.Errorf("Error in buildACMEClient: %v", err)
}
if client == nil {
t.Errorf("No client from buildACMEClient!")
t.Error("No client from buildACMEClient!")
}
if acme.PreCheckDNS == nil {
t.Errorf("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
t.Error("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}
func TestAcme_getUncheckedCertificates(t *testing.T) {
mm := make(map[string]*tls.Certificate)
mm["*.containo.us"] = &tls.Certificate{}
mm["traefik.acme.io"] = &tls.Certificate{}
a := ACME{TLSConfig: &tls.Config{NameToCertificate: mm}}
domains := []string{"traefik.containo.us", "trae.containo.us"}
uncheckedDomains := a.getUncheckedDomains(domains, nil)
assert.Empty(t, uncheckedDomains)
domains = []string{"traefik.acme.io", "trae.acme.io"}
uncheckedDomains = a.getUncheckedDomains(domains, nil)
assert.Len(t, uncheckedDomains, 1)
domainsCertificates := DomainsCertificates{Certs: []*DomainsCertificate{
{
tlsCert: &tls.Certificate{},
Domains: Domain{
Main: "*.acme.wtf",
SANs: []string{"trae.acme.io"},
},
},
}}
account := Account{DomainsCertificate: domainsCertificates}
uncheckedDomains = a.getUncheckedDomains(domains, &account)
assert.Empty(t, uncheckedDomains)
}
func TestAcme_getProvidedCertificate(t *testing.T) {
mm := make(map[string]*tls.Certificate)
mm["*.containo.us"] = &tls.Certificate{}
mm["traefik.acme.io"] = &tls.Certificate{}
a := ACME{TLSConfig: &tls.Config{NameToCertificate: mm}}
domain := "traefik.containo.us"
certificate := a.getProvidedCertificate(domain)
assert.NotNil(t, certificate)
domain = "trae.acme.io"
certificate = a.getProvidedCertificate(domain)
assert.Nil(t, certificate)
}


@@ -1,97 +0,0 @@
package acme
import (
"crypto/tls"
"fmt"
"strings"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/xenolf/lego/acme"
)
var _ acme.ChallengeProviderTimeout = (*challengeProvider)(nil)
type challengeProvider struct {
store cluster.Store
lock sync.RWMutex
}
func (c *challengeProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
log.Debugf("Challenge GetCertificate %s", domain)
if !strings.HasSuffix(domain, ".acme.invalid") {
return nil, false
}
c.lock.RLock()
defer c.lock.RUnlock()
account := c.store.Get().(*Account)
if account.ChallengeCerts == nil {
return nil, false
}
account.Init()
var result *tls.Certificate
operation := func() error {
for _, cert := range account.ChallengeCerts {
for _, dns := range cert.certificate.Leaf.DNSNames {
if domain == dns {
result = cert.certificate
return nil
}
}
}
return fmt.Errorf("Cannot find challenge cert for domain %s", domain)
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting cert: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting cert: %v", err)
return nil, false
}
return result, true
}
func (c *challengeProvider) Present(domain, token, keyAuth string) error {
log.Debugf("Challenge Present %s", domain)
cert, _, err := TLSSNI01ChallengeCert(keyAuth)
if err != nil {
return err
}
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
if account.ChallengeCerts == nil {
account.ChallengeCerts = map[string]*ChallengeCert{}
}
account.ChallengeCerts[domain] = &cert
return transaction.Commit(account)
}
func (c *challengeProvider) CleanUp(domain, token, keyAuth string) error {
log.Debugf("Challenge CleanUp %s", domain)
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
delete(account.ChallengeCerts, domain)
return transaction.Commit(account)
}
func (c *challengeProvider) Timeout() (timeout, interval time.Duration) {
return 60 * time.Second, 5 * time.Second
}


@@ -0,0 +1,92 @@
package acme
import (
"fmt"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/xenolf/lego/acme"
)
var _ acme.ChallengeProviderTimeout = (*challengeHTTPProvider)(nil)
type challengeHTTPProvider struct {
store cluster.Store
lock sync.RWMutex
}
func (c *challengeHTTPProvider) getTokenValue(token, domain string) []byte {
log.Debugf("Looking for an existing ACME challenge for token %v...", token)
c.lock.RLock()
defer c.lock.RUnlock()
account := c.store.Get().(*Account)
if account.HTTPChallenge == nil {
return []byte{}
}
var result []byte
operation := func() error {
var ok bool
if result, ok = account.HTTPChallenge[token][domain]; !ok {
return fmt.Errorf("cannot find challenge for token %v", token)
}
return nil
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting challenge for token retrying in %s", time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting challenge for token: %v", err)
return []byte{}
}
return result
}
func (c *challengeHTTPProvider) Present(domain, token, keyAuth string) error {
log.Debugf("Challenge Present %s", domain)
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
if account.HTTPChallenge == nil {
account.HTTPChallenge = map[string]map[string][]byte{}
}
if _, ok := account.HTTPChallenge[token]; !ok {
account.HTTPChallenge[token] = map[string][]byte{}
}
account.HTTPChallenge[token][domain] = []byte(keyAuth)
return transaction.Commit(account)
}
func (c *challengeHTTPProvider) CleanUp(domain, token, keyAuth string) error {
log.Debugf("Challenge CleanUp %s", domain)
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
if _, ok := account.HTTPChallenge[token]; ok {
if _, domainOk := account.HTTPChallenge[token][domain]; domainOk {
delete(account.HTTPChallenge[token], domain)
}
if len(account.HTTPChallenge[token]) == 0 {
delete(account.HTTPChallenge, token)
}
}
return transaction.Commit(account)
}
func (c *challengeHTTPProvider) Timeout() (timeout, interval time.Duration) {
return 60 * time.Second, 5 * time.Second
}
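
For context, the token values stored by Present are what an HTTP-01 responder must serve under /.well-known/acme-challenge/<token>. A hedged sketch of such a responder follows; the lookup callback stands in for getTokenValue, and the wiring is illustrative rather than the actual Traefik code:

package main

import (
	"log"
	"net/http"
	"strings"
)

// challengeHandler serves the key authorization stored for a token; lookup is
// assumed to behave like challengeHTTPProvider.getTokenValue.
func challengeHandler(lookup func(token, domain string) []byte) http.HandlerFunc {
	return func(rw http.ResponseWriter, req *http.Request) {
		token := strings.TrimPrefix(req.URL.Path, "/.well-known/acme-challenge/")
		// req.Host may carry a port; a real responder would strip it.
		if value := lookup(token, req.Host); len(value) > 0 {
			rw.Write(value)
			return
		}
		http.NotFound(rw, req)
	}
}

func main() {
	// In-memory stand-in for the cluster store used above.
	store := map[string][]byte{"someToken": []byte("someToken.keyAuth")}
	lookup := func(token, domain string) []byte { return store[token] }
	http.HandleFunc("/.well-known/acme-challenge/", challengeHandler(lookup))
	// HTTP-01 requires port 80 in practice; :8080 is used here for the sketch.
	log.Fatal(http.ListenAndServe(":8080", nil))
}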


@@ -0,0 +1,150 @@
package acme
import (
"crypto"
"crypto/ecdsa"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"crypto/tls"
"crypto/x509"
"encoding/hex"
"encoding/pem"
"fmt"
"strings"
"sync"
"time"
"github.com/cenk/backoff"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/tls/generate"
"github.com/xenolf/lego/acme"
)
var _ acme.ChallengeProviderTimeout = (*challengeTLSProvider)(nil)
type challengeTLSProvider struct {
store cluster.Store
lock sync.RWMutex
}
func (c *challengeTLSProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
log.Debugf("Looking for an existing ACME challenge for %s...", domain)
if !strings.HasSuffix(domain, ".acme.invalid") {
return nil, false
}
c.lock.RLock()
defer c.lock.RUnlock()
account := c.store.Get().(*Account)
if account.ChallengeCerts == nil {
return nil, false
}
account.Init()
var result *tls.Certificate
operation := func() error {
for _, cert := range account.ChallengeCerts {
for _, dns := range cert.certificate.Leaf.DNSNames {
if domain == dns {
result = cert.certificate
return nil
}
}
}
return fmt.Errorf("cannot find challenge cert for domain %s", domain)
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting cert: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting cert: %v", err)
return nil, false
}
return result, true
}
func (c *challengeTLSProvider) Present(domain, token, keyAuth string) error {
log.Debugf("Challenge Present %s", domain)
cert, _, err := tlsSNI01ChallengeCert(keyAuth)
if err != nil {
return err
}
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
if account.ChallengeCerts == nil {
account.ChallengeCerts = map[string]*ChallengeCert{}
}
account.ChallengeCerts[domain] = &cert
return transaction.Commit(account)
}
func (c *challengeTLSProvider) CleanUp(domain, token, keyAuth string) error {
log.Debugf("Challenge CleanUp %s", domain)
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
delete(account.ChallengeCerts, domain)
return transaction.Commit(account)
}
func (c *challengeTLSProvider) Timeout() (timeout, interval time.Duration) {
return 60 * time.Second, 5 * time.Second
}
// tlsSNI01ChallengeCert returns a certificate and target domain for the `tls-sni-01` challenge
func tlsSNI01ChallengeCert(keyAuth string) (ChallengeCert, string, error) {
// generate a new RSA key for the certificates
var tempPrivKey crypto.PrivateKey
tempPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return ChallengeCert{}, "", err
}
rsaPrivKey := tempPrivKey.(*rsa.PrivateKey)
rsaPrivPEM := pemEncode(rsaPrivKey)
zBytes := sha256.Sum256([]byte(keyAuth))
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.acme.invalid", z[:32], z[32:])
tempCertPEM, err := generate.PemCert(rsaPrivKey, domain, time.Time{})
if err != nil {
return ChallengeCert{}, "", err
}
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
if err != nil {
return ChallengeCert{}, "", err
}
return ChallengeCert{Certificate: tempCertPEM, PrivateKey: rsaPrivPEM, certificate: &certificate}, domain, nil
}
func pemEncode(data interface{}) []byte {
var pemBlock *pem.Block
switch key := data.(type) {
case *ecdsa.PrivateKey:
keyBytes, _ := x509.MarshalECPrivateKey(key)
pemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}
case *rsa.PrivateKey:
pemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
case *x509.CertificateRequest:
pemBlock = &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: key.Raw}
case []byte:
pemBlock = &pem.Block{Type: "CERTIFICATE", Bytes: []byte(data.([]byte))}
}
return pem.EncodeToMemory(pemBlock)
}
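
The z[:32].z[32:].acme.invalid name built in tlsSNI01ChallengeCert is just the hex-encoded SHA-256 of the key authorization split into two labels. A standalone sketch of that derivation (the keyAuth value is illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	keyAuth := "token.accountThumbprint" // illustrative value only
	sum := sha256.Sum256([]byte(keyAuth))
	z := hex.EncodeToString(sum[:]) // 64 hex characters
	// Same layout as tlsSNI01ChallengeCert: two 32-character labels.
	fmt.Printf("%s.%s.acme.invalid\n", z[:32], z[32:])
}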


@@ -1,135 +0,0 @@
package acme
import (
"crypto"
"crypto/ecdsa"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/hex"
"encoding/pem"
"fmt"
"math/big"
"time"
)
func generateDefaultCertificate() (*tls.Certificate, error) {
randomBytes := make([]byte, 100)
_, err := rand.Read(randomBytes)
if err != nil {
return nil, err
}
zBytes := sha256.Sum256(randomBytes)
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.traefik.default", z[:32], z[32:])
certPEM, keyPEM, err := generateKeyPair(domain, time.Time{})
if err != nil {
return nil, err
}
certificate, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
return nil, err
}
return &certificate, nil
}
func generateKeyPair(domain string, expiration time.Time) ([]byte, []byte, error) {
rsaPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, nil, err
}
keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(rsaPrivKey)})
certPEM, err := generatePemCert(rsaPrivKey, domain, expiration)
if err != nil {
return nil, nil, err
}
return certPEM, keyPEM, nil
}
func generatePemCert(privKey *rsa.PrivateKey, domain string, expiration time.Time) ([]byte, error) {
derBytes, err := generateDerCert(privKey, expiration, domain)
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derBytes}), nil
}
func generateDerCert(privKey *rsa.PrivateKey, expiration time.Time, domain string) ([]byte, error) {
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
if err != nil {
return nil, err
}
if expiration.IsZero() {
expiration = time.Now().Add(365)
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: "TRAEFIK DEFAULT CERT",
},
NotBefore: time.Now(),
NotAfter: expiration,
KeyUsage: x509.KeyUsageKeyEncipherment,
BasicConstraintsValid: true,
DNSNames: []string{domain},
}
return x509.CreateCertificate(rand.Reader, &template, &template, &privKey.PublicKey, privKey)
}
// TLSSNI01ChallengeCert returns a certificate and target domain for the `tls-sni-01` challenge
func TLSSNI01ChallengeCert(keyAuth string) (ChallengeCert, string, error) {
// generate a new RSA key for the certificates
var tempPrivKey crypto.PrivateKey
tempPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return ChallengeCert{}, "", err
}
rsaPrivKey := tempPrivKey.(*rsa.PrivateKey)
rsaPrivPEM := pemEncode(rsaPrivKey)
zBytes := sha256.Sum256([]byte(keyAuth))
z := hex.EncodeToString(zBytes[:sha256.Size])
domain := fmt.Sprintf("%s.%s.acme.invalid", z[:32], z[32:])
tempCertPEM, err := generatePemCert(rsaPrivKey, domain, time.Time{})
if err != nil {
return ChallengeCert{}, "", err
}
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
if err != nil {
return ChallengeCert{}, "", err
}
return ChallengeCert{Certificate: tempCertPEM, PrivateKey: rsaPrivPEM, certificate: &certificate}, domain, nil
}
func pemEncode(data interface{}) []byte {
var pemBlock *pem.Block
switch key := data.(type) {
case *ecdsa.PrivateKey:
keyBytes, _ := x509.MarshalECPrivateKey(key)
pemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}
case *rsa.PrivateKey:
pemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
break
case *x509.CertificateRequest:
pemBlock = &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: key.Raw}
break
case []byte:
pemBlock = &pem.Block{Type: "CERTIFICATE", Bytes: []byte(data.([]byte))}
}
return pem.EncodeToMemory(pemBlock)
}

acme/localStore_test.go

@@ -0,0 +1,41 @@
package acme
import (
"io/ioutil"
"os"
"path/filepath"
"testing"
)
func TestLoad(t *testing.T) {
acmeFile := "./acme_example.json"
folder, prefix := filepath.Split(acmeFile)
tmpFile, err := ioutil.TempFile(folder, prefix)
defer os.Remove(tmpFile.Name())
if err != nil {
t.Error(err)
}
fileContent, err := ioutil.ReadFile(acmeFile)
if err != nil {
t.Error(err)
}
tmpFile.Write(fileContent)
localStore := NewLocalStore(tmpFile.Name())
obj, err := localStore.Load()
if err != nil {
t.Error(err)
}
account, ok := obj.(*Account)
if !ok {
t.Error("Object is not an ACME Account")
}
if len(account.DomainsCertificate.Certs) != 1 {
t.Errorf("Must found %d and found %d certificates in Account", 3, len(account.DomainsCertificate.Certs))
}
}

api/dashboard.go

@@ -0,0 +1,22 @@
package api
import (
"net/http"
"github.com/containous/mux"
"github.com/containous/traefik/autogen/genstatic"
"github.com/elazarl/go-bindata-assetfs"
)
// DashboardHandler exposes dashboard routes
type DashboardHandler struct{}
// AddRoutes adds dashboard routes to a router
func (g DashboardHandler) AddRoutes(router *mux.Router) {
// Expose dashboard
router.Methods(http.MethodGet).Path("/").HandlerFunc(func(response http.ResponseWriter, request *http.Request) {
http.Redirect(response, request, request.Header.Get("X-Forwarded-Prefix")+"/dashboard/", 302)
})
router.Methods(http.MethodGet).PathPrefix("/dashboard/").
Handler(http.StripPrefix("/dashboard/", http.FileServer(&assetfs.AssetFS{Asset: genstatic.Asset, AssetInfo: genstatic.AssetInfo, AssetDir: genstatic.AssetDir, Prefix: "static"})))
}

api/debug.go

@@ -0,0 +1,46 @@
package api
import (
"expvar"
"fmt"
"net/http"
"net/http/pprof"
"runtime"
"github.com/containous/mux"
)
func init() {
expvar.Publish("Goroutines", expvar.Func(goroutines))
}
func goroutines() interface{} {
return runtime.NumGoroutine()
}
// DebugHandler exposes debug routes
type DebugHandler struct{}
// AddRoutes adds debug routes to a router
func (g DebugHandler) AddRoutes(router *mux.Router) {
router.Methods(http.MethodGet).Path("/debug/vars").
HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
fmt.Fprint(w, "{\n")
first := true
expvar.Do(func(kv expvar.KeyValue) {
if !first {
fmt.Fprint(w, ",\n")
}
first = false
fmt.Fprintf(w, "%q: %s", kv.Key, kv.Value)
})
fmt.Fprint(w, "\n}\n")
})
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/cmdline").HandlerFunc(pprof.Cmdline)
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/profile").HandlerFunc(pprof.Profile)
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/symbol").HandlerFunc(pprof.Symbol)
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/trace").HandlerFunc(pprof.Trace)
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/").HandlerFunc(pprof.Index)
}
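
Everything published through expvar ends up in the JSON written by the /debug/vars handler above, alongside the Goroutines counter. A small sketch of publishing one more variable (the name and metric are examples, not part of this change):

package main

import (
	"expvar"
	"log"
	"net/http"
	"runtime"
)

func main() {
	// Published variables are picked up by any /debug/vars style handler,
	// including the one expvar itself registers on http.DefaultServeMux.
	expvar.Publish("HeapAlloc", expvar.Func(func() interface{} {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		return m.HeapAlloc
	}))
	log.Fatal(http.ListenAndServe(":8081", nil))
}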

api/handler.go

@@ -0,0 +1,250 @@
package api
import (
"net/http"
"github.com/containous/mux"
"github.com/containous/traefik/log"
"github.com/containous/traefik/middlewares"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/containous/traefik/version"
thoas_stats "github.com/thoas/stats"
"github.com/unrolled/render"
)
// Handler exposes API routes
type Handler struct {
EntryPoint string `description:"EntryPoint" export:"true"`
Dashboard bool `description:"Activate dashboard" export:"true"`
Debug bool `export:"true"`
CurrentConfigurations *safe.Safe
Statistics *types.Statistics `description:"Enable more detailed statistics" export:"true"`
Stats *thoas_stats.Stats `json:"-"`
StatsRecorder *middlewares.StatsRecorder `json:"-"`
}
var (
templatesRenderer = render.New(render.Options{
Directory: "nowhere",
})
)
// AddRoutes adds API routes to a router
func (p Handler) AddRoutes(router *mux.Router) {
if p.Debug {
DebugHandler{}.AddRoutes(router)
}
router.Methods(http.MethodGet).Path("/api").HandlerFunc(p.getConfigHandler)
router.Methods(http.MethodGet).Path("/api/providers").HandlerFunc(p.getConfigHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}").HandlerFunc(p.getProviderHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}/backends").HandlerFunc(p.getBackendsHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}/backends/{backend}").HandlerFunc(p.getBackendHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}/backends/{backend}/servers").HandlerFunc(p.getServersHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}/backends/{backend}/servers/{server}").HandlerFunc(p.getServerHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}/frontends").HandlerFunc(p.getFrontendsHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}/frontends/{frontend}").HandlerFunc(p.getFrontendHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}/frontends/{frontend}/routes").HandlerFunc(p.getRoutesHandler)
router.Methods(http.MethodGet).Path("/api/providers/{provider}/frontends/{frontend}/routes/{route}").HandlerFunc(p.getRouteHandler)
// health route
router.Methods(http.MethodGet).Path("/health").HandlerFunc(p.getHealthHandler)
version.Handler{}.AddRoutes(router)
if p.Dashboard {
DashboardHandler{}.AddRoutes(router)
}
}
func getProviderIDFromVars(vars map[string]string) string {
providerID := vars["provider"]
// TODO: Deprecated
if providerID == "rest" {
providerID = "web"
}
return providerID
}
func (p Handler) getConfigHandler(response http.ResponseWriter, request *http.Request) {
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
err := templatesRenderer.JSON(response, http.StatusOK, currentConfigurations)
if err != nil {
log.Error(err)
}
}
func (p Handler) getProviderHandler(response http.ResponseWriter, request *http.Request) {
providerID := getProviderIDFromVars(mux.Vars(request))
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, provider)
if err != nil {
log.Error(err)
}
} else {
http.NotFound(response, request)
}
}
func (p Handler) getBackendsHandler(response http.ResponseWriter, request *http.Request) {
providerID := getProviderIDFromVars(mux.Vars(request))
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, provider.Backends)
if err != nil {
log.Error(err)
}
} else {
http.NotFound(response, request)
}
}
func (p Handler) getBackendHandler(response http.ResponseWriter, request *http.Request) {
vars := mux.Vars(request)
providerID := getProviderIDFromVars(vars)
backendID := vars["backend"]
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
if backend, ok := provider.Backends[backendID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, backend)
if err != nil {
log.Error(err)
}
return
}
}
http.NotFound(response, request)
}
func (p Handler) getServersHandler(response http.ResponseWriter, request *http.Request) {
vars := mux.Vars(request)
providerID := getProviderIDFromVars(vars)
backendID := vars["backend"]
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
if backend, ok := provider.Backends[backendID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, backend.Servers)
if err != nil {
log.Error(err)
}
return
}
}
http.NotFound(response, request)
}
func (p Handler) getServerHandler(response http.ResponseWriter, request *http.Request) {
vars := mux.Vars(request)
providerID := getProviderIDFromVars(vars)
backendID := vars["backend"]
serverID := vars["server"]
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
if backend, ok := provider.Backends[backendID]; ok {
if server, ok := backend.Servers[serverID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, server)
if err != nil {
log.Error(err)
}
return
}
}
}
http.NotFound(response, request)
}
func (p Handler) getFrontendsHandler(response http.ResponseWriter, request *http.Request) {
providerID := getProviderIDFromVars(mux.Vars(request))
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, provider.Frontends)
if err != nil {
log.Error(err)
}
} else {
http.NotFound(response, request)
}
}
func (p Handler) getFrontendHandler(response http.ResponseWriter, request *http.Request) {
vars := mux.Vars(request)
providerID := getProviderIDFromVars(vars)
frontendID := vars["frontend"]
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
if frontend, ok := provider.Frontends[frontendID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, frontend)
if err != nil {
log.Error(err)
}
return
}
}
http.NotFound(response, request)
}
func (p Handler) getRoutesHandler(response http.ResponseWriter, request *http.Request) {
vars := mux.Vars(request)
providerID := getProviderIDFromVars(vars)
frontendID := vars["frontend"]
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
if frontend, ok := provider.Frontends[frontendID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, frontend.Routes)
if err != nil {
log.Error(err)
}
return
}
}
http.NotFound(response, request)
}
func (p Handler) getRouteHandler(response http.ResponseWriter, request *http.Request) {
vars := mux.Vars(request)
providerID := getProviderIDFromVars(vars)
frontendID := vars["frontend"]
routeID := vars["route"]
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
if provider, ok := currentConfigurations[providerID]; ok {
if frontend, ok := provider.Frontends[frontendID]; ok {
if route, ok := frontend.Routes[routeID]; ok {
err := templatesRenderer.JSON(response, http.StatusOK, route)
if err != nil {
log.Error(err)
}
return
}
}
}
http.NotFound(response, request)
}
// healthResponse combines data returned by thoas/stats with statistics (if
// they are enabled).
type healthResponse struct {
*thoas_stats.Data
*middlewares.Stats
}
func (p *Handler) getHealthHandler(response http.ResponseWriter, request *http.Request) {
health := &healthResponse{Data: p.Stats.Data()}
if p.StatsRecorder != nil {
health.Stats = p.StatsRecorder.Data()
}
err := templatesRenderer.JSON(response, http.StatusOK, health)
if err != nil {
log.Error(err)
}
}
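
A hedged sketch of how this Handler could be mounted and served; the router construction, import paths, and field values are assumptions for illustration, not the wiring Traefik itself performs:

package main

import (
	"log"
	"net/http"

	"github.com/containous/mux"
	"github.com/containous/traefik/api"
)

func main() {
	router := mux.NewRouter()
	// In Traefik the server populates CurrentConfigurations, Stats, etc.
	// before serving traffic; they are left unset here, so only the static
	// routes (dashboard, version) are meaningful in this sketch.
	handler := api.Handler{Dashboard: true}
	handler.AddRoutes(router)
	log.Fatal(http.ListenAndServe(":8080", router))
}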

autogen/gentemplates/gen.go

@@ -0,0 +1,984 @@
// Code generated by go-bindata.
// sources:
// templates/consul_catalog.tmpl
// templates/docker.tmpl
// templates/ecs.tmpl
// templates/eureka.tmpl
// templates/kubernetes.tmpl
// templates/kv.tmpl
// templates/marathon.tmpl
// templates/mesos.tmpl
// templates/notFound.tmpl
// templates/rancher.tmpl
// DO NOT EDIT!
package gentemplates
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"time"
)
type asset struct {
bytes []byte
info os.FileInfo
}
type bindataFileInfo struct {
name string
size int64
mode os.FileMode
modTime time.Time
}
func (fi bindataFileInfo) Name() string {
return fi.name
}
func (fi bindataFileInfo) Size() int64 {
return fi.size
}
func (fi bindataFileInfo) Mode() os.FileMode {
return fi.mode
}
func (fi bindataFileInfo) ModTime() time.Time {
return fi.modTime
}
func (fi bindataFileInfo) IsDir() bool {
return false
}
func (fi bindataFileInfo) Sys() interface{} {
return nil
}
var _templatesConsul_catalogTmpl = []byte(`[backends]
{{range $index, $node := .Nodes}}
[backends."backend-{{getBackend $node}}".servers."{{getBackendName $node $index}}"]
url = "{{getAttribute "protocol" $node.Service.Tags "http"}}://{{getBackendAddress $node}}:{{$node.Service.Port}}"
{{$weight := getAttribute "backend.weight" $node.Service.Tags "0"}}
{{with $weight}}
weight = {{$weight}}
{{end}}
{{end}}
{{range .Services}}
{{$service := .ServiceName}}
{{$circuitBreaker := getAttribute "backend.circuitbreaker" .Attributes ""}}
{{with $circuitBreaker}}
[backends."backend-{{$service}}".circuitbreaker]
expression = "{{$circuitBreaker}}"
{{end}}
[backends."backend-{{$service}}".loadbalancer]
method = "{{getAttribute "backend.loadbalancer" .Attributes "wrr"}}"
sticky = {{getSticky .Attributes}}
{{if hasStickinessLabel .Attributes}}
[backends."backend-{{$service}}".loadbalancer.stickiness]
cookieName = "{{getStickinessCookieName .Attributes}}"
{{end}}
{{if hasMaxconnAttributes .Attributes}}
[backends."backend-{{$service}}".maxconn]
amount = {{getAttribute "backend.maxconn.amount" .Attributes "" }}
extractorfunc = "{{getAttribute "backend.maxconn.extractorfunc" .Attributes "" }}"
{{end}}
{{end}}
[frontends]
{{range .Services}}
[frontends."frontend-{{.ServiceName}}"]
backend = "backend-{{.ServiceName}}"
passHostHeader = {{getAttribute "frontend.passHostHeader" .Attributes "true"}}
priority = {{getAttribute "frontend.priority" .Attributes "0"}}
{{$entryPoints := getAttribute "frontend.entrypoints" .Attributes ""}}
{{with $entryPoints}}
entrypoints = [{{range getEntryPoints $entryPoints}}
"{{.}}",
{{end}}]
{{end}}
basicAuth = [{{range getBasicAuth .Attributes}}
"{{.}}",
{{end}}]
[frontends."frontend-{{.ServiceName}}".routes."route-host-{{.ServiceName}}"]
rule = "{{getFrontendRule .}}"
{{end}}
`)
func templatesConsul_catalogTmplBytes() ([]byte, error) {
return _templatesConsul_catalogTmpl, nil
}
func templatesConsul_catalogTmpl() (*asset, error) {
bytes, err := templatesConsul_catalogTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/consul_catalog.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesDockerTmpl = []byte(`{{$backendServers := .Servers}}
[backends]{{range $backendName, $backend := .Backends}}
{{if hasCircuitBreakerLabel $backend}}
[backends.backend-{{$backendName}}.circuitbreaker]
expression = "{{getCircuitBreakerExpression $backend}}"
{{end}}
{{if hasLoadBalancerLabel $backend}}
[backends.backend-{{$backendName}}.loadbalancer]
method = "{{getLoadBalancerMethod $backend}}"
sticky = {{getSticky $backend}}
{{if hasStickinessLabel $backend}}
[backends.backend-{{$backendName}}.loadbalancer.stickiness]
cookieName = "{{getStickinessCookieName $backend}}"
{{end}}
{{end}}
{{if hasMaxConnLabels $backend}}
[backends.backend-{{$backendName}}.maxconn]
amount = {{getMaxConnAmount $backend}}
extractorfunc = "{{getMaxConnExtractorFunc $backend}}"
{{end}}
{{$servers := index $backendServers $backendName}}
{{range $serverName, $server := $servers}}
{{if hasServices $server}}
{{$services := getServiceNames $server}}
{{range $serviceIndex, $serviceName := $services}}
[backends.backend-{{getServiceBackend $server $serviceName}}.servers.service-{{$serverName}}]
url = "{{getServiceProtocol $server $serviceName}}://{{getIPAddress $server}}:{{getServicePort $server $serviceName}}"
weight = {{getServiceWeight $server $serviceName}}
{{end}}
{{else}}
[backends.backend-{{$backendName}}.servers.server-{{$server.Name | replace "/" "" | replace "." "-"}}]
url = "{{getProtocol $server}}://{{getIPAddress $server}}:{{getPort $server}}"
weight = {{getWeight $server}}
{{end}}
{{end}}
{{end}}
[frontends]{{range $frontend, $containers := .Frontends}}
{{$container := index $containers 0}}
{{if hasServices $container}}
{{$services := getServiceNames $container}}
{{range $serviceIndex, $serviceName := $services}}
[frontends."frontend-{{getServiceBackend $container $serviceName}}"]
backend = "backend-{{getServiceBackend $container $serviceName}}"
passHostHeader = {{getServicePassHostHeader $container $serviceName}}
{{if getWhitelistSourceRange $container}}
whitelistSourceRange = [{{range getWhitelistSourceRange $container}}
"{{.}}",
{{end}}]
{{end}}
priority = {{getServicePriority $container $serviceName}}
entryPoints = [{{range getServiceEntryPoints $container $serviceName}}
"{{.}}",
{{end}}]
basicAuth = [{{range getServiceBasicAuth $container $serviceName}}
"{{.}}",
{{end}}]
{{if hasServiceRedirect $container $serviceName}}
[frontends."frontend-{{getServiceBackend $container $serviceName}}".redirect]
entryPoint = "{{getServiceRedirectEntryPoint $container $serviceName}}"
regex = "{{getServiceRedirectRegex $container $serviceName}}"
replacement = "{{getServiceRedirectReplacement $container $serviceName}}"
{{end}}
[frontends."frontend-{{getServiceBackend $container $serviceName}}".routes."service-{{$serviceName | replace "/" "" | replace "." "-"}}"]
rule = "{{getServiceFrontendRule $container $serviceName}}"
{{end}}
{{else}}
[frontends."frontend-{{$frontend}}"]
backend = "backend-{{getBackend $container}}"
passHostHeader = {{getPassHostHeader $container}}
{{if getWhitelistSourceRange $container}}
whitelistSourceRange = [{{range getWhitelistSourceRange $container}}
"{{.}}",
{{end}}]
{{end}}
priority = {{getPriority $container}}
entryPoints = [{{range getEntryPoints $container}}
"{{.}}",
{{end}}]
basicAuth = [{{range getBasicAuth $container}}
"{{.}}",
{{end}}]
{{if hasRedirect $container}}
[frontends."frontend-{{$frontend}}".redirect]
entryPoint = "{{getRedirectEntryPoint $container}}"
regex = "{{getRedirectRegex $container}}"
replacement = "{{getRedirectReplacement $container}}"
{{end}}
{{ if hasHeaders $container}}
[frontends."frontend-{{$frontend}}".headers]
{{if hasSSLRedirectHeaders $container}}
SSLRedirect = {{getSSLRedirectHeaders $container}}
{{end}}
{{if hasSSLTemporaryRedirectHeaders $container}}
SSLTemporaryRedirect = {{getSSLTemporaryRedirectHeaders $container}}
{{end}}
{{if hasSSLHostHeaders $container}}
SSLHost = "{{getSSLHostHeaders $container}}"
{{end}}
{{if hasSTSSecondsHeaders $container}}
STSSeconds = {{getSTSSecondsHeaders $container}}
{{end}}
{{if hasSTSIncludeSubdomainsHeaders $container}}
STSIncludeSubdomains = {{getSTSIncludeSubdomainsHeaders $container}}
{{end}}
{{if hasSTSPreloadHeaders $container}}
STSPreload = {{getSTSPreloadHeaders $container}}
{{end}}
{{if hasForceSTSHeaderHeaders $container}}
ForceSTSHeader = {{getForceSTSHeaderHeaders $container}}
{{end}}
{{if hasFrameDenyHeaders $container}}
FrameDeny = {{getFrameDenyHeaders $container}}
{{end}}
{{if hasCustomFrameOptionsValueHeaders $container}}
CustomFrameOptionsValue = "{{getCustomFrameOptionsValueHeaders $container}}"
{{end}}
{{if hasContentTypeNosniffHeaders $container}}
ContentTypeNosniff = {{getContentTypeNosniffHeaders $container}}
{{end}}
{{if hasBrowserXSSFilterHeaders $container}}
BrowserXSSFilter = {{getBrowserXSSFilterHeaders $container}}
{{end}}
{{if hasContentSecurityPolicyHeaders $container}}
ContentSecurityPolicy = "{{getContentSecurityPolicyHeaders $container}}"
{{end}}
{{if hasPublicKeyHeaders $container}}
PublicKey = "{{getPublicKeyHeaders $container}}"
{{end}}
{{if hasReferrerPolicyHeaders $container}}
ReferrerPolicy = "{{getReferrerPolicyHeaders $container}}"
{{end}}
{{if hasIsDevelopmentHeaders $container}}
IsDevelopment = {{getIsDevelopmentHeaders $container}}
{{end}}
{{if hasAllowedHostsHeaders $container}}
AllowedHosts = [{{range getAllowedHostsHeaders $container}}
"{{.}}",
{{end}}]
{{end}}
{{if hasHostsProxyHeaders $container}}
HostsProxyHeaders = [{{range getHostsProxyHeaders $container}}
"{{.}}",
{{end}}]
{{end}}
{{if hasRequestHeaders $container}}
[frontends."frontend-{{$frontend}}".headers.customrequestheaders]
{{range $k, $v := getRequestHeaders $container}}
{{$k}} = "{{$v}}"
{{end}}
{{end}}
{{if hasResponseHeaders $container}}
[frontends."frontend-{{$frontend}}".headers.customresponseheaders]
{{range $k, $v := getResponseHeaders $container}}
{{$k}} = "{{$v}}"
{{end}}
{{end}}
{{if hasSSLProxyHeaders $container}}
[frontends."frontend-{{$frontend}}".headers.SSLProxyHeaders]
{{range $k, $v := getSSLProxyHeaders $container}}
{{$k}} = "{{$v}}"
{{end}}
{{end}}
{{end}}
[frontends."frontend-{{$frontend}}".routes."route-frontend-{{$frontend}}"]
rule = "{{getFrontendRule $container}}"
{{end}}
{{end}}
`)
func templatesDockerTmplBytes() ([]byte, error) {
return _templatesDockerTmpl, nil
}
func templatesDockerTmpl() (*asset, error) {
bytes, err := templatesDockerTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/docker.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesEcsTmpl = []byte(`[backends]{{range $serviceName, $instances := .Services}}
[backends.backend-{{ $serviceName }}.loadbalancer]
method = "{{ getLoadBalancerMethod $instances}}"
sticky = {{ getLoadBalancerSticky $instances}}
{{if hasStickinessLabel $instances}}
[backends.backend-{{ $serviceName }}.loadbalancer.stickiness]
cookieName = "{{getStickinessCookieName $instances}}"
{{end}}
{{ if hasHealthCheckLabels $instances }}
[backends.backend-{{ $serviceName }}.healthcheck]
path = "{{getHealthCheckPath $instances }}"
interval = "{{getHealthCheckInterval $instances }}"
{{end}}
{{range $index, $i := $instances}}
[backends.backend-{{ $i.Name }}.servers.server-{{ $i.Name }}{{ $i.ID }}]
url = "{{ getProtocol $i }}://{{ getHost $i }}:{{ getPort $i }}"
weight = {{ getWeight $i}}
{{end}}
{{end}}
[frontends]{{range $serviceName, $instances := .Services}}
{{range filterFrontends $instances}}
[frontends.frontend-{{ $serviceName }}]
backend = "backend-{{ $serviceName }}"
passHostHeader = {{ getPassHostHeader .}}
priority = {{ getPriority .}}
entryPoints = [{{range getEntryPoints .}}
"{{.}}",
{{end}}]
basicAuth = [{{range getBasicAuth .}}
"{{.}}",
{{end}}]
[frontends.frontend-{{ $serviceName }}.routes.route-frontend-{{ $serviceName }}]
rule = "{{getFrontendRule .}}"
{{end}}
{{end}}`)
func templatesEcsTmplBytes() ([]byte, error) {
return _templatesEcsTmpl, nil
}
func templatesEcsTmpl() (*asset, error) {
bytes, err := templatesEcsTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/ecs.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesEurekaTmpl = []byte(`[backends]{{range .Applications}}
{{ $app := .}}
{{range .Instances}}
[backends.backend{{$app.Name}}.servers.server-{{ getInstanceID . }}]
url = "{{ getProtocol . }}://{{ .IpAddr }}:{{ getPort . }}"
weight = {{ getWeight . }}
{{end}}{{end}}
[frontends]{{range .Applications}}
[frontends.frontend{{.Name}}]
backend = "backend{{.Name}}"
entryPoints = ["http"]
[frontends.frontend{{.Name }}.routes.route-host{{.Name}}]
rule = "Host:{{ .Name | tolower }}"
{{end}}
`)
func templatesEurekaTmplBytes() ([]byte, error) {
return _templatesEurekaTmpl, nil
}
func templatesEurekaTmpl() (*asset, error) {
bytes, err := templatesEurekaTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/eureka.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesKubernetesTmpl = []byte(`[backends]{{range $backendName, $backend := .Backends}}
[backends."{{$backendName}}"]
{{if $backend.CircuitBreaker}}
[backends."{{$backendName}}".circuitbreaker]
expression = "{{$backend.CircuitBreaker.Expression}}"
{{end}}
[backends."{{$backendName}}".loadbalancer]
method = "{{$backend.LoadBalancer.Method}}"
{{if $backend.LoadBalancer.Sticky}}
sticky = true
{{end}}
{{if $backend.LoadBalancer.Stickiness}}
[backends."{{$backendName}}".loadbalancer.stickiness]
cookieName = "{{$backend.LoadBalancer.Stickiness.CookieName}}"
{{end}}
{{range $serverName, $server := $backend.Servers}}
[backends."{{$backendName}}".servers."{{$serverName}}"]
url = "{{$server.URL}}"
weight = {{$server.Weight}}
{{end}}
{{end}}
[frontends]{{range $frontendName, $frontend := .Frontends}}
[frontends."{{$frontendName}}"]
backend = "{{$frontend.Backend}}"
priority = {{$frontend.Priority}}
passHostHeader = {{$frontend.PassHostHeader}}
entryPoints = [{{range $frontend.EntryPoints}}
"{{.}}",
{{end}}]
basicAuth = [{{range $frontend.BasicAuth}}
"{{.}}",
{{end}}]
whitelistSourceRange = [{{range $frontend.WhitelistSourceRange}}
"{{.}}",
{{end}}]
{{if $frontend.Redirect}}
[frontends."{{$frontendName}}".redirect]
entryPoint = "{{$frontend.Redirect.EntryPoint}}"
regex = "{{$frontend.Redirect.Regex}}"
replacement = "{{$frontend.Redirect.Replacement}}"
{{end}}
{{ if $frontend.Headers }}
[frontends."{{$frontendName}}".headers]
SSLRedirect = {{$frontend.Headers.SSLRedirect}}
SSLTemporaryRedirect = {{$frontend.Headers.SSLTemporaryRedirect}}
SSLHost = "{{$frontend.Headers.SSLHost}}"
STSSeconds = {{$frontend.Headers.STSSeconds}}
STSIncludeSubdomains = {{$frontend.Headers.STSIncludeSubdomains}}
STSPreload = {{$frontend.Headers.STSPreload}}
ForceSTSHeader = {{$frontend.Headers.ForceSTSHeader}}
FrameDeny = {{$frontend.Headers.FrameDeny}}
CustomFrameOptionsValue = "{{$frontend.Headers.CustomFrameOptionsValue}}"
ContentTypeNosniff = {{$frontend.Headers.ContentTypeNosniff}}
BrowserXSSFilter = {{$frontend.Headers.BrowserXSSFilter}}
ContentSecurityPolicy = "{{$frontend.Headers.ContentSecurityPolicy}}"
PublicKey = "{{$frontend.Headers.PublicKey}}"
ReferrerPolicy = "{{$frontend.Headers.ReferrerPolicy}}"
IsDevelopment = {{$frontend.Headers.IsDevelopment}}
{{if $frontend.Headers.AllowedHosts}}
AllowedHosts = [{{range $frontend.Headers.AllowedHosts}}
"{{.}}",
{{end}}]
{{end}}
{{if $frontend.Headers.HostsProxyHeaders}}
HostsProxyHeaders = [{{range $frontend.Headers.HostsProxyHeaders}}
"{{.}}",
{{end}}]
{{end}}
{{if $frontend.Headers.CustomRequestHeaders}}
[frontends."{{$frontendName}}".headers.customrequestheaders]
{{range $k, $v := $frontend.Headers.CustomRequestHeaders}}
{{$k}} = "{{$v}}"
{{end}}
{{end}}
{{if $frontend.Headers.CustomResponseHeaders}}
[frontends."{{$frontendName}}".headers.customresponseheaders]
{{range $k, $v := $frontend.Headers.CustomResponseHeaders}}
{{$k}} = "{{$v}}"
{{end}}
{{end}}
{{if $frontend.Headers.SSLProxyHeaders}}
[frontends."{{$frontendName}}".headers.SSLProxyHeaders]
{{range $k, $v := $frontend.Headers.SSLProxyHeaders}}
{{$k}} = "{{$v}}"
{{end}}
{{end}}
{{end}}
{{range $routeName, $route := $frontend.Routes}}
[frontends."{{$frontendName}}".routes."{{$routeName}}"]
rule = "{{$route.Rule}}"
{{end}}
{{end}}
`)
func templatesKubernetesTmplBytes() ([]byte, error) {
return _templatesKubernetesTmpl, nil
}
func templatesKubernetesTmpl() (*asset, error) {
bytes, err := templatesKubernetesTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/kubernetes.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesKvTmpl = []byte(`{{$frontends := List .Prefix "/frontends/" }}
{{$backends := List .Prefix "/backends/"}}
{{$tls := List .Prefix "/tls/"}}
[backends]{{range $backends}}
{{$backend := .}}
{{$backendName := Last $backend}}
{{$servers := ListServers $backend }}
{{$circuitBreaker := Get "" . "/circuitbreaker/" "expression"}}
{{with $circuitBreaker}}
[backends."{{$backendName}}".circuitBreaker]
expression = "{{$circuitBreaker}}"
{{end}}
{{$loadBalancer := Get "" . "/loadbalancer/" "method"}}
{{with $loadBalancer}}
[backends."{{$backendName}}".loadBalancer]
method = "{{$loadBalancer}}"
sticky = {{ getSticky . }}
{{if hasStickinessLabel $backend}}
[backends."{{$backendName}}".loadBalancer.stickiness]
cookieName = "{{getStickinessCookieName $backend}}"
{{end}}
{{end}}
{{$healthCheck := Get "" . "/healthcheck/" "path"}}
{{with $healthCheck}}
[backends."{{$backendName}}".healthCheck]
path = "{{$healthCheck}}"
interval = "{{ Get "30s" $backend "/healthcheck/" "interval" }}"
{{end}}
{{$maxConnAmt := Get "" . "/maxconn/" "amount"}}
{{$maxConnExtractorFunc := Get "" . "/maxconn/" "extractorfunc"}}
{{with $maxConnAmt}}
{{with $maxConnExtractorFunc}}
[backends."{{$backendName}}".maxConn]
amount = {{$maxConnAmt}}
extractorFunc = "{{$maxConnExtractorFunc}}"
{{end}}
{{end}}
{{range $servers}}
[backends."{{$backendName}}".servers."{{Last .}}"]
url = "{{Get "" . "/url"}}"
weight = {{Get "0" . "/weight"}}
{{end}}
{{end}}
[frontends]{{range $frontends}}
{{$frontend := Last .}}
{{$entryPoints := GetList . "/entrypoints"}}
[frontends."{{$frontend}}"]
backend = "{{Get "" . "/backend"}}"
passHostHeader = {{Get "true" . "/passHostHeader"}}
priority = {{Get "0" . "/priority"}}
entryPoints = [{{range $entryPoints}}
"{{.}}",
{{end}}]
{{$routes := List . "/routes/"}}
{{range $routes}}
[frontends."{{$frontend}}".routes."{{Last .}}"]
rule = "{{Get "" . "/rule"}}"
{{end}}
{{end}}
{{range $tls}}
{{$entryPoints := SplitGet . "/entrypoints"}}
[[tls]]
entryPoints = [{{range $entryPoints}}
"{{.}}",
{{end}}]
[tls.certificate]
certFile = """{{Get "" . "/certificate" "/certfile"}}"""
keyFile = """{{Get "" . "/certificate" "/keyfile"}}"""
{{end}}
`)
func templatesKvTmplBytes() ([]byte, error) {
return _templatesKvTmpl, nil
}
func templatesKvTmpl() (*asset, error) {
bytes, err := templatesKvTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/kv.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesMarathonTmpl = []byte(`{{$apps := .Applications}}
{{range $app := $apps}}
{{range $task := $app.Tasks}}
{{range $serviceIndex, $serviceName := getServiceNames $app}}
[backends."backend{{getBackend $app $serviceName}}".servers."server-{{$task.ID | replace "." "-"}}{{getServiceNameSuffix $serviceName }}"]
url = "{{getProtocol $app $serviceName}}://{{getBackendServer $task $app}}:{{getPort $task $app $serviceName}}"
weight = {{getWeight $app $serviceName}}
{{end}}
{{end}}
{{end}}
{{range $app := $apps}}
{{range $serviceIndex, $serviceName := getServiceNames $app}}
[backends."backend{{getBackend $app $serviceName }}"]
{{ if hasMaxConnLabels $app }}
[backends."backend{{getBackend $app $serviceName }}".maxconn]
amount = {{getMaxConnAmount $app }}
extractorfunc = "{{getMaxConnExtractorFunc $app }}"
{{end}}
{{ if hasLoadBalancerLabels $app }}
[backends."backend{{getBackend $app $serviceName }}".loadbalancer]
method = "{{getLoadBalancerMethod $app }}"
sticky = {{getSticky $app}}
{{if hasStickinessLabel $app}}
[backends."backend{{getBackend $app $serviceName }}".loadbalancer.stickiness]
cookieName = "{{getStickinessCookieName $app}}"
{{end}}
{{end}}
{{ if hasCircuitBreakerLabels $app }}
[backends."backend{{getBackend $app $serviceName }}".circuitbreaker]
expression = "{{getCircuitBreakerExpression $app }}"
{{end}}
{{ if hasHealthCheckLabels $app }}
[backends."backend{{getBackend $app $serviceName }}".healthcheck]
path = "{{getHealthCheckPath $app }}"
interval = "{{getHealthCheckInterval $app }}"
{{end}}
{{end}}
{{end}}
[frontends]{{range $app := $apps}}{{range $serviceIndex, $serviceName := getServiceNames .}}
[frontends."{{ getFrontendName $app $serviceName }}"]
backend = "backend{{getBackend $app $serviceName}}"
passHostHeader = {{getPassHostHeader $app $serviceName}}
priority = {{getPriority $app $serviceName}}
entryPoints = [{{range getEntryPoints $app $serviceName}}
"{{.}}",
{{end}}]
basicAuth = [{{range getBasicAuth $app $serviceName}}
"{{.}}",
{{end}}]
[frontends."{{ getFrontendName $app $serviceName }}".routes."route-host{{$app.ID | replace "/" "-"}}{{getServiceNameSuffix $serviceName }}"]
rule = "{{getFrontendRule $app $serviceName}}"
{{end}}{{end}}
`)
func templatesMarathonTmplBytes() ([]byte, error) {
return _templatesMarathonTmpl, nil
}
func templatesMarathonTmpl() (*asset, error) {
bytes, err := templatesMarathonTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/marathon.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesMesosTmpl = []byte(`{{$apps := .Applications}}
[backends]{{range .Tasks}}
[backends.backend{{getBackend . $apps}}.servers.server-{{getID .}}]
url = "{{getProtocol . $apps}}://{{getHost .}}:{{getPort . $apps}}"
weight = {{getWeight . $apps}}
{{end}}
[frontends]{{range .Applications}}
[frontends.frontend-{{getFrontEndName .}}]
backend = "backend{{getFrontendBackend .}}"
passHostHeader = {{getPassHostHeader .}}
priority = {{getPriority .}}
entryPoints = [{{range getEntryPoints .}}
"{{.}}",
{{end}}]
[frontends.frontend-{{getFrontEndName .}}.routes.route-host{{getFrontEndName .}}]
rule = "{{getFrontendRule .}}"
{{end}}
`)
func templatesMesosTmplBytes() ([]byte, error) {
return _templatesMesosTmpl, nil
}
func templatesMesosTmpl() (*asset, error) {
bytes, err := templatesMesosTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/mesos.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesNotfoundTmpl = []byte(`<!DOCTYPE html>
<html>
<head>
<title>Traefik</title>
</head>
<body>
Ohhhh man, this is bad...
</body>
</html>`)
func templatesNotfoundTmplBytes() ([]byte, error) {
return _templatesNotfoundTmpl, nil
}
func templatesNotfoundTmpl() (*asset, error) {
bytes, err := templatesNotfoundTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/notFound.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
var _templatesRancherTmpl = []byte(`{{$backendServers := .Backends}}
[backends]{{range $backendName, $backend := .Backends}}
{{if hasCircuitBreakerLabel $backend}}
[backends.backend-{{$backendName}}.circuitbreaker]
expression = "{{getCircuitBreakerExpression $backend}}"
{{end}}
{{if hasLoadBalancerLabel $backend}}
[backends.backend-{{$backendName}}.loadbalancer]
method = "{{getLoadBalancerMethod $backend}}"
sticky = {{getSticky $backend}}
{{if hasStickinessLabel $backend}}
[backends.backend-{{$backendName}}.loadbalancer.stickiness]
cookieName = "{{getStickinessCookieName $backend}}"
{{end}}
{{end}}
{{if hasMaxConnLabels $backend}}
[backends.backend-{{$backendName}}.maxconn]
amount = {{getMaxConnAmount $backend}}
extractorfunc = "{{getMaxConnExtractorFunc $backend}}"
{{end}}
{{range $index, $ip := $backend.Containers}}
[backends.backend-{{$backendName}}.servers.server-{{$index}}]
url = "{{getProtocol $backend}}://{{$ip}}:{{getPort $backend}}"
weight = {{getWeight $backend}}
{{end}}
{{end}}
[frontends]{{range $frontendName, $service := .Frontends}}
[frontends."frontend-{{$frontendName}}"]
backend = "backend-{{getBackend $service}}"
passHostHeader = {{getPassHostHeader $service}}
priority = {{getPriority $service}}
entryPoints = [{{range getEntryPoints $service}}
"{{.}}",
{{end}}]
basicAuth = [{{range getBasicAuth $service}}
"{{.}}",
{{end}}]
{{if hasRedirect $service}}
[frontends."frontend-{{$frontendName}}".redirect]
entryPoint = "{{getRedirectEntryPoint $service}}"
regex = "{{getRedirectRegex $service}}"
replacement = "{{getRedirectReplacement $service}}"
{{end}}
[frontends."frontend-{{$frontendName}}".routes."route-frontend-{{$frontendName}}"]
rule = "{{getFrontendRule $service}}"
{{end}}
`)
func templatesRancherTmplBytes() ([]byte, error) {
return _templatesRancherTmpl, nil
}
func templatesRancherTmpl() (*asset, error) {
bytes, err := templatesRancherTmplBytes()
if err != nil {
return nil, err
}
info := bindataFileInfo{name: "templates/rancher.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
a := &asset{bytes: bytes, info: info}
return a, nil
}
// Asset loads and returns the asset for the given name.
// It returns an error if the asset could not be found or
// could not be loaded.
func Asset(name string) ([]byte, error) {
cannonicalName := strings.Replace(name, "\\", "/", -1)
if f, ok := _bindata[cannonicalName]; ok {
a, err := f()
if err != nil {
return nil, fmt.Errorf("Asset %s can't read by error: %v", name, err)
}
return a.bytes, nil
}
return nil, fmt.Errorf("Asset %s not found", name)
}
// MustAsset is like Asset but panics when Asset would return an error.
// It simplifies safe initialization of global variables.
func MustAsset(name string) []byte {
a, err := Asset(name)
if err != nil {
panic("asset: Asset(" + name + "): " + err.Error())
}
return a
}
// AssetInfo loads and returns the asset info for the given name.
// It returns an error if the asset could not be found or
// could not be loaded.
func AssetInfo(name string) (os.FileInfo, error) {
cannonicalName := strings.Replace(name, "\\", "/", -1)
if f, ok := _bindata[cannonicalName]; ok {
a, err := f()
if err != nil {
return nil, fmt.Errorf("AssetInfo %s can't read by error: %v", name, err)
}
return a.info, nil
}
return nil, fmt.Errorf("AssetInfo %s not found", name)
}
// AssetNames returns the names of the assets.
func AssetNames() []string {
names := make([]string, 0, len(_bindata))
for name := range _bindata {
names = append(names, name)
}
return names
}
// _bindata is a table, holding each asset generator, mapped to its name.
var _bindata = map[string]func() (*asset, error){
"templates/consul_catalog.tmpl": templatesConsul_catalogTmpl,
"templates/docker.tmpl": templatesDockerTmpl,
"templates/ecs.tmpl": templatesEcsTmpl,
"templates/eureka.tmpl": templatesEurekaTmpl,
"templates/kubernetes.tmpl": templatesKubernetesTmpl,
"templates/kv.tmpl": templatesKvTmpl,
"templates/marathon.tmpl": templatesMarathonTmpl,
"templates/mesos.tmpl": templatesMesosTmpl,
"templates/notFound.tmpl": templatesNotfoundTmpl,
"templates/rancher.tmpl": templatesRancherTmpl,
}
// AssetDir returns the file names below a certain
// directory embedded in the file by go-bindata.
// For example if you run go-bindata on data/... and data contains the
// following hierarchy:
// data/
// foo.txt
// img/
// a.png
// b.png
// then AssetDir("data") would return []string{"foo.txt", "img"}
// AssetDir("data/img") would return []string{"a.png", "b.png"}
// AssetDir("foo.txt") and AssetDir("notexist") would return an error
// AssetDir("") will return []string{"data"}.
func AssetDir(name string) ([]string, error) {
node := _bintree
if len(name) != 0 {
cannonicalName := strings.Replace(name, "\\", "/", -1)
pathList := strings.Split(cannonicalName, "/")
for _, p := range pathList {
node = node.Children[p]
if node == nil {
return nil, fmt.Errorf("Asset %s not found", name)
}
}
}
if node.Func != nil {
return nil, fmt.Errorf("Asset %s not found", name)
}
rv := make([]string, 0, len(node.Children))
for childName := range node.Children {
rv = append(rv, childName)
}
return rv, nil
}
type bintree struct {
Func func() (*asset, error)
Children map[string]*bintree
}
var _bintree = &bintree{nil, map[string]*bintree{
"templates": {nil, map[string]*bintree{
"consul_catalog.tmpl": {templatesConsul_catalogTmpl, map[string]*bintree{}},
"docker.tmpl": {templatesDockerTmpl, map[string]*bintree{}},
"ecs.tmpl": {templatesEcsTmpl, map[string]*bintree{}},
"eureka.tmpl": {templatesEurekaTmpl, map[string]*bintree{}},
"kubernetes.tmpl": {templatesKubernetesTmpl, map[string]*bintree{}},
"kv.tmpl": {templatesKvTmpl, map[string]*bintree{}},
"marathon.tmpl": {templatesMarathonTmpl, map[string]*bintree{}},
"mesos.tmpl": {templatesMesosTmpl, map[string]*bintree{}},
"notFound.tmpl": {templatesNotfoundTmpl, map[string]*bintree{}},
"rancher.tmpl": {templatesRancherTmpl, map[string]*bintree{}},
}},
}}
// RestoreAsset restores an asset under the given directory
func RestoreAsset(dir, name string) error {
data, err := Asset(name)
if err != nil {
return err
}
info, err := AssetInfo(name)
if err != nil {
return err
}
err = os.MkdirAll(_filePath(dir, filepath.Dir(name)), os.FileMode(0755))
if err != nil {
return err
}
err = ioutil.WriteFile(_filePath(dir, name), data, info.Mode())
if err != nil {
return err
}
err = os.Chtimes(_filePath(dir, name), info.ModTime(), info.ModTime())
if err != nil {
return err
}
return nil
}
// RestoreAssets restores an asset under the given directory recursively
func RestoreAssets(dir, name string) error {
children, err := AssetDir(name)
// File
if err != nil {
return RestoreAsset(dir, name)
}
// Dir
for _, child := range children {
err = RestoreAssets(dir, filepath.Join(name, child))
if err != nil {
return err
}
}
return nil
}
func _filePath(dir, name string) string {
cannonicalName := strings.Replace(name, "\\", "/", -1)
return filepath.Join(append([]string{dir}, strings.Split(cannonicalName, "/")...)...)
}
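
A brief example of consuming the generated API above; it only uses functions defined in this file (MustAsset, AssetDir), and the import path is assumed:

package main

import (
	"fmt"

	"github.com/containous/traefik/autogen/gentemplates"
)

func main() {
	// MustAsset panics if the name is unknown; this one is registered in _bindata.
	data := gentemplates.MustAsset("templates/notFound.tmpl")
	fmt.Println(string(data))

	names, err := gentemplates.AssetDir("templates")
	if err != nil {
		panic(err)
	}
	fmt.Println("embedded templates:", names)
}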


@@ -1,34 +1,26 @@
FROM golang:1.8
FROM golang:1.9-alpine
# Install a more recent version of mercurial to avoid mismatching results
# between glide run on a decently updated host system and the build container.
RUN awk '$1 ~ "^deb" { $3 = $3 "-backports"; print; exit }' /etc/apt/sources.list > /etc/apt/sources.list.d/backports.list && \
DEBIAN_FRONTEND=noninteractive apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -t jessie-backports --yes --no-install-recommends mercurial=3.9.1-1~bpo8+1 && \
rm -fr /var/lib/apt/lists/
RUN apk --update upgrade \
&& apk --no-cache --no-progress add git mercurial bash gcc musl-dev curl tar \
&& rm -rf /var/cache/apk/*
RUN go get github.com/jteeuwen/go-bindata/... \
RUN go get github.com/containous/go-bindata/... \
&& go get github.com/golang/lint/golint \
&& go get github.com/kisielk/errcheck \
&& go get github.com/client9/misspell/cmd/misspell \
&& go get github.com/mattfarina/glide-hash \
&& go get github.com/sgotti/glide-vc
&& go get github.com/client9/misspell/cmd/misspell
# Which docker version to test on
ARG DOCKER_VERSION=1.10.3
ARG DOCKER_VERSION=17.03.2
ARG DEP_VERSION=0.4.1
# Which glide version to test on
ARG GLIDE_VERSION=v0.12.3
# Download glide
# Download dep binary to bin folder in $GOPATH
RUN mkdir -p /usr/local/bin \
&& curl -fL https://github.com/Masterminds/glide/releases/download/${GLIDE_VERSION}/glide-${GLIDE_VERSION}-linux-amd64.tar.gz \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
&& curl -fsSL -o /usr/local/bin/dep https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 \
&& chmod +x /usr/local/bin/dep
# Download docker
RUN mkdir -p /usr/local/bin \
&& curl -fL https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz \
&& curl -fL https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}-ce.tgz \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
WORKDIR /go/src/github.com/containous/traefik


@@ -76,11 +76,11 @@ func NewDataStore(ctx context.Context, kvSource staert.KvSource, object Object,
func (d *Datastore) watchChanges() error {
stopCh := make(chan struct{})
kvCh, err := d.kv.Watch(d.lockKey, stopCh)
kvCh, err := d.kv.Watch(d.lockKey, stopCh, nil)
if err != nil {
return err
}
go func() {
safe.Go(func() {
ctx, cancel := context.WithCancel(d.ctx)
operation := func() error {
for {
@@ -97,7 +97,6 @@ func (d *Datastore) watchChanges() error {
if err != nil {
return err
}
// log.Debugf("Datastore object change received: %+v", d.meta)
if d.listener != nil {
err := d.listener(d.meta.object)
if err != nil {
@@ -114,25 +113,14 @@ func (d *Datastore) watchChanges() error {
if err != nil {
log.Errorf("Error in watch datastore: %v", err)
}
}()
})
return nil
}
func (d *Datastore) reload() error {
log.Debugf("Datastore reload")
d.localLock.Lock()
err := d.kv.LoadConfig(d.meta)
if err != nil {
d.localLock.Unlock()
return err
}
err = d.meta.unmarshall()
if err != nil {
d.localLock.Unlock()
return err
}
d.localLock.Unlock()
return nil
log.Debug("Datastore reload")
_, err := d.Load()
return err
}
// Begin creates a transaction with the KV store.
@@ -200,6 +188,10 @@ func (d *Datastore) get() *Metadata {
func (d *Datastore) Load() (Object, error) {
d.localLock.Lock()
defer d.localLock.Unlock()
// clear Object first, as mapstructure's decoder doesn't have ZeroFields set to true for merging purposes
d.meta.Object = d.meta.Object[:0]
err := d.kv.LoadConfig(d.meta)
if err != nil {
return nil, err

View File

@@ -54,7 +54,7 @@ func (l *Leadership) Participate(pool *safe.Pool) {
})
}
// AddListener adds a leadership listerner
// AddListener adds a leadership listener
func (l *Leadership) AddListener(listener LeaderListener) {
l.listeners = append(l.listeners, listener)
}
@@ -86,7 +86,7 @@ func (l *Leadership) onElection(elected bool) {
l.leader.Set(true)
l.Start()
} else {
log.Infof("Node %s elected slave ♝", l.Cluster.Node)
log.Infof("Node %s elected worker ♝", l.Cluster.Node)
l.leader.Set(false)
l.Stop()
}


@@ -0,0 +1,136 @@
package anonymize
import (
"encoding/json"
"fmt"
"reflect"
"regexp"
"github.com/mitchellh/copystructure"
"github.com/mvdan/xurls"
)
const (
maskShort = "xxxx"
maskLarge = maskShort + maskShort + maskShort + maskShort + maskShort + maskShort + maskShort + maskShort
)
// Do returns an anonymized JSON rendering of the given configuration.
func Do(baseConfig interface{}, indent bool) (string, error) {
anomConfig, err := copystructure.Copy(baseConfig)
if err != nil {
return "", err
}
val := reflect.ValueOf(anomConfig)
err = doOnStruct(val)
if err != nil {
return "", err
}
configJSON, err := marshal(anomConfig, indent)
if err != nil {
return "", err
}
return doOnJSON(string(configJSON)), nil
}
func doOnJSON(input string) string {
mailExp := regexp.MustCompile(`\w[-._\w]*\w@\w[-._\w]*\w\.\w{2,3}"`)
return xurls.Relaxed.ReplaceAllString(mailExp.ReplaceAllString(input, maskLarge+"\""), maskLarge)
}
func doOnStruct(field reflect.Value) error {
switch field.Kind() {
case reflect.Ptr:
if !field.IsNil() {
if err := doOnStruct(field.Elem()); err != nil {
return err
}
}
case reflect.Struct:
for i := 0; i < field.NumField(); i++ {
fld := field.Field(i)
stField := field.Type().Field(i)
if !isExported(stField) {
continue
}
if stField.Tag.Get("export") == "true" {
if err := doOnStruct(fld); err != nil {
return err
}
} else {
if err := reset(fld, stField.Name); err != nil {
return err
}
}
}
case reflect.Map:
for _, key := range field.MapKeys() {
if err := doOnStruct(field.MapIndex(key)); err != nil {
return err
}
}
case reflect.Slice:
for j := 0; j < field.Len(); j++ {
if err := doOnStruct(field.Index(j)); err != nil {
return err
}
}
}
return nil
}
func reset(field reflect.Value, name string) error {
if !field.CanSet() {
return fmt.Errorf("cannot reset field %s", name)
}
switch field.Kind() {
case reflect.Ptr:
if !field.IsNil() {
field.Set(reflect.Zero(field.Type()))
}
case reflect.Struct:
if field.IsValid() {
field.Set(reflect.Zero(field.Type()))
}
case reflect.String:
if field.String() != "" {
field.Set(reflect.ValueOf(maskShort))
}
case reflect.Map:
if field.Len() > 0 {
field.Set(reflect.MakeMap(field.Type()))
}
case reflect.Slice:
if field.Len() > 0 {
field.Set(reflect.MakeSlice(field.Type(), 0, 0))
}
case reflect.Interface:
if !field.IsNil() {
return reset(field.Elem(), "")
}
default:
// Primitive type
field.Set(reflect.Zero(field.Type()))
}
return nil
}
// isExported returns true if a struct field is exported, else false
func isExported(f reflect.StructField) bool {
if f.PkgPath != "" && !f.Anonymous {
return false
}
return true
}
func marshal(anomConfig interface{}, indent bool) ([]byte, error) {
if indent {
return json.MarshalIndent(anomConfig, "", " ")
}
return json.Marshal(anomConfig)
}
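A minimal usage sketch of the package above, using a hypothetical settings struct (not a Traefik type): fields tagged export:"true" are kept by doOnStruct, untagged fields are reset, and the resulting JSON is additionally scrubbed of anything that looks like an email address or URL.

package main

import (
	"fmt"

	"github.com/containous/traefik/cmd/traefik/anonymize"
)

// settings is illustrative only.
type settings struct {
	Name     string `export:"true"` // kept as-is by doOnStruct
	Password string                 // untagged: reset to "xxxx"
}

func main() {
	out, err := anonymize.Do(&settings{Name: "frontend-1", Password: "hunter2"}, true)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // indented JSON with Name kept and Password masked
}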


@@ -0,0 +1,664 @@
package anonymize
import (
"crypto/tls"
"testing"
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/provider"
"github.com/containous/traefik/provider/boltdb"
"github.com/containous/traefik/provider/consul"
"github.com/containous/traefik/provider/docker"
"github.com/containous/traefik/provider/dynamodb"
"github.com/containous/traefik/provider/ecs"
"github.com/containous/traefik/provider/etcd"
"github.com/containous/traefik/provider/eureka"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/provider/kv"
"github.com/containous/traefik/provider/marathon"
"github.com/containous/traefik/provider/mesos"
"github.com/containous/traefik/provider/rancher"
"github.com/containous/traefik/provider/zk"
traefikTls "github.com/containous/traefik/tls"
"github.com/containous/traefik/types"
)
func TestDo_globalConfiguration(t *testing.T) {
config := &configuration.GlobalConfiguration{}
config.GraceTimeOut = flaeg.Duration(666 * time.Second)
config.Debug = true
config.CheckNewVersion = true
config.AccessLogsFile = "AccessLogsFile"
config.AccessLog = &types.AccessLog{
FilePath: "AccessLog FilePath",
Format: "AccessLog Format",
}
config.TraefikLogsFile = "TraefikLogsFile"
config.LogLevel = "LogLevel"
config.EntryPoints = configuration.EntryPoints{
"foo": {
Network: "foo Network",
Address: "foo Address",
TLS: &traefikTls.TLS{
MinVersion: "foo MinVersion",
CipherSuites: []string{"foo CipherSuites 1", "foo CipherSuites 2", "foo CipherSuites 3"},
Certificates: traefikTls.Certificates{
{CertFile: "CertFile 1", KeyFile: "KeyFile 1"},
{CertFile: "CertFile 2", KeyFile: "KeyFile 2"},
},
ClientCA: traefikTls.ClientCA{
Files: []string{"foo ClientCAFiles 1", "foo ClientCAFiles 2", "foo ClientCAFiles 3"},
Optional: false,
},
},
Redirect: &types.Redirect{
Replacement: "foo Replacement",
Regex: "foo Regex",
EntryPoint: "foo EntryPoint",
},
Auth: &types.Auth{
Basic: &types.Basic{
UsersFile: "foo Basic UsersFile",
Users: types.Users{"foo Basic Users 1", "foo Basic Users 2", "foo Basic Users 3"},
},
Digest: &types.Digest{
UsersFile: "foo Digest UsersFile",
Users: types.Users{"foo Digest Users 1", "foo Digest Users 2", "foo Digest Users 3"},
},
Forward: &types.Forward{
Address: "foo Address",
TLS: &types.ClientTLS{
CA: "foo CA",
Cert: "foo Cert",
Key: "foo Key",
InsecureSkipVerify: true,
},
TrustForwardHeader: true,
},
},
WhitelistSourceRange: []string{"foo WhitelistSourceRange 1", "foo WhitelistSourceRange 2", "foo WhitelistSourceRange 3"},
Compress: true,
ProxyProtocol: &configuration.ProxyProtocol{
TrustedIPs: []string{"127.0.0.1/32", "192.168.0.1"},
},
},
"fii": {
Network: "fii Network",
Address: "fii Address",
TLS: &traefikTls.TLS{
MinVersion: "fii MinVersion",
CipherSuites: []string{"fii CipherSuites 1", "fii CipherSuites 2", "fii CipherSuites 3"},
Certificates: traefikTls.Certificates{
{CertFile: "CertFile 1", KeyFile: "KeyFile 1"},
{CertFile: "CertFile 2", KeyFile: "KeyFile 2"},
},
ClientCA: traefikTls.ClientCA{
Files: []string{"fii ClientCAFiles 1", "fii ClientCAFiles 2", "fii ClientCAFiles 3"},
Optional: false,
},
},
Redirect: &types.Redirect{
Replacement: "fii Replacement",
Regex: "fii Regex",
EntryPoint: "fii EntryPoint",
},
Auth: &types.Auth{
Basic: &types.Basic{
UsersFile: "fii Basic UsersFile",
Users: types.Users{"fii Basic Users 1", "fii Basic Users 2", "fii Basic Users 3"},
},
Digest: &types.Digest{
UsersFile: "fii Digest UsersFile",
Users: types.Users{"fii Digest Users 1", "fii Digest Users 2", "fii Digest Users 3"},
},
Forward: &types.Forward{
Address: "fii Address",
TLS: &types.ClientTLS{
CA: "fii CA",
Cert: "fii Cert",
Key: "fii Key",
InsecureSkipVerify: true,
},
TrustForwardHeader: true,
},
},
WhitelistSourceRange: []string{"fii WhitelistSourceRange 1", "fii WhitelistSourceRange 2", "fii WhitelistSourceRange 3"},
Compress: true,
ProxyProtocol: &configuration.ProxyProtocol{
TrustedIPs: []string{"127.0.0.1/32", "192.168.0.1"},
},
},
}
config.Cluster = &types.Cluster{
Node: "Cluster Node",
Store: &types.Store{
Prefix: "Cluster Store Prefix",
// ...
},
}
config.Constraints = types.Constraints{
{
Key: "Constraints Key 1",
Regex: "Constraints Regex 2",
MustMatch: true,
},
{
Key: "Constraints Key 1",
Regex: "Constraints Regex 2",
MustMatch: true,
},
}
config.ACME = &acme.ACME{
Email: "acme Email",
Domains: []acme.Domain{
{
Main: "Domains Main",
SANs: []string{"Domains acme SANs 1", "Domains acme SANs 2", "Domains acme SANs 3"},
},
},
Storage: "Storage",
StorageFile: "StorageFile",
OnDemand: true,
OnHostRule: true,
CAServer: "CAServer",
EntryPoint: "EntryPoint",
DNSChallenge: &acme.DNSChallenge{Provider: "DNSProvider"},
DelayDontCheckDNS: 666,
ACMELogging: true,
TLSConfig: &tls.Config{
InsecureSkipVerify: true,
// ...
},
}
config.DefaultEntryPoints = configuration.DefaultEntryPoints{"DefaultEntryPoints 1", "DefaultEntryPoints 2", "DefaultEntryPoints 3"}
config.ProvidersThrottleDuration = flaeg.Duration(666 * time.Second)
config.MaxIdleConnsPerHost = 666
config.IdleTimeout = flaeg.Duration(666 * time.Second)
config.InsecureSkipVerify = true
config.RootCAs = traefikTls.RootCAs{"RootCAs 1", "RootCAs 2", "RootCAs 3"}
config.Retry = &configuration.Retry{
Attempts: 666,
}
config.HealthCheck = &configuration.HealthCheckConfig{
Interval: flaeg.Duration(666 * time.Second),
}
config.RespondingTimeouts = &configuration.RespondingTimeouts{
ReadTimeout: flaeg.Duration(666 * time.Second),
WriteTimeout: flaeg.Duration(666 * time.Second),
IdleTimeout: flaeg.Duration(666 * time.Second),
}
config.ForwardingTimeouts = &configuration.ForwardingTimeouts{
DialTimeout: flaeg.Duration(666 * time.Second),
ResponseHeaderTimeout: flaeg.Duration(666 * time.Second),
}
config.Docker = &docker.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "docker Filename",
Constraints: types.Constraints{
{
Key: "docker Constraints Key 1",
Regex: "docker Constraints Regex 2",
MustMatch: true,
},
{
Key: "docker Constraints Key 1",
Regex: "docker Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "docker Endpoint",
Domain: "docker Domain",
TLS: &types.ClientTLS{
CA: "docker CA",
Cert: "docker Cert",
Key: "docker Key",
InsecureSkipVerify: true,
},
ExposedByDefault: true,
UseBindPortIP: true,
SwarmMode: true,
}
config.File = &file.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "file Filename",
Constraints: types.Constraints{
{
Key: "file Constraints Key 1",
Regex: "file Constraints Regex 2",
MustMatch: true,
},
{
Key: "file Constraints Key 1",
Regex: "file Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Directory: "file Directory",
}
config.Web = &configuration.WebCompatibility{
Address: "web Address",
CertFile: "web CertFile",
KeyFile: "web KeyFile",
ReadOnly: true,
Statistics: &types.Statistics{
RecentErrors: 666,
},
Metrics: &types.Metrics{
Prometheus: &types.Prometheus{
Buckets: types.Buckets{6.5, 6.6, 6.7},
},
Datadog: &types.Datadog{
Address: "Datadog Address",
PushInterval: "Datadog PushInterval",
},
StatsD: &types.Statsd{
Address: "StatsD Address",
PushInterval: "StatsD PushInterval",
},
},
Path: "web Path",
Auth: &types.Auth{
Basic: &types.Basic{
UsersFile: "web Basic UsersFile",
Users: types.Users{"web Basic Users 1", "web Basic Users 2", "web Basic Users 3"},
},
Digest: &types.Digest{
UsersFile: "web Digest UsersFile",
Users: types.Users{"web Digest Users 1", "web Digest Users 2", "web Digest Users 3"},
},
Forward: &types.Forward{
Address: "web Address",
TLS: &types.ClientTLS{
CA: "web CA",
Cert: "web Cert",
Key: "web Key",
InsecureSkipVerify: true,
},
TrustForwardHeader: true,
},
},
Debug: true,
}
config.Marathon = &marathon.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "marathon Filename",
Constraints: types.Constraints{
{
Key: "marathon Constraints Key 1",
Regex: "marathon Constraints Regex 2",
MustMatch: true,
},
{
Key: "marathon Constraints Key 1",
Regex: "marathon Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "",
Domain: "",
ExposedByDefault: true,
GroupsAsSubDomains: true,
DCOSToken: "",
MarathonLBCompatibility: true,
TLS: &types.ClientTLS{
CA: "marathon CA",
Cert: "marathon Cert",
Key: "marathon Key",
InsecureSkipVerify: true,
},
DialerTimeout: flaeg.Duration(666 * time.Second),
KeepAlive: flaeg.Duration(666 * time.Second),
ForceTaskHostname: true,
Basic: &marathon.Basic{
HTTPBasicAuthUser: "marathon HTTPBasicAuthUser",
HTTPBasicPassword: "marathon HTTPBasicPassword",
},
RespectReadinessChecks: true,
}
config.ConsulCatalog = &consul.CatalogProvider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "ConsulCatalog Filename",
Constraints: types.Constraints{
{
Key: "ConsulCatalog Constraints Key 1",
Regex: "ConsulCatalog Constraints Regex 2",
MustMatch: true,
},
{
Key: "ConsulCatalog Constraints Key 1",
Regex: "ConsulCatalog Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "ConsulCatalog Endpoint",
Domain: "ConsulCatalog Domain",
ExposedByDefault: true,
Prefix: "ConsulCatalog Prefix",
FrontEndRule: "ConsulCatalog FrontEndRule",
}
config.Kubernetes = &kubernetes.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "k8s Filename",
Constraints: types.Constraints{
{
Key: "k8s Constraints Key 1",
Regex: "k8s Constraints Regex 2",
MustMatch: true,
},
{
Key: "k8s Constraints Key 1",
Regex: "k8s Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "k8s Endpoint",
Token: "k8s Token",
CertAuthFilePath: "k8s CertAuthFilePath",
DisablePassHostHeaders: true,
Namespaces: kubernetes.Namespaces{"k8s Namespaces 1", "k8s Namespaces 2", "k8s Namespaces 3"},
LabelSelector: "k8s LabelSelector",
}
config.Mesos = &mesos.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "mesos Filename",
Constraints: types.Constraints{
{
Key: "mesos Constraints Key 1",
Regex: "mesos Constraints Regex 2",
MustMatch: true,
},
{
Key: "mesos Constraints Key 1",
Regex: "mesos Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "mesos Endpoint",
Domain: "mesos Domain",
ExposedByDefault: true,
GroupsAsSubDomains: true,
ZkDetectionTimeout: 666,
RefreshSeconds: 666,
IPSources: "mesos IPSources",
StateTimeoutSecond: 666,
Masters: []string{"mesos Masters 1", "mesos Masters 2", "mesos Masters 3"},
}
config.Eureka = &eureka.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "eureka Filename",
Constraints: types.Constraints{
{
Key: "eureka Constraints Key 1",
Regex: "eureka Constraints Regex 2",
MustMatch: true,
},
{
Key: "eureka Constraints Key 1",
Regex: "eureka Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "eureka Endpoint",
Delay: "eureka Delay",
}
config.ECS = &ecs.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "ecs Filename",
Constraints: types.Constraints{
{
Key: "ecs Constraints Key 1",
Regex: "ecs Constraints Regex 2",
MustMatch: true,
},
{
Key: "ecs Constraints Key 1",
Regex: "ecs Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Domain: "ecs Domain",
ExposedByDefault: true,
RefreshSeconds: 666,
Clusters: ecs.Clusters{"ecs Clusters 1", "ecs Clusters 2", "ecs Clusters 3"},
Cluster: "ecs Cluster",
AutoDiscoverClusters: true,
Region: "ecs Region",
AccessKeyID: "ecs AccessKeyID",
SecretAccessKey: "ecs SecretAccessKey",
}
config.Rancher = &rancher.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "rancher Filename",
Constraints: types.Constraints{
{
Key: "rancher Constraints Key 1",
Regex: "rancher Constraints Regex 2",
MustMatch: true,
},
{
Key: "rancher Constraints Key 1",
Regex: "rancher Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
APIConfiguration: rancher.APIConfiguration{
Endpoint: "rancher Endpoint",
AccessKey: "rancher AccessKey",
SecretKey: "rancher SecretKey",
},
API: &rancher.APIConfiguration{
Endpoint: "rancher Endpoint",
AccessKey: "rancher AccessKey",
SecretKey: "rancher SecretKey",
},
Metadata: &rancher.MetadataConfiguration{
IntervalPoll: true,
Prefix: "rancher Metadata Prefix",
},
Domain: "rancher Domain",
RefreshSeconds: 666,
ExposedByDefault: true,
EnableServiceHealthFilter: true,
}
config.DynamoDB = &dynamodb.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "dynamodb Filename",
Constraints: types.Constraints{
{
Key: "dynamodb Constraints Key 1",
Regex: "dynamodb Constraints Regex 2",
MustMatch: true,
},
{
Key: "dynamodb Constraints Key 1",
Regex: "dynamodb Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
AccessKeyID: "dynamodb AccessKeyID",
RefreshSeconds: 666,
Region: "dynamodb Region",
SecretAccessKey: "dynamodb SecretAccessKey",
TableName: "dynamodb TableName",
Endpoint: "dynamodb Endpoint",
}
config.Etcd = &etcd.Provider{
Provider: kv.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "etcd Filename",
Constraints: types.Constraints{
{
Key: "etcd Constraints Key 1",
Regex: "etcd Constraints Regex 2",
MustMatch: true,
},
{
Key: "etcd Constraints Key 1",
Regex: "etcd Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "etcd Endpoint",
Prefix: "etcd Prefix",
TLS: &types.ClientTLS{
CA: "etcd CA",
Cert: "etcd Cert",
Key: "etcd Key",
InsecureSkipVerify: true,
},
Username: "etcd Username",
Password: "etcd Password",
},
}
config.Zookeeper = &zk.Provider{
Provider: kv.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "zk Filename",
Constraints: types.Constraints{
{
Key: "zk Constraints Key 1",
Regex: "zk Constraints Regex 2",
MustMatch: true,
},
{
Key: "zk Constraints Key 1",
Regex: "zk Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "zk Endpoint",
Prefix: "zk Prefix",
TLS: &types.ClientTLS{
CA: "zk CA",
Cert: "zk Cert",
Key: "zk Key",
InsecureSkipVerify: true,
},
Username: "zk Username",
Password: "zk Password",
},
}
config.Boltdb = &boltdb.Provider{
Provider: kv.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "boltdb Filename",
Constraints: types.Constraints{
{
Key: "boltdb Constraints Key 1",
Regex: "boltdb Constraints Regex 2",
MustMatch: true,
},
{
Key: "boltdb Constraints Key 1",
Regex: "boltdb Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "boltdb Endpoint",
Prefix: "boltdb Prefix",
TLS: &types.ClientTLS{
CA: "boltdb CA",
Cert: "boltdb Cert",
Key: "boltdb Key",
InsecureSkipVerify: true,
},
Username: "boltdb Username",
Password: "boltdb Password",
},
}
config.Consul = &consul.Provider{
Provider: kv.Provider{
BaseProvider: provider.BaseProvider{
Watch: true,
Filename: "consul Filename",
Constraints: types.Constraints{
{
Key: "consul Constraints Key 1",
Regex: "consul Constraints Regex 2",
MustMatch: true,
},
{
Key: "consul Constraints Key 1",
Regex: "consul Constraints Regex 2",
MustMatch: true,
},
},
Trace: true,
DebugLogGeneratedTemplate: true,
},
Endpoint: "consul Endpoint",
Prefix: "consul Prefix",
TLS: &types.ClientTLS{
CA: "consul CA",
Cert: "consul Cert",
Key: "consul Key",
InsecureSkipVerify: true,
},
Username: "consul Username",
Password: "consul Password",
},
}
cleanJSON, err := Do(config, true)
if err != nil {
t.Fatal(err, cleanJSON)
}
}


@@ -0,0 +1,239 @@
package anonymize
import (
"testing"
"github.com/stretchr/testify/assert"
)
func Test_doOnJSON(t *testing.T) {
baseConfiguration := `
{
"GraceTimeOut": 10000000000,
"Debug": false,
"CheckNewVersion": true,
"AccessLogsFile": "",
"TraefikLogsFile": "",
"LogLevel": "ERROR",
"EntryPoints": {
"http": {
"Network": "",
"Address": ":80",
"TLS": null,
"Redirect": {
"EntryPoint": "https",
"Regex": "",
"Replacement": ""
},
"Auth": null,
"Compress": false
},
"https": {
"Network": "",
"Address": ":443",
"TLS": {
"MinVersion": "",
"CipherSuites": null,
"Certificates": null,
"ClientCAFiles": null
},
"Redirect": null,
"Auth": null,
"Compress": false
}
},
"Cluster": null,
"Constraints": [],
"ACME": {
"Email": "foo@bar.com",
"Domains": [
{
"Main": "foo@bar.com",
"SANs": null
},
{
"Main": "foo@bar.com",
"SANs": null
}
],
"Storage": "",
"StorageFile": "/acme/acme.json",
"OnDemand": true,
"OnHostRule": true,
"CAServer": "",
"EntryPoint": "https",
"DNSProvider": "",
"DelayDontCheckDNS": 0,
"ACMELogging": false,
"TLSConfig": null
},
"DefaultEntryPoints": [
"https",
"http"
],
"ProvidersThrottleDuration": 2000000000,
"MaxIdleConnsPerHost": 200,
"IdleTimeout": 180000000000,
"InsecureSkipVerify": false,
"Retry": null,
"HealthCheck": {
"Interval": 30000000000
},
"Docker": null,
"File": null,
"Web": null,
"Marathon": null,
"Consul": null,
"ConsulCatalog": null,
"Etcd": null,
"Zookeeper": null,
"Boltdb": null,
"Kubernetes": null,
"Mesos": null,
"Eureka": null,
"ECS": null,
"Rancher": null,
"DynamoDB": null,
"ConfigFile": "/etc/traefik/traefik.toml"
}
`
expectedConfiguration := `
{
"GraceTimeOut": 10000000000,
"Debug": false,
"CheckNewVersion": true,
"AccessLogsFile": "",
"TraefikLogsFile": "",
"LogLevel": "ERROR",
"EntryPoints": {
"http": {
"Network": "",
"Address": ":80",
"TLS": null,
"Redirect": {
"EntryPoint": "https",
"Regex": "",
"Replacement": ""
},
"Auth": null,
"Compress": false
},
"https": {
"Network": "",
"Address": ":443",
"TLS": {
"MinVersion": "",
"CipherSuites": null,
"Certificates": null,
"ClientCAFiles": null
},
"Redirect": null,
"Auth": null,
"Compress": false
}
},
"Cluster": null,
"Constraints": [],
"ACME": {
"Email": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"Domains": [
{
"Main": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"SANs": null
},
{
"Main": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"SANs": null
}
],
"Storage": "",
"StorageFile": "/acme/acme.json",
"OnDemand": true,
"OnHostRule": true,
"CAServer": "",
"EntryPoint": "https",
"DNSProvider": "",
"DelayDontCheckDNS": 0,
"ACMELogging": false,
"TLSConfig": null
},
"DefaultEntryPoints": [
"https",
"http"
],
"ProvidersThrottleDuration": 2000000000,
"MaxIdleConnsPerHost": 200,
"IdleTimeout": 180000000000,
"InsecureSkipVerify": false,
"Retry": null,
"HealthCheck": {
"Interval": 30000000000
},
"Docker": null,
"File": null,
"Web": null,
"Marathon": null,
"Consul": null,
"ConsulCatalog": null,
"Etcd": null,
"Zookeeper": null,
"Boltdb": null,
"Kubernetes": null,
"Mesos": null,
"Eureka": null,
"ECS": null,
"Rancher": null,
"DynamoDB": null,
"ConfigFile": "/etc/traefik/traefik.toml"
}
`
anomConfiguration := doOnJSON(baseConfiguration)
if anomConfiguration != expectedConfiguration {
t.Errorf("Got %s, want %s.", anomConfiguration, expectedConfiguration)
}
}
func Test_doOnJSON_simple(t *testing.T) {
testCases := []struct {
name string
input string
expectedOutput string
}{
{
name: "email",
input: `{
"email1": "goo@example.com",
"email2": "foo.bargoo@example.com",
"email3": "foo.bargoo@example.com.us"
}`,
expectedOutput: `{
"email1": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"email2": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"email3": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}`,
},
{
name: "url",
input: `{
"URL": "foo domain.com foo",
"URL": "foo sub.domain.com foo",
"URL": "foo sub.sub.domain.com foo",
"URL": "foo sub.sub.sub.domain.com.us foo"
}`,
expectedOutput: `{
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo"
}`,
},
}
for _, test := range testCases {
t.Run(test.name, func(t *testing.T) {
output := doOnJSON(test.input)
assert.Equal(t, test.expectedOutput, output)
})
}
}


@@ -0,0 +1,176 @@
package anonymize
import (
"reflect"
"testing"
"github.com/stretchr/testify/assert"
)
type Courgette struct {
Ji string
Ho string
}
type Tomate struct {
Ji string
Ho string
}
type Carotte struct {
Name string
Value int
Courgette Courgette
ECourgette Courgette `export:"true"`
Pourgette *Courgette
EPourgette *Courgette `export:"true"`
Aubergine map[string]string
EAubergine map[string]string `export:"true"`
SAubergine map[string]Tomate
ESAubergine map[string]Tomate `export:"true"`
PSAubergine map[string]*Tomate
EPAubergine map[string]*Tomate `export:"true"`
}
func Test_doOnStruct(t *testing.T) {
testCase := []struct {
name string
base *Carotte
expected *Carotte
hasError bool
}{
{
name: "primitive",
base: &Carotte{
Name: "koko",
Value: 666,
},
expected: &Carotte{
Name: "xxxx",
},
},
{
name: "struct",
base: &Carotte{
Name: "koko",
Courgette: Courgette{
Ji: "huu",
},
},
expected: &Carotte{
Name: "xxxx",
},
},
{
name: "pointer",
base: &Carotte{
Name: "koko",
Pourgette: &Courgette{
Ji: "hoo",
},
},
expected: &Carotte{
Name: "xxxx",
Pourgette: nil,
},
},
{
name: "export struct",
base: &Carotte{
Name: "koko",
ECourgette: Courgette{
Ji: "huu",
},
},
expected: &Carotte{
Name: "xxxx",
ECourgette: Courgette{
Ji: "xxxx",
},
},
},
{
name: "export pointer struct",
base: &Carotte{
Name: "koko",
ECourgette: Courgette{
Ji: "huu",
},
},
expected: &Carotte{
Name: "xxxx",
ECourgette: Courgette{
Ji: "xxxx",
},
},
},
{
name: "export map string/string",
base: &Carotte{
Name: "koko",
EAubergine: map[string]string{
"foo": "bar",
},
},
expected: &Carotte{
Name: "xxxx",
EAubergine: map[string]string{
"foo": "bar",
},
},
},
{
name: "export map string/pointer",
base: &Carotte{
Name: "koko",
EPAubergine: map[string]*Tomate{
"foo": {
Ji: "fdskljf",
},
},
},
expected: &Carotte{
Name: "xxxx",
EPAubergine: map[string]*Tomate{
"foo": {
Ji: "xxxx",
},
},
},
},
{
name: "export map string/struct (UNSAFE)",
base: &Carotte{
Name: "koko",
ESAubergine: map[string]Tomate{
"foo": {
Ji: "JiJiJi",
},
},
},
expected: &Carotte{
Name: "xxxx",
ESAubergine: map[string]Tomate{
"foo": {
Ji: "JiJiJi",
},
},
},
hasError: true,
},
}
for _, test := range testCase {
t.Run(test.name, func(t *testing.T) {
val := reflect.ValueOf(test.base).Elem()
err := doOnStruct(val)
if !test.hasError && err != nil {
t.Fatal(err)
}
if test.hasError && err == nil {
t.Fatal("Got no error but want an error.")
}
assert.EqualValues(t, test.expected, test.base)
})
}
}


@@ -2,33 +2,45 @@ package main
import (
"bytes"
"encoding/json"
"fmt"
"net/url"
"os/exec"
"regexp"
"runtime"
"text/template"
"github.com/containous/flaeg"
"github.com/mvdan/xurls"
"github.com/containous/traefik/cmd/traefik/anonymize"
)
var (
bugtracker = "https://github.com/containous/traefik/issues/new"
const (
bugTracker = "https://github.com/containous/traefik/issues/new"
bugTemplate = `<!--
PLEASE READ THIS MESSAGE.
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
For other type of questions, consider using one of:
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com
- StackOverflow: https://stackoverflow.com/questions/tagged/traefik
-->
### Do you want to request a *feature* or report a *bug*?
(If you intend to ask a support question: **DO NOT FILE AN ISSUE**.
Use [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik)
or [Slack](https://traefik.herokuapp.com) instead.)
### What did you do?
<!--
HOW TO WRITE A GOOD ISSUE?
- if it's possible use the command` + "`" + `traefik bug` + "`" + `. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- Respect the issue template as much as possible.
- If it's possible use the command ` + "`" + "traefik bug" + "`" + `. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you're facing.
@@ -37,12 +49,6 @@ HOW TO WRITE A GOOD ISSUE?
-->
### Do you want to request a *feature* or report a *bug*?
### What did you do?
### What did you expect to see?
@@ -60,7 +66,7 @@ HOW TO WRITE A GOOD ISSUE?
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
` + "```" + `toml
` + "```" + `json
{{.Configuration}}
` + "```" + `
@@ -78,7 +84,7 @@ Add more configuration information here.
)
// newBugCmd builds a new Bug command
func newBugCmd(traefikConfiguration interface{}, traefikPointersConfiguration interface{}) *flaeg.Command {
func newBugCmd(traefikConfiguration *TraefikConfiguration, traefikPointersConfiguration *TraefikConfiguration) *flaeg.Command {
//version Command init
return &flaeg.Command{
@@ -86,50 +92,67 @@ func newBugCmd(traefikConfiguration interface{}, traefikPointersConfiguration in
Description: `Report an issue on Traefik bugtracker`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
var version bytes.Buffer
if err := getVersionPrint(&version); err != nil {
return err
}
tmpl, err := template.New("").Parse(bugTemplate)
if err != nil {
return err
}
configJSON, err := json.MarshalIndent(traefikConfiguration, "", " ")
if err != nil {
return err
}
v := struct {
Version string
Configuration string
}{
Version: version.String(),
Configuration: anonymize(string(configJSON)),
}
var bug bytes.Buffer
if err := tmpl.Execute(&bug, v); err != nil {
return err
}
body := bug.String()
URL := bugtracker + "?body=" + url.QueryEscape(body)
if err := openBrowser(URL); err != nil {
fmt.Print("Please file a new issue at " + bugtracker + " using this template:\n\n")
fmt.Print(body)
}
return nil
},
Run: runBugCmd(traefikConfiguration),
Metadata: map[string]string{
"parseAllSources": "true",
},
}
}
func runBugCmd(traefikConfiguration *TraefikConfiguration) func() error {
return func() error {
body, err := createBugReport(traefikConfiguration)
if err != nil {
return err
}
sendBugReport(body)
return nil
}
}
func createBugReport(traefikConfiguration *TraefikConfiguration) (string, error) {
var version bytes.Buffer
if err := getVersionPrint(&version); err != nil {
return "", err
}
tmpl, err := template.New("bug").Parse(bugTemplate)
if err != nil {
return "", err
}
config, err := anonymize.Do(traefikConfiguration, true)
if err != nil {
return "", err
}
v := struct {
Version string
Configuration string
}{
Version: version.String(),
Configuration: config,
}
var bug bytes.Buffer
if err := tmpl.Execute(&bug, v); err != nil {
return "", err
}
return bug.String(), nil
}
func sendBugReport(body string) {
URL := bugTracker + "?body=" + url.QueryEscape(body)
if err := openBrowser(URL); err != nil {
fmt.Printf("Please file a new issue at %s using this template:\n\n", bugTracker)
fmt.Print(body)
}
}
func openBrowser(URL string) error {
var err error
switch runtime.GOOS {
@@ -144,9 +167,3 @@ func openBrowser(URL string) error {
}
return err
}
func anonymize(input string) string {
replace := "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
mailExp := regexp.MustCompile(`\w[-._\w]*\w@\w[-._\w]*\w\.\w{2,3}"`)
return xurls.Relaxed.ReplaceAllString(mailExp.ReplaceAllString(input, replace), replace)
}

cmd/traefik/bug_test.go

@@ -0,0 +1,66 @@
package main
import (
"testing"
"github.com/containous/traefik/cmd/traefik/anonymize"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/tls"
"github.com/containous/traefik/types"
"github.com/stretchr/testify/assert"
)
func Test_createBugReport(t *testing.T) {
traefikConfiguration := &TraefikConfiguration{
ConfigFile: "FOO",
GlobalConfiguration: configuration.GlobalConfiguration{
EntryPoints: configuration.EntryPoints{
"goo": &configuration.EntryPoint{
Address: "hoo.bar",
Auth: &types.Auth{
Basic: &types.Basic{
UsersFile: "foo Basic UsersFile",
Users: types.Users{"foo Basic Users 1", "foo Basic Users 2", "foo Basic Users 3"},
},
Digest: &types.Digest{
UsersFile: "foo Digest UsersFile",
Users: types.Users{"foo Digest Users 1", "foo Digest Users 2", "foo Digest Users 3"},
},
},
},
},
File: &file.Provider{
Directory: "BAR",
},
RootCAs: tls.RootCAs{"fllf"},
},
}
report, err := createBugReport(traefikConfiguration)
assert.NoError(t, err, report)
// exported anonymous configuration
assert.NotContains(t, "web Basic Users ", report)
assert.NotContains(t, "foo Digest Users ", report)
assert.NotContains(t, "hoo.bar", report)
}
func Test_anonymize_traefikConfiguration(t *testing.T) {
traefikConfiguration := &TraefikConfiguration{
ConfigFile: "FOO",
GlobalConfiguration: configuration.GlobalConfiguration{
EntryPoints: configuration.EntryPoints{
"goo": &configuration.EntryPoint{
Address: "hoo.bar",
},
},
File: &file.Provider{
Directory: "BAR",
},
},
}
_, err := anonymize.Do(traefikConfiguration, true)
assert.NoError(t, err)
assert.Equal(t, "hoo.bar", traefikConfiguration.GlobalConfiguration.EntryPoints["goo"].Address)
}


@@ -0,0 +1,297 @@
package main
import (
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik-extra-service-fabric"
"github.com/containous/traefik/api"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/middlewares/accesslog"
"github.com/containous/traefik/ping"
"github.com/containous/traefik/provider/boltdb"
"github.com/containous/traefik/provider/consul"
"github.com/containous/traefik/provider/docker"
"github.com/containous/traefik/provider/dynamodb"
"github.com/containous/traefik/provider/ecs"
"github.com/containous/traefik/provider/etcd"
"github.com/containous/traefik/provider/eureka"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/provider/marathon"
"github.com/containous/traefik/provider/mesos"
"github.com/containous/traefik/provider/rancher"
"github.com/containous/traefik/provider/rest"
"github.com/containous/traefik/provider/zk"
"github.com/containous/traefik/types"
sf "github.com/jjcollinge/servicefabric"
)
// TraefikConfiguration holds GlobalConfiguration and other stuff
type TraefikConfiguration struct {
configuration.GlobalConfiguration `mapstructure:",squash" export:"true"`
ConfigFile string `short:"c" description:"Configuration file to use (TOML)." export:"true"`
}
// NewTraefikDefaultPointersConfiguration creates a TraefikConfiguration with pointers default values
func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
//default Docker
var defaultDocker docker.Provider
defaultDocker.Watch = true
defaultDocker.ExposedByDefault = true
defaultDocker.Endpoint = "unix:///var/run/docker.sock"
defaultDocker.SwarmMode = false
// default File
var defaultFile file.Provider
defaultFile.Watch = true
defaultFile.Filename = "" //needs equivalent to viper.ConfigFileUsed()
// default Rest
var defaultRest rest.Provider
defaultRest.EntryPoint = configuration.DefaultInternalEntryPointName
// TODO: Deprecated - Web provider, use REST provider instead
var defaultWeb configuration.WebCompatibility
defaultWeb.Address = ":8080"
defaultWeb.Statistics = &types.Statistics{
RecentErrors: 10,
}
// TODO: Deprecated - default Metrics
defaultWeb.Metrics = &types.Metrics{
Prometheus: &types.Prometheus{
Buckets: types.Buckets{0.1, 0.3, 1.2, 5},
EntryPoint: configuration.DefaultInternalEntryPointName,
},
Datadog: &types.Datadog{
Address: "localhost:8125",
PushInterval: "10s",
},
StatsD: &types.Statsd{
Address: "localhost:8125",
PushInterval: "10s",
},
InfluxDB: &types.InfluxDB{
Address: "localhost:8089",
PushInterval: "10s",
},
}
// default Marathon
var defaultMarathon marathon.Provider
defaultMarathon.Watch = true
defaultMarathon.Endpoint = "http://127.0.0.1:8080"
defaultMarathon.ExposedByDefault = true
defaultMarathon.Constraints = types.Constraints{}
defaultMarathon.DialerTimeout = flaeg.Duration(60 * time.Second)
defaultMarathon.KeepAlive = flaeg.Duration(10 * time.Second)
// default Consul
var defaultConsul consul.Provider
defaultConsul.Watch = true
defaultConsul.Endpoint = "127.0.0.1:8500"
defaultConsul.Prefix = "traefik"
defaultConsul.Constraints = types.Constraints{}
// default CatalogProvider
var defaultConsulCatalog consul.CatalogProvider
defaultConsulCatalog.Endpoint = "127.0.0.1:8500"
defaultConsulCatalog.ExposedByDefault = true
defaultConsulCatalog.Constraints = types.Constraints{}
defaultConsulCatalog.Prefix = "traefik"
defaultConsulCatalog.FrontEndRule = "Host:{{.ServiceName}}.{{.Domain}}"
// default Etcd
var defaultEtcd etcd.Provider
defaultEtcd.Watch = true
defaultEtcd.Endpoint = "127.0.0.1:2379"
defaultEtcd.Prefix = "/traefik"
defaultEtcd.Constraints = types.Constraints{}
//default Zookeeper
var defaultZookeeper zk.Provider
defaultZookeeper.Watch = true
defaultZookeeper.Endpoint = "127.0.0.1:2181"
defaultZookeeper.Prefix = "traefik"
defaultZookeeper.Constraints = types.Constraints{}
//default Boltdb
var defaultBoltDb boltdb.Provider
defaultBoltDb.Watch = true
defaultBoltDb.Endpoint = "127.0.0.1:4001"
defaultBoltDb.Prefix = "/traefik"
defaultBoltDb.Constraints = types.Constraints{}
//default Kubernetes
var defaultKubernetes kubernetes.Provider
defaultKubernetes.Watch = true
defaultKubernetes.Endpoint = ""
defaultKubernetes.LabelSelector = ""
defaultKubernetes.Constraints = types.Constraints{}
// default Mesos
var defaultMesos mesos.Provider
defaultMesos.Watch = true
defaultMesos.Endpoint = "http://127.0.0.1:5050"
defaultMesos.ExposedByDefault = true
defaultMesos.Constraints = types.Constraints{}
defaultMesos.RefreshSeconds = 30
defaultMesos.ZkDetectionTimeout = 30
defaultMesos.StateTimeoutSecond = 30
//default ECS
var defaultECS ecs.Provider
defaultECS.Watch = true
defaultECS.ExposedByDefault = true
defaultECS.AutoDiscoverClusters = false
defaultECS.Clusters = ecs.Clusters{"default"}
defaultECS.RefreshSeconds = 15
defaultECS.Constraints = types.Constraints{}
//default Rancher
var defaultRancher rancher.Provider
defaultRancher.Watch = true
defaultRancher.ExposedByDefault = true
defaultRancher.RefreshSeconds = 15
// default DynamoDB
var defaultDynamoDB dynamodb.Provider
defaultDynamoDB.Constraints = types.Constraints{}
defaultDynamoDB.RefreshSeconds = 15
defaultDynamoDB.TableName = "traefik"
defaultDynamoDB.Watch = true
// default Eureka
var defaultEureka eureka.Provider
defaultEureka.Delay = "30s"
// default ServiceFabric
var defaultServiceFabric servicefabric.Provider
defaultServiceFabric.APIVersion = sf.DefaultAPIVersion
defaultServiceFabric.RefreshSeconds = 10
// default Ping
var defaultPing = ping.Handler{
EntryPoint: "traefik",
}
// default TraefikLog
defaultTraefikLog := types.TraefikLog{
Format: "common",
FilePath: "",
}
// default AccessLog
defaultAccessLog := types.AccessLog{
Format: accesslog.CommonFormat,
FilePath: "",
}
// default HealthCheckConfig
healthCheck := configuration.HealthCheckConfig{
Interval: flaeg.Duration(configuration.DefaultHealthCheckInterval),
}
// default RespondingTimeouts
respondingTimeouts := configuration.RespondingTimeouts{
IdleTimeout: flaeg.Duration(configuration.DefaultIdleTimeout),
}
// default ForwardingTimeouts
forwardingTimeouts := configuration.ForwardingTimeouts{
DialTimeout: flaeg.Duration(configuration.DefaultDialTimeout),
}
// default LifeCycle
defaultLifeCycle := configuration.LifeCycle{
GraceTimeOut: flaeg.Duration(configuration.DefaultGraceTimeout),
}
// default ApiConfiguration
defaultAPI := api.Handler{
EntryPoint: "traefik",
Dashboard: true,
}
defaultAPI.Statistics = &types.Statistics{
RecentErrors: 10,
}
// default Metrics
defaultMetrics := types.Metrics{
Prometheus: &types.Prometheus{
Buckets: types.Buckets{0.1, 0.3, 1.2, 5},
EntryPoint: configuration.DefaultInternalEntryPointName,
},
Datadog: &types.Datadog{
Address: "localhost:8125",
PushInterval: "10s",
},
StatsD: &types.Statsd{
Address: "localhost:8125",
PushInterval: "10s",
},
InfluxDB: &types.InfluxDB{
Address: "localhost:8089",
PushInterval: "10s",
},
}
defaultConfiguration := configuration.GlobalConfiguration{
Docker: &defaultDocker,
File: &defaultFile,
Web: &defaultWeb,
Rest: &defaultRest,
Marathon: &defaultMarathon,
Consul: &defaultConsul,
ConsulCatalog: &defaultConsulCatalog,
Etcd: &defaultEtcd,
Zookeeper: &defaultZookeeper,
Boltdb: &defaultBoltDb,
Kubernetes: &defaultKubernetes,
Mesos: &defaultMesos,
ECS: &defaultECS,
Rancher: &defaultRancher,
Eureka: &defaultEureka,
DynamoDB: &defaultDynamoDB,
Retry: &configuration.Retry{},
HealthCheck: &healthCheck,
RespondingTimeouts: &respondingTimeouts,
ForwardingTimeouts: &forwardingTimeouts,
TraefikLog: &defaultTraefikLog,
AccessLog: &defaultAccessLog,
LifeCycle: &defaultLifeCycle,
Ping: &defaultPing,
API: &defaultAPI,
Metrics: &defaultMetrics,
}
return &TraefikConfiguration{
GlobalConfiguration: defaultConfiguration,
}
}
// NewTraefikConfiguration creates a TraefikConfiguration with default values
func NewTraefikConfiguration() *TraefikConfiguration {
return &TraefikConfiguration{
GlobalConfiguration: configuration.GlobalConfiguration{
AccessLogsFile: "",
TraefikLogsFile: "",
LogLevel: "ERROR",
EntryPoints: map[string]*configuration.EntryPoint{},
Constraints: types.Constraints{},
DefaultEntryPoints: []string{"http"},
ProvidersThrottleDuration: flaeg.Duration(2 * time.Second),
MaxIdleConnsPerHost: 200,
IdleTimeout: flaeg.Duration(0),
HealthCheck: &configuration.HealthCheckConfig{
Interval: flaeg.Duration(configuration.DefaultHealthCheckInterval),
},
LifeCycle: &configuration.LifeCycle{
GraceTimeOut: flaeg.Duration(configuration.DefaultGraceTimeout),
},
CheckNewVersion: true,
},
ConfigFile: "",
}
}
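The two constructors above exist because flaeg separates the parsed configuration from the defaults applied when a pointer section is switched on from the command line. A hedged sketch of the wiring, in the same package as this file (the Description and Run body are placeholders; traefik.go below builds the real command):

package main

import "github.com/containous/flaeg"

// newRootCommand is illustrative only.
func newRootCommand() *flaeg.Command {
	return &flaeg.Command{
		Name:        "traefik",
		Description: "placeholder description",
		// Config receives the values parsed from flags, TOML and KV sources.
		Config: NewTraefikConfiguration(),
		// DefaultPointersConfig supplies the defaults used when a pointer
		// section such as --docker is enabled without further sub-flags.
		DefaultPointersConfig: NewTraefikDefaultPointersConfiguration(),
		Run:                   func() error { return nil },
	}
}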


@@ -0,0 +1,71 @@
package main
import (
"crypto/tls"
"errors"
"fmt"
"net/http"
"os"
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik/configuration"
)
func newHealthCheckCmd(traefikConfiguration *TraefikConfiguration, traefikPointersConfiguration *TraefikConfiguration) *flaeg.Command {
return &flaeg.Command{
Name: "healthcheck",
Description: `Calls traefik /ping to check health (web provider must be enabled)`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: runHealthCheck(traefikConfiguration),
Metadata: map[string]string{
"parseAllSources": "true",
},
}
}
func runHealthCheck(traefikConfiguration *TraefikConfiguration) func() error {
return func() error {
traefikConfiguration.GlobalConfiguration.SetEffectiveConfiguration(traefikConfiguration.ConfigFile)
resp, errPing := healthCheck(traefikConfiguration.GlobalConfiguration)
if errPing != nil {
fmt.Printf("Error calling healthcheck: %s\n", errPing)
os.Exit(1)
}
if resp.StatusCode != http.StatusOK {
fmt.Printf("Bad healthcheck status: %s\n", resp.Status)
os.Exit(1)
}
fmt.Printf("OK: %s\n", resp.Request.URL)
os.Exit(0)
return nil
}
}
func healthCheck(globalConfiguration configuration.GlobalConfiguration) (*http.Response, error) {
if globalConfiguration.Ping == nil {
return nil, errors.New("please enable `ping` to use health check")
}
pingEntryPoint, ok := globalConfiguration.EntryPoints[globalConfiguration.Ping.EntryPoint]
if !ok {
return nil, errors.New("missing `ping` entrypoint")
}
client := &http.Client{Timeout: 5 * time.Second}
protocol := "http"
if pingEntryPoint.TLS != nil {
protocol = "https"
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client.Transport = tr
}
path := "/"
if globalConfiguration.Web != nil {
path = globalConfiguration.Web.Path
}
return client.Head(protocol + "://" + pingEntryPoint.Address + path + "ping")
}
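A test-style sketch of calling the probe helper above directly; it would have to live in the same package (cmd/traefik), and the entrypoint address is an illustrative value only:

package main

import (
	"fmt"

	"github.com/containous/traefik/configuration"
	"github.com/containous/traefik/ping"
)

func exampleHealthCheck() {
	cfg := configuration.GlobalConfiguration{
		Ping: &ping.Handler{EntryPoint: "traefik"},
		EntryPoints: configuration.EntryPoints{
			"traefik": &configuration.EntryPoint{Address: "127.0.0.1:8080"},
		},
	}
	resp, err := healthCheck(cfg) // HEADs http://127.0.0.1:8080/ping
	if err != nil {
		fmt.Println("healthcheck error:", err)
		return
	}
	fmt.Println("healthcheck status:", resp.Status)
}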

cmd/traefik/storeconfig.go

@@ -0,0 +1,145 @@
package main
import (
"encoding/json"
"fmt"
stdlog "log"
"github.com/containous/flaeg"
"github.com/containous/staert"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/cluster"
"github.com/docker/libkv/store"
)
func newStoreConfigCmd(traefikConfiguration *TraefikConfiguration, traefikPointersConfiguration *TraefikConfiguration) *flaeg.Command {
return &flaeg.Command{
Name: "storeconfig",
Description: `Store the static traefik configuration into a Key-value stores. Traefik will not start.`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Metadata: map[string]string{
"parseAllSources": "true",
},
}
}
func runStoreConfig(kv *staert.KvSource, traefikConfiguration *TraefikConfiguration) func() error {
return func() error {
if kv == nil {
return fmt.Errorf("error using command storeconfig, no Key-value store defined")
}
fileConfig := traefikConfiguration.GlobalConfiguration.File
if fileConfig != nil {
traefikConfiguration.GlobalConfiguration.File = nil
if len(fileConfig.Filename) == 0 && len(fileConfig.Directory) == 0 {
fileConfig.Filename = traefikConfiguration.ConfigFile
}
}
jsonConf, err := json.Marshal(traefikConfiguration.GlobalConfiguration)
if err != nil {
return err
}
stdlog.Printf("Storing configuration: %s\n", jsonConf)
err = kv.StoreConfig(traefikConfiguration.GlobalConfiguration)
if err != nil {
return err
}
if fileConfig != nil {
jsonConf, err = json.Marshal(fileConfig)
if err != nil {
return err
}
stdlog.Printf("Storing file configuration: %s\n", jsonConf)
config, err := fileConfig.LoadConfig()
if err != nil {
return err
}
stdlog.Print("Writing config to KV")
err = kv.StoreConfig(config)
if err != nil {
return err
}
}
if traefikConfiguration.GlobalConfiguration.ACME != nil {
var object cluster.Object
if len(traefikConfiguration.GlobalConfiguration.ACME.StorageFile) > 0 {
// convert ACME json file to KV store
localStore := acme.NewLocalStore(traefikConfiguration.GlobalConfiguration.ACME.StorageFile)
object, err = localStore.Load()
if err != nil {
return err
}
} else {
// Create an empty account to create all the keys into the KV store
account := &acme.Account{}
account.Init()
object = account
}
meta := cluster.NewMetadata(object)
err = meta.Marshall()
if err != nil {
return err
}
source := staert.KvSource{
Store: kv,
Prefix: traefikConfiguration.GlobalConfiguration.ACME.Storage,
}
err = source.StoreConfig(meta)
if err != nil {
return err
}
// Force to delete storagefile
err = kv.Delete(kv.Prefix + "/acme/storagefile")
if err != nil {
return err
}
}
return nil
}
}
// createKvSource creates KvSource
// TLS support is enabled for Consul and Etcd backends
func createKvSource(traefikConfiguration *TraefikConfiguration) (*staert.KvSource, error) {
var kv *staert.KvSource
var kvStore store.Store
var err error
switch {
case traefikConfiguration.Consul != nil:
kvStore, err = traefikConfiguration.Consul.CreateStore()
kv = &staert.KvSource{
Store: kvStore,
Prefix: traefikConfiguration.Consul.Prefix,
}
case traefikConfiguration.Etcd != nil:
kvStore, err = traefikConfiguration.Etcd.CreateStore()
kv = &staert.KvSource{
Store: kvStore,
Prefix: traefikConfiguration.Etcd.Prefix,
}
case traefikConfiguration.Zookeeper != nil:
kvStore, err = traefikConfiguration.Zookeeper.CreateStore()
kv = &staert.KvSource{
Store: kvStore,
Prefix: traefikConfiguration.Zookeeper.Prefix,
}
case traefikConfiguration.Boltdb != nil:
kvStore, err = traefikConfiguration.Boltdb.CreateStore()
kv = &staert.KvSource{
Store: kvStore,
Prefix: traefikConfiguration.Boltdb.Prefix,
}
}
return kv, err
}
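For the reverse path, the root command reads the stored tree back through the same staert source at startup. A minimal sketch, assuming only the staert calls already used elsewhere in this changeset (NewStaert, AddSource, LoadConfig):

package main

import (
	"github.com/containous/flaeg"
	"github.com/containous/staert"
)

// loadFromKV mirrors what the root command does on startup: the KV source
// produced by createKvSource is added as a configuration source and the
// command configuration is (re)loaded from it.
func loadFromKV(traefikCmd *flaeg.Command, kv *staert.KvSource) error {
	s := staert.NewStaert(traefikCmd)
	s.AddSource(kv)
	_, err := s.LoadConfig()
	return err
}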


@@ -1,41 +1,40 @@
package main
import (
"crypto/tls"
"encoding/json"
"fmt"
fmtlog "log"
"net/http"
"os"
"path/filepath"
"reflect"
"runtime"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/cenk/backoff"
"github.com/containous/flaeg"
"github.com/containous/staert"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/collector"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/job"
"github.com/containous/traefik/log"
"github.com/containous/traefik/middlewares"
"github.com/containous/traefik/provider/ecs"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/server"
"github.com/containous/traefik/server/uuid"
traefikTls "github.com/containous/traefik/tls"
"github.com/containous/traefik/types"
"github.com/containous/traefik/version"
"github.com/coreos/go-systemd/daemon"
"github.com/docker/libkv/store"
"github.com/satori/go.uuid"
"github.com/ogier/pflag"
)
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
//traefik config inits
traefikConfiguration := server.NewTraefikConfiguration()
traefikPointersConfiguration := server.NewTraefikDefaultPointersConfiguration()
traefikConfiguration := NewTraefikConfiguration()
traefikPointersConfiguration := NewTraefikDefaultPointersConfiguration()
//traefik Command init
traefikCmd := &flaeg.Command{
Name: "traefik",
@@ -44,75 +43,31 @@ Complete documentation is available at https://traefik.io`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
run(traefikConfiguration)
run(&traefikConfiguration.GlobalConfiguration, traefikConfiguration.ConfigFile)
return nil
},
}
//storeconfig Command init
var kv *staert.KvSource
var err error
storeconfigCmd := &flaeg.Command{
Name: "storeconfig",
Description: `Store the static traefik configuration into a Key-value stores. Traefik will not start.`,
Config: traefikConfiguration,
DefaultPointersConfig: traefikPointersConfiguration,
Run: func() error {
if kv == nil {
return fmt.Errorf("Error using command storeconfig, no Key-value store defined")
}
jsonConf, err := json.Marshal(traefikConfiguration.GlobalConfiguration)
if err != nil {
return err
}
fmtlog.Printf("Storing configuration: %s\n", jsonConf)
err = kv.StoreConfig(traefikConfiguration.GlobalConfiguration)
if err != nil {
return err
}
if traefikConfiguration.GlobalConfiguration.ACME != nil && len(traefikConfiguration.GlobalConfiguration.ACME.StorageFile) > 0 {
// convert ACME json file to KV store
store := acme.NewLocalStore(traefikConfiguration.GlobalConfiguration.ACME.StorageFile)
object, err := store.Load()
if err != nil {
return err
}
meta := cluster.NewMetadata(object)
err = meta.Marshall()
if err != nil {
return err
}
source := staert.KvSource{
Store: kv,
Prefix: traefikConfiguration.GlobalConfiguration.ACME.Storage,
}
err = source.StoreConfig(meta)
if err != nil {
return err
}
}
return nil
},
Metadata: map[string]string{
"parseAllSources": "true",
},
}
storeConfigCmd := newStoreConfigCmd(traefikConfiguration, traefikPointersConfiguration)
//init flaeg source
f := flaeg.New(traefikCmd, os.Args[1:])
//add custom parsers
f.AddParser(reflect.TypeOf(server.EntryPoints{}), &server.EntryPoints{})
f.AddParser(reflect.TypeOf(server.DefaultEntryPoints{}), &server.DefaultEntryPoints{})
f.AddParser(reflect.TypeOf(configuration.EntryPoints{}), &configuration.EntryPoints{})
f.AddParser(reflect.TypeOf(configuration.DefaultEntryPoints{}), &configuration.DefaultEntryPoints{})
f.AddParser(reflect.TypeOf(traefikTls.RootCAs{}), &traefikTls.RootCAs{})
f.AddParser(reflect.TypeOf(types.Constraints{}), &types.Constraints{})
f.AddParser(reflect.TypeOf(kubernetes.Namespaces{}), &kubernetes.Namespaces{})
f.AddParser(reflect.TypeOf(ecs.Clusters{}), &ecs.Clusters{})
f.AddParser(reflect.TypeOf([]acme.Domain{}), &acme.Domains{})
f.AddParser(reflect.TypeOf(types.Buckets{}), &types.Buckets{})
//add commands
f.AddCommand(newVersionCmd())
f.AddCommand(newBugCmd(traefikConfiguration, traefikPointersConfiguration))
f.AddCommand(storeconfigCmd)
f.AddCommand(storeConfigCmd)
f.AddCommand(newHealthCheckCmd(traefikConfiguration, traefikPointersConfiguration))
usedCmd, err := f.GetCommand()
if err != nil {
@@ -121,6 +76,9 @@ Complete documentation is available at https://traefik.io`,
}
if _, err := f.Parse(usedCmd); err != nil {
if err == pflag.ErrHelp {
os.Exit(0)
}
fmtlog.Printf("Error parsing command: %s\n", err)
os.Exit(-1)
}
@@ -134,28 +92,37 @@ Complete documentation is available at https://traefik.io`,
s.AddSource(toml)
s.AddSource(f)
if _, err := s.LoadConfig(); err != nil {
fmtlog.Println(fmt.Errorf("Error reading TOML config file %s : %s", toml.ConfigFileUsed(), err))
fmtlog.Printf("Error reading TOML config file %s : %s\n", toml.ConfigFileUsed(), err)
os.Exit(-1)
}
traefikConfiguration.ConfigFile = toml.ConfigFileUsed()
kv, err = CreateKvSource(traefikConfiguration)
kv, err := createKvSource(traefikConfiguration)
if err != nil {
fmtlog.Printf("Error creating kv store: %s\n", err)
os.Exit(-1)
}
storeConfigCmd.Run = runStoreConfig(kv, traefikConfiguration)
// If a KV store is enabled and no sub-command is called in args
if kv != nil && usedCmd == traefikCmd {
if traefikConfiguration.Cluster == nil {
traefikConfiguration.Cluster = &types.Cluster{Node: uuid.NewV4().String()}
traefikConfiguration.Cluster = &types.Cluster{Node: uuid.Get()}
}
if traefikConfiguration.Cluster.Store == nil {
traefikConfiguration.Cluster.Store = &types.Store{Prefix: kv.Prefix, Store: kv.Store}
}
s.AddSource(kv)
if _, err := s.LoadConfig(); err != nil {
operation := func() error {
_, err := s.LoadConfig()
return err
}
notify := func(err error, time time.Duration) {
log.Errorf("Load config error: %+v, retrying in %s", err, time)
}
err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
if err != nil {
fmtlog.Printf("Error loading configuration: %s\n", err)
os.Exit(-1)
}
@@ -169,93 +136,37 @@ Complete documentation is available at https://traefik.io`,
os.Exit(0)
}
func run(traefikConfiguration *server.TraefikConfiguration) {
fmtlog.SetFlags(fmtlog.Lshortfile | fmtlog.LstdFlags)
func run(globalConfiguration *configuration.GlobalConfiguration, configFile string) {
configureLogging(globalConfiguration)
// load global configuration
globalConfiguration := traefikConfiguration.GlobalConfiguration
http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = globalConfiguration.MaxIdleConnsPerHost
if globalConfiguration.InsecureSkipVerify {
http.DefaultTransport.(*http.Transport).TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
}
loggerMiddleware := middlewares.NewLogger(globalConfiguration.AccessLogsFile)
defer loggerMiddleware.Close()
if globalConfiguration.File != nil && len(globalConfiguration.File.Filename) == 0 {
// no filename, setting to global config file
if len(traefikConfiguration.ConfigFile) != 0 {
globalConfiguration.File.Filename = traefikConfiguration.ConfigFile
} else {
log.Errorln("Error using file configuration backend, no filename defined")
}
if len(configFile) > 0 {
log.Infof("Using TOML configuration file %s", configFile)
}
if len(globalConfiguration.EntryPoints) == 0 {
globalConfiguration.EntryPoints = map[string]*server.EntryPoint{"http": {Address: ":80"}}
globalConfiguration.DefaultEntryPoints = []string{"http"}
}
http.DefaultTransport.(*http.Transport).Proxy = http.ProxyFromEnvironment
if globalConfiguration.Debug {
globalConfiguration.LogLevel = "DEBUG"
}
globalConfiguration.SetEffectiveConfiguration(configFile)
globalConfiguration.ValidateConfiguration()
// logging
level, err := logrus.ParseLevel(strings.ToLower(globalConfiguration.LogLevel))
if err != nil {
log.Error("Error getting level", err)
}
log.SetLevel(level)
if len(globalConfiguration.TraefikLogsFile) > 0 {
dir := filepath.Dir(globalConfiguration.TraefikLogsFile)
err := os.MkdirAll(dir, 0755)
if err != nil {
log.Errorf("Failed to create log path %s: %s", dir, err)
}
fi, err := os.OpenFile(globalConfiguration.TraefikLogsFile, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
defer func() {
if err := fi.Close(); err != nil {
log.Error("Error closing file", err)
}
}()
if err != nil {
log.Error("Error opening file", err)
} else {
log.SetOutput(fi)
log.SetFormatter(&logrus.TextFormatter{DisableColors: true, FullTimestamp: true, DisableSorting: true})
}
} else {
log.SetFormatter(&logrus.TextFormatter{FullTimestamp: true, DisableSorting: true})
}
jsonConf, _ := json.Marshal(globalConfiguration)
log.Infof("Traefik version %s built on %s", version.Version, version.BuildDate)
if globalConfiguration.CheckNewVersion {
ticker := time.NewTicker(24 * time.Hour)
safe.Go(func() {
version.CheckNewVersion()
for {
select {
case <-ticker.C:
version.CheckNewVersion()
}
}
})
checkNewVersion()
}
if len(traefikConfiguration.ConfigFile) != 0 {
log.Infof("Using TOML configuration file %s", traefikConfiguration.ConfigFile)
}
stats(globalConfiguration)
log.Debugf("Global configuration loaded %s", string(jsonConf))
svr := server.NewServer(globalConfiguration)
svr := server.NewServer(*globalConfiguration)
svr.Start()
defer svr.Close()
sent, err := daemon.SdNotify(false, "READY=1")
if !sent && err != nil {
log.Error("Fail to notify", err)
}
t, err := daemon.SdWatchdogEnabled(false)
if err != nil {
log.Error("Problem with watchdog", err)
@@ -266,48 +177,114 @@ func run(traefikConfiguration *server.TraefikConfiguration) {
safe.Go(func() {
tick := time.Tick(t)
for range tick {
if ok, _ := daemon.SdNotify(false, "WATCHDOG=1"); !ok {
log.Error("Fail to tick watchdog")
_, errHealthCheck := healthCheck(*globalConfiguration)
if globalConfiguration.Ping == nil || errHealthCheck == nil {
if ok, _ := daemon.SdNotify(false, "WATCHDOG=1"); !ok {
log.Error("Fail to tick watchdog")
}
} else {
log.Error(errHealthCheck)
}
}
})
}
svr.Wait()
log.Info("Shutting down")
logrus.Exit(0)
}
// CreateKvSource creates KvSource
// TLS support is enable for Consul and Etcd backends
func CreateKvSource(traefikConfiguration *server.TraefikConfiguration) (*staert.KvSource, error) {
var kv *staert.KvSource
var store store.Store
var err error
func configureLogging(globalConfiguration *configuration.GlobalConfiguration) {
// configure default log flags
fmtlog.SetFlags(fmtlog.Lshortfile | fmtlog.LstdFlags)
switch {
case traefikConfiguration.Consul != nil:
store, err = traefikConfiguration.Consul.CreateStore()
kv = &staert.KvSource{
Store: store,
Prefix: traefikConfiguration.Consul.Prefix,
if globalConfiguration.Debug {
globalConfiguration.LogLevel = "DEBUG"
}
// configure log level
level, err := logrus.ParseLevel(strings.ToLower(globalConfiguration.LogLevel))
if err != nil {
log.Error("Error getting level", err)
}
log.SetLevel(level)
// configure log output file
logFile := globalConfiguration.TraefikLogsFile
if len(logFile) > 0 {
log.Warn("top-level traefikLogsFile has been deprecated -- please use traefiklog.filepath")
}
if globalConfiguration.TraefikLog != nil && len(globalConfiguration.TraefikLog.FilePath) > 0 {
logFile = globalConfiguration.TraefikLog.FilePath
}
// configure log format
var formatter logrus.Formatter
if globalConfiguration.TraefikLog != nil && globalConfiguration.TraefikLog.Format == "json" {
formatter = &logrus.JSONFormatter{}
} else {
disableColors := false
if len(logFile) > 0 {
disableColors = true
}
case traefikConfiguration.Etcd != nil:
store, err = traefikConfiguration.Etcd.CreateStore()
kv = &staert.KvSource{
Store: store,
Prefix: traefikConfiguration.Etcd.Prefix,
formatter = &logrus.TextFormatter{DisableColors: disableColors, FullTimestamp: true, DisableSorting: true}
}
log.SetFormatter(formatter)
if len(logFile) > 0 {
dir := filepath.Dir(logFile)
err := os.MkdirAll(dir, 0755)
if err != nil {
log.Errorf("Failed to create log path %s: %s", dir, err)
}
case traefikConfiguration.Zookeeper != nil:
store, err = traefikConfiguration.Zookeeper.CreateStore()
kv = &staert.KvSource{
Store: store,
Prefix: traefikConfiguration.Zookeeper.Prefix,
}
case traefikConfiguration.Boltdb != nil:
store, err = traefikConfiguration.Boltdb.CreateStore()
kv = &staert.KvSource{
Store: store,
Prefix: traefikConfiguration.Boltdb.Prefix,
err = log.OpenFile(logFile)
logrus.RegisterExitHandler(func() {
if err := log.CloseFile(); err != nil {
log.Error("Error closing log", err)
}
})
if err != nil {
log.Error("Error opening file", err)
}
}
return kv, err
}
func checkNewVersion() {
ticker := time.Tick(24 * time.Hour)
safe.Go(func() {
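// Note: the init statement delays the first check by 10 minutes, the empty condition loops forever,
// and the channel receive in the post statement blocks until the next 24-hour tick.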
for time.Sleep(10 * time.Minute); ; <-ticker {
version.CheckNewVersion()
}
})
}
func stats(globalConfiguration *configuration.GlobalConfiguration) {
if globalConfiguration.SendAnonymousUsage {
log.Info(`
Stats collection is enabled.
Many thanks for contributing to Traefik's improvement by allowing us to receive anonymous information from your configuration.
Help us improve Traefik by leaving this feature on :)
More details on: https://docs.traefik.io/basics/#collected-data
`)
collect(globalConfiguration)
} else {
log.Info(`
Stats collection is disabled.
Help us improve Traefik by turning this feature on :)
More details on: https://docs.traefik.io/basics/#collected-data
`)
}
}
func collect(globalConfiguration *configuration.GlobalConfiguration) {
ticker := time.Tick(24 * time.Hour)
safe.Go(func() {
for time.Sleep(10 * time.Minute); ; <-ticker {
if err := collector.Collect(globalConfiguration); err != nil {
log.Debug(err)
}
}
})
}


@@ -30,7 +30,7 @@ func newVersionCmd() *flaeg.Command {
if err := getVersionPrint(os.Stdout); err != nil {
return err
}
fmt.Printf("\n")
fmt.Print("\n")
return nil
},

collector/collector.go Normal file

@@ -0,0 +1,79 @@
package collector
import (
"bytes"
"encoding/base64"
"encoding/json"
"net"
"net/http"
"strconv"
"time"
"github.com/containous/traefik/cmd/traefik/anonymize"
"github.com/containous/traefik/configuration"
"github.com/containous/traefik/log"
"github.com/containous/traefik/version"
"github.com/mitchellh/hashstructure"
)
// collectorURL is the URL where the stats are sent
const collectorURL = "https://collect.traefik.io/619df80498b60f985d766ce62f912b7c"
// Collected data
type data struct {
Version string
Codename string
BuildDate string
Configuration string
Hash string
}
// Collect anonymous data.
func Collect(globalConfiguration *configuration.GlobalConfiguration) error {
anonConfig, err := anonymize.Do(globalConfiguration, false)
if err != nil {
return err
}
log.Infof("Anonymous stats sent to %s: %s", collectorURL, anonConfig)
hashConf, err := hashstructure.Hash(globalConfiguration, nil)
if err != nil {
return err
}
data := &data{
Version: version.Version,
Codename: version.Codename,
BuildDate: version.BuildDate,
Hash: strconv.FormatUint(hashConf, 10),
Configuration: base64.StdEncoding.EncodeToString([]byte(anonConfig)),
}
buf := new(bytes.Buffer)
err = json.NewEncoder(buf).Encode(data)
if err != nil {
return err
}
_, err = makeHTTPClient().Post(collectorURL, "application/json; charset=utf-8", buf)
return err
}
func makeHTTPClient() *http.Client {
dialer := &net.Dialer{
Timeout: configuration.DefaultDialTimeout,
KeepAlive: 30 * time.Second,
DualStack: true,
}
transport := &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: dialer.DialContext,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
}
return &http.Client{Transport: transport}
}


@@ -0,0 +1,504 @@
package configuration
import (
"fmt"
"strings"
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik-extra-service-fabric"
"github.com/containous/traefik/acme"
"github.com/containous/traefik/api"
"github.com/containous/traefik/log"
"github.com/containous/traefik/ping"
"github.com/containous/traefik/provider/boltdb"
"github.com/containous/traefik/provider/consul"
"github.com/containous/traefik/provider/docker"
"github.com/containous/traefik/provider/dynamodb"
"github.com/containous/traefik/provider/ecs"
"github.com/containous/traefik/provider/etcd"
"github.com/containous/traefik/provider/eureka"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/provider/kubernetes"
"github.com/containous/traefik/provider/marathon"
"github.com/containous/traefik/provider/mesos"
"github.com/containous/traefik/provider/rancher"
"github.com/containous/traefik/provider/rest"
"github.com/containous/traefik/provider/zk"
"github.com/containous/traefik/tls"
"github.com/containous/traefik/types"
)
const (
// DefaultInternalEntryPointName the name of the default internal entry point
DefaultInternalEntryPointName = "traefik"
// DefaultHealthCheckInterval is the default health check interval.
DefaultHealthCheckInterval = 30 * time.Second
// DefaultDialTimeout when connecting to a backend server.
DefaultDialTimeout = 30 * time.Second
// DefaultIdleTimeout before closing an idle connection.
DefaultIdleTimeout = 180 * time.Second
// DefaultGraceTimeout controls how long Traefik serves pending requests
// prior to shutting down.
DefaultGraceTimeout = 10 * time.Second
)
// GlobalConfiguration holds global configuration (with providers, etc.).
// It's populated from the traefik configuration file passed as an argument to the binary.
type GlobalConfiguration struct {
LifeCycle *LifeCycle `description:"Timeouts influencing the server life cycle" export:"true"`
GraceTimeOut flaeg.Duration `short:"g" description:"(Deprecated) Duration to give active requests a chance to finish before Traefik stops" export:"true"` // Deprecated
Debug bool `short:"d" description:"Enable debug mode" export:"true"`
CheckNewVersion bool `description:"Periodically check if a new version has been released" export:"true"`
SendAnonymousUsage bool `description:"Periodically send anonymous usage statistics" export:"true"`
AccessLogsFile string `description:"(Deprecated) Access logs file" export:"true"` // Deprecated
AccessLog *types.AccessLog `description:"Access log settings" export:"true"`
TraefikLogsFile string `description:"(Deprecated) Traefik logs file. Stdout is used when omitted or empty" export:"true"` // Deprecated
TraefikLog *types.TraefikLog `description:"Traefik log settings" export:"true"`
LogLevel string `short:"l" description:"Log level" export:"true"`
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key;prod/traefik.crt,prod/traefik.key'" export:"true"`
Cluster *types.Cluster `description:"Enable clustering" export:"true"`
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags" export:"true"`
ACME *acme.ACME `description:"Enable ACME (Let's Encrypt): automatic SSL" export:"true"`
DefaultEntryPoints DefaultEntryPoints `description:"Entrypoints to be used by frontends that do not specify any entrypoint" export:"true"`
ProvidersThrottleDuration flaeg.Duration `description:"Backends throttle duration: minimum duration between 2 events from providers before applying a new configuration. It avoids unnecessary reloads if multiple events are sent in a short amount of time." export:"true"`
MaxIdleConnsPerHost int `description:"If non-zero, controls the maximum idle (keep-alive) connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used" export:"true"`
IdleTimeout flaeg.Duration `description:"(Deprecated) maximum amount of time an idle (keep-alive) connection will remain idle before closing itself." export:"true"` // Deprecated
InsecureSkipVerify bool `description:"Disable SSL certificate verification" export:"true"`
RootCAs tls.RootCAs `description:"Add cert file for self-signed certificate"`
Retry *Retry `description:"Enable retry sending request if network error" export:"true"`
HealthCheck *HealthCheckConfig `description:"Health check parameters" export:"true"`
RespondingTimeouts *RespondingTimeouts `description:"Timeouts for incoming requests to the Traefik instance" export:"true"`
ForwardingTimeouts *ForwardingTimeouts `description:"Timeouts for requests forwarded to the backend servers" export:"true"`
Web *WebCompatibility `description:"(Deprecated) Enable Web backend with default settings" export:"true"` // Deprecated
Docker *docker.Provider `description:"Enable Docker backend with default settings" export:"true"`
File *file.Provider `description:"Enable File backend with default settings" export:"true"`
Marathon *marathon.Provider `description:"Enable Marathon backend with default settings" export:"true"`
Consul *consul.Provider `description:"Enable Consul backend with default settings" export:"true"`
ConsulCatalog *consul.CatalogProvider `description:"Enable Consul catalog backend with default settings" export:"true"`
Etcd *etcd.Provider `description:"Enable Etcd backend with default settings" export:"true"`
Zookeeper *zk.Provider `description:"Enable Zookeeper backend with default settings" export:"true"`
Boltdb *boltdb.Provider `description:"Enable Boltdb backend with default settings" export:"true"`
Kubernetes *kubernetes.Provider `description:"Enable Kubernetes backend with default settings" export:"true"`
Mesos *mesos.Provider `description:"Enable Mesos backend with default settings" export:"true"`
Eureka *eureka.Provider `description:"Enable Eureka backend with default settings" export:"true"`
ECS *ecs.Provider `description:"Enable ECS backend with default settings" export:"true"`
Rancher *rancher.Provider `description:"Enable Rancher backend with default settings" export:"true"`
DynamoDB *dynamodb.Provider `description:"Enable DynamoDB backend with default settings" export:"true"`
ServiceFabric *servicefabric.Provider `description:"Enable Service Fabric backend with default settings" export:"true"`
Rest *rest.Provider `description:"Enable Rest backend with default settings" export:"true"`
API *api.Handler `description:"Enable api/dashboard" export:"true"`
Metrics *types.Metrics `description:"Enable a metrics exporter" export:"true"`
Ping *ping.Handler `description:"Enable ping" export:"true"`
}
// WebCompatibility is a configuration to handle compatibility with deprecated web provider options
type WebCompatibility struct {
Address string `description:"Web administration port" export:"true"`
CertFile string `description:"SSL certificate" export:"true"`
KeyFile string `description:"SSL certificate" export:"true"`
ReadOnly bool `description:"Enable read only API" export:"true"`
Statistics *types.Statistics `description:"Enable more detailed statistics" export:"true"`
Metrics *types.Metrics `description:"Enable a metrics exporter" export:"true"`
Path string `description:"Root path for dashboard and API" export:"true"`
Auth *types.Auth `export:"true"`
Debug bool `export:"true"`
}
func (gc *GlobalConfiguration) handleWebDeprecation() {
if gc.Web != nil {
log.Warn("web provider configuration is deprecated, you should use these options : api, rest provider, ping and metrics")
if gc.API != nil || gc.Metrics != nil || gc.Ping != nil || gc.Rest != nil {
log.Warn("web option is ignored if you use it with one of these options : api, rest provider, ping or metrics")
return
}
gc.EntryPoints[DefaultInternalEntryPointName] = &EntryPoint{
Address: gc.Web.Address,
Auth: gc.Web.Auth,
}
if gc.Web.CertFile != "" {
gc.EntryPoints[DefaultInternalEntryPointName].TLS = &tls.TLS{
Certificates: []tls.Certificate{
{
CertFile: tls.FileOrContent(gc.Web.CertFile),
KeyFile: tls.FileOrContent(gc.Web.KeyFile),
},
},
}
}
if gc.API == nil {
gc.API = &api.Handler{
EntryPoint: DefaultInternalEntryPointName,
Statistics: gc.Web.Statistics,
Dashboard: true,
}
}
if gc.Ping == nil {
gc.Ping = &ping.Handler{
EntryPoint: DefaultInternalEntryPointName,
}
}
if gc.Metrics == nil {
gc.Metrics = gc.Web.Metrics
}
if !gc.Debug {
gc.Debug = gc.Web.Debug
}
}
}
// SetEffectiveConfiguration adds missing configuration parameters derived from existing ones.
// It also takes care of maintaining backwards compatibility.
func (gc *GlobalConfiguration) SetEffectiveConfiguration(configFile string) {
if len(gc.EntryPoints) == 0 {
gc.EntryPoints = map[string]*EntryPoint{"http": {
Address: ":80",
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
}}
gc.DefaultEntryPoints = []string{"http"}
}
gc.handleWebDeprecation()
if (gc.API != nil && gc.API.EntryPoint == DefaultInternalEntryPointName) ||
(gc.Ping != nil && gc.Ping.EntryPoint == DefaultInternalEntryPointName) ||
(gc.Metrics != nil && gc.Metrics.Prometheus != nil && gc.Metrics.Prometheus.EntryPoint == DefaultInternalEntryPointName) ||
(gc.Rest != nil && gc.Rest.EntryPoint == DefaultInternalEntryPointName) {
if _, ok := gc.EntryPoints[DefaultInternalEntryPointName]; !ok {
gc.EntryPoints[DefaultInternalEntryPointName] = &EntryPoint{Address: ":8080"}
}
}
// ForwardedHeaders must be removed in the next breaking version
for entryPointName := range gc.EntryPoints {
entryPoint := gc.EntryPoints[entryPointName]
if entryPoint.ForwardedHeaders == nil {
entryPoint.ForwardedHeaders = &ForwardedHeaders{Insecure: true}
}
}
// Make sure LifeCycle isn't nil to spare nil checks elsewhere.
if gc.LifeCycle == nil {
gc.LifeCycle = &LifeCycle{}
}
// Prefer legacy grace timeout parameter for backwards compatibility reasons.
if gc.GraceTimeOut > 0 {
log.Warn("top-level grace period configuration has been deprecated -- please use lifecycle grace period")
gc.LifeCycle.GraceTimeOut = gc.GraceTimeOut
}
if gc.Rancher != nil {
// Ensure backwards compatibility for now
if len(gc.Rancher.AccessKey) > 0 ||
len(gc.Rancher.Endpoint) > 0 ||
len(gc.Rancher.SecretKey) > 0 {
if gc.Rancher.API == nil {
gc.Rancher.API = &rancher.APIConfiguration{
AccessKey: gc.Rancher.AccessKey,
SecretKey: gc.Rancher.SecretKey,
Endpoint: gc.Rancher.Endpoint,
}
}
log.Warn("Deprecated configuration found: rancher.[accesskey|secretkey|endpoint]. " +
"Please use rancher.api.[accesskey|secretkey|endpoint] instead.")
}
if gc.Rancher.Metadata != nil && len(gc.Rancher.Metadata.Prefix) == 0 {
gc.Rancher.Metadata.Prefix = "latest"
}
}
if gc.API != nil {
gc.API.Debug = gc.Debug
}
if gc.Debug {
gc.LogLevel = "DEBUG"
}
if gc.Web != nil && (gc.Web.Path == "" || !strings.HasSuffix(gc.Web.Path, "/")) {
gc.Web.Path += "/"
}
// Try to fall back to the traefik config file in case the file provider is enabled
// but has no file name configured.
if gc.File != nil && len(gc.File.Filename) == 0 {
if len(configFile) > 0 {
gc.File.Filename = configFile
} else {
log.Errorln("Error using file configuration backend, no filename defined")
}
}
if gc.ACME != nil {
// TODO: to remove in the future
if len(gc.ACME.StorageFile) > 0 && len(gc.ACME.Storage) == 0 {
log.Warn("ACME.StorageFile is deprecated, use ACME.Storage instead")
gc.ACME.Storage = gc.ACME.StorageFile
}
if len(gc.ACME.DNSProvider) > 0 {
log.Warn("ACME.DNSProvider is deprecated, use ACME.DNSChallenge instead")
gc.ACME.DNSChallenge = &acme.DNSChallenge{Provider: gc.ACME.DNSProvider, DelayBeforeCheck: gc.ACME.DelayDontCheckDNS}
}
if gc.ACME.OnDemand {
log.Warn("ACME.OnDemand is deprecated")
}
}
}
// ValidateConfiguration validates that the configuration is coherent
func (gc *GlobalConfiguration) ValidateConfiguration() {
if gc.ACME != nil {
if _, ok := gc.EntryPoints[gc.ACME.EntryPoint]; !ok {
log.Fatalf("Unknown entrypoint %q for ACME configuration", gc.ACME.EntryPoint)
} else {
if gc.EntryPoints[gc.ACME.EntryPoint].TLS == nil {
log.Fatalf("Entrypoint without TLS %q for ACME configuration", gc.ACME.EntryPoint)
}
}
}
}
// DefaultEntryPoints holds default entry points
type DefaultEntryPoints []string
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (dep *DefaultEntryPoints) String() string {
return strings.Join(*dep, ",")
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (dep *DefaultEntryPoints) Set(value string) error {
entrypoints := strings.Split(value, ",")
if len(entrypoints) == 0 {
return fmt.Errorf("bad DefaultEntryPoints format: %s", value)
}
for _, entrypoint := range entrypoints {
*dep = append(*dep, entrypoint)
}
return nil
}
// Get returns the DefaultEntryPoints
func (dep *DefaultEntryPoints) Get() interface{} {
return DefaultEntryPoints(*dep)
}
// SetValue sets the DefaultEntryPoints with val
func (dep *DefaultEntryPoints) SetValue(val interface{}) {
*dep = DefaultEntryPoints(val.(DefaultEntryPoints))
}
// Type is the type of the struct
func (dep *DefaultEntryPoints) Type() string {
return "defaultentrypoints"
}
// EntryPoints holds entry points configuration of the reverse proxy (ip, port, TLS...)
type EntryPoints map[string]*EntryPoint
// String is the method to format the flag's value, part of the flag.Value interface.
// The String method's output will be used in diagnostics.
func (ep *EntryPoints) String() string {
return fmt.Sprintf("%+v", *ep)
}
// Set is the method to set the flag value, part of the flag.Value interface.
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (ep *EntryPoints) Set(value string) error {
result := parseEntryPointsConfiguration(value)
var configTLS *tls.TLS
if len(result["tls"]) > 0 {
certs := tls.Certificates{}
if err := certs.Set(result["tls"]); err != nil {
return err
}
configTLS = &tls.TLS{
Certificates: certs,
}
} else if len(result["tls_acme"]) > 0 {
configTLS = &tls.TLS{
Certificates: tls.Certificates{},
}
}
if len(result["ca"]) > 0 {
files := strings.Split(result["ca"], ",")
optional := toBool(result, "ca_optional")
configTLS.ClientCA = tls.ClientCA{
Files: files,
Optional: optional,
}
}
var redirect *types.Redirect
if len(result["redirect_entrypoint"]) > 0 || len(result["redirect_regex"]) > 0 || len(result["redirect_replacement"]) > 0 {
redirect = &types.Redirect{
EntryPoint: result["redirect_entrypoint"],
Regex: result["redirect_regex"],
Replacement: result["redirect_replacement"],
}
}
whiteListSourceRange := []string{}
if len(result["whitelistsourcerange"]) > 0 {
whiteListSourceRange = strings.Split(result["whitelistsourcerange"], ",")
}
compress := toBool(result, "compress")
var proxyProtocol *ProxyProtocol
ppTrustedIPs := result["proxyprotocol_trustedips"]
if len(result["proxyprotocol_insecure"]) > 0 || len(ppTrustedIPs) > 0 {
proxyProtocol = &ProxyProtocol{
Insecure: toBool(result, "proxyprotocol_insecure"),
}
if len(ppTrustedIPs) > 0 {
proxyProtocol.TrustedIPs = strings.Split(ppTrustedIPs, ",")
}
}
// TODO must be changed to false by default in the next breaking version.
forwardedHeaders := &ForwardedHeaders{Insecure: true}
if _, ok := result["forwardedheaders_insecure"]; ok {
forwardedHeaders.Insecure = toBool(result, "forwardedheaders_insecure")
}
fhTrustedIPs := result["forwardedheaders_trustedips"]
if len(fhTrustedIPs) > 0 {
// TODO must be removed in the next breaking version.
forwardedHeaders.Insecure = toBool(result, "forwardedheaders_insecure")
forwardedHeaders.TrustedIPs = strings.Split(fhTrustedIPs, ",")
}
if proxyProtocol != nil && proxyProtocol.Insecure {
log.Warn("ProxyProtocol.Insecure:true is dangerous. Please use 'ProxyProtocol.TrustedIPs:IPs' and remove 'ProxyProtocol.Insecure:true'")
}
(*ep)[result["name"]] = &EntryPoint{
Address: result["address"],
TLS: configTLS,
Redirect: redirect,
Compress: compress,
WhitelistSourceRange: whiteListSourceRange,
ProxyProtocol: proxyProtocol,
ForwardedHeaders: forwardedHeaders,
}
return nil
}
func parseEntryPointsConfiguration(raw string) map[string]string {
sections := strings.Fields(raw)
config := make(map[string]string)
for _, part := range sections {
field := strings.SplitN(part, ":", 2)
name := strings.ToLower(strings.Replace(field[0], ".", "_", -1))
if len(field) > 1 {
config[name] = field[1]
} else {
if strings.EqualFold(name, "TLS") {
config["tls_acme"] = "TLS"
} else {
config[name] = ""
}
}
}
return config
}
func toBool(conf map[string]string, key string) bool {
if val, ok := conf[key]; ok {
return strings.EqualFold(val, "true") ||
strings.EqualFold(val, "enable") ||
strings.EqualFold(val, "on")
}
return false
}
// Get returns the EntryPoints map
func (ep *EntryPoints) Get() interface{} {
return EntryPoints(*ep)
}
// SetValue sets the EntryPoints map with val
func (ep *EntryPoints) SetValue(val interface{}) {
*ep = EntryPoints(val.(EntryPoints))
}
// Type is the type of the struct
func (ep *EntryPoints) Type() string {
return "entrypoints"
}
// EntryPoint holds an entry point configuration of the reverse proxy (ip, port, TLS...)
type EntryPoint struct {
Network string
Address string
TLS *tls.TLS `export:"true"`
Redirect *types.Redirect `export:"true"`
Auth *types.Auth `export:"true"`
WhitelistSourceRange []string
Compress bool `export:"true"`
ProxyProtocol *ProxyProtocol `export:"true"`
ForwardedHeaders *ForwardedHeaders `export:"true"`
}
// Retry contains request retry config
type Retry struct {
Attempts int `description:"Number of attempts" export:"true"`
}
// HealthCheckConfig contains health check configuration parameters.
type HealthCheckConfig struct {
Interval flaeg.Duration `description:"Default periodicity of enabled health checks" export:"true"`
}
// RespondingTimeouts contains timeout configurations for incoming requests to the Traefik instance.
type RespondingTimeouts struct {
ReadTimeout flaeg.Duration `description:"ReadTimeout is the maximum duration for reading the entire request, including the body. If zero, no timeout is set" export:"true"`
WriteTimeout flaeg.Duration `description:"WriteTimeout is the maximum duration before timing out writes of the response. If zero, no timeout is set" export:"true"`
IdleTimeout flaeg.Duration `description:"IdleTimeout is the maximum duration an idle (keep-alive) connection will remain idle before closing itself. Defaults to 180 seconds. If zero, no timeout is set" export:"true"`
}
// ForwardingTimeouts contains timeout configurations for forwarding requests to the backend servers.
type ForwardingTimeouts struct {
DialTimeout flaeg.Duration `description:"The amount of time to wait until a connection to a backend server can be established. Defaults to 30 seconds. If zero, no timeout exists" export:"true"`
ResponseHeaderTimeout flaeg.Duration `description:"The amount of time to wait for a server's response headers after fully writing the request (including its body, if any). If zero, no timeout exists" export:"true"`
}
// ProxyProtocol contains Proxy-Protocol configuration
type ProxyProtocol struct {
Insecure bool
TrustedIPs []string
}
// ForwardedHeaders Trust client forwarding headers
type ForwardedHeaders struct {
Insecure bool
TrustedIPs []string
}
// LifeCycle contains configurations relevant to the lifecycle (such as the
// shutdown phase) of Traefik.
type LifeCycle struct {
RequestAcceptGraceTimeout flaeg.Duration `description:"Duration to keep accepting requests before Traefik initiates the graceful shutdown procedure"`
GraceTimeOut flaeg.Duration `description:"Duration to give active requests a chance to finish before Traefik stops"`
}


@@ -0,0 +1,393 @@
package configuration
import (
"testing"
"time"
"github.com/containous/flaeg"
"github.com/containous/traefik/provider"
"github.com/containous/traefik/provider/file"
"github.com/containous/traefik/tls"
"github.com/containous/traefik/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const defaultConfigFile = "traefik.toml"
func Test_parseEntryPointsConfiguration(t *testing.T) {
testCases := []struct {
name string
value string
expectedResult map[string]string
}{
{
name: "all parameters",
value: "Name:foo TLS:goo TLS CA:car Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:WhiteListSourceRange ProxyProtocol.TrustedIPs:192.168.0.1 ProxyProtocol.Insecure:false Address::8000",
expectedResult: map[string]string{
"name": "foo",
"address": ":8000",
"ca": "car",
"tls": "goo",
"tls_acme": "TLS",
"redirect_entrypoint": "RedirectEntryPoint",
"redirect_regex": "RedirectRegex",
"redirect_replacement": "RedirectReplacement",
"whitelistsourcerange": "WhiteListSourceRange",
"proxyprotocol_trustedips": "192.168.0.1",
"proxyprotocol_insecure": "false",
"compress": "true",
},
},
{
name: "compress on",
value: "name:foo Compress:on",
expectedResult: map[string]string{
"name": "foo",
"compress": "on",
},
},
{
name: "TLS",
value: "Name:foo TLS:goo TLS",
expectedResult: map[string]string{
"name": "foo",
"tls": "goo",
"tls_acme": "TLS",
},
},
}
for _, test := range testCases {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
conf := parseEntryPointsConfiguration(test.value)
assert.Len(t, conf, len(test.expectedResult))
assert.Equal(t, test.expectedResult, conf)
})
}
}
func Test_toBool(t *testing.T) {
testCases := []struct {
name string
value string
key string
expectedBool bool
}{
{
name: "on",
value: "on",
key: "foo",
expectedBool: true,
},
{
name: "true",
value: "true",
key: "foo",
expectedBool: true,
},
{
name: "enable",
value: "enable",
key: "foo",
expectedBool: true,
},
{
name: "arbitrary string",
value: "bar",
key: "foo",
expectedBool: false,
},
{
name: "no existing entry",
value: "bar",
key: "fii",
expectedBool: false,
},
}
for _, test := range testCases {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
conf := map[string]string{
"foo": test.value,
}
result := toBool(conf, test.key)
assert.Equal(t, test.expectedBool, result)
})
}
}
func TestEntryPoints_Set(t *testing.T) {
testCases := []struct {
name string
expression string
expectedEntryPointName string
expectedEntryPoint *EntryPoint
}{
{
name: "all parameters camelcase",
expression: "Name:foo Address::8000 TLS:goo,gii TLS CA:car CA.Optional:false Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:Range ProxyProtocol.TrustedIPs:192.168.0.1 ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
Address: ":8000",
Redirect: &types.Redirect{
EntryPoint: "RedirectEntryPoint",
Regex: "RedirectRegex",
Replacement: "RedirectReplacement",
},
Compress: true,
ProxyProtocol: &ProxyProtocol{
TrustedIPs: []string{"192.168.0.1"},
},
ForwardedHeaders: &ForwardedHeaders{
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
},
WhitelistSourceRange: []string{"Range"},
TLS: &tls.TLS{
ClientCA: tls.ClientCA{
Files: []string{"car"},
Optional: false,
},
Certificates: tls.Certificates{
{
CertFile: tls.FileOrContent("goo"),
KeyFile: tls.FileOrContent("gii"),
},
},
},
},
},
{
name: "all parameters lowercase",
expression: "name:foo address::8000 tls:goo,gii tls ca:car ca.optional:true redirect.entryPoint:RedirectEntryPoint redirect.regex:RedirectRegex redirect.replacement:RedirectReplacement compress:true whiteListSourceRange:Range proxyProtocol.trustedIPs:192.168.0.1 forwardedHeaders.trustedIPs:10.0.0.3/24,20.0.0.3/24",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
Address: ":8000",
Redirect: &types.Redirect{
EntryPoint: "RedirectEntryPoint",
Regex: "RedirectRegex",
Replacement: "RedirectReplacement",
},
Compress: true,
ProxyProtocol: &ProxyProtocol{
TrustedIPs: []string{"192.168.0.1"},
},
ForwardedHeaders: &ForwardedHeaders{
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
},
WhitelistSourceRange: []string{"Range"},
TLS: &tls.TLS{
ClientCA: tls.ClientCA{
Files: []string{"car"},
Optional: true,
},
Certificates: tls.Certificates{
{
CertFile: tls.FileOrContent("goo"),
KeyFile: tls.FileOrContent("gii"),
},
},
},
},
},
{
name: "default",
expression: "Name:foo",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
},
},
{
name: "ForwardedHeaders insecure true",
expression: "Name:foo ForwardedHeaders.Insecure:true",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
},
},
{
name: "ForwardedHeaders insecure false",
expression: "Name:foo ForwardedHeaders.Insecure:false",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: false},
},
},
{
name: "ForwardedHeaders TrustedIPs",
expression: "Name:foo ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
},
},
},
{
name: "ProxyProtocol insecure true",
expression: "Name:foo ProxyProtocol.Insecure:true",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
ProxyProtocol: &ProxyProtocol{Insecure: true},
},
},
{
name: "ProxyProtocol insecure false",
expression: "Name:foo ProxyProtocol.Insecure:false",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
ProxyProtocol: &ProxyProtocol{},
},
},
{
name: "ProxyProtocol TrustedIPs",
expression: "Name:foo ProxyProtocol.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
ProxyProtocol: &ProxyProtocol{
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
},
},
},
{
name: "compress on",
expression: "Name:foo Compress:on",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
Compress: true,
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
},
},
{
name: "compress true",
expression: "Name:foo Compress:true",
expectedEntryPointName: "foo",
expectedEntryPoint: &EntryPoint{
Compress: true,
WhitelistSourceRange: []string{},
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
},
},
}
for _, test := range testCases {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
eps := EntryPoints{}
err := eps.Set(test.expression)
require.NoError(t, err)
ep := eps[test.expectedEntryPointName]
assert.EqualValues(t, test.expectedEntryPoint, ep)
})
}
}
func TestSetEffectiveConfigurationGraceTimeout(t *testing.T) {
tests := []struct {
desc string
legacyGraceTimeout time.Duration
lifeCycleGraceTimeout time.Duration
wantGraceTimeout time.Duration
}{
{
desc: "legacy grace timeout given only",
legacyGraceTimeout: 5 * time.Second,
wantGraceTimeout: 5 * time.Second,
},
{
desc: "legacy and life cycle grace timeouts given",
legacyGraceTimeout: 5 * time.Second,
lifeCycleGraceTimeout: 12 * time.Second,
wantGraceTimeout: 5 * time.Second,
},
{
desc: "legacy grace timeout omitted",
legacyGraceTimeout: 0,
lifeCycleGraceTimeout: 12 * time.Second,
wantGraceTimeout: 12 * time.Second,
},
}
for _, test := range tests {
test := test
t.Run(test.desc, func(t *testing.T) {
t.Parallel()
gc := &GlobalConfiguration{
GraceTimeOut: flaeg.Duration(test.legacyGraceTimeout),
}
if test.lifeCycleGraceTimeout > 0 {
gc.LifeCycle = &LifeCycle{
GraceTimeOut: flaeg.Duration(test.lifeCycleGraceTimeout),
}
}
gc.SetEffectiveConfiguration(defaultConfigFile)
gotGraceTimeout := time.Duration(gc.LifeCycle.GraceTimeOut)
if gotGraceTimeout != test.wantGraceTimeout {
t.Fatalf("got effective grace timeout %d, want %d", gotGraceTimeout, test.wantGraceTimeout)
}
})
}
}
func TestSetEffectiveConfigurationFileProviderFilename(t *testing.T) {
tests := []struct {
desc string
fileProvider *file.Provider
wantFileProviderFilename string
}{
{
desc: "no filename for file provider given",
fileProvider: &file.Provider{},
wantFileProviderFilename: defaultConfigFile,
},
{
desc: "filename for file provider given",
fileProvider: &file.Provider{BaseProvider: provider.BaseProvider{Filename: "other.toml"}},
wantFileProviderFilename: "other.toml",
},
}
for _, test := range tests {
test := test
t.Run(test.desc, func(t *testing.T) {
t.Parallel()
gc := &GlobalConfiguration{
File: test.fileProvider,
}
gc.SetEffectiveConfiguration(defaultConfigFile)
gotFileProviderFilename := gc.File.Filename
if gotFileProviderFilename != test.wantFileProviderFilename {
t.Fatalf("got file provider file name %q, want %q", gotFileProviderFilename, test.wantFileProviderFilename)
}
})
}
}

contrib/scripts/dumpcerts.sh Executable file

@@ -0,0 +1,170 @@
#!/usr/bin/env bash
# Copyright (c) 2017 Brian 'redbeard' Harrington <redbeard@dead-city.org>
#
# dumpcerts.sh - A simple utility to explode a Traefik acme.json file into a
# directory of certificates and a private key
#
# Usage - dumpcerts.sh /etc/traefik/acme.json /etc/ssl/
#
# Dependencies -
# util-linux
# openssl
# jq
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# Exit codes:
# 1 - A component is missing or could not be read
# 2 - There was a problem reading acme.json
# 4 - The destination certificate directory does not exist
# 8 - Missing private key
set -o errexit
set -o pipefail
set -o nounset
USAGE="$(basename "$0") <path to acme> <destination cert directory>"
# Platform variations
case "$(uname)" in
'Linux')
# On Linux, -d should always work. --decode does not work with Alpine's busybox-binary
CMD_DECODE_BASE64="base64 -d"
;;
*)
# Mac OS X supports --decode and -D, but --decode may be supported by other platforms as well.
CMD_DECODE_BASE64="base64 --decode"
;;
esac
# Allow us to exit on a missing jq binary
exit_jq() {
echo "
You must have the binary 'jq' to use this.
jq is available at: https://stedolan.github.io/jq/download/
${USAGE}" >&2
exit 1
}
bad_acme() {
echo "
There was a problem parsing your acme.json file.
${USAGE}" >&2
exit 2
}
if [ $# -ne 2 ]; then
echo "
Insufficient number of parameters.
${USAGE}" >&2
exit 1
fi
readonly acmefile="${1}"
readonly certdir="${2%/}"
if [ ! -r "${acmefile}" ]; then
echo "
There was a problem reading from '${acmefile}'
We need to read this file to explode the JSON bundle... exiting.
${USAGE}" >&2
exit 2
fi
if [ ! -d "${certdir}" ]; then
echo "
Path ${certdir} does not seem to be a directory
We need a directory in which to explode the JSON bundle... exiting.
${USAGE}" >&2
exit 4
fi
jq=$(command -v jq) || exit_jq
priv=$(${jq} -e -r '.PrivateKey' "${acmefile}") || bad_acme
if [ ! -n "${priv}" ]; then
echo "
There didn't seem to be a private key in ${acmefile}.
Please ensure that there is a key in this file and try again." >&2
exit 8
fi
# If they do not exist, create the needed subdirectories for our assets
# and place each in a variable for later use, normalizing the path
mkdir -p "${certdir}"/{certs,private}
pdir="${certdir}/private/"
cdir="${certdir}/certs/"
# Save the existing umask, change the default mode to 600, then
# after writing the private key switch it back to the default
oldumask=$(umask)
umask 177
trap 'umask ${oldumask}' EXIT
# Traefik stores the private key in stripped base64 format, but the certificates are
# bundled as base64 objects without stripping headers. This normalizes the
# headers and formatting.
#
# In testing this out it was a balance between the following mechanisms:
# gawk:
# echo ${priv} | awk 'BEGIN {print "-----BEGIN RSA PRIVATE KEY-----"}
# {gsub(/.{64}/,"&\n")}1
# END {print "-----END RSA PRIVATE KEY-----"}' > "${pdir}/letsencrypt.key"
#
# openssl:
# echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
# | openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
#
# and sed:
# echo "-----BEGIN RSA PRIVATE KEY-----" > "${pdir}/letsencrypt.key"
# echo ${priv} | sed -E 's/(.{64})/\1\n/g' >> "${pdir}/letsencrypt.key"
# sed -i '$ d' "${pdir}/letsencrypt.key"
# echo "-----END RSA PRIVATE KEY-----" >> "${pdir}/letsencrypt.key"
# openssl rsa -noout -in "${pdir}/letsencrypt.key" -check # To check if the key is valid
# In the end, openssl was chosen because most users will need this script
# *because* of openssl combined with the fact that it will refuse to write the
# key if it does not parse out correctly. The other mechanisms were left as
# comments so that the user can choose the mechanism most appropriate to them.
echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
| openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
# Process the certificates for each of the domains in acme.json
for domain in $(jq -r '.DomainsCertificate.Certs[].Certificate.Domain' ${acmefile}); do
# Traefik stores a cert bundle for each domain. Within this cert
# bundle there is both the proper certificate and the Let's Encrypt CA
echo "Extracting cert bundle for ${domain}"
cert=$(jq -e -r --arg domain "$domain" '.DomainsCertificate.Certs[].Certificate |
select (.Domain == $domain )| .Certificate' ${acmefile}) || bad_acme
echo "${cert}" | ${CMD_DECODE_BASE64} > "${cdir}/${domain}.crt"
echo "Extracting private key for ${domain}"
key=$(jq -e -r --arg domain "$domain" '.DomainsCertificate.Certs[].Certificate |
select (.Domain == $domain )| .PrivateKey' ${acmefile}) || bad_acme
echo "${key}" | ${CMD_DECODE_BASE64} > "${pdir}/${domain}.key"
done

docs.Dockerfile Normal file

@@ -0,0 +1,11 @@
FROM alpine
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin
COPY requirements.txt /mkdocs/
WORKDIR /mkdocs
RUN apk --update upgrade \
&& apk --no-cache --no-progress add py-pip \
&& rm -rf /var/cache/apk/* \
&& pip install --user -r requirements.txt


@@ -1,7 +1,8 @@
# Basics
# Concepts
## Concepts
Let's take our example from the [overview](https://docs.traefik.io/#overview) again:
Let's take our example from the [overview](/#overview) again:
> Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
@@ -24,7 +25,7 @@ Routes are created using requests fields (`Host`, `Path`, `Headers`...) and can
- The [frontend](#frontends) will then send the request to a [backend](#backends). A backend can be composed by one or more [servers](#servers), and by a load-balancing strategy.
- Finally, the [server](#servers) will forward the request to the corresponding microservice in the private network.
## Entrypoints
### Entrypoints
Entrypoints are the network entry points into Træfik.
They can be defined using:
@@ -61,23 +62,26 @@ And here is another example with client certificate authentication:
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
clientCAFiles = ["tests/clientca1.crt", "tests/clientca2.crt"]
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
[entryPoints.https.tls]
[entryPoints.https.tls.ClientCA]
files = ["tests/clientca1.crt", "tests/clientca2.crt"]
optional = false
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
```
- We enable SSL on `https` by giving a certificate and a key.
- One or several files containing Certificate Authorities in PEM format are added.
- It is possible to have multiple CAs in the same file or keep them in separate files.
## Frontends
### Frontends
A frontend consists of a set of rules that determine how incoming requests are forwarded from an entrypoint to a backend.
Rules may be classified in one of two groups: Modifiers and matchers.
### Modifiers
#### Modifiers
Modifier rules only modify the request. They do not have any impact on routing decisions being made.
@@ -85,49 +89,63 @@ Following is the list of existing modifier rules:
- `AddPrefix: /products`: Add path prefix to the existing request path prior to forwarding the request to the backend.
- `ReplacePath: /serverless-path`: Replaces the path and adds the old path to the `X-Replaced-Path` header. Useful for mapping to AWS Lambda or Google Cloud Functions.
- `ReplacePathRegex: ^/api/v2/(.*) /api/$1`: Replaces the path with a regular expression and adds the old path to the `X-Replaced-Path` header. Separate the regular expression and the replacement by a space.
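As a minimal sketch (hypothetical frontend and backend names), a modifier is usually combined with a matcher in a single rule; the modifier only rewrites the path before the request is forwarded:
```toml
[frontends]
  [frontends.api]
  backend = "backend1"
    [frontends.api.routes.route1]
    # Match on the host, then prepend /products before forwarding.
    rule = "Host:api.localhost;AddPrefix:/products"
```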
### Matchers
#### Matchers
Matcher rules determine if a particular request should be forwarded to a backend.
Separate multiple rule values by `,` (comma) in order to enable ANY semantics (i.e., forward a request if any rule matches). Does not work for `Headers` and `HeadersRegexp`.
Separate multiple rule values by `,` (comma) in order to enable ANY semantics (i.e., forward a request if any rule matches).
Does not work for `Headers` and `HeadersRegexp`.
Separate multiple rule values by `;` (semicolon) in order to enable ALL semantics (i.e., forward a request if all rules match).
You can optionally enable `passHostHeader` to forward client `Host` header to the backend.
Following is the list of existing matcher rules along with examples:
- `Headers: Content-Type, application/json`: Match HTTP header. It accepts a comma-separated key/value pair where both key and value must be literals.
- `HeadersRegexp: Content-Type, application/(text|json)`: Match HTTP header. It accepts a comma-separated key/value pair where the key must be a literal and the value may be a literal or a regular expression.
- `Host: traefik.io, www.traefik.io`: Match request host. It accepts a sequence of literal hosts.
- `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io`: Match request host. It accepts a sequence of literal and regular expression hosts.
- `Method: GET, POST, PUT`: Match request HTTP method. It accepts a sequence of HTTP methods.
- `Path: /products/, /articles/{category}/{id:[0-9]+}`: Match exact request path. It accepts a sequence of literal and regular expression paths.
- `PathStrip: /products/`: Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal paths.
- `PathStripRegex: /articles/{category}/{id:[0-9]+}`: Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression paths.
- `PathPrefix: /products/, /articles/{category}/{id:[0-9]+}`: Match request prefix path. It accepts a sequence of literal and regular expression prefix paths.
- `PathPrefixStrip: /products/`: Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header.
- `PathPrefixStripRegex: /articles/{category}/{id:[0-9]+}`: Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header.
| Matcher | Description |
|------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `Headers: Content-Type, application/json` | Match HTTP header. It accepts a comma-separated key/value pair where both key and value must be literals. |
| `HeadersRegexp: Content-Type, application/(text/json)` | Match HTTP header. It accepts a comma-separated key/value pair where the key must be a literal and the value may be a literal or a regular expression. |
| `Host: traefik.io, www.traefik.io` | Match request host. It accepts a sequence of literal hosts. |
| `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io` | Match request host. It accepts a sequence of literal and regular expression hosts. |
| `Method: GET, POST, PUT` | Match request HTTP method. It accepts a sequence of HTTP methods. |
| `Path: /products/, /articles/{category}/{id:[0-9]+}` | Match exact request path. It accepts a sequence of literal and regular expression paths. |
| `PathStrip: /products/` | Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal paths. |
| `PathStripRegex: /articles/{category}/{id:[0-9]+}` | Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression paths. |
| `PathPrefix: /products/, /articles/{category}/{id:[0-9]+}` | Match request prefix path. It accepts a sequence of literal and regular expression prefix paths. |
| `PathPrefixStrip: /products/` | Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header. |
| `PathPrefixStripRegex: /articles/{category}/{id:[0-9]+}` | Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header. |
| `Query: foo=bar, bar=baz` | Match Query String parameters. It accepts a sequence of key=value pairs. |
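As a rough sketch (hypothetical names), two matchers from the table above can be combined with ALL (`;`) semantics in a single rule:
```toml
[frontends]
  [frontends.search]
  backend = "backend1"
    [frontends.search.routes.route1]
    # The host must match AND the query string must contain foo=bar.
    rule = "Host:search.localhost;Query:foo=bar"
```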
In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces. Any pattern supported by [Go's regexp package](https://golang.org/pkg/regexp/) may be used. Example: `/posts/{id:[0-9]+}`.
In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces. Any pattern supported by [Go's regexp package](https://golang.org/pkg/regexp/) may be used (example: `/posts/{id:[0-9]+}`).
(Note that the variable has no special meaning; however, it is required by the gorilla/mux dependency which embeds the regular expression and defines the syntax.)
!!! note
The variable has no special meaning; however, it is required by the [gorilla/mux](https://github.com/gorilla/mux) dependency which embeds the regular expression and defines the syntax.
#### Path Matcher Usage Guidelines
You can optionally enable `passHostHeader` to forward client `Host` header to the backend.
You can also optionally enable `passTLSCert` to forward TLS Client certificates to the backend.
##### Path Matcher Usage Guidelines
This section explains when to use the various path matchers.
Use `Path` if your backend listens on the exact path only. For instance, `Path: /products` would match `/products` but not `/products/shoes`.
Use a `*Prefix*` matcher if your backend listens on a particular base path but also serves requests on sub-paths. For instance, `PathPrefix: /products` would match `/products` but also `/products/shoes` and `/products/shirts`. Since the path is forwarded as-is, your backend is expected to listen on `/products`.
Use a `*Prefix*` matcher if your backend listens on a particular base path but also serves requests on sub-paths.
For instance, `PathPrefix: /products` would match `/products` but also `/products/shoes` and `/products/shirts`.
Since the path is forwarded as-is, your backend is expected to listen on `/products`.
Use a `*Strip` matcher if your backend listens on the root path (`/`) but should be routeable on a specific prefix. For instance, `PathPrefixStrip: /products` would match `/products` but also `/products/shoes` and `/products/shirts`. Since the path is stripped prior to forwarding, your backend is expected to listen on `/`.
If your backend is serving assets (e.g., images or Javascript files), chances are it must return properly constructed relative URLs. Continuing on the example, the backend should return `/products/shoes/image.png` (and not `/images.png` which Traefik would likely not be able to associate with the same backend). The `X-Forwarded-Prefix` header (available since Traefik 1.3) can be queried to build such URLs dynamically.
Use a `*Strip` matcher if your backend listens on the root path (`/`) but should be routeable on a specific prefix.
For instance, `PathPrefixStrip: /products` would match `/products` but also `/products/shoes` and `/products/shirts`.
Since the path is stripped prior to forwarding, your backend is expected to listen on `/`.
If your backend is serving assets (e.g., images or Javascript files), chances are it must return properly constructed relative URLs.
Continuing on the example, the backend should return `/products/shoes/image.png` (and not `/images.png` which Traefik would likely not be able to associate with the same backend).
The `X-Forwarded-Prefix` header (available since Traefik 1.3) can be queried to build such URLs dynamically.
Instead of distinguishing your backends by path only, you can add a Host matcher to the mix. That way, namespacing of your backends happens on the basis of hosts in addition to paths.
Instead of distinguishing your backends by path only, you can add a Host matcher to the mix.
That way, namespacing of your backends happens on the basis of hosts in addition to paths.
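To make the guidelines concrete, here is a minimal sketch (hypothetical frontend and backend names) contrasting `PathPrefix` with `PathPrefixStrip`:
```toml
[frontends]
  # The backend itself listens on /products: keep the prefix.
  [frontends.catalog]
  backend = "backend1"
    [frontends.catalog.routes.route1]
    rule = "PathPrefix:/products"
  # The backend listens on /: strip the prefix before forwarding.
  [frontends.admin]
  backend = "backend2"
    [frontends.admin.routes.route1]
    rule = "PathPrefixStrip:/admin"
```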
### Examples
#### Examples
Here is an example of frontends definition:
@@ -140,6 +158,7 @@ Here is an example of frontends definition:
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
passTLSCert = true
priority = 10
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
@@ -155,45 +174,47 @@ Here is an example of frontends definition:
- `frontend2` will forward the traffic to the `backend1` if the rule `Host:localhost,{subdomain:[a-z]+}.localhost` is matched (forwarding client `Host` header to the backend)
- `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched
### Combining multiple rules
#### Combining multiple rules
As seen in the previous example, you can combine multiple rules.
In TOML file, you can use multiple routes:
```toml
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost"
[frontends.frontend3.routes.test_2]
rule = "Path:/test"
```
Here `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched.
You can also use the `;` separator notation, with the same result:
```toml
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Host:test3.localhost;Path:/test"
```
Finally, you can create a rule to bind multiple domains or paths to a frontend, using the `,` separator:
```toml
[frontends.frontend2]
[frontends.frontend2.routes.test_1]
rule = "Host:test1.localhost,test2.localhost"
[frontends.frontend3]
backend = "backend2"
[frontends.frontend3.routes.test_1]
rule = "Path:/test1,/test2"
```
### Rules Order
#### Rules Order
When combining `Modifier` rules with `Matcher` rules, it is important to remember that `Modifier` rules **ALWAYS** apply after the `Matcher` rules.
The following rules are both `Matchers` and `Modifiers`, so the `Matcher` portion of the rule will apply first, and the `Modifier` will apply later.
- `PathStrip`
@@ -208,40 +229,107 @@ The following rules are both `Matchers` and `Modifiers`, so the `Matcher` portio
3. `PathStripRegex`
4. `PathPrefixStripRegex`
5. `AddPrefix`
6. `ReplacePath`
### Priorities
#### Priorities
By default, routes will be sorted (in descending order) using rule length (to avoid path overlap):
`PathPrefix:/12345` will be matched before `PathPrefix:/1234`, which will be matched before `PathPrefix:/1`.
You can customize priority by frontend:
You can customize priority by frontend. The priority value is added to the rule length during sorting:
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
priority = 10
passHostHeader = true
[frontends.frontend1.routes.test_1]
rule = "PathPrefix:/to"
[frontends.frontend2]
priority = 5
backend = "backend2"
passHostHeader = true
[frontends.frontend2.routes.test_1]
rule = "PathPrefix:/toto"
```
Here, `frontend1` will be matched before `frontend2` (`(3 + 10 == 13) > (4 + 5 == 9)`).
#### Custom headers
Custom headers can be configured through the frontends, to add headers to either requests or responses that match the frontend's rules.
This allows for setting headers such as `X-Script-Name` to be added to the request, or custom headers to be added to the response.
!!! warning
If the custom header name is the same as an existing header name in the request or response, it will be replaced.
In this example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request, and the `X-Custom-Response-Header` added to the response.
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
priority = 10
passHostHeader = true
[frontends.frontend1.headers.customresponseheaders]
X-Custom-Response-Header = "True"
[frontends.frontend1.headers.customrequestheaders]
X-Script-Name = "test"
[frontends.frontend1.routes.test_1]
rule = "PathPrefix:/to"
[frontends.frontend2]
priority = 5
backend = "backend2"
passHostHeader = true
[frontends.frontend2.routes.test_1]
rule = "PathPrefix:/toto"
rule = "PathPrefixStrip:/cheese"
```
Here, `frontend1` will be matched before `frontend2` (`10 > 5`).
In this second example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request, the `X-Custom-Request-Header` header removed from the request, and the `X-Custom-Response-Header` header removed from the response.
## Backends
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.headers.customresponseheaders]
X-Custom-Response-Header = ""
[frontends.frontend1.headers.customrequestheaders]
X-Script-Name = "test"
X-Custom-Request-Header = ""
[frontends.frontend1.routes.test_1]
rule = "PathPrefixStrip:/cheese"
```
#### Security headers
Security-related headers (HSTS headers, SSL redirection, Browser XSS filter, etc.) can be added and configured per frontend in a similar manner to the custom headers above.
This functionality allows basic security features to be set quickly.
An example of some of the security headers:
```toml
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.headers]
FrameDeny = true
[frontends.frontend1.routes.test_1]
rule = "PathPrefixStrip:/cheddar"
[frontends.frontend2]
backend = "backend2"
[frontends.frontend2.headers]
SSLRedirect = true
[frontends.frontend2.routes.test_1]
rule = "PathPrefixStrip:/stilton"
```
In this example, traffic routed through the first frontend will have the `X-Frame-Options` header set to `DENY`, and the second will only allow HTTPS requests through; otherwise, it will return a 301 HTTPS redirect.
!!! note
The detailed documentation for those security headers can be found in [unrolled/secure](https://github.com/unrolled/secure#available-options).
### Backends
A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of HTTP servers.
Various methods of load-balancing are supported:
- `wrr`: Weighted Round Robin.
- `drr`: Dynamic Round Robin: increases weights on servers that perform better than others.
It also rolls back to original weights if the servers have changed.
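For instance, a minimal sketch selecting `drr` for a backend (the syntax mirrors the file backend reference at the end of this document):

```toml
[backends]
  [backends.backend1]
    [backends.backend1.loadBalancer]
      method = "drr"
```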
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
Initial state is Standby. CB observes the statistics and does not modify the request.
It can be configured using an expression, for example:
- `NetworkErrorRatio() > 0.5`: watch error ratio over a 10-second sliding window for a frontend.
- `LatencyAtQuantileMS(50.0) > 50`: watch latency at quantile in milliseconds.
- `ResponseCodeRatio(500, 600, 0, 600) > 0.5`: ratio of response codes in range [500-600) to those in [0-600).
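A minimal sketch attaching such an expression to a backend (the same syntax appears in the servers example below):

```toml
[backends]
  [backends.backend1]
    [backends.backend1.circuitbreaker]
      expression = "NetworkErrorRatio() > 0.5"
```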
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can also be applied to each backend.
Maximum connections can be configured by specifying an integer value for `maxconn.amount` and `maxconn.extractorfunc` which is a strategy used to determine how to categorize requests in order to evaluate the maximum connections.
For example:
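Below is a minimal sketch, assuming `request.host` as the extractor function (the same values appear in the file backend reference at the end of this document):

```toml
[backends]
  [backends.backend1]
    [backends.backend1.maxconn]
      amount = 10
      extractorfunc = "request.host"
```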
- Another possible value for `extractorfunc` is `client.ip` which will categorize requests based on client source ip.
- Lastly `extractorfunc` can take the value of `request.header.ANY_HEADER` which will categorize requests based on `ANY_HEADER` that you provide.
### Sticky sessions
Sticky sessions are supported with both load balancers.
When sticky sessions are enabled, a cookie is set on the initial request.
The default cookie name is an abbreviation of a sha1 (ex: `_1d52e`).
On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy.
If not, a new backend will be assigned.
```toml
[backends]
[backends.backend1]
# Enable sticky session
[backends.backend1.loadbalancer.stickiness]
# Customize the cookie name
#
# Optional
# Default: a sha1 (6 chars)
#
# cookieName = "my_cookie"
```
The deprecated way:

```toml
[backends]
  [backends.backend1]
    [backends.backend1.loadbalancer]
      sticky = true
```
### Health Check
A health check can be configured in order to remove a backend from LB rotation as long as it keeps returning HTTP status codes other than `200 OK` to HTTP GET requests periodically carried out by Traefik.
The check is defined by a path appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how often the health check should be executed (the default being 30 seconds).
Each backend must respond to the health check within 5 seconds.
By default, the port of the backend server is used; however, this may be overridden.
A recovering backend returning `200 OK` responses again is returned to the LB rotation pool.
For example:

```toml
[backends]
  [backends.backend1]
    [backends.backend1.healthcheck]
      path = "/health"
      interval = "10s"
```
To use a different port for the healthcheck:
```toml
[backends]
[backends.backend1]
[backends.backend1.healthcheck]
path = "/health"
interval = "10s"
port = 8080
```
### Servers
Servers are simply defined using a `url`. You can also apply a custom `weight` to each server (this will be used by load-balancing).
!!! note
Paths in `url` are ignored. Use `Modifier` to specify paths instead.
Here is an example of backends and servers definition:
```toml
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
    [backends.backend1.servers.server2]
    url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
    [backends.backend2.servers.server2]
    url = "http://172.17.0.5:80"
    weight = 2
```

- `backend1` will forward the traffic to two servers: `http://172.17.0.2:80` with weight `10` and `http://172.17.0.3:80` with weight `1` using `wrr` load-balancing strategy.
- `backend2` will forward the traffic to two servers: `http://172.17.0.4:80` with weight `1` and `http://172.17.0.5:80` with weight `2` using `drr` load-balancing strategy.
- a circuit breaker is added on `backend1` using the expression `NetworkErrorRatio() > 0.5`: watch error ratio over 10 second sliding window
## Configuration
Træfik's configuration has two parts:
- The [static Træfik configuration](/basics#static-trfik-configuration) which is loaded only at the beginning.
- The [dynamic Træfik configuration](/basics#dynamic-trfik-configuration) which can be hot-reloaded (no need to restart the process).
### Static Træfik configuration
The static configuration is the global configuration which sets up connections to configuration backends and entrypoints.
Træfik can be configured using many configuration sources with the following precedence order.
Each item takes precedence over the item below it:
- [Key-value store](/basics/#key-value-stores)
- [Arguments](/basics/#arguments)
- [Configuration file](/basics/#configuration-file)
- Default
It means that arguments override configuration file, and key-value store overrides arguments.
!!! note
the provider-enabling argument parameters (e.g., `--docker`) set all default values for the specific provider.
It must not be used if a configuration source with less precedence wants to set a non-default provider value.
#### Configuration file
By default, Træfik will try to find a `traefik.toml` in the following places:
- `/etc/traefik/`
- `$HOME/.traefik/`
- `.` _the working directory_
You can override this by setting a `configFile` argument:
```bash
traefik --configFile=foo/bar/myconfigfile.toml
```
Please refer to the [global configuration](/configuration/commons) section to get documentation on it.
#### Arguments
Each argument (and command) is described in the help section:
```bash
traefik --help
```
Note that all default values will be displayed as well.
#### Key-value stores
Træfik supports several Key-value stores:
- [Consul](https://consul.io)
- [etcd](https://coreos.com/etcd/)
- [ZooKeeper](https://zookeeper.apache.org/)
- [boltdb](https://github.com/boltdb/bolt)
Please refer to the [User Guide Key-value store configuration](/user-guide/kv-config/) section to get documentation on it.
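For instance, a hedged sketch of launching Træfik with Consul as its configuration source (the endpoint and prefix values are illustrative and mirror the Consul section later in this document):

```bash
traefik --consul --consul.endpoint=127.0.0.1:8500 --consul.prefix=traefik
```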
### Dynamic Træfik configuration
The dynamic configuration concerns:
- [Frontends](/basics/#frontends)
- [Backends](/basics/#backends)
- [Servers](/basics/#servers)
- HTTPS Certificates
Træfik can hot-reload those rules which could be provided by [multiple configuration backends](/configuration/commons).
We only need to enable `watch` option to make Træfik watch configuration backend changes and generate its configuration automatically.
Routes to services will be created and updated instantly at any changes.
Please refer to the [configuration backends](/configuration/commons) section to get documentation on it.
## Commands
Usage:
```bash
traefik [command] [--flag=flag_argument]
```
List of Træfik available commands with description :
- `version` : Print version
- `storeconfig` : Store the static Traefik configuration into a Key-value store. Please refer to the [Store Træfik configuration](/user-guide/kv-config/#store-configuration-in-key-value-store) section to get documentation on it (a short sketch follows this list).
- `bug`: The easiest way to submit a pre-filled issue.
- `healthcheck`: Calls Traefik `/ping` to check health.
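For instance, a hedged sketch of `storeconfig` pushing a configuration file into Consul (the endpoint and file name are illustrative):

```bash
traefik storeconfig --configFile=traefik.toml \
  --consul --consul.endpoint=127.0.0.1:8500 --consul.prefix=traefik
```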
Each command may have related flags.
All those related flags will be displayed with :
```bash
traefik [command] --help
```
Each command is described at the beginning of the help section:
```bash
traefik --help
# or
docker run traefik[:version] --help
# ex: docker run traefik:1.5 --help
```
### Command: bug
Here is the easiest way to submit a pre-filled issue on [Træfik GitHub](https://github.com/containous/traefik).
```bash
traefik bug
```
Watch [this demo](https://www.youtube.com/watch?v=Lyz62L8m93I).
### Command: healthcheck
This command checks the health of Traefik. Its exit status is `0` if Traefik is healthy and `1` if it is unhealthy.
This can be used with Docker [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck) instruction or any other health check orchestration mechanism.
!!! note
The [`ping`](/configuration/ping) must be enabled to allow the `healthcheck` command to call `/ping`.
```bash
traefik healthcheck
```
```bash
OK: http://:8082/ping
```
## Collected Data
**This feature is disabled by default.**
You can read the public proposal on this topic [here](https://github.com/containous/traefik/issues/2369).
### Why ?
In order to help us learn more about how Træfik is being used and improve it, we collect anonymous usage statistics from running instances.
Those data help us prioritize our developments and focus on what's more important (for example, which configuration backend is used and which is not used).
### What ?
Once a day (the first call begins 10 minutes after the start of Træfik), we collect:
- the Træfik version
- a hash of the configuration
- an **anonymous version** of the static configuration:
- token, user name, password, URL, IP, domain, email, etc, are removed
!!! note
We do not collect the dynamic configuration (frontends & backends).
!!! note
    We do not collect data behind the scenes to run advertising programs or to sell such data to third parties.
#### Here is an example
- Source configuration:
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[api]
[Docker]
endpoint = "tcp://10.10.10.10:2375"
domain = "foo.bir"
exposedByDefault = true
swarmMode = true
[Docker.TLS]
CA = "dockerCA"
Cert = "dockerCert"
Key = "dockerKey"
InsecureSkipVerify = true
[ECS]
Domain = "foo.bar"
ExposedByDefault = true
Clusters = ["foo-bar"]
Region = "us-west-2"
AccessKeyID = "AccessKeyID"
SecretAccessKey = "SecretAccessKey"
```
- Obfuscated and anonymous configuration:
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[api]
[Docker]
Endpoint = "xxxx"
Domain = "xxxx"
ExposedByDefault = true
SwarmMode = true
[Docker.TLS]
CA = "xxxx"
Cert = "xxxx"
Key = "xxxx"
InsecureSkipVerify = false
[ECS]
Domain = "xxxx"
ExposedByDefault = true
Clusters = []
Region = "us-west-2"
AccessKeyID = "xxxx"
SecretAccessKey = "xxxx"
```
### Show me the code !
If you want to dig into more details, here is the source code of the collecting system: [collector.go](https://github.com/containous/traefik/blob/master/collector/collector.go)
By default we anonymize all configuration fields, except fields tagged with `export=true`.
You can check all fields in the [godoc](https://godoc.org/github.com/containous/traefik/configuration#GlobalConfiguration).
### How to enable this ?
You can enable the collecting system by:
- adding this line in the configuration TOML file:
```toml
# Send anonymous usage data
#
# Optional
# Default: false
#
sendAnonymousUsage = true
```
- adding this flag in the CLI:
```bash
./traefik --sendAnonymousUsage=true
```
## Setup
1. One VM used to launch the benchmarking tool [wrk](https://github.com/wg/wrk)
2. One VM for Traefik (v1.0.0-beta.416) / nginx (v1.4.6)
3. Two VMs for 2 backend servers in go [whoami](https://github.com/emilevauge/whoamI/)
Each VM has been tuned using the following limits:
The relevant part of the nginx `http` block:

```
http {
    keepalive_requests 10000;
    types_hash_max_size 2048;
    open_file_cache max=200000 inactive=300s;
    open_file_cache_valid 300s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
```
Tail of the nginx benchmark results:

```
Requests/sec: 33591.67
Transfer/sec: 4.97MB
```
### Traefik:
```shell
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-traefik:8000/bench
Running 1m test @ http://IP-traefik:8000/bench
```

Not bad for a young project :)!
Some areas of possible improvements:
- Use [GO_REUSEPORT](https://github.com/kavu/go_reuseport) listener
- Run a separate server instance per CPU core with `GOMAXPROCS=1` (it appears during benchmarks that there is a lot more context switches with Traefik than with nginx)
docs/configuration/acme.md
# ACME (Let's Encrypt) configuration
See also [Let's Encrypt examples](/user-guide/examples/#lets-encrypt-support) and [Docker & Let's Encrypt user guide](/user-guide/docker-and-lets-encrypt).
## Configuration
```toml
# Sample entrypoint configuration when using ACME.
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
```
```toml
# Enable ACME (Let's Encrypt): automatic SSL.
[acme]
# Email address used for registration.
#
# Required
#
email = "test@traefik.io"
# File used for certificates storage.
#
# Optional (Deprecated)
#
#storageFile = "acme.json"
# File or key used for certificates storage.
#
# Required
#
storage = "acme.json"
# or `storage = "traefik/acme/account"` if using KV store.
# Entrypoint to proxy acme apply certificates to.
# WARNING, if the TLS-SNI-01 challenge is used, it must point to an entrypoint on port 443
#
# Required
#
entryPoint = "https"
# Use a DNS-01 acme challenge rather than TLS-SNI-01 challenge
#
# Optional (Deprecated, replaced by [acme.dnsChallenge])
#
# dnsProvider = "digitalocean"
# By default, the dnsProvider will verify the TXT DNS challenge record before letting ACME verify.
# If delayDontCheckDNS is greater than zero, avoid this & instead just wait so many seconds.
# Useful if internal networks block external DNS queries.
#
# Optional (Deprecated, replaced by [acme.dnsChallenge])
# Default: 0
#
# delayDontCheckDNS = 0
# If true, display debug log messages from the acme client library.
#
# Optional
# Default: false
#
# acmeLogging = true
# Enable on demand certificate generation.
#
# Optional (Deprecated)
# Default: false
#
# onDemand = true
# Enable certificate generation on frontends Host rules.
#
# Optional
# Default: false
#
# onHostRule = true
# CA server to use.
# - Uncomment the line to run on the staging let's encrypt server.
# - Leave comment to go to prod.
#
# Optional
# Default: "https://acme-v01.api.letsencrypt.org/directory"
#
# caServer = "https://acme-staging.api.letsencrypt.org/directory"
# Domains list.
#
# [[acme.domains]]
# main = "local1.com"
# sans = ["test1.local1.com", "test2.local1.com"]
# [[acme.domains]]
# main = "local2.com"
# sans = ["test1.local2.com", "test2.local2.com"]
# [[acme.domains]]
# main = "local3.com"
# [[acme.domains]]
# main = "local4.com"
# Use a HTTP-01 acme challenge rather than TLS-SNI-01 challenge
#
# Optional but recommended
#
[acme.httpChallenge]
# EntryPoint to use for the challenges.
#
# Required
#
entryPoint = "http"
# Use a DNS-01 acme challenge rather than TLS-SNI-01 challenge
#
# Optional
#
# [acme.dnsChallenge]
# Provider used.
#
# Required
#
# provider = "digitalocean"
# By default, the provider will verify the TXT DNS challenge record before letting ACME verify.
# If delayBeforeCheck is greater than zero, avoid this & instead just wait so many seconds.
# Useful if internal networks block external DNS queries.
#
# Optional
# Default: 0
#
# delayBeforeCheck = 0
```
!!! note
    Even if the `TLS-SNI-01` challenge is [disabled](https://community.letsencrypt.org/t/2018-01-11-update-regarding-acme-tls-sni-and-shared-hosting-infrastructure/50188) for the moment, it remains the default ACME challenge in Træfik.
    If the `TLS-SNI-01` challenge is not re-enabled in the future, it will be removed from Træfik.
!!! note
If `TLS-SNI-01` challenge is used, `acme.entryPoint` has to be reachable by Let's Encrypt through the port 443.
If `HTTP-01` challenge is used, `acme.httpChallenge.entryPoint` has to be defined and reachable by Let's Encrypt through the port 80.
These are Let's Encrypt limitations as described on the [community forum](https://community.letsencrypt.org/t/support-for-ports-other-than-80-and-443/3419/72).
### Let's Encrypt downtime
Let's Encrypt functionality will be limited until Træfik is restarted.
If Let's Encrypt is not reachable, these certificates will be used :
- ACME certificates already generated before downtime
- Expired ACME certificates
- Provided certificates
!!! note
Default Træfik certificate will be used instead of ACME certificates for new (sub)domains (which need Let's Encrypt challenge).
### `storage`
```toml
[acme]
# ...
storage = "acme.json"
# ...
```
The `storage` option sets where your ACME certificates are stored.
There are two kinds of `storage`:
- a JSON file,
- a KV store entry.
!!! danger "DEPRECATED"
`storage` replaces `storageFile` which is deprecated.
!!! note
During Træfik configuration migration from a configuration file to a KV store (thanks to `storeconfig` subcommand as described [here](/user-guide/kv-config/#store-configuration-in-key-value-store)), if ACME certificates have to be migrated too, use both `storageFile` and `storage`.
- `storageFile` will contain the path to the `acme.json` file to migrate.
- `storage` will contain the key where the certificates will be stored.
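For illustration, a hedged sketch of the migration configuration described in the note above (the file path and key are the defaults used elsewhere in this document):

```toml
[acme]
  # existing certificates to migrate, read from the local file...
  storageFile = "acme.json"
  # ...and written to this KV store entry
  storage = "traefik/acme/account"
```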
#### Store data in a file
ACME certificates can be stored in a JSON file with the `600` permission mode.
There are two ways to store ACME certificates in a file from Docker:
- create a file on your host and mount it as a volume:
```toml
storage = "acme.json"
```
```bash
docker run -v "/my/host/acme.json:/acme.json" traefik
```
- mount the folder containing the file as a volume
```toml
storage = "/etc/traefik/acme/acme.json"
```
```bash
docker run -v "/my/host/acme:/etc/traefik/acme" traefik
```
!!! warning
    This file cannot be shared across multiple instances of Træfik at the same time.
If you have to use Træfik cluster mode, please use [a KV Store entry](/configuration/acme/#storage-kv-entry).
#### Store data in a KV store entry
ACME certificates can be stored in a KV Store entry.
```toml
storage = "traefik/acme/account"
```
**This kind of storage is mandatory in cluster mode.**
Because KV stores (like Consul) have a limited entry size, the certificates list is compressed before being stored in a KV store entry.
!!! note
It's possible to store up to approximately 100 ACME certificates in Consul.
### `acme.httpChallenge`
Use `HTTP-01` challenge to generate/renew ACME certificates.
Redirection is fully compatible with the HTTP-01 challenge and can be used without problem.
```toml
[acme]
# ...
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
```
#### `entryPoint`
Specify the entryPoint to use during the challenges.
```toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
# ...
[acme]
# ...
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
```
!!! note
`acme.httpChallenge.entryPoint` has to be reachable by Let's Encrypt through the port 80.
It's a Let's Encrypt limitation as described on the [community forum](https://community.letsencrypt.org/t/support-for-ports-other-than-80-and-443/3419/72).
### `acme.dnsChallenge`
Use `DNS-01` challenge to generate/renew ACME certificates.
```toml
[acme]
# ...
[acme.dnsChallenge]
provider = "digitalocean"
delayBeforeCheck = 0
# ...
```
#### `provider`
Select the provider that matches the DNS domain that will host the challenge TXT record, and provide environment variables to enable setting it:
| Provider Name | Provider code | Configuration |
|--------------------------------------------------------|----------------|---------------------------------------------------------------------------------------------------------------------------|
| [Auroradns](https://www.pcextreme.com/aurora/dns) | `auroradns` | `AURORA_USER_ID`, `AURORA_KEY`, `AURORA_ENDPOINT` |
| [Azure](https://azure.microsoft.com/services/dns/) | `azure` | `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_SUBSCRIPTION_ID`, `AZURE_TENANT_ID`, `AZURE_RESOURCE_GROUP` |
| [Cloudflare](https://www.cloudflare.com) | `cloudflare` | `CLOUDFLARE_EMAIL`, `CLOUDFLARE_API_KEY` - The Cloudflare `Global API Key` needs to be used and not the `Origin CA Key` |
| [DigitalOcean](https://www.digitalocean.com) | `digitalocean` | `DO_AUTH_TOKEN` |
| [DNSimple](https://dnsimple.com) | `dnsimple` | `DNSIMPLE_OAUTH_TOKEN`, `DNSIMPLE_BASE_URL` |
| [DNS Made Easy](https://dnsmadeeasy.com) | `dnsmadeeasy` | `DNSMADEEASY_API_KEY`, `DNSMADEEASY_API_SECRET`, `DNSMADEEASY_SANDBOX` |
| [DNSPod](http://www.dnspod.net/) | `dnspod` | `DNSPOD_API_KEY` |
| [Dyn](https://dyn.com) | `dyn` | `DYN_CUSTOMER_NAME`, `DYN_USER_NAME`, `DYN_PASSWORD` |
| [Exoscale](https://www.exoscale.ch) | `exoscale` | `EXOSCALE_API_KEY`, `EXOSCALE_API_SECRET`, `EXOSCALE_ENDPOINT` |
| [Gandi](https://www.gandi.net) | `gandi` | `GANDI_API_KEY` |
| [GoDaddy](https://godaddy.com/domains) | `godaddy` | `GODADDY_API_KEY`, `GODADDY_API_SECRET` |
| [Google Cloud DNS](https://cloud.google.com/dns/docs/) | `gcloud` | `GCE_PROJECT`, `GCE_SERVICE_ACCOUNT_FILE` |
| [Linode](https://www.linode.com) | `linode` | `LINODE_API_KEY` |
| manual | - | none, but run Træfik interactively & turn on `acmeLogging` to see instructions & press <kbd>Enter</kbd>. |
| [Namecheap](https://www.namecheap.com) | `namecheap` | `NAMECHEAP_API_USER`, `NAMECHEAP_API_KEY` |
| [Ns1](https://ns1.com/) | `ns1` | `NS1_API_KEY` |
| [Open Telekom Cloud](https://cloud.telekom.de/en/) | `otc` | `OTC_DOMAIN_NAME`, `OTC_USER_NAME`, `OTC_PASSWORD`, `OTC_PROJECT_NAME`, `OTC_IDENTITY_ENDPOINT` |
| [OVH](https://www.ovh.com) | `ovh` | `OVH_ENDPOINT`, `OVH_APPLICATION_KEY`, `OVH_APPLICATION_SECRET`, `OVH_CONSUMER_KEY` |
| [PowerDNS](https://www.powerdns.com) | `pdns` | `PDNS_API_KEY`, `PDNS_API_URL` |
| [Rackspace](https://www.rackspace.com/cloud/dns) | `rackspace` | `RACKSPACE_USER`, `RACKSPACE_API_KEY` |
| [RFC2136](https://tools.ietf.org/html/rfc2136) | `rfc2136` | `RFC2136_TSIG_KEY`, `RFC2136_TSIG_SECRET`, `RFC2136_TSIG_ALGORITHM`, `RFC2136_NAMESERVER` |
| [Route 53](https://aws.amazon.com/route53/) | `route53` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `AWS_HOSTED_ZONE_ID` or configured user/instance IAM profile. |
| [VULTR](https://www.vultr.com) | `vultr` | `VULTR_API_KEY` |
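For example, a hedged sketch using the `cloudflare` provider from the table above (the credential values are placeholders):

```bash
# environment variables read by the cloudflare provider
export CLOUDFLARE_EMAIL="admin@example.com"
export CLOUDFLARE_API_KEY="xxxxxxxxxxxxxxxx"

# traefik.toml is assumed to contain [acme.dnsChallenge] with provider = "cloudflare"
traefik --configFile=traefik.toml
```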
#### `delayBeforeCheck`
By default, the `provider` will verify the TXT DNS challenge record before letting ACME verify.
If `delayBeforeCheck` is greater than zero, avoid this & instead just wait so many seconds.
Useful if internal networks block external DNS queries.
!!! note
    This field has no effect if a `provider` is not defined.
### `onDemand` (Deprecated)
!!! danger "DEPRECATED"
This option is deprecated.
```toml
[acme]
# ...
onDemand = true
# ...
```
Enable on-demand certificate generation.
This will request a certificate from Let's Encrypt during the first TLS handshake for a host name that does not yet have a certificate.
!!! warning
    TLS handshakes will be slow when requesting a host name certificate for the first time; this can lead to DoS attacks.
!!! warning
    Take note that Let's Encrypt has [rate limiting](https://letsencrypt.org/docs/rate-limits).
### `onHostRule`
```toml
[acme]
# ...
onHostRule = true
# ...
```
Enable certificate generation on frontends `Host` rules (for frontends wired on the `acme.entryPoint`).
This will request a certificate from Let's Encrypt for each frontend with a Host rule.
For example, a rule `Host:test1.traefik.io,test2.traefik.io` will request a certificate with main domain `test1.traefik.io` and SAN `test2.traefik.io`.
### `caServer`
```toml
[acme]
# ...
caServer = "https://acme-staging.api.letsencrypt.org/directory"
# ...
```
CA server to use.
- Uncomment the line to run on the staging Let's Encrypt server.
- Leave comment to go to prod.
### `acme.domains`
```toml
[acme]
# ...
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
# ...
```
You can provide SANs (alternative domains) to each main domain.
All domains must have A/AAAA records pointing to Træfik.
!!! warning
    Take note that Let's Encrypt has [rate limiting](https://letsencrypt.org/docs/rate-limits).
Each domain & SANs will lead to a certificate request.
### `dnsProvider` (Deprecated)
!!! danger "DEPRECATED"
This option is deprecated.
Please refer to [DNS challenge provider section](/configuration/acme/#provider)
### `delayDontCheckDNS` (Deprecated)
!!! danger "DEPRECATED"
This option is deprecated.
Please refer to [DNS challenge delayBeforeCheck section](/configuration/acme/#delaybeforecheck)
docs/configuration/api.md
# API Definition
## Configuration
```toml
# API definition
[api]
# Name of the related entry point
#
# Optional
# Default: "traefik"
#
entryPoint = "traefik"
# Enable Dashboard
#
# Optional
# Default: true
#
dashboard = true
# Enable debug mode.
# This will install HTTP handlers to expose Go expvars under /debug/vars and
# pprof profiling data under /debug/pprof.
# Additionally, the log level will be set to DEBUG.
#
# Optional
# Default: false
#
debug = true
```
For more customization, see [entry points](/configuration/entrypoints/) documentation and [examples](/user-guide/examples/#ping-health-check).
## Web UI
![Web UI Providers](/img/web.frontend.png)
![Web UI Health](/img/traefik-health.png)
## API
| Path | Method | Description |
|-----------------------------------------------------------------|------------------|-------------------------------------------|
| `/` | `GET` | Provides a simple HTML frontend of Træfik |
| `/health` | `GET` | json health metrics |
| `/api` | `GET` | Configuration for all providers |
| `/api/providers` | `GET` | Providers |
| `/api/providers/{provider}` | `GET`, `PUT` | Get or update provider (1) |
| `/api/providers/{provider}/backends` | `GET` | List backends |
| `/api/providers/{provider}/backends/{backend}` | `GET` | Get backend |
| `/api/providers/{provider}/backends/{backend}/servers` | `GET` | List servers in backend |
| `/api/providers/{provider}/backends/{backend}/servers/{server}` | `GET` | Get a server in a backend |
| `/api/providers/{provider}/frontends` | `GET` | List frontends |
| `/api/providers/{provider}/frontends/{frontend}` | `GET` | Get a frontend |
| `/api/providers/{provider}/frontends/{frontend}/routes` | `GET` | List routes in a frontend |
| `/api/providers/{provider}/frontends/{frontend}/routes/{route}` | `GET` | Get a route in a frontend |
<1> See [Rest](/configuration/backends/rest/#api) for more information.
!!! warning
    For compatibility reasons, when you activate the rest provider, you can use `web` or `rest` as the `provider` value.
But be careful, in the configuration for all providers the key is still `web`.
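For illustration, a hedged sketch of pushing a new dynamic configuration through the `PUT` endpoint listed in the table above (the file name is a placeholder):

```bash
curl -X PUT -d @dynamic_conf.json "http://localhost:8080/api/providers/web"
```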
### Provider configurations
```shell
curl -s "http://localhost:8080/api" | jq .
```
```json
{
"file": {
"frontends": {
"frontend2": {
"routes": {
"test_2": {
"rule": "Path:/test"
}
},
"backend": "backend1"
},
"frontend1": {
"routes": {
"test_1": {
"rule": "Host:test.localhost"
}
},
"backend": "backend2"
}
},
"backends": {
"backend2": {
"loadBalancer": {
"method": "drr"
},
"servers": {
"server2": {
"weight": 2,
"URL": "http://172.17.0.5:80"
},
"server1": {
"weight": 1,
"url": "http://172.17.0.4:80"
}
}
},
"backend1": {
"loadBalancer": {
"method": "wrr"
},
"circuitBreaker": {
"expression": "NetworkErrorRatio() > 0.5"
},
"servers": {
"server2": {
"weight": 1,
"url": "http://172.17.0.3:80"
},
"server1": {
"weight": 10,
"url": "http://172.17.0.2:80"
}
}
}
}
}
}
```
### Health
```shell
curl -s "http://localhost:8080/health" | jq .
```
```json
{
// Træfik PID
"pid": 2458,
// Træfik server uptime (formatted time)
"uptime": "39m6.885931127s",
// Træfik server uptime in seconds
"uptime_sec": 2346.885931127,
// current server date
"time": "2015-10-07 18:32:24.362238909 +0200 CEST",
// current server date in seconds
"unixtime": 1444235544,
// count HTTP response status code in realtime
"status_code_count": {
"502": 1
},
// count HTTP response status code since Træfik started
"total_status_code_count": {
"200": 7,
"404": 21,
"502": 13
},
// count HTTP response
"count": 1,
// count HTTP response
"total_count": 41,
// sum of all response time (formatted time)
"total_response_time": "35.456865605s",
// sum of all response time in seconds
"total_response_time_sec": 35.456865605,
// average response time (formatted time)
"average_response_time": "864.8016ms",
// average response time in seconds
"average_response_time_sec": 0.8648016000000001,
// request statistics [requires --statistics to be set]
// ten most recent requests with 4xx and 5xx status codes
"recent_errors": [
{
// status code
"status_code": 500,
// description of status code
"status": "Internal Server Error",
// request HTTP method
"method": "GET",
// request hostname
"host": "localhost",
// request path
"path": "/path",
// RFC 3339 formatted date/time
"time": "2016-10-21T16:59:15.418495872-07:00"
}
]
}
```
## Metrics
You can enable Traefik to export internal metrics to different monitoring systems.
```toml
[api]
# ...
# Enable more detailed statistics.
[api.statistics]
# Number of recent errors logged.
#
# Default: 10
#
recentErrors = 10
# ...
```
| Path | Method | Description |
|------------|---------------|-------------------------|
| `/metrics` | `GET` | Export internal metrics |
# BoltDB Backend
Træfik can be configured to use BoltDB as a backend configuration.
```toml
################################################################
# BoltDB configuration backend
################################################################
# Enable BoltDB configuration backend.
[boltdb]
# BoltDB file.
#
# Required
# Default: "127.0.0.1:4001"
#
endpoint = "/my.db"
# Enable watch BoltDB changes.
#
# Optional
# Default: true
#
watch = true
# Prefix used for KV store.
#
# Optional
# Default: "/traefik"
#
prefix = "/traefik"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
filename = "boltdb.tmpl"
# Use BoltDB user/pass authentication.
#
# Optional
#
# username = foo
# password = bar
# Enable BoltDB TLS connection.
#
# Optional
#
# [boltdb.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/boltdb.crt"
# key = "/etc/ssl/boltdb.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
# Consul Key-Value backend
Træfik can be configured to use Consul as a backend configuration.
```toml
################################################################
# Consul KV configuration backend
################################################################
# Enable Consul KV configuration backend.
[consul]
# Consul server endpoint.
#
# Required
# Default: "127.0.0.1:8500"
#
endpoint = "127.0.0.1:8500"
# Enable watch Consul changes.
#
# Optional
# Default: true
#
watch = true
# Prefix used for KV store.
#
# Optional
# Default: traefik
#
prefix = "traefik"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "consul.tmpl"
# Use Consul user/pass authentication.
#
# Optional
#
# username = foo
# password = bar
# Enable Consul TLS connection.
#
# Optional
#
# [consul.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/consul.crt"
# key = "/etc/ssl/consul.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on Traefik KV structure.
# Consul Catalog backend
Træfik can be configured to use service discovery catalog of Consul as a backend configuration.
```toml
################################################################
# Consul Catalog configuration backend
################################################################
# Enable Consul Catalog configuration backend.
[consulCatalog]
# Consul server endpoint.
#
# Required
# Default: "127.0.0.1:8500"
#
endpoint = "127.0.0.1:8500"
# Expose Consul catalog services by default in Traefik.
#
# Optional
# Default: true
#
exposedByDefault = false
# Default domain used.
#
# Optional
#
domain = "consul.localhost"
# Prefix for Consul catalog tags.
#
# Optional
# Default: "traefik"
#
prefix = "traefik"
# Default frontEnd Rule for Consul services.
#
# The format is a Go Template with:
# - ".ServiceName", ".Domain" and ".Attributes" available
# - "getTag(name, tags, defaultValue)", "hasTag(name, tags)" and "getAttribute(name, tags, defaultValue)" functions are available
# - "getAttribute(...)" function uses prefixed tag names based on "prefix" value
#
# Optional
# Default: "Host:{{.ServiceName}}.{{.Domain}}"
#
#frontEndRule = "Host:{{.ServiceName}}.{{.Domain}}"
```
This backend will create routes matching on hostname based on the service name used in Consul.
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
### Tags
Additional settings can be defined using Consul Catalog tags.
| Tag | Description |
|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `traefik.enable=false` | Disable this container in Træfik |
| `traefik.protocol=https` | Override the default `http` protocol |
| `traefik.backend.weight=10` | Assign this weight to the container |
| `traefik.backend.circuitbreaker=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend, ex: `NetworkErrorRatio() > 0.5` |
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{{.ServiceName}}.{{.Domain}}`). |
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | Override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
| `traefik.backend.loadbalancer=drr` | override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
### Examples
If you want Træfik to use Consul tags correctly, you need to define them like this:
```json
traefik.enable=true
traefik.tags=api
traefik.tags=external
```
If the prefix defined in Træfik configuration is `bla`, tags need to be defined like that:
```json
bla.enable=true
bla.tags=api
bla.tags=external
```
# Docker Backend
Træfik can be configured to use Docker as a backend configuration.
## Docker
```toml
################################################################
# Docker configuration backend
################################################################
# Enable Docker configuration backend.
[docker]
# Docker server endpoint. Can be a tcp or a unix socket endpoint.
#
# Required
#
endpoint = "unix:///var/run/docker.sock"
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on a container.
#
# Required
#
domain = "docker.localhost"
# Enable watch docker changes.
#
# Optional
#
watch = true
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "docker.tmpl"
# Expose containers by default in Traefik.
# If set to false, containers that don't have `traefik.enable=true` will be ignored.
#
# Optional
# Default: true
#
exposedbydefault = true
# Use the IP address from the bound port instead of the inner network one.
# For specific use-case :)
#
# Optional
# Default: false
#
usebindportip = true
# Use Swarm Mode services as data provider.
#
# Optional
# Default: false
#
swarmmode = false
# Enable docker TLS connection.
#
# Optional
#
# [docker.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/docker.crt"
# key = "/etc/ssl/docker.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
## Docker Swarm Mode
```toml
################################################################
# Docker Swarmmode configuration backend
################################################################
# Enable Docker configuration backend.
[docker]
# Docker server endpoint.
# Can be a tcp or a unix socket endpoint.
#
# Required
# Default: "unix:///var/run/docker.sock"
#
endpoint = "tcp://127.0.0.1:2375"
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on a services.
#
# Optional
# Default: ""
#
domain = "docker.localhost"
# Enable watch docker changes.
#
# Optional
# Default: true
#
watch = true
# Use Docker Swarm Mode as data provider.
#
# Optional
# Default: false
#
swarmmode = true
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "docker.tmpl"
# Expose services by default in Traefik.
#
# Optional
# Default: true
#
exposedbydefault = false
# Enable docker TLS connection.
#
# Optional
#
# [docker.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/docker.crt"
# key = "/etc/ssl/docker.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
## Labels: overriding default behaviour
!!! note
If you use a compose file, labels should be defined in the `deploy` part of your service.
This behavior is only enabled for docker-compose version 3+ ([Compose file reference](https://docs.docker.com/compose/compose-file/#labels-1)).
```yaml
version: "3"
services:
whoami:
deploy:
labels:
traefik.docker.network: traefik
```
### On Containers
Labels can be used on containers to override default behaviour (a `docker run` sketch follows the table below).
| Label | Description |
|------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `traefik.backend=foo` | Give the name `foo` to the generated backend for this container. |
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions (DEPRECATED) |
| `traefik.backend.loadbalancer.swarm=true` | Use Swarm's inbuilt load balancer (only relevant under Swarm Mode). |
| `traefik.backend.circuitbreaker.expression=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
| `traefik.port=80` | Register this port. Useful when the container exposes multiples ports. |
| `traefik.protocol=https` | Override the default `http` protocol |
| `traefik.weight=10` | Assign this weight to the container |
| `traefik.enable=false` | Disable this container in Træfik |
| `traefik.frontend.rule=EXPR` | Override the default frontend rule. Default: `Host:{containerName}.{domain}` or `Host:{service}.{project_name}.{domain}` if you are using `docker-compose`. |
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | Override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints` |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
| `traefik.frontend.whitelistSourceRange:RANGE` | List of IP-Ranges which are allowed to access. An unset or empty list allows all Source-IPs to access. If one of the Net-Specifications are invalid, the whole list is invalid and allows all Source-IPs to access. |
| `traefik.docker.network` | Set the docker network to use for connections to this container. If a container is linked to several networks, be sure to set the proper network name (you can check with `docker inspect <container_id>`) otherwise it will randomly pick one (depending on how docker is returning them). For instance when deploying docker `stack` from compose files, the compose defined networks will be prefixed with the `stack` name. |
| `traefik.frontend.redirect.entryPoint=https` | Enables Redirect to another entryPoint for that frontend (e.g. HTTPS) |
| `traefik.frontend.redirect.regex=^http://localhost/(.*)` | Redirect to another URL for that frontend. Must be set with `traefik.frontend.redirect.replacement`. |
| `traefik.frontend.redirect.replacement=http://mydomain/$1` | Redirect to another URL for that frontend. Must be set with `traefik.frontend.redirect.regex`. |
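A hedged `docker run` sketch combining a few of these labels (the image name, rule, and port are illustrative):

```bash
docker run -d \
  --label traefik.enable=true \
  --label traefik.port=80 \
  --label "traefik.frontend.rule=Host:whoami.docker.localhost" \
  emilevauge/whoami
```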
#### Security Headers
| Label | Description |
|----------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `traefik.frontend.headers.allowedHosts=EXPR` | Provides a list of allowed hosts that requests will be processed. Format: `Host1,Host2` |
| `traefik.frontend.headers.customRequestHeaders=EXPR ` | Provides the container with custom request headers that will be appended to each request forwarded to the container. Format: <code>HEADER:value&vert;&vert;HEADER2:value2</code> |
| `traefik.frontend.headers.customResponseHeaders=EXPR` | Appends the headers to each response returned by the container, before forwarding the response to the client. Format: <code>HEADER:value&vert;&vert;HEADER2:value2</code> |
| `traefik.frontend.headers.hostsProxyHeaders=EXPR ` | Provides a list of headers that the proxied hostname may be stored. Format: `HEADER1,HEADER2` |
| `traefik.frontend.headers.SSLRedirect=true` | Forces the frontend to redirect to SSL if a non-SSL request is sent. |
| `traefik.frontend.headers.SSLTemporaryRedirect=true` | Forces the frontend to redirect to SSL if a non-SSL request is sent, but by sending a 302 instead of a 301. |
| `traefik.frontend.headers.SSLHost=HOST` | This setting configures the hostname that redirects will be based on. Default is "", which is the same host as the request. |
| `traefik.frontend.headers.SSLProxyHeaders=EXPR` | Header combinations that would signify a proper SSL Request (Such as `X-Forwarded-For:https`). Format: <code>HEADER:value&vert;&vert;HEADER2:value2</code> |
| `traefik.frontend.headers.STSSeconds=315360000` | Sets the max-age of the STS header. |
| `traefik.frontend.headers.STSIncludeSubdomains=true` | Adds the `IncludeSubdomains` section of the STS header. |
| `traefik.frontend.headers.STSPreload=true` | Adds the preload flag to the STS header. |
| `traefik.frontend.headers.forceSTSHeader=false` | Adds the STS header to non-SSL requests. |
| `traefik.frontend.headers.frameDeny=false` | Adds the `X-Frame-Options` header with the value of `DENY`. |
| `traefik.frontend.headers.customFrameOptionsValue=VALUE` | Overrides the `X-Frame-Options` header with the custom value. |
| `traefik.frontend.headers.contentTypeNosniff=true` | Adds the `X-Content-Type-Options` header with the value `nosniff`. |
| `traefik.frontend.headers.browserXSSFilter=true` | Adds the X-XSS-Protection header with the value `1; mode=block`. |
| `traefik.frontend.headers.contentSecurityPolicy=VALUE` | Adds CSP Header with the custom value. |
| `traefik.frontend.headers.publicKey=VALUE` | Adds the pinned public key (HPKP) header. |
| `traefik.frontend.headers.referrerPolicy=VALUE` | Adds referrer policy header. |
| `traefik.frontend.headers.isDevelopment=false` | This will cause the `AllowedHosts`, `SSLRedirect`, and `STSSeconds`/`STSIncludeSubdomains` options to be ignored during development.<br>When deploying to production, be sure to set this to false. |
### On Service
Service labels can be used to override default behaviour:
| Label | Description |
|---------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------|
| `traefik.<service-name>.port=PORT` | Overrides `traefik.port`. If several ports need to be exposed, the service labels could be used. |
| `traefik.<service-name>.protocol` | Overrides `traefik.protocol`. |
| `traefik.<service-name>.weight` | Assign this service weight. Overrides `traefik.weight`. |
| `traefik.<service-name>.frontend.backend=BACKEND` | Assign this service frontend to `BACKEND`. Default is to assign to the service backend. |
| `traefik.<service-name>.frontend.entryPoints` | Overrides `traefik.frontend.entrypoints` |
| `traefik.<service-name>.frontend.auth.basic` | Sets a Basic Auth for that frontend |
| `traefik.<service-name>.frontend.passHostHeader` | Overrides `traefik.frontend.passHostHeader`. |
| `traefik.<service-name>.frontend.priority` | Overrides `traefik.frontend.priority`. |
| `traefik.<service-name>.frontend.rule` | Overrides `traefik.frontend.rule`. |
| `traefik.<service-name>.frontend.redirect` | Overrides `traefik.frontend.redirect`. |
| `traefik.<service-name>.frontend.redirect.entryPoint=https` | Overrides `traefik.frontend.redirect.entryPoint`. |
| `traefik.<service-name>.frontend.redirect.regex=^http://localhost/(.*)` | Overrides `traefik.frontend.redirect.regex`. |
| `traefik.<service-name>.frontend.redirect.replacement=http://mydomain/$1` | Overrides `traefik.frontend.redirect.replacement`. |
!!! note
    If a label is defined both as a `container label` and a `service label` (for example `traefik.<service-name>.port=PORT` and `traefik.port=PORT`), the `service label` is used to define the `<service-name>` property (`port` in the example).
It's possible to mix `container labels` and `service labels`, in this case `container labels` are used as default value for missing `service labels` but no frontends are going to be created with the `container labels`.
More details in this [example](/user-guide/docker-and-lets-encrypt/#labels).
!!! warning
When running inside a container, Træfik will need network access through:
`docker network connect <network> <traefik-container>`
# DynamoDB Backend
Træfik can be configured to use Amazon DynamoDB as a backend configuration.
## Configuration
```toml
################################################################
# DynamoDB configuration backend
################################################################
# Enable DynamoDB configuration backend.
[dynamodb]
# Region to use when connecting to AWS.
#
# Required
#
region = "us-west-1"
# DynamoDB Table Name.
#
# Optional
# Default: "traefik"
#
tableName = "traefik"
# Enable watch DynamoDB changes.
#
# Optional
# Default: true
#
watch = true
# Polling interval (in seconds).
#
# Optional
# Default: 15
#
refreshSeconds = 15
# AccessKeyID to use when connecting to AWS.
#
# Optional
#
accessKeyID = "abc"
# SecretAccessKey to use when connecting to AWS.
#
# Optional
#
secretAccessKey = "123"
# Endpoint of a local DynamoDB instance, used for testing.
#
# Optional
#
endpoint = "http://localhost:8080"
```
## Table Items
Items in the `dynamodb` table must have three attributes:
- `id` (string): The id is the primary key.
- `name`(string): The name is used as the name of the frontend or backend.
- `frontend` or `backend` (map): This attribute's structure matches exactly the structure of a Frontend or Backend type in Traefik.
See `types/types.go` for details.
The presence or absence of this attribute determines its type.
So an item should never have both a `frontend` and a `backend` attribute.
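For illustration, a hedged sketch of a backend item (the id, name, and server address are placeholders):

```json
{
  "id": "backend-app1",
  "name": "app1",
  "backend": {
    "servers": {
      "server1": { "url": "http://10.0.0.1:80", "weight": 1 }
    }
  }
}
```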
# ECS Backend
Træfik can be configured to use Amazon ECS as a backend configuration.
## Configuration
```toml
################################################################
# ECS configuration backend
################################################################
# Enable ECS configuration backend.
[ecs]
# ECS Cluster Name.
#
# DEPRECATED - Please use `clusters`.
#
cluster = "default"
# ECS Clusters Name.
#
# Optional
# Default: ["default"]
#
clusters = ["default"]
# Enable watch ECS changes.
#
# Optional
# Default: true
#
watch = true
# Default domain used.
#
# Optional
# Default: ""
#
domain = "ecs.localhost"
# Enable auto discover ECS clusters.
#
# Optional
# Default: false
#
autoDiscoverClusters = false
# Polling interval (in seconds).
#
# Optional
# Default: 15
#
refreshSeconds = 15
# Expose ECS services by default in Traefik.
#
# Optional
# Default: true
#
exposedByDefault = false
# Region to use when connecting to AWS.
#
# Optional
#
region = "us-east-1"
# AccessKeyID to use when connecting to AWS.
#
# Optional
#
accessKeyID = "abc"
# SecretAccessKey to use when connecting to AWS.
#
# Optional
#
secretAccessKey = "123"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "ecs.tmpl"
```
If `AccessKeyID`/`SecretAccessKey` is not given, credentials will be resolved in the following order:
- From environment variables; `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`.
- Shared credentials, determined by `AWS_PROFILE` and `AWS_SHARED_CREDENTIALS_FILE`, defaults to `default` and `~/.aws/credentials`.
- EC2 instance role or ECS task role
## Policy
Træfik needs the following policy to read ECS information:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TraefikECSReadAccess",
"Effect": "Allow",
"Action": [
"ecs:ListClusters",
"ecs:DescribeClusters",
"ecs:ListTasks",
"ecs:DescribeTasks",
"ecs:DescribeContainerInstances",
"ecs:DescribeTaskDefinition",
"ec2:DescribeInstances"
],
"Resource": [
"*"
]
}
]
}
```
## Labels: overriding default behaviour
Labels can be used on task containers to override default behaviour:
| Label | Description |
|-----------------------------------------------------------|------------------------------------------------------------------------------------------|
| `traefik.protocol=https` | override the default `http` protocol |
| `traefik.weight=10` | assign this weight to the container |
| `traefik.enable=false` | disable this container in Træfik |
| `traefik.port=80` | override the default `port` value. Overrides `NetworkBindings` from Docker Container |
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
| `traefik.backend.healthcheck.path=/health` | enable health checks for the backend, hitting the container at `path` |
| `traefik.backend.healthcheck.interval=1s` | configure the health check interval |
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |

View File

@@ -0,0 +1,75 @@
# Etcd Backend
Træfik can be configured to use Etcd as a backend configuration.
```toml
################################################################
# Etcd configuration backend
################################################################
# Enable Etcd configuration backend.
[etcd]
# Etcd server endpoint.
#
# Required
# Default: "127.0.0.1:2379"
#
endpoint = "127.0.0.1:2379"
# Enable watch Etcd changes.
#
# Optional
# Default: true
#
watch = true
# Prefix used for KV store.
#
# Optional
# Default: "/traefik"
#
prefix = "/traefik"
# Force to use API V3 (otherwise still use API V2)
#
# Deprecated
#
# Optional
# Default: false
#
useAPIV3 = true
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "etcd.tmpl"
# Use etcd user/pass authentication.
#
# Optional
#
# username = foo
# password = bar
# Enable etcd TLS connection.
#
# Optional
#
# [etcd.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/etcd.crt"
# key = "/etc/ssl/etcd.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on Traefik KV structure.
!!! note
The `useAPIV3` option enables the Etcd API V3 only when it is set to `true`.
This option is **deprecated**, and API V2 will no longer be supported in the future.

View File

@@ -0,0 +1,32 @@
# Eureka Backend
Træfik can be configured to use Eureka as a backend configuration.
```toml
################################################################
# Eureka configuration backend
################################################################
# Enable Eureka configuration backend.
[eureka]
# Eureka server endpoint.
#
# Required
#
endpoint = "http://my.eureka.server/eureka"
# Override default configuration time between refresh.
#
# Optional
# Default: 30s
#
delay = "1m"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "eureka.tmpl"
```

View File

@@ -0,0 +1,247 @@
# File Backends
Træfik can be configured with a file.
## Reference
```toml
# Backends
[backends]
[backends.backend1]
[backends.backend1.servers]
[backends.backend1.servers.server0]
url = "http://10.10.10.1:80"
weight = 1
[backends.backend1.servers.server1]
url = "http://10.10.10.2:80"
weight = 2
# ...
[backends.backend1.circuitBreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.loadBalancer]
method = "drr"
[backends.backend1.loadBalancer.stickiness]
cookieName = "foobar"
[backends.backend1.maxConn]
amount = 10
extractorfunc = "request.host"
[backends.backend1.healthCheck]
path = "/health"
port = 88
interval = "30s"
[backends.backend2]
# ...
# Frontends
[frontends]
[frontends.frontend1]
entryPoints = ["http", "https"]
backend = "backend1"
passHostHeader = true
passTLSCert = true
priority = 42
basicAuth = [
"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/",
"test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
]
whitelistSourceRange = ["10.42.0.0/16", "152.89.1.33/32", "afed:be44::/16"]
[frontends.frontend1.routes]
[frontends.frontend1.routes.route0]
rule = "Host:test.localhost"
[frontends.frontend1.routes.Route1]
rule = "Method:GET"
# ...
[frontends.frontend1.headers]
allowedHosts = ["foobar", "foobar"]
hostsProxyHeaders = ["foobar", "foobar"]
SSLRedirect = true
SSLTemporaryRedirect = true
SSLHost = "foobar"
STSSeconds = 42
STSIncludeSubdomains = true
STSPreload = true
forceSTSHeader = true
frameDeny = true
customFrameOptionsValue = "foobar"
contentTypeNosniff = true
browserXSSFilter = true
contentSecurityPolicy = "foobar"
publicKey = "foobar"
referrerPolicy = "foobar"
isDevelopment = true
[frontends.frontend1.headers.customRequestHeaders]
X-Foo-Bar-01 = "foobar"
X-Foo-Bar-02 = "foobar"
# ...
[frontends.frontend1.headers.customResponseHeaders]
X-Foo-Bar-03 = "foobar"
X-Foo-Bar-04 = "foobar"
# ...
[frontends.frontend1.headers.SSLProxyHeaders]
X-Foo-Bar-05 = "foobar"
X-Foo-Bar-06 = "foobar"
# ...
[frontends.frontend1.errors]
[frontends.frontend1.errors.errorPage0]
status = ["500-599"]
backend = "error"
query = "/{status}.html"
[frontends.frontend1.errors.errorPage1]
status = ["404", "403"]
backend = "error"
query = "/{status}.html"
# ...
[frontends.frontend1.ratelimit]
extractorfunc = "client.ip"
[frontends.frontend1.ratelimit.rateset.rateset1]
period = "10s"
average = 100
burst = 200
[frontends.frontend1.ratelimit.rateset.rateset2]
period = "3s"
average = 5
burst = 10
# ...
[frontends.frontend1.redirect]
entryPoint = "https"
regex = "^http://localhost/(.*)"
replacement = "http://mydomain/$1"
[frontends.frontend2]
# ...
# HTTPS certificates
[[tls]]
entryPoints = ["https"]
[tls.certificate]
certFile = "path/to/my.cert"
keyFile = "path/to/my.key"
[[tls]]
# ...
```
## Configuration mode
You have three choices:
- [Simple](/configuration/backends/file/#simple)
- [Rules in a Separate File](/configuration/backends/file/#rules-in-a-separate-file)
- [Multiple `.toml` Files](/configuration/backends/file/#multiple-toml-files)
To enable the file backend, you must either pass the `--file` option to the Træfik binary or put the `[file]` section (with or without inner settings) in the configuration file.
The configuration file allows managing both backends/frontends and HTTPS certificates (which are not [Let's Encrypt](https://letsencrypt.org) certificates generated through Træfik).
### Simple
Add your configuration at the end of the global configuration file `traefik.toml`:
```toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
# ...
[entryPoints.https]
# ...
[file]
# rules
[backends]
[backends.backend1]
# ...
[backends.backend2]
# ...
[frontends]
[frontends.frontend1]
# ...
[frontends.frontend2]
# ...
[frontends.frontend3]
# ...
# HTTPS certificate
[[tls]]
# ...
[[tls]]
# ...
```
!!! note
Adding certificates directly to the entry point is still supported, but certificates declared this way cannot be managed dynamically.
It is recommended to use the file provider to declare certificates.
### Rules in a Separate File
Put your rules in a separate file, for example `rules.toml`:
```toml
# traefik.toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
# ...
[entryPoints.https]
# ...
[file]
filename = "rules.toml"
```
```toml
# rules.toml
[backends]
[backends.backend1]
# ...
[backends.backend2]
# ...
[frontends]
[frontends.frontend1]
# ...
[frontends.frontend2]
# ...
[frontends.frontend3]
# ...
# HTTPS certificate
[[tls]]
# ...
[[tls]]
# ...
```
### Multiple `.toml` Files
You could have multiple `.toml` files in a directory (and recursively in its sub-directories):
```toml
[file]
directory = "/path/to/config/"
```
If you want Træfik to watch file changes automatically, just add:
```toml
[file]
watch = true
```
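Both options can be combined; a minimal sketch (the directory path is only an example):
```toml
[file]
directory = "/path/to/config/"
watch = true
```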

View File

@@ -0,0 +1,180 @@
# Kubernetes Ingress Backend
Træfik can be configured to use Kubernetes Ingress as a backend configuration.
See also [Kubernetes user guide](/user-guide/kubernetes).
## Configuration
```toml
################################################################
# Kubernetes Ingress configuration backend
################################################################
# Enable Kubernetes Ingress configuration backend.
[kubernetes]
# Kubernetes server endpoint.
#
# Optional for in-cluster configuration, required otherwise.
# Default: empty
#
# endpoint = "http://localhost:8080"
# Bearer token used for the Kubernetes client configuration.
#
# Optional
# Default: empty
#
# token = "my token"
# Path to the certificate authority file.
# Used for the Kubernetes client configuration.
#
# Optional
# Default: empty
#
# certAuthFilePath = "/my/ca.crt"
# Array of namespaces to watch.
#
# Optional
# Default: all namespaces (empty array).
#
# namespaces = ["default", "production"]
# Ingress label selector to filter Ingress objects that should be processed.
#
# Optional
# Default: empty (process all Ingresses)
#
# labelselector = "A and not B"
# Disable PassHost Headers.
#
# Optional
# Default: false
#
# disablePassHostHeaders = true
# Enable PassTLSCert Headers.
#
# Optional
# Default: false
#
# enablePassTLSCert = true
# Override default configuration template.
#
# Optional
# Default: <built-in template>
#
# filename = "kubernetes.tmpl"
```
### `endpoint`
The Kubernetes server endpoint as URL.
When deployed into Kubernetes, Traefik will read the environment variables `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` to construct the endpoint.
The access token will be looked up in `/var/run/secrets/kubernetes.io/serviceaccount/token` and the SSL CA certificate in `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
Both are mounted automatically when Traefik is deployed inside Kubernetes.
The endpoint may be specified to override the environment variable values inside a cluster.
When the environment variables are not found, Traefik will try to connect to the Kubernetes API server with an external-cluster client.
In this case, the endpoint is required.
Specifically, it may be set to the URL used by `kubectl proxy` to connect to a Kubernetes cluster using the granted authentication and authorization of the associated kubeconfig.
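As a sketch, assuming `kubectl proxy` is running on its default address `127.0.0.1:8001`, an external-cluster configuration could look like this:
```toml
[kubernetes]
endpoint = "http://127.0.0.1:8001"
# namespaces = ["default"]
```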
### `labelselector`
By default, Traefik processes all Ingress objects in the configured namespaces.
A label selector can be defined to filter on specific Ingress objects only.
See [label-selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors) for details.
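For example, to only process Ingresses carrying a given label (the label name and value are illustrative):
```toml
[kubernetes]
labelselector = "traffic-type=external"
```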
## Annotations
### General annotations
The following general annotations are applicable on the Ingress object:
- `traefik.frontend.rule.type: PathPrefixStrip`
Override the default frontend rule type. Default: `PathPrefix`.
- `traefik.frontend.priority: "3"`
Override the default frontend rule priority.
- `traefik.frontend.redirect.entryPoint: https`:
Enables Redirect to another entryPoint for that frontend (e.g. HTTPS).
- `traefik.frontend.redirect.regex: ^http://localhost/(.*)`:
Redirect to another URL for that frontend. Must be set with `traefik.frontend.redirect.replacement`.
- `traefik.frontend.redirect.replacement: http://mydomain/$1`:
Redirect to another URL for that frontend. Must be set with `traefik.frontend.redirect.regex`.
- `traefik.frontend.entryPoints: http,https`
Override the default frontend endpoints.
- `traefik.frontend.passTLSCert: true`
Override the default frontend PassTLSCert value. Default: `false`.
- `ingress.kubernetes.io/rewrite-target: /users`
Replaces each matched Ingress path with the specified one, and adds the old path to the `X-Replaced-Path` header.
- `ingress.kubernetes.io/whitelist-source-range: "1.2.3.0/24, fe80::/16"`
A comma-separated list of IP ranges permitted for access. All source IPs are permitted if the list is empty or a single range is ill-formatted.
!!! note
Please note that `traefik.frontend.redirect.regex` and `traefik.frontend.redirect.replacement` do not have to be set if `traefik.frontend.redirect.entryPoint` is defined for the redirection (they will not be used in this case).
The following annotations are applicable on the Service object associated with a particular Ingress object:
- `traefik.backend.loadbalancer.method=drr`
Override the default `wrr` load balancer algorithm.
- `traefik.backend.loadbalancer.stickiness=true`
Enable backend sticky sessions.
- `traefik.backend.loadbalancer.stickiness.cookieName=NAME`
Manually set the cookie name for sticky sessions.
- `traefik.backend.loadbalancer.sticky=true`
Enable backend sticky sessions (DEPRECATED).
- `traefik.backend.circuitbreaker: <expression>`
Set the circuit breaker expression for the backend.
### Security annotations
The following security annotations are applicable on the Ingress object:
| Annotation | Description |
| -------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `ingress.kubernetes.io/allowed-hosts:EXPR` | Provides a list of allowed hosts for which requests will be processed. Format: `Host1,Host2` |
| `ingress.kubernetes.io/custom-request-headers:EXPR` | Provides the container with custom request headers that will be appended to each request forwarded to the container. Format: <code>HEADER:value&vert;&vert;HEADER2:value2</code> |
| `ingress.kubernetes.io/custom-response-headers:EXPR` | Appends the headers to each response returned by the container, before forwarding the response to the client. Format: <code>HEADER:value&vert;&vert;HEADER2:value2</code> |
| `ingress.kubernetes.io/proxy-headers:EXPR` | Provides a list of headers in which the proxied hostname may be stored. Format: `HEADER1,HEADER2` |
| `ingress.kubernetes.io/ssl-redirect:true` | Forces the frontend to redirect to SSL if a non-SSL request is sent. |
| `ingress.kubernetes.io/ssl-temporary-redirect:true` | Forces the frontend to redirect to SSL if a non-SSL request is sent, but by sending a 302 instead of a 301. |
| `ingress.kubernetes.io/ssl-host:HOST` | This setting configures the hostname that redirects will be based on. Default is "", which is the same host as the request. |
| `ingress.kubernetes.io/ssl-proxy-headers:EXPR` | Header combinations that would signify a proper SSL Request (Such as `X-Forwarded-For:https`). Format: <code>HEADER:value&vert;&vert;HEADER2:value2</code> |
| `ingress.kubernetes.io/hsts-max-age:315360000` | Sets the max-age of the HSTS header. |
| `ingress.kubernetes.io/hsts-include-subdomains:true` | Adds the IncludeSubdomains section of the STS header. |
| `ingress.kubernetes.io/hsts-preload:true` | Adds the preload flag to the HSTS header. |
| `ingress.kubernetes.io/force-hsts:false` | Adds the STS header to non-SSL requests. |
| `ingress.kubernetes.io/frame-deny:false` | Adds the `X-Frame-Options` header with the value of `DENY`. |
| `ingress.kubernetes.io/custom-frame-options-value:VALUE` | Overrides the `X-Frame-Options` header with the custom value. |
| `ingress.kubernetes.io/content-type-nosniff:true` | Adds the `X-Content-Type-Options` header with the value `nosniff`. |
| `ingress.kubernetes.io/browser-xss-filter:true` | Adds the X-XSS-Protection header with the value `1; mode=block`. |
| `ingress.kubernetes.io/content-security-policy:VALUE` | Adds CSP Header with the custom value. |
| `ingress.kubernetes.io/public-key:VALUE` | Adds the HPKP pinned public key header with the custom value. |
| `ingress.kubernetes.io/referrer-policy:VALUE` | Adds referrer policy header. |
| `ingress.kubernetes.io/is-development:false` | This will cause the `AllowedHosts`, `SSLRedirect`, and `STSSeconds`/`STSIncludeSubdomains` options to be ignored during development.<br>When deploying to production, be sure to set this to false. |
### Authentication
It is possible to add additional authentication annotations to the Ingress object.
The source of the authentication is a Secret object that contains the credentials.
- `ingress.kubernetes.io/auth-type`: `basic`
Contains the authentication type. The only permitted type is `basic`.
- `ingress.kubernetes.io/auth-secret`: `mysecret`
Contains the username and password with access to the paths defined in the Ingress object.
The secret must be created in the same namespace as the Ingress object.
The following limitations hold:
- The realm is not configurable; the only supported (and default) value is `traefik`.
- The Secret must contain a single file only.
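A hedged sketch of creating such a secret with `htpasswd` and `kubectl` (user, password, secret name, and namespace are placeholders):
```shell
# Create an htpasswd file named "auth" containing a single user.
htpasswd -cb auth myuser mypassword

# Store it as a secret in the same namespace as the Ingress object.
kubectl create secret generic mysecret --from-file=auth --namespace=default
```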

View File

@@ -0,0 +1,201 @@
# Marathon Backend
Træfik can be configured to use Marathon as a backend configuration.
See also [Marathon user guide](/user-guide/marathon).
## Configuration
```toml
################################################################
# Mesos/Marathon configuration backend
################################################################
# Enable Marathon configuration backend.
[marathon]
# Marathon server endpoint.
# You can also specify multiple endpoints for Marathon:
# endpoint = "http://10.241.1.71:8080,10.241.1.72:8080,10.241.1.73:8080"
#
# Required
# Default: "http://127.0.0.1:8080"
#
endpoint = "http://127.0.0.1:8080"
# Enable watch Marathon changes.
#
# Optional
# Default: true
#
watch = true
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on an application.
#
# Required
#
domain = "marathon.localhost"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "marathon.tmpl"
# Expose Marathon apps by default in Traefik.
#
# Optional
# Default: true
#
# exposedByDefault = false
# Convert Marathon groups to subdomains.
# Default behavior: /foo/bar/myapp => foo-bar-myapp.{defaultDomain}
# with groupsAsSubDomains enabled: /foo/bar/myapp => myapp.bar.foo.{defaultDomain}
#
# Optional
# Default: false
#
# groupsAsSubDomains = true
# Enable compatibility with marathon-lb labels.
#
# Optional
# Default: false
#
# marathonLBCompatibility = true
# Enable filtering using Marathon constraints.
# If enabled, Traefik will read Marathon constraints, as defined in https://mesosphere.github.io/marathon/docs/constraints.html
# Each individual constraint will be treated as a verbatim compounded tag.
# i.e. "rack_id:CLUSTER:rack-1", with all constraint groups concatenated together using ":"
#
# Optional
# Default: false
#
# filterMarathonConstraints = true
# Enable Marathon basic authentication.
#
# Optional
#
# [marathon.basic]
# httpBasicAuthUser = "foo"
# httpBasicPassword = "bar"
# TLS client configuration. https://golang.org/pkg/crypto/tls/#Config
#
# Optional
#
# [marathon.TLS]
# CA = "/etc/ssl/ca.crt"
# Cert = "/etc/ssl/marathon.cert"
# Key = "/etc/ssl/marathon.key"
# InsecureSkipVerify = true
# DCOSToken for DCOS environment.
# This will override the Authorization header.
#
# Optional
#
# dcosToken = "xxxxxx"
# Override DialerTimeout.
# Amount of time to allow the Marathon provider to wait to open a TCP connection
# to a Marathon master.
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
# values (digits).
# If no units are provided, the value is parsed assuming seconds.
#
# Optional
# Default: "60s"
#
# dialerTimeout = "60s"
# Set the TCP Keep Alive interval for the Marathon HTTP Client.
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
# values (digits).
# If no units are provided, the value is parsed assuming seconds.
#
# Optional
# Default: "10s"
#
# keepAlive = "10s"
# By default, a task's IP address (as returned by the Marathon API) is used as
# backend server if an IP-per-task configuration can be found; otherwise, the
# name of the host running the task is used.
# The latter behavior can be enforced by enabling this switch.
#
# Optional
# Default: false
#
# forceTaskHostname = true
# Applications may define readiness checks which are probed by Marathon during
# deployments periodically and the results exposed via the API.
# Enabling the following parameter causes Traefik to filter out tasks
# whose readiness checks have not succeeded.
# Note that the checks are only valid at deployment times.
# See the Marathon guide for details.
#
# Optional
# Default: false
#
# respectReadinessChecks = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
## Labels: overriding default behaviour
Marathon labels may be used to dynamically change the routing and forwarding behaviour.
They may be specified on one of two levels: Application or service.
### Application Level
The following labels can be defined on Marathon applications. They adjust the behaviour for the entire application.
| Label | Description |
|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `traefik.backend=foo` | assign the application to `foo` backend |
| `traefik.backend.maxconn.amount=10` | set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
| `traefik.backend.maxconn.extractorfunc=client.ip` | set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5` | create a [circuit breaker](/basics/#backends) to be used against the backend |
| `traefik.backend.healthcheck.path=/health` | set the Traefik health check path [default: no health checks] |
| `traefik.backend.healthcheck.interval=5s` | sets a custom health check interval in Go-parseable (`time.ParseDuration`) format [default: 30s] |
| `traefik.portIndex=1` | register port by index in the application's ports array. Useful when the application exposes multiple ports. |
| `traefik.port=80` | register the explicit application port value. Cannot be used alongside `traefik.portIndex`. |
| `traefik.protocol=https` | override the default `http` protocol |
| `traefik.weight=10` | assign this weight to the application |
| `traefik.enable=false` | disable this application in Træfik |
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`. |
### Service Level
For applications that expose multiple ports, specific labels can be used to extract one frontend/backend configuration pair per port. Each such pair is called a _service_. The (freely choosable) name of the service is an integral part of the service label name.
| Label | Description |
|--------------------------------------------------------|------------------------------------------------------------------------------------------------------|
| `traefik.<service-name>.port=443` | create a service binding with frontend/backend using this port. Overrides `traefik.port`. |
| `traefik.<service-name>.portIndex=1` | create a service binding with frontend/backend using this port index. Overrides `traefik.portIndex`. |
| `traefik.<service-name>.protocol=https` | assign `https` protocol. Overrides `traefik.protocol`. |
| `traefik.<service-name>.weight=10` | assign this service weight. Overrides `traefik.weight`. |
| `traefik.<service-name>.frontend.backend=fooBackend`   | assign this service frontend to `fooBackend`. Default is to assign to the service backend.            |
| `traefik.<service-name>.frontend.entryPoints=http` | assign this service entrypoints. Overrides `traefik.frontend.entrypoints`. |
| `traefik.<service-name>.frontend.auth.basic=test:EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
| `traefik.<service-name>.frontend.passHostHeader=true` | Forward client `Host` header to the backend. Overrides `traefik.frontend.passHostHeader`. |
| `traefik.<service-name>.frontend.priority=10` | assign the service frontend priority. Overrides `traefik.frontend.priority`. |
| `traefik.<service-name>.frontend.rule=Path:/foo` | assign the service frontend rule. Overrides `traefik.frontend.rule`. |
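For illustration only (application id, rules, and port indices are made up), a Marathon application exposing two ports could define one service per port through labels such as:
```json
{
  "id": "/store",
  "labels": {
    "traefik.web.portIndex": "0",
    "traefik.web.frontend.rule": "Host:store.example.com",
    "traefik.admin.portIndex": "1",
    "traefik.admin.frontend.rule": "Host:admin.store.example.com"
  }
}
```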

View File

@@ -0,0 +1,93 @@
# Mesos Generic Backend
Træfik can be configured to use Mesos as a backend configuration.
```toml
################################################################
# Mesos configuration backend
################################################################
# Enable Mesos configuration backend.
[mesos]
# Mesos server endpoint.
# You can also specify multiple endpoints for Mesos:
# endpoint = "192.168.35.40:5050,192.168.35.41:5050,192.168.35.42:5050"
# endpoint = "zk://192.168.35.20:2181,192.168.35.21:2181,192.168.35.22:2181/mesos"
#
# Required
# Default: "http://127.0.0.1:5050"
#
endpoint = "http://127.0.0.1:8080"
# Enable watch Mesos changes.
#
# Optional
# Default: true
#
watch = true
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on an application.
#
# Required
#
domain = "mesos.localhost"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "mesos.tmpl"
# Expose Mesos apps by default in Traefik.
#
# Optional
# Default: true
#
# ExposedByDefault = false
# TLS client configuration. https://golang.org/pkg/crypto/tls/#Config
#
# Optional
#
# [mesos.TLS]
# InsecureSkipVerify = true
# Zookeeper timeout (in seconds).
#
# Optional
# Default: 30
#
# ZkDetectionTimeout = 30
# Polling interval (in seconds).
#
# Optional
# Default: 30
#
# RefreshSeconds = 30
# IP sources (e.g. host, docker, mesos, rkt).
#
# Optional
#
# IPSources = "host"
# HTTP Timeout (in seconds).
#
# Optional
# Default: 30
#
# StateTimeoutSecond = "30"
# Convert groups to subdomains.
# Default behavior: /foo/bar/myapp => foo-bar-myapp.{defaultDomain}
# with groupsAsSubDomains enabled: /foo/bar/myapp => myapp.bar.foo.{defaultDomain}
#
# Optional
# Default: false
#
# groupsAsSubDomains = true
```

View File

@@ -0,0 +1,140 @@
# Rancher Backend
Træfik can be configured to use Rancher as a backend configuration.
## Global Configuration
```toml
################################################################
# Rancher configuration backend
################################################################
# Enable Rancher configuration backend.
[rancher]
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on a service.
#
# Required
#
domain = "rancher.localhost"
# Enable watch Rancher changes.
#
# Optional
# Default: true
#
watch = true
# Polling interval (in seconds).
#
# Optional
# Default: 15
#
refreshSeconds = 15
# Expose Rancher services by default in Traefik.
#
# Optional
# Default: true
#
exposedByDefault = false
# Filter services with unhealthy states and inactive states.
#
# Optional
# Default: false
#
enableServiceHealthFilter = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
## Rancher Metadata Service
```toml
# Enable Rancher metadata service configuration backend instead of the API
# configuration backend.
#
# Optional
# Default: false
#
[rancher.metadata]
# Poll the Rancher metadata service for changes every `rancher.RefreshSeconds`.
# NOTE: this is less accurate than the default long-polling technique, which
# provides near-instantaneous updates to Traefik.
#
# Optional
# Default: false
#
intervalPoll = true
# Prefix used for accessing the Rancher metadata service.
#
# Optional
# Default: "/latest"
#
prefix = "/2016-07-29"
```
## Rancher API
```toml
# Enable Rancher API configuration backend.
#
# Optional
# Default: true
#
[rancher.api]
# Endpoint to use when connecting to the Rancher API.
#
# Required
endpoint = "http://rancherserver.example.com/v1"
# AccessKey to use when connecting to the Rancher API.
#
# Required
accessKey = "XXXXXXXXXXXXXXXXXXXX"
# SecretKey to use when connecting to the Rancher API.
#
# Required
secretKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
!!! note
If Traefik needs access to the Rancher API, you need to set the `endpoint`, `accessKey` and `secretKey` parameters.
To enable Traefik to fetch information only about the Environment it is deployed in, you need to create an `Environment API Key`.
This can be found within the API Key advanced options.
Add these labels to the Traefik Docker deployment to auto-generate these values:
```
io.rancher.container.agent.role: environment
io.rancher.container.create_agent: true
```
## Labels: overriding default behaviour
Labels can be used on task containers to override default behaviour:
| Label | Description |
|-----------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
| `traefik.protocol=https` | Override the default `http` protocol |
| `traefik.weight=10` | Assign this weight to the container |
| `traefik.enable=false` | Disable this container in Træfik |
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | Override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`. |
| `traefik.frontend.redirect.entryPoint=https` | Enables Redirect to another entryPoint for that frontend (e.g. HTTPS) |
| `traefik.frontend.redirect.regex: ^http://localhost/(.*)` | Redirect to another URL for that frontend.<br>Must be set with `traefik.frontend.redirect.replacement`. |
| `traefik.frontend.redirect.replacement: http://mydomain/$1` | Redirect to another URL for that frontend.<br>Must be set with `traefik.frontend.redirect.regex`. |
| `traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions (DEPRECATED) |

View File

@@ -0,0 +1,91 @@
# Rest Backend
Træfik can be configured:
- using a RESTful API.
## Configuration
```toml
# Enable rest backend.
[rest]
# Name of the related entry point
#
# Optional
# Default: "traefik"
#
entryPoint = "traefik"
```
## API
| Path | Method | Description |
|------------------------------|--------|-----------------|
| `/api/providers/web` | `PUT` | update provider |
| `/api/providers/rest` | `PUT` | update provider |
!!! warning
For compatibility reasons, when you activate the rest provider, you can use either `web` or `rest` as the `provider` value.
```shell
curl -XPUT -d @file "http://localhost:8080/api/providers/rest"
```
with `@file`
```json
{
"frontends": {
"frontend2": {
"routes": {
"test_2": {
"rule": "Path:/test"
}
},
"backend": "backend1"
},
"frontend1": {
"routes": {
"test_1": {
"rule": "Host:test.localhost"
}
},
"backend": "backend2"
}
},
"backends": {
"backend2": {
"loadBalancer": {
"method": "drr"
},
"servers": {
"server2": {
"weight": 2,
"URL": "http://172.17.0.5:80"
},
"server1": {
"weight": 1,
"url": "http://172.17.0.4:80"
}
}
},
"backend1": {
"loadBalancer": {
"method": "wrr"
},
"circuitBreaker": {
"expression": "NetworkErrorRatio() > 0.5"
},
"servers": {
"server2": {
"weight": 1,
"url": "http://172.17.0.3:80"
},
"server1": {
"weight": 10,
"url": "http://172.17.0.2:80"
}
}
}
}
}
```

View File

@@ -0,0 +1,114 @@
# Service Fabric Backend
Træfik can be configured to use Service Fabric as a backend configuration.
See [this repository for an example deployment package and further documentation.](https://aka.ms/traefikonsf)
## Service Fabric
```toml
################################################################
# Service Fabric provider
################################################################
# Enable Service Fabric configuration backend
[serviceFabric]
# Service Fabric Management Endpoint
#
# Required
#
clusterManagementUrl = "https://localhost:19080"
# Service Fabric Management Endpoint API Version
#
# Required
# Default: "3.0"
#
apiVersion = "3.0"
# Service Fabric Polling Interval (in seconds)
#
# Required
# Default: 10
#
refreshSeconds = 10
# Enable TLS connection.
#
# Optional
#
# [serviceFabric.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/servicefabric.crt"
# key = "/etc/ssl/servicefabric.key"
# insecureskipverify = true
```
## Labels
The provider uses labels to configure how services are exposed through Træfik.
These can be set using Extensions and the Property Manager API.
#### Extensions
Set labels with extensions through the service's `ServiceManifest.xml` file.
Here is an example of an extension setting Træfik labels:
```xml
<StatelessServiceType ServiceTypeName="WebServiceType">
<Extensions>
<Extension Name="Traefik">
<Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
<Label Key="traefik.frontend.rule.example2">PathPrefixStrip: /a/path/to/strip</Label>
<Label Key="traefik.expose">true</Label>
<Label Key="traefik.frontend.passHostHeader">true</Label>
</Labels>
</Extension>
</Extensions>
</StatelessServiceType>
```
#### Property Manager
Set labels with the Property Manager API to overwrite and add labels while your service is running.
Here is an example of adding a frontend rule using the property manager API.
```shell
curl -X PUT \
'http://localhost:19080/Names/GettingStartedApplication2/WebService/$/GetProperty?api-version=6.0&IncludeValues=true' \
-d '{
"PropertyName": "traefik.frontend.rule.default",
"Value": {
"Kind": "String",
"Data": "PathPrefixStrip: /a/path/to/strip"
},
"CustomTypeId": "LabelType"
}'
```
!!! note
This functionality will be released in a future version of the [sfctl](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-application-lifecycle-sfctl) tool.
## Available Labels
Labels, set through extensions or the property manager, can be used on services to override default behaviour.
| Label | Description |
|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend.<br>Must be used in conjunction with the below label to take effect. |
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by.<br>Must be used in conjunction with the above label to take effect. |
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
| `traefik.backend.circuitbreaker.expression=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
| `traefik.backend.weight=10` | Assign this weight to the container |
| `traefik.expose=true` | Expose this service using træfik |
| `traefik.frontend.rule=EXPR` | Override the default frontend rule. Defaults to SF address. |
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
| `traefik.frontend.priority=10` | Override default frontend priority |
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints` |
| `traefik.frontend.auth.basic=EXPR` | Set basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
| `traefik.frontend.whitelistSourceRange:RANGE`              | List of IP ranges which are allowed to access. An unset or empty list allows all source IPs to access.<br>If one of the net specifications is invalid, the whole list is invalid and allows all source IPs to access.    |
| `traefik.backend.group.name` | Group all services with the same name into a single backend in Træfik |
| `traefik.backend.group.weight`                             | Set the weighting of the current service's nodes in the backend group |

View File

@@ -0,0 +1,481 @@
# Web Backend
!!! danger "DEPRECATED"
The web provider is deprecated, please use the [api](/configuration/api.md), the [ping](/configuration/ping.md), the [metrics](/configuration/metrics) and the [rest](/configuration/backends/rest.md) providers.
Træfik can be configured:
- using a RESTful API.
- to use a monitoring system (like Prometheus, DataDog or StatsD, ...).
- to expose a Web Dashboard.
## Configuration
```toml
# Enable web backend.
[web]
# Web administration port.
#
# Required
# Default: ":8080"
#
address = ":8080"
# SSL certificate and key used.
#
# Optional
#
# certFile = "traefik.crt"
# keyFile = "traefik.key"
# Set REST API to read-only mode.
#
# Optional
# Default: false
#
readOnly = true
# Set the root path for webui and API
#
# Deprecated
# Optional
#
# path = "/mypath"
#
```
## Web UI
![Web UI Providers](/img/web.frontend.png)
![Web UI Health](/img/traefik-health.png)
### Authentication
!!! note
The `/ping` path of the API is excluded from authentication (since 1.4).
#### Basic Authentication
Passwords can be encoded in MD5, SHA1 and BCrypt: you can use `htpasswd` to generate them.
Users can be specified directly in the TOML file, or indirectly by referencing an external file;
if both are provided, the two are merged, with external file contents having precedence.
```toml
[web]
# ...
# To enable basic auth on the webui with 2 user/pass: test:test and test2:test2
[web.auth.basic]
users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"]
usersFile = "/path/to/.htpasswd"
# ...
```
#### Digest Authentication
You can use `htdigest` to generate them.
Users can be specified directly in the TOML file, or indirectly by referencing an external file;
if both are provided, the two are merged, with external file contents having precedence.
```toml
[web]
# ...
# To enable digest auth on the webui with 2 user/realm/pass: test:traefik:test and test2:traefik:test2
[web.auth.digest]
users = ["test:traefik:a2688e031edb4be6a3797f3882655c05", "test2:traefik:518845800f9e2bfb1f1f740ec24f074e"]
usersFile = "/path/to/.htdigest"
# ...
```
## Metrics
You can enable Træfik to export internal metrics to different monitoring systems.
### Prometheus
```toml
[web]
# ...
# To enable Traefik to export internal metrics to Prometheus
[web.metrics.prometheus]
# Buckets for latency metrics
#
# Optional
# Default: [0.1, 0.3, 1.2, 5]
buckets=[0.1,0.3,1.2,5.0]
# ...
```
### DataDog
```toml
[web]
# ...
# DataDog metrics exporter type
[web.metrics.datadog]
# DataDog's address.
#
# Required
# Default: "localhost:8125"
#
address = "localhost:8125"
# DataDog push interval
#
# Optional
# Default: "10s"
#
pushinterval = "10s"
# ...
```
### StatsD
```toml
[web]
# ...
# StatsD metrics exporter type
[web.metrics.statsd]
# StatsD's address.
#
# Required
# Default: "localhost:8125"
#
address = "localhost:8125"
# StatsD push interval
#
# Optional
# Default: "10s"
#
pushinterval = "10s"
# ...
```
### InfluxDB
```toml
[web]
# ...
# InfluxDB metrics exporter type
[web.metrics.influxdb]
# InfluxDB's address.
#
# Required
# Default: "localhost:8089"
#
address = "localhost:8089"
# InfluxDB push interval
#
# Optional
# Default: "10s"
#
pushinterval = "10s"
# ...
```
## Statistics
```toml
[web]
# ...
# Enable more detailed statistics.
[web.statistics]
# Number of recent errors logged.
#
# Default: 10
#
recentErrors = 10
# ...
```
## API
| Path | Method | Description |
|-----------------------------------------------------------------|:-------------:|----------------------------------------------------------------------------------------------------|
| `/` | `GET` | Provides a simple HTML frontend of Træfik |
| `/ping` | `GET`, `HEAD` | A simple endpoint to check for Træfik process liveness. Returns a code `200` with the content: `OK` |
| `/health` | `GET` | JSON health metrics |
| `/api` | `GET` | Configuration for all providers |
| `/api/providers` | `GET` | Providers |
| `/api/providers/{provider}` | `GET`, `PUT` | Get or update provider |
| `/api/providers/{provider}/backends` | `GET` | List backends |
| `/api/providers/{provider}/backends/{backend}` | `GET` | Get backend |
| `/api/providers/{provider}/backends/{backend}/servers` | `GET` | List servers in backend |
| `/api/providers/{provider}/backends/{backend}/servers/{server}` | `GET` | Get a server in a backend |
| `/api/providers/{provider}/frontends` | `GET` | List frontends |
| `/api/providers/{provider}/frontends/{frontend}` | `GET` | Get a frontend |
| `/api/providers/{provider}/frontends/{frontend}/routes` | `GET` | List routes in a frontend |
| `/api/providers/{provider}/frontends/{frontend}/routes/{route}` | `GET` | Get a route in a frontend |
| `/metrics` | `GET` | Export internal metrics |
### Example
#### Ping
```shell
curl -sv "http://localhost:8080/ping"
```
```shell
* Trying ::1...
* Connected to localhost (::1) port 8080 (\#0)
> GET /ping HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 25 Aug 2016 01:35:36 GMT
< Content-Length: 2
< Content-Type: text/plain; charset=utf-8
<
* Connection \#0 to host localhost left intact
OK
```
#### Health
```shell
curl -s "http://localhost:8080/health" | jq .
```
```json
{
// Træfik PID
"pid": 2458,
// Træfik server uptime (formatted time)
"uptime": "39m6.885931127s",
// Træfik server uptime in seconds
"uptime_sec": 2346.885931127,
// current server date
"time": "2015-10-07 18:32:24.362238909 +0200 CEST",
// current server date in seconds
"unixtime": 1444235544,
// count HTTP response status code in realtime
"status_code_count": {
"502": 1
},
// count HTTP response status code since Træfik started
"total_status_code_count": {
"200": 7,
"404": 21,
"502": 13
},
// count HTTP response
"count": 1,
// count HTTP response
"total_count": 41,
// sum of all response time (formatted time)
"total_response_time": "35.456865605s",
// sum of all response time in seconds
"total_response_time_sec": 35.456865605,
// average response time (formatted time)
"average_response_time": "864.8016ms",
// average response time in seconds
"average_response_time_sec": 0.8648016000000001,
// request statistics [requires --web.statistics to be set]
// ten most recent requests with 4xx and 5xx status codes
"recent_errors": [
{
// status code
"status_code": 500,
// description of status code
"status": "Internal Server Error",
// request HTTP method
"method": "GET",
// request host name
"host": "localhost",
// request path
"path": "/path",
// RFC 3339 formatted date/time
"time": "2016-10-21T16:59:15.418495872-07:00"
}
]
}
```
#### Provider configurations
```shell
curl -s "http://localhost:8080/api" | jq .
```
```json
{
"file": {
"frontends": {
"frontend2": {
"routes": {
"test_2": {
"rule": "Path:/test"
}
},
"backend": "backend1"
},
"frontend1": {
"routes": {
"test_1": {
"rule": "Host:test.localhost"
}
},
"backend": "backend2"
}
},
"backends": {
"backend2": {
"loadBalancer": {
"method": "drr"
},
"servers": {
"server2": {
"weight": 2,
"URL": "http://172.17.0.5:80"
},
"server1": {
"weight": 1,
"url": "http://172.17.0.4:80"
}
}
},
"backend1": {
"loadBalancer": {
"method": "wrr"
},
"circuitBreaker": {
"expression": "NetworkErrorRatio() > 0.5"
},
"servers": {
"server2": {
"weight": 1,
"url": "http://172.17.0.3:80"
},
"server1": {
"weight": 10,
"url": "http://172.17.0.2:80"
}
}
}
}
}
}
```
### Deprecation compatibility
#### Path
As the web provider is deprecated, you can handle the `Path` option like this:
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.dashboard]
address = ":8080"
[entryPoints.api]
address = ":8081"
# Activate API and Dashboard
[api]
entryPoint = "api"
[file]
[backends]
[backends.backend1]
[backends.backend1.servers.server1]
url = "http://127.0.0.1:8081"
[frontends]
[frontends.frontend1]
entryPoints = ["dashboard"]
backend = "backend1"
[frontends.frontend1.routes.test_1]
rule = "PathPrefixStrip:/yourprefix;PathPrefix:/yourprefix"
```
#### Address
As the web provider is deprecated, you can handle the `Address` option like this:
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.ping]
address = ":8082"
[entryPoints.api]
address = ":8083"
[ping]
entryPoint = "ping"
[api]
entryPoint = "api"
```
In the above example, you would access a regular path, administration panel, and health-check as follows:
* Regular path: `http://hostname:80/foo`
* Admin Panel: `http://hostname:8083/`
* Ping URL: `http://hostname:8082/ping`
In the above example, it is _very_ important to create a named dedicated entry point, and do **not** include it in `defaultEntryPoints`.
Otherwise, you are likely to expose _all_ services via that entry point.
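Assuming Træfik runs locally with the entry points above, the split can be verified quickly:
```shell
curl -s "http://localhost:8082/ping"
# Expected output: OK
```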
#### Authentication
As the web provider is deprecated, you can handle the `auth` option like this:
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.api]
address=":8080"
[entryPoints.api.auth]
[entryPoints.api.auth.basic]
users = [
"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/",
"test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
]
[api]
entryPoint = "api"
```
For more information, see [entry points](/configuration/entrypoints/) .

View File

@@ -0,0 +1,61 @@
# Zookeeper Backend
Træfik can be configured to use Zookeeper as a backend configuration.
```toml
################################################################
# Zookeeper configuration backend
################################################################
# Enable Zookeeper configuration backend.
[zookeeper]
# Zookeeper server endpoint.
#
# Required
# Default: "127.0.0.1:2181"
#
endpoint = "127.0.0.1:2181"
# Enable watch Zookeeper changes.
#
# Optional
# Default: true
#
watch = true
# Prefix used for KV store.
#
# Optional
# Default: "traefik"
#
prefix = "traefik"
# Override default configuration template.
# For advanced users :)
#
# Optional
#
# filename = "zookeeper.tmpl"
# Use Zookeeper user/pass authentication.
#
# Optional
#
# username = foo
# password = bar
# Enable Zookeeper TLS connection.
#
# Optional
#
# [zookeeper.tls]
# ca = "/etc/ssl/ca.crt"
# cert = "/etc/ssl/zookeeper.crt"
# key = "/etc/ssl/zookeeper.key"
# insecureskipverify = true
```
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on Traefik KV structure.

View File

@@ -0,0 +1,528 @@
# Global Configuration
## Main Section
```toml
# DEPRECATED - for general usage instruction see [lifeCycle.graceTimeOut].
#
# If both the deprecated option and the new one are given, the deprecated one
# takes precedence.
# A value of zero is equivalent to omitting the parameter, causing
# [lifeCycle.graceTimeOut] to be effective. Pass zero to the new option in
# order to disable the grace period.
#
# Optional
# Default: "0s"
#
# graceTimeOut = "10s"
# Enable debug mode.
# This will install HTTP handlers to expose Go expvars under /debug/vars and
# pprof profiling data under /debug/pprof.
# Additionally, the log level will be set to DEBUG.
#
# Optional
# Default: false
#
# debug = true
# Periodically check if a new version has been released.
#
# Optional
# Default: true
#
# checkNewVersion = false
# Backends throttle duration.
#
# Optional
# Default: "2s"
#
# ProvidersThrottleDuration = "2s"
# Controls the maximum idle (keep-alive) connections to keep per-host.
#
# Optional
# Default: 200
#
# MaxIdleConnsPerHost = 200
# If set to true invalid SSL certificates are accepted for backends.
# This disables detection of man-in-the-middle attacks so should only be used on secure backend networks.
#
# Optional
# Default: false
#
# InsecureSkipVerify = true
# Register Certificates in the RootCA.
#
# Optional
# Default: []
#
# RootCAs = [ "/mycert.cert" ]
# Entrypoints to be used by frontends that do not specify any entrypoint.
# Each frontend can specify its own entrypoints.
#
# Optional
# Default: ["http"]
#
# defaultEntryPoints = ["http", "https"]
```
- `graceTimeOut`: Duration to give active requests a chance to finish before Traefik stops.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
**Note:** in this time frame no new requests are accepted.
- `ProvidersThrottleDuration`: Backends throttle duration: minimum duration in seconds between 2 events from providers before applying a new configuration.
It avoids unnecessary reloads if multiple events are sent in a short amount of time.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
- `MaxIdleConnsPerHost`: Controls the maximum idle (keep-alive) connections to keep per-host.
If zero, `DefaultMaxIdleConnsPerHost` from the Go standard library net/http module is used.
If you encounter 'too many open files' errors, you can either increase this value or change the `ulimit`.
- `InsecureSkipVerify` : If set to true invalid SSL certificates are accepted for backends.
**Note:** This disables detection of man-in-the-middle attacks so should only be used on secure backend networks.
- `RootCAs`: Register certificates in the RootCA. These certificates will be used for backend calls.
**Note**: you can use a file path or the certificate content directly.
- `defaultEntryPoints`: Entrypoints to be used by frontends that do not specify any entrypoint.
Each frontend can specify its own entrypoints.
## Constraints
In a micro-service architecture, with a central service discovery, setting constraints limits Træfik scope to a smaller number of routes.
Træfik filters services according to service attributes/tags set in your configuration backends.
Supported filters:
- `tag`
### Simple
```toml
# Simple matching constraint
constraints = ["tag==api"]
# Simple mismatching constraint
constraints = ["tag!=api"]
# Globbing
constraints = ["tag==us-*"]
```
### Multiple
```toml
# Multiple constraints
# - "tag==" must match with at least one tag
# - "tag!=" must match with none of tags
constraints = ["tag!=us-*", "tag!=asia-*"]
```
### Backend-specific
Supported backends:
- Docker
- Consul K/V
- BoltDB
- Zookeeper
- Etcd
- Consul Catalog
- Rancher
- Marathon
- Kubernetes (using a provider-specific mechanism based on label selectors)
```toml
# Backend-specific constraint
[consulCatalog]
# ...
constraints = ["tag==api"]
# Backend-specific constraint
[marathon]
# ...
constraints = ["tag==api", "tag!=v*-beta"]
```
## Logs Definition
### Traefik logs
```toml
# Traefik logs file
# If not defined, logs to stdout
#
# DEPRECATED - see [traefikLog] lower down
# In case both traefikLogsFile and traefikLog.filePath are specified, the latter will take precedence.
# Optional
#
traefikLogsFile = "log/traefik.log"
# Log level
#
# Optional
# Default: "ERROR"
#
# Accepted values, in order of severity: "DEBUG", "INFO", "WARN", "ERROR", "FATAL", "PANIC"
# Messages at and above the selected level will be logged.
#
logLevel = "ERROR"
```
## Traefik Logs
By default the Traefik log is written to stdout in text format.
To write the logs into a logfile specify the `filePath`.
```toml
[traefikLog]
filePath = "/path/to/traefik.log"
```
To write JSON format logs, specify `json` as the format:
```toml
[traefikLog]
filePath = "/path/to/traefik.log"
format = "json"
```
### Access Logs
Access logs are written when `[accessLog]` is defined.
By default it will write to stdout and produce logs in the textual Common Log Format (CLF), extended with additional fields.
To enable access logs using the default settings just add the `[accessLog]` entry.
```toml
[accessLog]
```
To write the logs into a logfile specify the `filePath`.
```toml
[accessLog]
filePath = "/path/to/access.log"
```
To write JSON format logs, specify `json` as the format:
```toml
[accessLog]
filePath = "/path/to/access.log"
format = "json"
```
Deprecated way (before 1.4):
```toml
# Access logs file
#
# DEPRECATED - see [accessLog] above
#
accessLogsFile = "log/access.log"
```
### Log Rotation
Traefik will close and reopen its log files, assuming they're configured, on receipt of a USR1 signal.
This allows the logs to be rotated and processed by an external program, such as `logrotate`.
!!! note
This does not work on Windows due to the lack of USR signals.
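For example, on a Linux host the signal can be sent manually or from a `logrotate` postrotate script (using `pidof` assumes a single Træfik process):
```shell
kill -USR1 "$(pidof traefik)"
```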
## Custom Error pages
Custom error pages can be returned, in lieu of the default, according to frontend-configured ranges of HTTP Status codes.
In the example below, if a 503 status is returned from the frontend "website", the custom error page at http://2.3.4.5/503.html is returned with the actual status code set in the HTTP header.
!!! note
The `503.html` page itself is not hosted on Traefik, but some other infrastructure.
```toml
[frontends]
[frontends.website]
backend = "website"
[frontends.website.errors]
[frontends.website.errors.network]
status = ["500-599"]
backend = "error"
query = "/{status}.html"
[frontends.website.routes.website]
rule = "Host: website.mydomain.com"
[backends]
[backends.website]
[backends.website.servers.website]
url = "https://1.2.3.4"
[backends.error]
[backends.error.servers.error]
url = "http://2.3.4.5"
```
In the above example, the error page rendered was based on the status code.
Instead, the query parameter can also be set to some generic error page like so: `query = "/500s.html"`
Now the `500s.html` error page is returned for the configured code range.
The configured status code ranges are inclusive; that is, in the above example, the `500s.html` page will be returned for status codes `500` through, and including, `599`.
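A hedged variant of the earlier example using one generic page for the whole range:
```toml
[frontends.website.errors.network]
status = ["500-599"]
backend = "error"
query = "/500s.html"
```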
Custom error pages are easiest to implement using the file provider.
For dynamic providers, the corresponding template file needs to be customized accordingly and referenced in the Traefik configuration.
## Rate limiting
Rate limiting can be configured per frontend.
Multiple sets of rates can be added to each frontend, but the time periods must be unique.
```toml
[frontends]
[frontends.frontend1]
# ...
[frontends.frontend1.ratelimit]
extractorfunc = "client.ip"
[frontends.frontend1.ratelimit.rateset.rateset1]
period = "10s"
average = 100
burst = 200
[frontends.frontend1.ratelimit.rateset.rateset2]
period = "3s"
average = 5
burst = 10
```
In the above example, frontend1 is configured to limit requests by the client's ip address.
An average of 5 requests every 3 seconds is allowed and an average of 100 requests every 10 seconds.
These can "burst" up to 10 and 200 in each period respectively.
## Retry Configuration
```toml
# Enable retry sending request if network error
[retry]
# Number of attempts
#
# Optional
# Default: (number of servers in backend) - 1
#
# attempts = 3
```
## Health Check Configuration
```toml
# Enable custom health check options.
[healthcheck]
# Set the default health check interval.
#
# Optional
# Default: "30s"
#
# interval = "30s"
```
- `interval` set the default health check interval.
Will only be effective if health check paths are defined.
Given provider-specific support, the value may be overridden on a per-backend basis.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
## Life Cycle
Controls the behavior of Traefik during the shutdown phase.
```toml
[lifeCycle]
# Duration to keep accepting requests prior to initiating the graceful
# termination period (as defined by the `graceTimeOut` option). This
# option is meant to give downstream load-balancers sufficient time to
# take Traefik out of rotation.
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
# If no units are provided, the value is parsed assuming seconds.
# The zero duration disables the request accepting grace period, i.e.,
# Traefik will immediately proceed to the grace period.
#
# Optional
# Default: 0
#
# requestAcceptGraceTimeout = "10s"
# Duration to give active requests a chance to finish before Traefik stops.
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
# If no units are provided, the value is parsed assuming seconds.
# Note: in this time frame no new requests are accepted.
#
# Optional
# Default: "10s"
#
# graceTimeOut = "10s"
```
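For example, to keep accepting requests for 5 seconds before entering a 30-second grace period (values are illustrative):
```toml
[lifeCycle]
requestAcceptGraceTimeout = "5s"
graceTimeOut = "30s"
```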
## Timeouts
### Responding Timeouts
`respondingTimeouts` are timeouts for incoming requests to the Traefik instance.
```toml
[respondingTimeouts]
# readTimeout is the maximum duration for reading the entire request, including the body.
#
# Optional
# Default: "0s"
#
# readTimeout = "5s"
# writeTimeout is the maximum duration before timing out writes of the response.
#
# Optional
# Default: "0s"
#
# writeTimeout = "5s"
# idleTimeout is the maximum duration an idle (keep-alive) connection will remain idle before closing itself.
#
# Optional
# Default: "180s"
#
# idleTimeout = "360s"
```
- `readTimeout` is the maximum duration for reading the entire request, including the body.
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
- `writeTimeout` is the maximum duration before timing out writes of the response.
It covers the time from the end of the request header read to the end of the response write.
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
- `idleTimeout` is the maximum duration an idle (keep-alive) connection will remain idle before closing itself.
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
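As a sketch, enabling a read timeout while shortening the idle timeout could look like this (values are illustrative):
```toml
[respondingTimeouts]
readTimeout = "10s"
idleTimeout = "120s"
```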
### Forwarding Timeouts
`forwardingTimeouts` are timeouts for requests forwarded to the backend servers.
```toml
[forwardingTimeouts]
# dialTimeout is the amount of time to wait until a connection to a backend server can be established.
#
# Optional
# Default: "30s"
#
# dialTimeout = "30s"
# responseHeaderTimeout is the amount of time to wait for a server's response headers after fully writing the request (including its body, if any).
#
# Optional
# Default: "0s"
#
# responseHeaderTimeout = "0s"
```
- `dialTimeout` is the amount of time to wait until a connection to a backend server can be established.
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
- `responseHeaderTimeout` is the amount of time to wait for a server's response headers after fully writing the request (including its body, if any).
If zero, no timeout exists.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
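For example, to fail fast on unreachable backends and bound slow upstream responses (values are illustrative):
```toml
[forwardingTimeouts]
dialTimeout = "10s"
responseHeaderTimeout = "30s"
```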
### Idle Timeout (deprecated)
Use [respondingTimeouts](/configuration/commons/#responding-timeouts) instead of `IdleTimeout`.
In the case both settings are configured, the deprecated option will be overwritten.
`IdleTimeout` is the maximum amount of time an idle (keep-alive) connection will remain idle before closing itself.
This is set to enforce closing of stale client connections.
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
If no units are provided, the value is parsed assuming seconds.
```toml
# IdleTimeout
#
# DEPRECATED - see [respondingTimeouts] section.
#
# Optional
# Default: "180s"
#
IdleTimeout = "360s"
```
## Override Default Configuration Template
!!! warning
For advanced users only.
Supported by all backends except: File backend, Web backend and DynamoDB backend.
```toml
[backend_name]
# Override default configuration template. For advanced users :)
#
# Optional
# Default: ""
#
filename = "custom_config_template.tpml"
# Enable debug logging of generated configuration template.
#
# Optional
# Default: false
#
debugLogGeneratedTemplate = true
```
Example:
```toml
[marathon]
filename = "my_custom_config_template.tpml"
```
The template files can be written using functions provided by:
- [go template](https://golang.org/pkg/text/template/)
- [sprig library](https://masterminds.github.io/sprig/)
Example:
```tmpl
[backends]
[backends.backend1]
url = "http://firstserver"
[backends.backend2]
url = "http://secondserver"
{{$frontends := dict "frontend1" "backend1" "frontend2" "backend2"}}
[frontends]
{{range $frontend, $backend := $frontends}}
[frontends.{{$frontend}}]
backend = "{{$backend}}"
{{end}}
```

@@ -0,0 +1,402 @@
# Entry Points Definition
## Reference
### TOML
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
whitelistSourceRange = ["10.42.0.0/16", "152.89.1.33/32", "afed:be44::/16"]
compress = true
[entryPoints.http.tls]
minVersion = "VersionTLS12"
cipherSuites = [
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_RSA_WITH_AES_256_GCM_SHA384"
]
[[entryPoints.http.tls.certificates]]
certFile = "path/to/my.cert"
keyFile = "path/to/my.key"
[[entryPoints.http.tls.certificates]]
certFile = "path/to/other.cert"
keyFile = "path/to/other.key"
# ...
[entryPoints.http.tls.clientCA]
files = ["path/to/ca1.crt", "path/to/ca2.crt"]
optional = false
[entryPoints.http.redirect]
entryPoint = "https"
regex = "^http://localhost/(.*)"
replacement = "http://mydomain/$1"
[entryPoints.http.auth]
headerField = "X-WebAuth-User"
[entryPoints.http.auth.basic]
users = [
"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/",
"test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
]
usersFile = "/path/to/.htpasswd"
[entryPoints.http.auth.digest]
users = [
"test:traefik:a2688e031edb4be6a3797f3882655c05",
"test2:traefik:518845800f9e2bfb1f1f740ec24f074e",
]
usersFile = "/path/to/.htdigest"
[entryPoints.http.auth.forward]
address = "https://authserver.com/auth"
trustForwardHeader = true
[entryPoints.http.auth.forward.tls]
ca = [ "path/to/local.crt"]
caOptional = true
cert = "path/to/foo.cert"
key = "path/to/foo.key"
insecureSkipVerify = true
[entryPoints.http.proxyProtocol]
insecure = true
trustedIPs = ["10.10.10.1", "10.10.10.2"]
[entryPoints.http.forwardedHeaders]
trustedIPs = ["10.10.10.1", "10.10.10.2"]
[entryPoints.https]
# ...
```
### CLI
For more information about the CLI, see the documentation about [Traefik command](/basics/#traefik).
```shell
--entryPoints='Name:http Address::80'
--entryPoints='Name:https Address::443 TLS'
```
!!! note
Whitespace is used as option separator and `,` is used as value separator for the list.
The names of the options are case-insensitive.
In a compose file, the entrypoint syntax is different:
```yaml
traefik:
image: traefik
command:
- --defaultentrypoints=powpow
- "--entryPoints=Name:powpow Address::42 Compress:true"
```
or
```yaml
traefik:
image: traefik
command: --defaultentrypoints=powpow --entryPoints='Name:powpow Address::42 Compress:true'
```
#### All available options:
```ini
Name:foo
Address::80
TLS:goo,gii
TLS
CA:car
CA.Optional:true
Redirect.EntryPoint:https
Redirect.Regex:http://localhost/(.*)
Redirect.Replacement:http://mydomain/$1
Compress:true
WhiteListSourceRange:10.42.0.0/16,152.89.1.33/32,afed:be44::/16
ProxyProtocol.TrustedIPs:192.168.0.1
ProxyProtocol.Insecure:true
ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24
```
## Basic
```toml
# Entrypoints definition
#
# Default:
# [entryPoints]
# [entryPoints.http]
# address = ":80"
#
[entryPoints]
[entryPoints.http]
address = ":80"
```
## Redirect HTTP to HTTPS
To redirect an http entrypoint to an https entrypoint (with SNI support).
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.org.cert"
keyFile = "integration/fixtures/https/snitest.org.key"
```
!!! note
Please note that `regex` and `replacement` do not have to be set in the `redirect` structure if an entrypoint is defined for the redirection (they will not be used in this case).
## Rewriting URL
To redirect an entrypoint rewriting the URL.
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
regex = "^http://localhost/(.*)"
replacement = "http://mydomain/$1"
```
!!! note
Please note that `regex` and `replacement` do not have to be set in the `redirect` structure if an `entrypoint` is defined for the redirection (they will not be used in this case).
Care should be taken when defining replacement expand variables: `$1x` is equivalent to `${1x}`, not `${1}x` (see [Regexp.Expand](https://golang.org/pkg/regexp/#Regexp.Expand)), so use `${1}` syntax.
Regular expressions and replacements can be tested using online tools such as the [Go Playground](https://play.golang.org/p/mWU9p-wk2ru) or [Regex101](https://regex101.com/r/58sIgx/2).
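As a sketch, the example above written with the explicit `${1}` form (host and replacement are illustrative):
```toml
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    regex = "^http://localhost/(.*)"
    replacement = "http://mydomain/${1}"
```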
## TLS
### Static Certificates
Define an entrypoint with SNI support.
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
```
!!! note
If an empty TLS configuration is provided, default self-signed certificates are generated.
### Dynamic Certificates
If you need to add or remove TLS certificates while Traefik is running, dynamic TLS certificates are supported using the [file provider](/configuration/backends/file).
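As a sketch, such a dynamic certificate entry in the file referenced by the file provider can take roughly the following shape (paths are illustrative; see the file provider documentation for the authoritative syntax):
```toml
# Dynamic configuration loaded by the file provider (not the static traefik.toml)
[[tls]]
  entryPoints = ["https"]
  [tls.certificate]
    certFile = "path/to/domain.cert"
    keyFile = "path/to/domain.key"
```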
## TLS Mutual Authentication
TLS Mutual Authentication can be `optional` or not.
If it's `optional`, Træfik will authorize connections with certificates not signed by a specified Certificate Authority (CA).
Otherwise, Træfik will only accept clients that present a certificate signed by a specified Certificate Authority (CA).
`ClientCAFiles` can be configured with multiple CAs in the same file, or with multiple files each containing one or several CAs.
The CAs have to be in PEM format.
By default, `ClientCAFiles` is not optional: all clients will be required to present a valid certificate.
The requirement applies to all server certificates in the entrypoint.
In the example below, both `snitest.com` and `snitest.org` will require client certificates.
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[entryPoints.https.tls.ClientCA]
files = ["tests/clientca1.crt", "tests/clientca2.crt"]
optional = false
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.org.cert"
keyFile = "integration/fixtures/https/snitest.org.key"
```
!!! note
The deprecated argument `ClientCAFiles` allows adding Client CA files which are mandatory.
If this parameter exists, the new ones are not checked.
## Authentication
### Basic Authentication
Passwords can be encoded in MD5, SHA1 and BCrypt: you can use `htpasswd` to generate them.
Users can be specified directly in the TOML file, or indirectly by referencing an external file;
if both are provided, the two are merged, with external file contents having precedence.
```toml
# To enable basic auth on an entrypoint with 2 user/pass: test:test and test2:test2
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.auth.basic]
users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"]
usersFile = "/path/to/.htpasswd"
```
### Digest Authentication
Passwords can be generated with `htdigest`.
Users can be specified directly in the TOML file, or indirectly by referencing an external file;
if both are provided, the two are merged, with external file contents having precedence.
```toml
# To enable digest auth on an entrypoint with 2 user/realm/pass: test:traefik:test and test2:traefik:test2
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.auth.digest]
users = ["test:traefik:a2688e031edb4be6a3797f3882655c05", "test2:traefik:518845800f9e2bfb1f1f740ec24f074e"]
usersFile = "/path/to/.htdigest"
```
### Forward Authentication
This configuration will first forward the request to `http://authserver.com/auth`.
If the response code is 2XX, access is granted and the original request is performed.
Otherwise, the response from the authentication server is returned.
```toml
[entryPoints]
[entryPoints.http]
# ...
# To enable forward auth on an entrypoint
[entryPoints.http.auth.forward]
address = "https://authserver.com/auth"
# Trust existing X-Forwarded-* headers.
# Useful with another reverse proxy in front of Traefik.
#
# Optional
# Default: false
#
trustForwardHeader = true
# Enable forward auth TLS connection.
#
# Optional
#
[entryPoints.http.auth.forward.tls]
cert = "authserver.crt"
key = "authserver.key"
```
## Specify Minimum TLS Version
To specify an https entry point with a minimum TLS version and a specific set of cipher suites (from [crypto/tls](https://godoc.org/crypto/tls#pkg-constants)).
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
minVersion = "VersionTLS12"
cipherSuites = [
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_RSA_WITH_AES_256_GCM_SHA384"
]
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.org.cert"
keyFile = "integration/fixtures/https/snitest.org.key"
```
## Compression
To enable compression support using gzip format.
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
```
Responses are compressed when:
* The response body is larger than `512` bytes
* And the `Accept-Encoding` request header contains `gzip`
* And the response is not already compressed, i.e. the `Content-Encoding` response header is not already set.
## Whitelisting
To enable IP whitelisting at the entrypoint level.
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
whiteListSourceRange = ["127.0.0.1/32", "192.168.1.7"]
```
## ProxyProtocol
To enable [ProxyProtocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) support.
Only IPs in `trustedIPs` will lead to remote client address replacement: you should declare your load-balancer IP or CIDR range here (in a testing environment, you can trust everyone using `insecure = true`).
!!! danger
When placing Træfik behind another load-balancer, be sure to configure Proxy Protocol carefully on both sides.
Otherwise, forged requests could introduce a security risk in your system.
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
# Enable ProxyProtocol
[entryPoints.http.proxyProtocol]
# List of trusted IPs
#
# Required
# Default: []
#
trustedIPs = ["127.0.0.1/32", "192.168.1.7"]
# Insecure mode FOR TESTING ENVIRONMENT ONLY
#
# Optional
# Default: false
#
# insecure = true
```
## Forwarded Header
Only IPs in `trustedIPs` will be authorized to trust the client forwarded headers (`X-Forwarded-*`).
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
# Enable Forwarded Headers
[entryPoints.http.forwardedHeaders]
# List of trusted IPs
#
# Required
# Default: []
#
trustedIPs = ["127.0.0.1/32", "192.168.1.7"]
```

@@ -0,0 +1,126 @@
# Metrics Definition
## Prometheus
```toml
# Metrics definition
[metrics]
#...
# To enable Traefik to export internal metrics to Prometheus
[metrics.prometheus]
# Name of the related entry point
#
# Optional
# Default: "traefik"
#
entryPoint = "traefik"
# Buckets for latency metrics
#
# Optional
# Default: [0.1, 0.3, 1.2, 5]
#
buckets = [0.1,0.3,1.2,5.0]
# ...
```
## DataDog
```toml
# Metrics definition
[metrics]
#...
# DataDog metrics exporter type
[metrics.datadog]
# DataDog's address.
#
# Required
# Default: "localhost:8125"
#
address = "localhost:8125"
# DataDog push interval
#
# Optional
# Default: "10s"
#
pushInterval = "10s"
# ...
```
## StatsD
```toml
# Metrics definition
[metrics]
#...
# StatsD metrics exporter type
[metrics.statsd]
# StatsD's address.
#
# Required
# Default: "localhost:8125"
#
address = "localhost:8125"
# StatsD push interval
#
# Optional
# Default: "10s"
#
pushInterval = "10s"
# ...
```
## InfluxDB
```toml
[metrics]
# ...
# InfluxDB metrics exporter type
[metrics.influxdb]
# InfluxDB's address.
#
# Required
# Default: "localhost:8089"
#
address = "localhost:8089"
# InfluxDB push interval
#
# Optional
# Default: "10s"
#
pushinterval = "10s"
# ...
```
## Statistics
```toml
# Metrics definition
[metrics]
# ...
# Enable more detailed statistics.
[metrics.statistics]
# Number of recent errors logged.
#
# Default: 10
#
recentErrors = 10
# ...
```

@@ -0,0 +1,44 @@
# Ping Definition
## Configuration
```toml
# Ping definition
[ping]
# Name of the related entry point
#
# Optional
# Default: "traefik"
#
entryPoint = "traefik"
```
| Path | Method | Description |
|---------|---------------|----------------------------------------------------------------------------------------------------|
| `/ping` | `GET`, `HEAD` | A simple endpoint to check for Træfik process liveness. Return a code `200` with the content: `OK` |
!!! warning
Even if you have authentication configured on the entry point, the `/ping` path of the API is excluded from authentication.
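As a sketch, the ping handler can also be bound to a dedicated entry point so it does not share the API port (the entry point name and address are illustrative):
```toml
[entryPoints]
  [entryPoints.status]
  address = ":8082"

[ping]
entryPoint = "status"
```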
## Example
```shell
curl -sv "http://localhost:8080/ping"
```
```shell
* Trying ::1...
* Connected to localhost (::1) port 8080 (#0)
> GET /ping HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 25 Aug 2016 01:35:36 GMT
< Content-Length: 2
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host localhost left intact
OK
```

@@ -1,61 +0,0 @@
a {
color: #37ABC8;
text-decoration: none;
}
a:hover, a:focus {
color: #25606F;
text-decoration: underline;
}
h1, h2, h3, H4 {
color: #37ABC8;
}
.navbar-default {
background-color: #37ABC8;
border-color: #25606F;
}
.navbar-default .navbar-nav>.active>a, .navbar-default .navbar-nav>.active>a:hover, .navbar-default .navbar-nav>.active>a:focus {
color: #fff;
background-color: #25606F;
}
.navbar-default .navbar-nav>li>a:hover, .navbar-default .navbar-nav>li>a:focus {
color: #fff;
background-color: #25606F;
}
.navbar-default .navbar-toggle {
border-color: #25606F;
}
.navbar-default .navbar-toggle:hover, .navbar-default .navbar-toggle:focus .navbar-toggle {
background-color: #25606F;
}
.navbar-default .navbar-collapse, .navbar-default .navbar-form {
border-color: #25606F;
}
blockquote p {
font-size: 14px;
}
.navbar-default .navbar-nav>.open>a, .navbar-default .navbar-nav>.open>a:hover, .navbar-default .navbar-nav>.open>a:focus {
color: #fff;
background-color: #25606F;
}
.dropdown-menu>li>a:hover, .dropdown-menu>li>a:focus {
color: #fff;
text-decoration: none;
background-color: #25606F;
}
.dropdown-menu>.active>a, .dropdown-menu>.active>a:hover, .dropdown-menu>.active>a:focus {
color: #fff;
text-decoration: none;
background-color: #25606F;
outline: 0;
}

docs/img/grpc.svg Normal file (new image, 186 KiB; diff suppressed)
@@ -2,16 +2,16 @@
<img src="img/traefik.logo.png" alt="Træfik" title="Træfik" />
</p>
[![Build Status](https://travis-ci.org/containous/traefik.svg?branch=master)](https://travis-ci.org/containous/traefik)
[![Build Status SemaphoreCI](https://semaphoreci.com/api/v1/containous/traefik/branches/master/shields_badge.svg)](https://semaphoreci.com/containous/traefik)
[![Docs](https://img.shields.io/badge/docs-current-brightgreen.svg)](https://docs.traefik.io)
[![Go Report Card](https://goreportcard.com/badge/kubernetes/helm)](http://goreportcard.com/report/containous/traefik)
[![Go Report Card](https://goreportcard.com/badge/github.com/containous/traefik)](https://goreportcard.com/report/github.com/containous/traefik)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/containous/traefik/blob/master/LICENSE.md)
[![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
[![Twitter](https://img.shields.io/twitter/follow/traefikproxy.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefikproxy)
Træfik (pronounced like [traffic](https://speak-ipa.bearbin.net/speak.cgi?speak=%CB%88tr%C3%A6f%C9%AAk)) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Mesos/Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), [Amazon ECS](https://aws.amazon.com/ecs/), [Amazon DynamoDB](https://aws.amazon.com/dynamodb/), Rest API, file...) to manage its configuration automatically and dynamically.
Træfik (pronounced like _traffic_) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), and a lot more) to manage its configuration automatically and dynamically.
## Overview
@@ -22,7 +22,7 @@ If you want your users to access some of your microservices from the Internet, y
- path `domain.com/web` will point the microservice `web` in your private network
- domain `backoffice.domain.com` will point the microservices `backoffice` in your private network, load-balancing between your multiple instances
But a microservices architecture is dynamic... Services are added, removed, killed or upgraded often, eventually several times a day.
Microservices are often deployed in dynamic environments where services are added, removed, killed, upgraded or scaled many times a day.
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.
@@ -35,20 +35,50 @@ Routes to your services will be created instantly.
Run it and forget it!
## Features
- [It's fast](/benchmarks)
- No dependency hell, single binary made with go
- [Tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image
- Rest API
- Hot-reloading of configuration. No need to restart the process
- Circuit breakers, retry
- Round Robin, rebalancer load-balancers
- Metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB)
- Clean AngularJS Web UI
- Websocket, HTTP/2, GRPC ready
- Access Logs (JSON, CLF)
- [Let's Encrypt](https://letsencrypt.org) support (Automatic HTTPS with renewal)
- High Availability with cluster mode
## Supported backends
- [Docker](https://www.docker.com/) / [Swarm mode](https://docs.docker.com/engine/swarm/)
- [Kubernetes](https://kubernetes.io)
- [Mesos](https://github.com/apache/mesos) / [Marathon](https://mesosphere.github.io/marathon/)
- [Rancher](https://rancher.com) (API, Metadata)
- [Consul](https://www.consul.io/) / [Etcd](https://coreos.com/etcd/) / [Zookeeper](https://zookeeper.apache.org) / [BoltDB](https://github.com/boltdb/bolt)
- [Eureka](https://github.com/Netflix/eureka)
- [Amazon ECS](https://aws.amazon.com/ecs)
- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
- File
- Rest API
## Quickstart
You can have a quick look at Træfik in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.
Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at [GopherCon 2017](https://gophercon.com).
You will learn Træfik basics in less than 10 minutes.
[![Traefik GopherCon 2017](https://img.youtube.com/vi/RgudiksfL-k/0.jpg)](https://www.youtube.com/watch?v=RgudiksfL-k)
Here is a talk given by [Ed Robinson](https://github.com/errm) at [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfik features and see some demos with Kubernetes.
[![Traefik ContainerCamp UK](http://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
Here is a talk (in French) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
You will learn fundamental Træfik features and see some demos with Docker, Mesos/Marathon and Let's Encrypt.
[![Traefik Devoxx France](http://img.youtube.com/vi/QvAz9mVx5TI/0.jpg)](http://www.youtube.com/watch?v=QvAz9mVx5TI)
[![Traefik ContainerCamp UK](https://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)
## Get it
@@ -78,7 +108,7 @@ version: '2'
services:
proxy:
image: traefik
command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
command: --api --docker --docker.domain=docker.localhost --logLevel=DEBUG
networks:
- webgateway
ports:
@@ -95,9 +125,11 @@ networks:
Start it from within the `traefik` folder:
docker-compose up -d
```shell
docker-compose up -d
```
In a browser you may open `http://localhost:8080` to access Træfik's dashboard and observe the following magic.
In a browser, you may open [http://localhost:8080](http://localhost:8080) to access Træfik's dashboard and observe the following magic.
Now, create a folder named `test` and create a `docker-compose.yml` in it with this content:
@@ -129,7 +161,10 @@ docker-compose scale whoami=2
Finally, test load-balancing between the two services `test_whoami_1` and `test_whoami_2`:
```shell
$ curl -H Host:whoami.docker.localhost http://127.0.0.1
curl -H Host:whoami.docker.localhost http://127.0.0.1
```
```yaml
Hostname: ef194d07634a
IP: 127.0.0.1
IP: ::1
@@ -144,8 +179,13 @@ X-Forwarded-For: 172.17.0.1
X-Forwarded-Host: 172.17.0.4:80
X-Forwarded-Proto: http
X-Forwarded-Server: dbb60406010d
```
$ curl -H Host:whoami.docker.localhost http://127.0.0.1
```shell
curl -H Host:whoami.docker.localhost http://127.0.0.1
```
```yaml
Hostname: 6c3c5df0c79a
IP: 127.0.0.1
IP: ::1

docs/theme/js/extra.js vendored Normal file
@@ -0,0 +1,4 @@
/* Highlight */
(function(hljs) {
hljs.initHighlightingOnLoad();
})(hljs);

docs/theme/js/hljs/LICENSE vendored Normal file
@@ -0,0 +1,24 @@
Copyright (c) 2006, Ivan Sagalaev
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of highlight.js nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

docs/theme/js/hljs/highlight.pack.js vendored Normal file (diff suppressed: lines too long)
docs/theme/partials/footer.html vendored Normal file
@@ -0,0 +1,104 @@
<!--
Copyright (c) 2016-2017 Martin Donath <martin.donath@squidfunk.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
-->
{% import "partials/language.html" as lang with context %}
<!-- Application footer -->
<footer class="md-footer">
<!-- Link to previous and/or next page -->
{% if page.previous_page or page.next_page %}
<!--<div class="md-footer-nav">-->
<!--<nav class="md-footer-nav__inner md-grid">-->
<!-- -->
<!-- Link to previous page -->
<!--{% if page.previous_page %}-->
<!--<a href="{{ page.previous_page.url }}"-->
<!--title="{{ page.previous_page.title }}"-->
<!--class="md-flex md-footer-nav__link md-footer-nav__link&#45;&#45;prev"-->
<!--rel="prev">-->
<!--<div class="md-flex__cell md-flex__cell&#45;&#45;shrink">-->
<!--<i class="md-icon md-icon&#45;&#45;arrow-back-->
<!--md-footer-nav__button"></i>-->
<!--</div>-->
<!--<div class="md-flex__cell md-flex__cell&#45;&#45;stretch-->
<!--md-footer-nav__title">-->
<!--<span class="md-flex__ellipsis">-->
<!--<span class="md-footer-nav__direction">-->
<!--{{ lang.t("footer.previous") }} -->
<!--</span>-->
<!--{{ page.previous_page.title }}-->
<!--</span>-->
<!--</div>-->
<!--</a>-->
<!--{% endif %}-->
<!-- -->
<!-- Link to next page -->
<!--{% if page.next_page %}-->
<!--<a href="{{ page.next_page.url }}" title="{{ page.next_page.title }}"-->
<!--class="md-flex md-footer-nav__link md-footer-nav__link&#45;&#45;next"-->
<!--rel="next">-->
<!--<div class="md-flex__cell md-flex__cell&#45;&#45;stretch-->
<!--md-footer-nav__title">-->
<!--<span class="md-flex__ellipsis">-->
<!--<span class="md-footer-nav__direction">-->
<!--{{ lang.t("footer.next") }}-->
<!--</span>-->
<!--{{ page.next_page.title }}-->
<!--</span>-->
<!--</div>-->
<!--<div class="md-flex__cell md-flex__cell&#45;&#45;shrink">-->
<!--<i class="md-icon md-icon&#45;&#45;arrow-forward-->
<!--md-footer-nav__button"></i>-->
<!--</div>-->
<!--</a>-->
<!--{% endif %}-->
<!--</nav>-->
<!--</div>-->
{% endif %}
<!-- Further information -->
<div class="md-footer-meta md-typeset">
<div class="md-footer-meta__inner md-grid">
<!-- Copyright and theme information -->
<div class="md-footer-copyright">
{% if config.copyright %}
<div class="md-footer-copyright__highlight">
{{ config.copyright }}
</div>
{% endif %}
powered by
<a href="http://www.mkdocs.org" title="MkDocs">MkDocs</a>
and
<a href="http://squidfunk.github.io/mkdocs-material/"
title="Material for MkDocs">
Material for MkDocs</a>
</div>
<!-- Social links -->
{% block social %}
{% include "partials/social.html" %}
{% endblock %}
</div>
</div>
</footer>

docs/theme/styles/atom-one-light.css vendored Normal file
@@ -0,0 +1,96 @@
/*
Atom One Light by Daniel Gamage
Original One Light Syntax theme from https://github.com/atom/one-light-syntax
base: #fafafa
mono-1: #383a42
mono-2: #686b77
mono-3: #a0a1a7
hue-1: #0184bb
hue-2: #4078f2
hue-3: #a626a4
hue-4: #50a14f
hue-5: #e45649
hue-5-2: #c91243
hue-6: #986801
hue-6-2: #c18401
*/
.hljs {
display: block;
overflow-x: auto;
padding: 0.5em;
color: #383a42;
background: #fafafa;
}
.hljs-comment,
.hljs-quote {
color: #a0a1a7;
font-style: italic;
}
.hljs-doctag,
.hljs-keyword,
.hljs-formula {
color: #a626a4;
}
.hljs-section,
.hljs-name,
.hljs-selector-tag,
.hljs-deletion,
.hljs-subst {
color: #e45649;
}
.hljs-literal {
color: #0184bb;
}
.hljs-string,
.hljs-regexp,
.hljs-addition,
.hljs-attribute,
.hljs-meta-string {
color: #50a14f;
}
.hljs-built_in,
.hljs-class .hljs-title {
color: #c18401;
}
.hljs-attr,
.hljs-variable,
.hljs-template-variable,
.hljs-type,
.hljs-selector-class,
.hljs-selector-attr,
.hljs-selector-pseudo,
.hljs-number {
color: #986801;
}
.hljs-symbol,
.hljs-bullet,
.hljs-link,
.hljs-meta,
.hljs-selector-id,
.hljs-title {
color: #4078f2;
}
.hljs-emphasis {
font-style: italic;
}
.hljs-strong {
font-weight: bold;
}
.hljs-link {
text-decoration: underline;
}

docs/theme/styles/extra.css vendored Normal file
@@ -0,0 +1,20 @@
.md-logo img {
background-color: white;
border-radius: 50%;
width: 30px;
height: 30px;
}
/* Fix for Chrome */
.md-typeset__table td code {
word-break: unset;
}
.md-typeset__table tr :nth-child(1) {
word-wrap: break-word;
max-width: 30em;
}
p {
text-align: justify;
}

(File diff suppressed because it is too large)
@@ -0,0 +1,294 @@
# Clustering / High Availability on Docker Swarm with Consul
This guide explains how to use Træfik in high availability mode in a Docker Swarm and with Let's Encrypt.
Why do we need Træfik in cluster mode? Shouldn't running multiple instances work out of the box?
If you want to use Let's Encrypt with Træfik and share configuration or TLS certificates between many Træfik instances, you need Træfik cluster/HA.
Ok, could we simply mount a shared volume used by all the instances? You can, but it will not work.
When you use Let's Encrypt, you need to store certificates, but not only.
When Træfik generates a new certificate, it configures a challenge, and once Let's Encrypt has verified the ownership of the domain, it pings back the challenge.
If the challenge is not known by the other Træfik instances, the validation will fail.
For more information about the challenge mechanism, see [Automatic Certificate Management Environment (ACME)](https://github.com/ietf-wg-acme/acme/blob/master/draft-ietf-acme-acme.md#tls-with-server-name-indication-tls-sni)
## Prerequisites
You will need a working Docker Swarm cluster.
## Træfik configuration
In this guide, we will not use a TOML configuration file, but only command line flags.
With that, we can use the base image without mounting a configuration file or building a custom image.
What Træfik should do:
- Listen to 80 and 443
- Redirect HTTP traffic to HTTPS
- Generate SSL certificates when a domain is added
- Listen to Docker Swarm events
### EntryPoints configuration
TL;DR:
```shell
$ traefik \
--entrypoints='Name:http Address::80 Redirect.EntryPoint:https' \
--entrypoints='Name:https Address::443 TLS' \
--defaultentrypoints=http,https
```
To listen to different ports, we need to create an entry point for each.
The CLI syntax is `--entrypoints='Name:a_name Address:an_ip_or_empty:a_port options'`.
If you want to redirect traffic from one entry point to another, it's the option `Redirect.EntryPoint:entrypoint_name`.
Instead of configuring every service to listen on http and https individually, we add a default entry point configuration: `--defaultentrypoints=http,https`.
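Purely for reference, the equivalent static TOML configuration for these two entry points would look roughly like this (a sketch; in this guide we keep everything on the command line):
```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
```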
### Let's Encrypt configuration
TL;DR:
```shell
$ traefik \
--acme \
--acme.storage=/etc/traefik/acme/acme.json \
--acme.entryPoint=https \
--acme.httpChallenge.entryPoint=http \
--acme.email=contact@mydomain.ca
```
Let's Encrypt needs 4 parameters: a TLS entry point to listen to, a non-TLS entry point to allow HTTP challenges, a storage for certificates, and an email for the registration.
To enable Let's Encrypt support, you need to add `--acme` flag.
Now, Træfik needs to know where to store the certificates; we can choose between a key in a Key-Value store and a file path: `--acme.storage=my/key` or `--acme.storage=/path/to/acme.json`.
The `acme.httpChallenge.entryPoint` flag enables the `HTTP-01` challenge and specifies the entryPoint to use during the challenges.
The entry point and the registration email are set with the `--acme.entryPoint` and `--acme.email` flags.
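Again for reference only, the same ACME settings expressed in TOML would look roughly like this (a sketch mirroring the flags above):
```toml
[acme]
email = "contact@mydomain.ca"
storage = "/etc/traefik/acme/acme.json"
entryPoint = "https"
  [acme.httpChallenge]
  entryPoint = "http"
```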
### Docker configuration
TL;DR:
```shell
$ traefik \
--docker \
--docker.swarmmode \
--docker.domain=mydomain.ca \
--docker.watch
```
To enable docker and swarm-mode support, you need to add `--docker` and `--docker.swarmmode` flags.
To watch docker events, add `--docker.watch`.
### Full docker-compose file
```yaml
version: "3"
services:
traefik:
image: traefik:1.5
command:
- "--api"
- "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
- "--entrypoints=Name:https Address::443 TLS"
- "--defaultentrypoints=http,https"
- "--acme"
- "--acme.storage=/etc/traefik/acme/acme.json"
- "--acme.entryPoint=https"
- "--acme.httpChallenge.entryPoint=http"
- "--acme.OnHostRule=true"
- "--acme.onDemand=false"
- "--acme.email=contact@mydomain.ca"
- "--docker"
- "--docker.swarmmode"
- "--docker.domain=mydomain.ca"
- "--docker.watch"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- webgateway
- traefik
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
- target: 8080
published: 8080
mode: host
deploy:
mode: global
placement:
constraints:
- node.role == manager
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: on-failure
networks:
webgateway:
driver: overlay
external: true
traefik:
driver: overlay
```
## Migrate configuration to Consul
We created a special Træfik command to help configure your Key Value store from a Træfik TOML configuration file and/or CLI flags.
## Deploy a Træfik cluster
The best way we found is to have an initializer service.
This service will push the config to Consul via the `storeconfig` sub-command.
This service will retry until it finishes without error, because Consul may not be ready when the service tries to push the configuration.
The initializer in a docker-compose file will be:
```yaml
traefik_init:
image: traefik:1.5
command:
- "storeconfig"
- "--api"
[...]
- "--consul"
- "--consul.endpoint=consul:8500"
- "--consul.prefix=traefik"
networks:
- traefik
deploy:
restart_policy:
condition: on-failure
depends_on:
- consul
```
And now, the Træfik part will only have the Consul configuration.
```yaml
traefik:
image: traefik:1.5
depends_on:
- traefik_init
- consul
command:
- "--consul"
- "--consul.endpoint=consul:8500"
- "--consul.prefix=traefik"
[...]
```
!!! note
For Træfik <1.5.0 add `acme.storage=traefik/acme/account` because Træfik is not reading it from Consul.
If you need to change the configuration, update the initializer service and re-deploy it.
The new configuration will be stored in Consul, and you need to restart the Træfik node: `docker service update --force traefik_traefik`.
## Full docker-compose file
```yaml
version: "3.4"
services:
traefik_init:
image: traefik:1.5
command:
- "storeconfig"
- "--api"
- "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
- "--entrypoints=Name:https Address::443 TLS"
- "--defaultentrypoints=http,https"
- "--acme"
- "--acme.storage=traefik/acme/account"
- "--acme.entryPoint=https"
- "--acme.httpChallenge.entryPoint=http"
- "--acme.OnHostRule=true"
- "--acme.onDemand=false"
- "--acme.email=foobar@example.com"
- "--docker"
- "--docker.swarmmode"
- "--docker.domain=example.com"
- "--docker.watch"
- "--consul"
- "--consul.endpoint=consul:8500"
- "--consul.prefix=traefik"
networks:
- traefik
deploy:
restart_policy:
condition: on-failure
depends_on:
- consul
traefik:
image: traefik:1.5
depends_on:
- traefik_init
- consul
command:
- "--consul"
- "--consul.endpoint=consul:8500"
- "--consul.prefix=traefik"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- webgateway
- traefik
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
- target: 8080
published: 8080
mode: host
deploy:
mode: global
placement:
constraints:
- node.role == manager
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: on-failure
consul:
image: consul
command: agent -server -bootstrap-expect=1
volumes:
- consul-data:/consul/data
environment:
- CONSUL_LOCAL_CONFIG={"datacenter":"us_east2","server":true}
- CONSUL_BIND_INTERFACE=eth0
- CONSUL_CLIENT_INTERFACE=eth0
deploy:
replicas: 1
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
networks:
- traefik
networks:
webgateway:
driver: overlay
external: true
traefik:
driver: overlay
volumes:
consul-data:
driver: [not local]
```

@@ -1,20 +1,33 @@
# Clustering / High Availability (beta)
This guide explains how tu use Træfik in high availability mode.
This guide explains how to use Træfik in high availability mode.
In order to deploy and configure multiple Træfik instances, without copying the same configuration file on each instance, we will use a distributed Key-Value store.
## Prerequisites
You will need a working KV store cluster.
_(Currently, we recommend [Consul](https://consul.io) .)_
## File configuration to KV store migration
We created a special Træfik command to help configuring your Key Value store from a Træfik TOML configuration file.
Please refer to [this section](/user-guide/kv-config/#store-configuration-in-key-value-store) to get more details.
## Deploy a Træfik cluster
Once your Træfik configuration is uploaded on your KV store, you can start each Træfik instance.
A Træfik cluster is based on a master/slave model.
When starting, Træfik will elect a master. If this instance fails, another master will be automatically elected.
A Træfik cluster is based on a manager/worker model.
When starting, Træfik will elect a manager.
If this instance fails, another manager will be automatically elected.
## Træfik cluster and Let's Encrypt
**In cluster mode, ACME certificates have to be stored in [a KV Store entry](/configuration/acme/#storage-kv-entry).**
Thanks to the Træfik cluster mode algorithm (based on [the Raft Consensus Algorithm](https://raft.github.io/)), only one instance will contact Let's Encrypt to solve the challenges.
The other instances will get the ACME certificates from the KV Store entry.
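As a sketch, pointing the ACME storage at a KV entry instead of a file looks like this (the key prefix is illustrative and must match your KV configuration):
```toml
[acme]
email = "contact@example.com"
storage = "traefik/acme/account"
entryPoint = "https"
```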

@@ -0,0 +1,264 @@
# Docker & Traefik
In this use case, we want to use Træfik as a _layer-7_ load balancer with SSL termination for a set of micro-services used to run a web application.
We also want to automatically _discover any services_ on the Docker host and let Træfik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
In addition, we want to use Let's Encrypt to automatically generate and renew SSL certificates per hostname.
## Setting Up
In order for this to work, you'll need a server with a public IP address, with Docker installed on it.
In this example, we're using the fictitious domain _my-awesome-app.org_.
In real-life, you'll want to use your own domain and have the DNS configured accordingly so the hostname records you'll want to use point to the aforementioned public IP address.
## Networking
Docker containers can only communicate with each other over TCP when they share at least one network.
This makes sense from a topological point of view in the context of networking, since under the hood Docker creates iptables rules so containers can't reach other containers _unless you want them to_.
In this example, we're going to use a single network called `web` where all containers that are handling HTTP traffic (including Træfik) will reside in.
On the Docker host, run the following command:
```shell
docker network create web
```
Now, let's create a directory on the server where we will configure the rest of Træfik:
```shell
mkdir -p /opt/traefik
```
Within this directory, we're going to create 3 empty files:
```shell
touch /opt/traefik/docker-compose.yml
touch /opt/traefik/acme.json && chmod 600 /opt/traefik/acme.json
touch /opt/traefik/traefik.toml
```
The `docker-compose.yml` file will provide us with a simple, consistent and, more importantly, deterministic way to create Træfik.
The contents of the file are as follows:
```yaml
version: '2'
services:
traefik:
image: traefik:1.3.5
restart: always
ports:
- 80:80
- 443:443
networks:
- web
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /opt/traefik/traefik.toml:/traefik.toml
- /opt/traefik/acme.json:/acme.json
container_name: traefik
networks:
web:
external: true
```
As you can see, we're mounting the `traefik.toml` file as well as the (empty) `acme.json` file in the container.
We're also mounting the `/var/run/docker.sock` Docker socket in the container, so Træfik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
And we're making sure the container is automatically restarted by the Docker engine in case of problems (or if the server is rebooted).
We're publishing the default HTTP ports `80` and `443` on the host, and making sure the container is placed within the `web` network we've created earlier on.
Finally, we're giving this container a static name called `traefik`.
Let's take a look at a simple `traefik.toml` configuration as well before we'll create the Træfik container:
```toml
debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https","http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[retry]
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "my-awesome-app.org"
watch = true
exposedbydefault = false
[acme]
email = "your-email-here@my-awesome-app.org"
storage = "acme.json"
entryPoint = "https"
OnHostRule = true
[acme.httpChallenge]
entryPoint = "http"
```
This is the minimum configuration required to do the following:
- Log `ERROR`-level messages (or more severe) to the console, but silence `DEBUG`-level messages
- Check for new versions of Træfik periodically
- Create two entry points, namely an `HTTP` endpoint on port `80`, and an `HTTPS` endpoint on port `443` where all incoming traffic on port `80` will immediately get redirected to `HTTPS`.
- Enable the Docker configuration backend and listen for container events on the Docker unix socket we've mounted earlier. However, **new containers will not be exposed by Træfik by default, we'll get into this in a bit!**
- Enable automatic request and configuration of SSL certificates using Let's Encrypt.
These certificates will be stored in the `acme.json` file, which you can back-up yourself and store off-premises.
Alright, let's boot the container. From the `/opt/traefik` directory, run `docker-compose up -d` which will create and start the Træfik container.
## Exposing Web Services to the Outside World
Now that we've fully configured and started Træfik, it's time to get our applications running!
Let's take a simple example of a micro-service project consisting of various services, where some will be exposed to the outside world and some will not.
The `docker-compose.yml` of our project looks like this:
```yaml
version: "2.1"
services:
app:
image: my-docker-registry.com/my-awesome-app/app:latest
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
restart: always
networks:
- web
- default
expose:
- "9000"
labels:
- "traefik.backend=my-awesome-app-app"
- "traefik.docker.network=web"
- "traefik.frontend.rule=Host:app.my-awesome-app.org"
- "traefik.enable=true"
- "traefik.port=9000"
- "traefik.default.protocol=http"
- "traefik.admin.frontend.rule=Host:admin-app.my-awesome-app.org"
- "traefik.admin.protocol=https"
- "traefik.admin.port=9443"
db:
image: my-docker-registry.com/back-end/5.7
restart: always
redis:
image: my-docker-registry.com/back-end/redis:4-alpine
restart: always
events:
image: my-docker-registry.com/my-awesome-app/events:latest
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
restart: always
networks:
- web
- default
expose:
- "3000"
labels:
- "traefik.backend=my-awesome-app-events"
- "traefik.docker.network=web"
- "traefik.frontend.rule=Host:events.my-awesome-app.org"
- "traefik.enable=true"
- "traefik.port=3000"
networks:
web:
external: true
```
Here, we can see a set of services with two applications that we're actually exposing to the outside world.
Notice how there isn't a single container that has any published ports to the host -- everything is routed through Docker networks.
Also, only the containers that we want traffic to get routed to are attached to the `web` network we created at the start of this document.
Since the `traefik` container we've created and started earlier is also attached to this network, HTTP requests can now get routed to these containers.
### Labels
As mentioned earlier, we don't want containers exposed automatically by Træfik.
The reason behind this is simple: we want to have control over this process ourselves.
Thanks to Docker labels, we can tell Træfik how to create its internal routing configuration.
Let's take a look at the labels themselves for the `app` service, which is an HTTP webservice listening on port 9000:
```yaml
- "traefik.backend=my-awesome-app-app"
- "traefik.docker.network=web"
- "traefik.frontend.rule=Host:app.my-awesome-app.org"
- "traefik.enable=true"
- "traefik.port=9000"
- "traefik.default.protocol=http"
- "traefik.admin.frontend.rule=Host:admin-app.my-awesome-app.org"
- "traefik.admin.protocol=https"
- "traefik.admin.port=9443"
```
We use both `container labels` and `service labels`.
#### Container labels
First, we specify the `backend` name which corresponds to the actual service we're routing **to**.
We also tell Træfik to use the `web` network to route HTTP traffic to this container.
With the `traefik.enable` label, we tell Træfik to include this container in its internal configuration.
With the `frontend.rule` label, we tell Træfik that we want to route to this container if the incoming HTTP request contains the `Host` `app.my-awesome-app.org`.
Essentially, this is the actual rule used for Layer-7 load balancing.
Last but not least, we tell Træfik to route **to** port `9000`, since that is the actual TCP/IP port the container listens on.
### Service labels
`Service labels` allow managing many routes for the same container.
When both `container labels` and `service labels` are defined, `container labels` are just used as default values for missing `service labels`, but no frontend/backend will be defined with these labels alone.
In other words, the `traefik.frontend.rule` and `traefik.port` labels described above will only be used to complete information set in `service labels` during the creation of the container's frontends/backends.
In the example, two service names are defined: `default` and `admin`.
They allow creating two frontends and two backends.
- `default` has only one `service label` : `traefik.default.protocol`.
Træfik will use values set in `traefik.frontend.rule` and `traefik.port` to create the `default` frontend and backend.
The frontend listens to incoming HTTP requests which contain the `Host` `app.my-awesome-app.org` and forwards them over `HTTP` to port `9000` of the backend.
- `admin` has all the `services labels` needed to create the `admin` frontend and backend (`traefik.admin.frontend.rule`, `traefik.admin.protocol`, `traefik.admin.port`).
Træfik will create a frontend that listens to incoming HTTP requests containing the `Host` `admin-app.my-awesome-app.org` and forwards them over `HTTPS` to port `9443` of the backend.
#### Gotchas and tips
- Always specify the correct port where the container expects HTTP traffic using `traefik.port` label.
If a container exposes multiple ports, Træfik may forward traffic to the wrong port.
Even if a container only exposes one port, you should always write configuration defensively and explicitly.
- Should you choose to enable the `exposedbydefault` flag in the `traefik.toml` configuration, be aware that all containers placed in the same network as Træfik will automatically be reachable from the outside world, for anyone and everyone to see.
Usually, this is a bad idea.
- With the `traefik.frontend.auth.basic` label, it's possible for Træfik to provide an HTTP basic-auth challenge for the endpoints you provide the label for.
- Træfik has built-in support to automatically export [Prometheus](https://prometheus.io) metrics
- Træfik supports websockets out of the box. In the example above, the `events`-service could be a NodeJS-based application which allows clients to connect using the websocket protocol.
Since HTTPS is enforced in our example, these websockets are automatically secure as well (WSS).
### Final thoughts
Using Træfik as a Layer-7 load balancer in combination with both Docker and Let's Encrypt provides you with an extremely flexible, powerful and self-configuring solution for your projects.
With Let's Encrypt, your endpoints are automatically secured with production-ready SSL certificates that are renewed automatically as well.

@@ -1,4 +1,3 @@
# Examples
You will find here some configuration examples of Træfik.
@@ -7,6 +6,7 @@ You will find here some configuration examples of Træfik.
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
@@ -16,6 +16,7 @@ defaultEntryPoints = ["http"]
```toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
@@ -23,11 +24,11 @@ defaultEntryPoints = ["http", "https"]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.com.cert"
KeyFile = "integration/fixtures/https/snitest.com.key"
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.org.cert"
KeyFile = "integration/fixtures/https/snitest.org.key"
certFile = "integration/fixtures/https/snitest.org.cert"
keyFile = "integration/fixtures/https/snitest.org.key"
```
Note that we can either give a path to the certificate file or the file content itself directly ([like in this TOML example](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store)).
@@ -35,6 +36,7 @@ Note that we can either give path to certificate file or directly the file conte
```toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
@@ -44,27 +46,36 @@ defaultEntryPoints = ["http", "https"]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
certFile = "examples/traefik.crt"
keyFile = "examples/traefik.key"
```
!!! note
Please note that `regex` and `replacement` do not have to be set in the `redirect` structure if an entrypoint is defined for the redirection (they will not be used in this case)
## Let's Encrypt support
!!! note
Even if the `TLS-SNI-01` challenge is [disabled](https://community.letsencrypt.org/t/2018-01-11-update-regarding-acme-tls-sni-and-shared-hosting-infrastructure/50188) for the moment, it remains the _default_ ACME challenge in Træfik, but all the examples use the `HTTP-01` challenge (except the DNS challenge examples).
If the `TLS-SNI-01` challenge is not re-enabled in the future, it will be removed from Træfik.
### Basic example with HTTP challenge
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
# certs used as default certs
[[entryPoints.https.tls.certificates]]
certFile = "tests/traefik.crt"
keyFile = "tests/traefik.key"
[acme]
email = "test@traefik.io"
storageFile = "acme.json"
onDemand = true
storage = "acme.json"
caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
[[acme.domains]]
main = "local1.com"
@@ -78,37 +89,220 @@ entryPoint = "https"
main = "local4.com"
```
This configuration allows generating Let's Encrypt certificates (thanks to the `HTTP-01` challenge) for the four domains `local[1-4].com` with the described SANs.
Træfik generates these certificates when it starts and needs to be restarted if new domains are added.
### OnHostRule option (with HTTP challenge)
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "acme.json"
onHostRule = true
caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2x.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
```
This configuration allows generating Let's Encrypt certificates (thanks to `HTTP-01` challenge) for the four domains `local[1-4].com`.
Træfik generates these certificates when it starts.
If a backend is added with an `onHost` rule, Træfik will automatically generate the Let's Encrypt certificate for the new domain (for frontends wired to the `acme.entryPoint`).
### OnDemand option (with HTTP challenge)
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "acme.json"
onDemand = true
caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
```
This configuration allows generating a Let's Encrypt certificate (thanks to `HTTP-01` challenge) during the first HTTPS request on a new domain.
!!! note
This option simplifies the configuration but:
* TLS handshakes will be slow when requesting a hostname certificate for the first time, which can lead to DDoS attacks.
* Let's Encrypt has rate limits: https://letsencrypt.org/docs/rate-limits
That's why it's better to use the `onHostRule` option if possible.
### DNS challenge
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "acme.json"
caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
[acme.dnsChallenge]
provider = "digitalocean" # DNS Provider name (cloudflare, OVH, gandi...)
delayBeforeCheck = 0
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2x.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
```
The DNS challenge requires environment variables to be set in order to run.
These variables have to be set on the machine/container that hosts Træfik.
These variables are described [in this section](/configuration/acme/#provider).
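As a sketch of how this can look when Træfik runs in a container, assuming the `digitalocean` provider from the example above (lego reads its token from `DO_AUTH_TOKEN`; other providers expect different variable names, see the link above):

```yaml
# Hypothetical docker-compose snippet passing DNS provider credentials to Træfik.
version: "2"
services:
  traefik:
    image: traefik:1.5
    environment:
      # placeholder token for the digitalocean DNS challenge provider
      - DO_AUTH_TOKEN=xxxxxxxxxxxxxxxxxxxx
    volumes:
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json
    ports:
      - "443:443"
```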
### OnHostRule option and provided certificates (with HTTP challenge)
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "examples/traefik.crt"
keyFile = "examples/traefik.key"
[acme]
email = "test@traefik.io"
storage = "acme.json"
onHostRule = true
caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
```
Træfik will only try to generate a Let's Encrypt certificate (thanks to the `HTTP-01` challenge) if the domain cannot be checked by the provided certificates.
### Cluster mode
#### Prerequisites
Before you use Let's Encrypt in a Traefik cluster, take a look at [the key-value store explanations](/user-guide/kv-config) and more precisely at [this section](/user-guide/kv-config/#store-configuration-in-key-value-store), which describes how to migrate from an ACME local storage *(acme.json file)* to a key-value store configuration.
#### Configuration
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "traefik/acme/account"
caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2x.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
```
This configuration allows using the key `traefik/acme/account` to get/set the Let's Encrypt certificates content.
The `consul` provider contains the configuration.
!!! note
It's possible to use other key-value store providers as described [here](/user-guide/kv-config/#key-value-store-configuration).
## Override entrypoints in frontends
```toml
[frontends]
[frontends.frontend1]
backend = "backend2"
[frontends.frontend1.routes.test_1]
rule = "Host:test.localhost"
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
passTLSCert = true
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "Host:{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
entrypoints = ["http", "https"] # overrides defaultEntryPoints
backend = "backend2"
rule = "Path:/test"
rule = "Path:/test"
```
## Enable Basic authentication in an entry point
With two user/pass:
- `test`:`test`
- `test2`:`test2`
Passwords are encoded in MD5: you can use `htpasswd` to generate them.
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
@@ -119,10 +313,11 @@ defaultEntryPoints = ["http"]
## Pass Authenticated user to application via headers
Providing an authentication method as described above, it is possible to pass the user to the application
via a configurable header value.
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
@@ -135,6 +330,73 @@ defaultEntryPoints = ["http"]
## Override the Traefik HTTP server IdleTimeout and/or throttle configurations from re-loading too quickly
```toml
IdleTimeout = "360s"
ProvidersThrottleDuration = "5s"
providersThrottleDuration = "5s"
[respondingTimeouts]
idleTimeout = "360s"
```
## Ping Health Check
The `/ping` health-check URL is enabled with the command-line `--ping` or config file option `[ping]`.
Thus, if you have a regular path for `/foo` and an entrypoint on `:80`, you would access them as follows:
* Regular path: `http://hostname:80/foo`
* Admin panel: `http://hostname:8080/`
* Ping URL: `http://hostname:8080/ping`
However, for security reasons, you may want to be able to expose the `/ping` health-check URL to outside health-checkers, e.g. an Internet service or cloud load-balancer, _without_ exposing your administration panel's port.
In many environments, the security staff may not _allow_ you to expose it.
You have two options:
* Enable `/ping` on a regular entry point
* Enable `/ping` on a dedicated port
### Enable ping health check on a regular entry point
To proxy `/ping` from a regular entry point to the administration one without exposing the panel, do the following:
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[ping]
entryPoint = "http"
```
The above links `ping` to the `http` entry point and exposes it on port `80`.
### Enable ping health check on dedicated port
If you do not want to or cannot expose the health-check on a regular entry point - e.g. your security rules do not allow it, or you have a conflicting path - then you can enable health-check on its own entry point.
Use the following configuration:
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.ping]
address = ":8082"
[ping]
entryPoint = "ping"
```
The above is similar to the previous example, but instead of enabling `/ping` on the _default_ entry point, we enable it on a _dedicated_ entry point.
In the above example, you would access a regular path and health-check as follows:
* Regular path: `http://hostname:80/foo`
* Ping URL: `http://hostname:8082/ping`
Note the dedicated port `:8082` for `/ping`.
In the above example, it is _very_ important to create a named dedicated entry point, and do **not** include it in `defaultEntryPoints`.
Otherwise, you are likely to expose _all_ services via this entry point.

docs/user-guide/grpc.md

@@ -0,0 +1,150 @@
# gRPC example
This section explains how to use Traefik as a reverse proxy for a gRPC application with self-signed certificates.
!!! warning
As gRPC needs HTTP2, we need HTTPS certificates on both gRPC Server and Træfik.
<p align="center">
<img src="/img/grpc.svg" alt="gRPC architecture" title="gRPC architecture" />
</p>
## gRPC Server certificate
In order to secure the gRPC server, we generate a self-signed certificate for the backend URL:
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./backend.key -out ./backend.cert
```
That will prompt for information; the important answer is:
```
Common Name (e.g. server FQDN or YOUR name) []: backend.local
```
## gRPC Client certificate
Generate your self-signed certificate for the frontend URL:
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./frontend.key -out ./frontend.cert
```
with
```
Common Name (e.g. server FQDN or YOUR name) []: frontend.local
```
## Træfik configuration
At last, we configure our Træfik instance to use both self-signed certificates.
```toml
defaultEntryPoints = ["https"]
# For secure connection on backend.local
RootCAs = [ "./backend.cert" ]
[entryPoints]
[entryPoints.https]
address = ":4443"
[entryPoints.https.tls]
# For secure connection on frontend.local
[[entryPoints.https.tls.certificates]]
certFile = "./frontend.cert"
keyFile = "./frontend.key"
[api]
[file]
[backends]
[backends.backend1]
[backends.backend1.servers.server1]
# Access on backend with HTTPS
url = "https://backend.local:8080"
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.routes.test_1]
rule = "Host:frontend.local"
```
!!! warning
With some backends, the server URLs use the IP, so you may need to configure `InsecureSkipVerify` instead of the `RootCAs` to activate HTTPS without hostname verification.
## Conclusion
We don't need any specific configuration to use gRPC in Træfik; we just need to be careful that all the exchanges (between client and Træfik, and between Træfik and backend) are HTTPS communications because gRPC uses HTTP2.
## A gRPC example in go
We will use the gRPC greeter example in [grpc-go](https://github.com/grpc/grpc-go/tree/master/examples/helloworld)
!!! warning
In order to use this gRPC example, we need to modify it to use HTTPS.
So we modify the "gRPC server example" to use our own self-signed certificate:
```go
// ...
// Read cert and key file
BackendCert, _ := ioutil.ReadFile("./backend.cert")
BackendKey, _ := ioutil.ReadFile("./backend.key")
// Generate Certificate struct
cert, err := tls.X509KeyPair(BackendCert, BackendKey)
if err != nil {
log.Fatalf("failed to parse certificate: %v", err)
}
// Create credentials
creds := credentials.NewServerTLSFromCert(&cert)
// Use Credentials in gRPC server options
serverOption := grpc.Creds(creds)
var s *grpc.Server = grpc.NewServer(serverOption)
defer s.Stop()
pb.RegisterGreeterServer(s, &server{})
err = s.Serve(lis)
// ...
```
Next we will modify gRPC Client to use our Træfik self-signed certificate:
```go
// ...
// Read cert file
FrontendCert, _ := ioutil.ReadFile("./frontend.cert")
// Create CertPool
roots := x509.NewCertPool()
roots.AppendCertsFromPEM(FrontendCert)
// Create credentials
credsClient := credentials.NewClientTLSFromCert(roots, "")
// Dial with specific Transport (with credentials)
conn, err := grpc.Dial("frontend.local:4443", grpc.WithTransportCredentials(credsClient))
if err != nil {
log.Fatalf("did not connect: %v", err)
}
defer conn.Close()
client := pb.NewGreeterClient(conn)
name := "World"
r, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: name})
// ...
```


@@ -1,26 +1,32 @@
# Kubernetes Ingress Controller
This guide explains how to use Træfik as an Ingress controller for a Kubernetes cluster.
If you are not familiar with Ingresses in Kubernetes you might want to read the [Kubernetes user guide](https://kubernetes.io/docs/concepts/services-networking/ingress/)
The config files used in this guide can be found in the [examples directory](https://github.com/containous/traefik/tree/master/examples/k8s)
## Prerequisites
1. A working Kubernetes cluster. If you want to follow along with this guide, you should setup [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) on your machine, as it is the quickest way to get a local Kubernetes cluster setup for experimentation and development.
!!! note
The guide is likely not fully adequate for a production-ready setup.
2. The `kubectl` binary should be [installed on your workstation](https://kubernetes.io/docs/getting-started-guides/minikube/#download-kubectl).
### Role Based Access Control configuration (Kubernetes 1.6+ only)
Kubernetes introduces [Role Based Access Control (RBAC)](https://kubernetes.io/docs/admin/authorization/rbac/) in 1.6+ to allow fine-grained control of Kubernetes resources and API.
If your cluster is configured with RBAC, you will need to authorize Træfik to use the Kubernetes API. There are two ways to set up the proper permission: via namespace-specific RoleBindings or a single, global ClusterRoleBinding.
RoleBindings per namespace restrict the granted permissions to only the namespaces that Træfik is watching, thereby following the least-privileges principle. This is the preferred approach if Træfik is not supposed to watch all namespaces, and the set of namespaces does not change dynamically. Otherwise, a single ClusterRoleBinding must be employed.
!!! note
RoleBindings per namespace are available in Træfik 1.5 and later. Please use ClusterRoleBindings for older versions.
For the sake of simplicity, this guide will use a ClusterRoleBinding:
```yaml
---
@@ -32,9 +38,9 @@ rules:
- apiGroups:
- ""
resources:
- pods
- services
- endpoints
- secrets
verbs:
- get
- list
@@ -68,11 +74,18 @@ subjects:
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
```
For namespaced restrictions, one RoleBinding is required per watched namespace along with a corresponding configuration of Træfik's `kubernetes.namespaces` parameter.
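For illustration, a per-namespace RoleBinding could look like the sketch below; it assumes the ClusterRole is named `traefik-ingress-controller` (as in the RBAC manifest above) and that Træfik is configured to watch the hypothetical namespace `my-namespace`:

```yaml
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: my-namespace        # one RoleBinding per watched namespace
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: traefik-ingress-controller
  apiGroup: rbac.authorization.k8s.io
```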
## Deploy Træfik using a Deployment or DaemonSet
It is possible to use Træfik with a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) or a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) object, whereas both options have their own pros and cons:
- The scalability is much better when using a Deployment, because you will have a Single-Pod-per-Node model when using a DaemonSet.
- It is possible to exclusively run a Service on a dedicated set of machines using taints and tolerations with a DaemonSet.
- On the other hand, the DaemonSet allows you to access any Node directly on ports 80 and 443, whereas with a Deployment you have to set up a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) object.
The Deployment object looks like this:
```yaml
---
@@ -105,81 +118,182 @@ spec:
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- containerPort: 80
hostPort: 80
- containerPort: 8080
args:
- --api
- --kubernetes
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: NodePort
```
[examples/k8s/traefik-deployment.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/traefik-deployment.yaml)
!!! note
The Service will expose two NodePorts which allow access to the ingress and the web interface.
The DaemonSet object looks not much different:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
hostNetwork: true
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
securityContext:
privileged: true
args:
- -d
- --api
- --kubernetes
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: NodePort
```
[examples/k8s/traefik-ds.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/traefik-ds.yaml)
To deploy Træfik to your cluster start by submitting one of the YAML files to the cluster with `kubectl`:
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml
```
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
```
There are some significant differences between using Deployments and DaemonSets:
- The Deployment has easier up and down scaling possibilities.
It can implement the full pod lifecycle and supports rolling updates from Kubernetes 1.2.
At least one Pod is needed to run the Deployment.
- The DaemonSet automatically scales to all nodes that meet a specific selector and guarantees to fill nodes one at a time.
Rolling updates are fully supported from Kubernetes 1.7 for DaemonSets as well.
### Check the Pods
Now let's check if our command was successful.
Start by listing the pods in the `kube-system` namespace:
```shell
kubectl --namespace=kube-system get pods
```
```shell
NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikubevm 1/1 Running 0 4h
kubernetes-dashboard-s8krj 1/1 Running 0 4h
traefik-ingress-controller-678226159-eqseo 1/1 Running 0 7m
```
You should see that after submitting the Deployment or DaemonSet to Kubernetes it has launched a Pod, and it is now running.
_It might take a few moments for kubernetes to pull the Træfik image and start the container._
!!! note
You could also check the deployment with the Kubernetes dashboard, run
`minikube dashboard` to open it in your browser, then choose the `kube-system`
namespace from the menu at the top right of the screen.
You should now be able to access Træfik on port 80 of your Minikube instance when using the DaemonSet:
```shell
curl $(minikube ip)
```
```shell
404 page not found
```
If you decided to use the deployment, then you need to target the correct NodePort, which can be seen when you execute `kubectl get services --namespace=kube-system`.
```shell
curl $(minikube ip):<NODEPORT>
```
```shell
404 page not found
```
!!! note
We expect to see a 404 response here as we haven't yet given Træfik any configuration.
All further examples below assume a DaemonSet installation. Deployment users will need to append the NodePort when constructing requests.
## Deploy Træfik using Helm Chart
!!! note
The Helm Chart is maintained by the community, not the Traefik project maintainers.
Instead of installing Træfik via a Kubernetes object directly, you can also use the Træfik Helm chart.
Install the Træfik chart by:
```shell
helm install stable/traefik
```
For more information, check out [the documentation](https://github.com/kubernetes/charts/tree/master/stable/traefik).
## Submitting an Ingress to the Cluster
Let's start by creating a Service and an Ingress that will expose the [Træfik Web UI](https://github.com/containous/traefik#web-ui).
```yaml
apiVersion: v1
@@ -203,38 +317,103 @@ metadata:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: traefik-ui.minikube
http:
paths:
- backend:
serviceName: traefik-web-ui
servicePort: 80
```
[examples/k8s/ui.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/ui.yaml)
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
```
Now let's setup an entry in our `/etc/hosts` file to route `traefik-ui.minikube` to our cluster.
In production you would want to set up real DNS entries.
You can get the IP address of your minikube instance by running `minikube ip`:
```shell
echo "$(minikube ip) traefik-ui.local" | sudo tee -a /etc/hosts
echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
```
We should now be able to visit [traefik-ui.minikube](http://traefik-ui.minikube) in the browser and view the Træfik web UI.
## Basic Authentication
It's possible to protect access to Traefik through basic authentication. (See the [Kubernetes Ingress](/configuration/backends/kubernetes) configuration page for syntactical details and restrictions.)
### Creating the Secret
A. Use `htpasswd` to create a file containing the username and the hashed password:
```shell
htpasswd -c ./auth myusername
```
You will be prompted for a password which you will have to enter twice.
`htpasswd` will create a file with the following:
```shell
cat auth
```
```shell
myusername:$apr1$78Jyn/1K$ERHKVRPPlzAX8eBtLuvRZ0
```
B. Now use `kubectl` to create a secret in the `monitoring` namespace using the file created by `htpasswd`.
```shell
kubectl create secret generic mysecret --from-file auth --namespace=monitoring
```
!!! note
The Secret must be in the same namespace as the Ingress object.
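Equivalently, the Secret can be declared as a manifest. The sketch below assumes the content of the `auth` file has been base64-encoded (for example with `base64 auth`); the encoded string shown is only a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: monitoring
data:
  # base64-encoded content of the ./auth file (placeholder value)
  auth: bXl1c2VybmFtZTokYXByMS4uLg==
```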
C. Attach the following annotations to the Ingress object:
- `ingress.kubernetes.io/auth-type: "basic"`
- `ingress.kubernetes.io/auth-secret: "mysecret"`
They specify basic authentication and reference the Secret `mysecret` containing the credentials.
Following is a full Ingress example based on Prometheus:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: prometheus-dashboard
namespace: monitoring
annotations:
kubernetes.io/ingress.class: traefik
ingress.kubernetes.io/auth-type: "basic"
ingress.kubernetes.io/auth-secret: "mysecret"
spec:
rules:
- host: dashboard.prometheus.example.com
http:
paths:
- backend:
serviceName: prometheus
servicePort: 9090
```
You can apply the example as follows:
```shell
kubectl create -f prometheus-ingress.yaml -n monitoring
```
## Name-based Routing
In this example we are going to set up websites for three of the United Kingdom's best loved cheeses: Cheddar, Stilton, and Wensleydale.
First let's start by launching the pods for the cheese websites.
```yaml
---
@@ -261,13 +440,6 @@ spec:
containers:
- name: cheese
image: errm/cheese:stilton
ports:
- containerPort: 80
---
@@ -294,13 +466,6 @@ spec:
containers:
- name: cheese
image: errm/cheese:cheddar
ports:
- containerPort: 80
---
@@ -327,23 +492,17 @@ spec:
containers:
- name: cheese
image: errm/cheese:wensleydale
ports:
- containerPort: 80
```
[examples/k8s/cheese-deployments.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-deployments.yaml)
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-deployments.yaml
```
Next we need to set up a Service for each of the cheese pods.
```yaml
---
@@ -389,9 +548,8 @@ spec:
task: wensleydale
```
!!! note
We also set a [circuit breaker expression](/basics/#backends) for one of the backends by setting the `traefik.backend.circuitbreaker` annotation on the service.
[examples/k8s/cheese-services.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-services.yaml)
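For reference, the relevant part of such a Service could look like the following sketch; the expression mirrors the one used elsewhere in these docs, and the service name, selector, and port are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: stilton
  annotations:
    # open the circuit when more than half of the requests fail with network errors
    traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: cheese
    task: stilton
```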
@@ -410,21 +568,21 @@ metadata:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: stilton.minikube
http:
paths:
- path: /
backend:
serviceName: stilton
servicePort: http
- host: cheddar.minikube
http:
paths:
- path: /
backend:
serviceName: cheddar
servicePort: http
- host: wensleydale.minikube
http:
paths:
- path: /
@@ -432,38 +590,35 @@ spec:
serviceName: wensleydale
servicePort: http
```
[examples/k8s/cheese-ingress.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-ingress.yaml)
!!! note
We list each hostname, and add a backend service.
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-ingress.yaml
```
Now visit the [Træfik dashboard](http://traefik-ui.minikube/) and you should see a frontend for each host, along with a backend listing for each service with a server set up for each pod.
If you edit your `/etc/hosts` again you should be able to access the cheese websites in your browser.
```shell
echo "$(minikube ip) stilton.local cheddar.local wensleydale.local" | sudo tee -a /etc/hosts
echo "$(minikube ip) stilton.minikube cheddar.minikube wensleydale.minikube" | sudo tee -a /etc/hosts
```
- [Stilton](http://stilton.minikube/)
- [Cheddar](http://cheddar.minikube/)
- [Wensleydale](http://wensleydale.minikube/)
## Path-based Routing
Now let's suppose that our fictional client has decided that while they are super happy about our cheesy web design, when they asked for 3 websites they had not really bargained on having to buy 3 domain names.
No problem, we say, why don't we reconfigure the sites to host all 3 under one domain.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
@@ -474,7 +629,7 @@ metadata:
traefik.frontend.rule.type: PathPrefixStrip
spec:
rules:
- host: cheeses.minikube
http:
paths:
- path: /stilton
@@ -490,45 +645,92 @@ spec:
serviceName: wensleydale
servicePort: http
```
[examples/k8s/cheeses-ingress.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheeses-ingress.yaml)
!!! note
We are configuring Træfik to strip the prefix from the url path with the `traefik.frontend.rule.type` annotation so that we can use the containers from the previous example without modification.
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheeses-ingress.yaml
```
```shell
echo "$(minikube ip) cheeses.local" | sudo tee -a /etc/hosts
echo "$(minikube ip) cheeses.minikube" | sudo tee -a /etc/hosts
```
You should now be able to visit the websites in your browser.
- [cheeses.minikube/stilton](http://cheeses.minikube/stilton/)
- [cheeses.minikube/cheddar](http://cheeses.minikube/cheddar/)
- [cheeses.minikube/wensleydale](http://cheeses.minikube/wensleydale/)
## Specifying Routing Priorities
Sometimes you need to specify priority for ingress routes, especially when handling wildcard routes.
This can be done by adding the `traefik.frontend.priority` annotation, i.e.:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: wildcard-cheeses
annotations:
traefik.frontend.priority: "1"
spec:
rules:
- host: *.minikube
http:
paths:
- path: /
backend:
serviceName: stilton
servicePort: http
kind: Ingress
metadata:
name: specific-cheeses
annotations:
traefik.frontend.priority: "2"
spec:
rules:
- host: specific.minikube
http:
paths:
- path: /
backend:
serviceName: stilton
servicePort: http
```
Note that priority values must be quoted to avoid numeric interpretation (which is illegal for annotations).
## Forwarding to ExternalNames
When specifying an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors),
Træfik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.
This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.
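A hedged sketch of what this could look like; the external hostname, service name, and port mapping are illustrative:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: static.otherdomain.com
  ports:
  - port: 443        # a Service port of 443 makes Træfik use HTTPS towards the external host
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-service
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: external.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: external-service
          servicePort: 443
```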
## Disable passing the Host Header
By default Træfik will pass the incoming Host header to the upstream resource.
However, there are times when you may not want this to be the case. For example, if your service is of the ExternalName type.
### Disable globally
Add the following to your TOML configuration file:
```toml
disablePassHostHeaders = true
```
### Disable per Ingress
To disable passing the Host header per ingress resource set the `traefik.frontend.passHostHeader` annotation on your ingress to `"false"`.
Here is an example ingress definition:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
@@ -549,6 +751,7 @@ spec:
```
And an example service definition:
```yaml
apiVersion: v1
kind: Service
@@ -562,19 +765,39 @@ spec:
externalName: static.otherdomain.com
```
If you were to visit `example.com/static` the request would then be passed on to `static.otherdomain.com/static`, and `static.otherdomain.com` would receive the request with the Host header being `static.otherdomain.com`.
!!! note
The per-ingress annotation overrides whatever the global value is set to.
So you could set `disablePassHostHeaders` to `true` in your TOML configuration file and then enable passing the host header per ingress if you wanted.
## Partitioning the Ingress object space
By default, Træfik processes every Ingress object it observes. At times, however, it may be desirable to ignore certain objects. The following sub-sections describe common use cases and how they can be handled with Træfik.
### Between Træfik and other Ingress controller implementations
Sometimes Træfik runs alongside other Ingress controller implementations. One such example is when both Træfik and a cloud provider Ingress controller are active.
The `kubernetes.io/ingress.class` annotation can be attached to any Ingress object in order to control whether Træfik should handle it.
If the annotation is missing, contains an empty value, or the value `traefik`, then the Træfik controller will take responsibility and process the associated Ingress object. If the annotation contains any other value (usually the name of a different Ingress controller), Træfik will ignore the object.
### Between multiple Træfik Deployments
Sometimes multiple Træfik Deployments are supposed to run concurrently. For instance, it is conceivable to have one Deployment deal with internal and another one with external traffic.
For such cases, it is advisable to classify Ingress objects through a label and configure the `labelSelector` option per each Træfik Deployment accordingly. To stick with the internal/external example above, all Ingress objects meant for internal traffic could receive a `traffic-type: internal` label while objects designated for external traffic receive a `traffic-type: external` label. The label selectors on the Træfik Deployments would then be `traffic-type=internal` and `traffic-type=external`, respectively.
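As a sketch, an Ingress object meant for the internal Træfik Deployment would simply carry the corresponding label, while that Træfik instance is started with a matching `labelSelector`; the service name and hostname below are hypothetical:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-dashboard
  labels:
    # matched by a Træfik instance configured with the label selector "traffic-type=internal"
    traffic-type: internal
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.internal.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: dashboard
          servicePort: http
```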
## Production advice
### Resource limitations
The examples shown deliberately do not specify any [resource limitations](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) as there is no one size fits all.
In a production environment, however, it is important to set proper bounds, especially with regards to CPU:
- too strict and Traefik will be throttled while serving requests (as Kubernetes imposes hard quotas)
- too loose and Traefik may waste resources that are then not available to other containers
When in doubt, you should measure your resource needs, and adjust requests and limits accordingly.
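For example, a container spec for Træfik could start from modest values and be tuned after measuring real traffic; the numbers below are placeholders rather than recommendations:

```yaml
containers:
- image: traefik
  name: traefik-ingress-lb
  resources:
    requests:
      cpu: 100m      # placeholder: adjust after measuring
      memory: 20Mi
    limits:
      cpu: 200m
      memory: 30Mi
```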


@@ -1,7 +1,6 @@
# Key-value store configuration
Both [static global configuration](/user-guide/kv-config/#static-configuration-in-key-value-store) and [dynamic](/user-guide/kv-config/#dynamic-configuration-in-key-value-store) configuration can be stored in a Key-value store.
This section explains how to launch Træfik using a configuration loaded from a Key-value store.
@@ -9,20 +8,23 @@ Træfik supports several Key-value stores:
- [Consul](https://consul.io)
- [etcd](https://coreos.com/etcd/)
- [ZooKeeper](https://zookeeper.apache.org/)
- [boltdb](https://github.com/boltdb/bolt)
## Static configuration in Key-value store
We will see the steps to set it up with an easy example.
!!! note
We could do the same with any other Key-value Store.
### docker-compose file for Consul
The Træfik global configuration will be retrieved from a [Consul](https://consul.io) store.
First we have to launch Consul in a container.
The [docker-compose file](https://docs.docker.com/compose/compose-file/) allows us to launch Consul and four instances of the trivial app [emilevauge/whoamI](https://github.com/emilevauge/whoamI):
```yaml
consul:
@@ -37,27 +39,27 @@ consul:
- "8301"
- "8301/udp"
- "8302"
- "8302/udp"
- "8302/udp"
whoami1:
image: emilevauge/whoami
whoami2:
image: emilevauge/whoami
whoami3:
image: emilevauge/whoami
whoami4:
image: emilevauge/whoami
```
### Upload the configuration in the Key-value store
We should now fill the store with the Træfik global configuration, as we do with a [TOML file configuration](/toml).
To do that, we can send the Key-value pairs via [curl commands](https://www.consul.io/intro/getting-started/kv.html) or via the [Web UI](https://www.consul.io/intro/getting-started/ui.html).
Fortunately, Træfik allows automation of this process using the `storeconfig` subcommand.
Please refer to the [store Træfik configuration](/user-guide/kv-config/#store-configuration-in-key-value-store) section to get documentation on it.
Here is the TOML configuration we would like to store in the Key-value Store:
@@ -68,59 +70,68 @@ logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.api]
address = ":8081"
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
CertFile = "integration/fixtures/https/snitest.com.cert"
KeyFile = "integration/fixtures/https/snitest.com.key"
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
CertFile = """-----BEGIN CERTIFICATE-----
certFile = """-----BEGIN CERTIFICATE-----
<cert file content>
-----END CERTIFICATE-----"""
KeyFile = """-----BEGIN CERTIFICATE-----
keyFile = """-----BEGIN CERTIFICATE-----
<key file content>
-----END CERTIFICATE-----"""
[entryPoints.other-https]
address = ":4443"
[entryPoints.other-https.tls]
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
[api]
entrypoint = "api"
```
And there, the same global configuration in the Key-value Store (using `prefix = "traefik"`):
| Key | Value |
|-----------------------------------------------------------|---------------------------------------------------------------|
| `/traefik/loglevel` | `DEBUG` |
| `/traefik/defaultentrypoints/0` | `http` |
| `/traefik/defaultentrypoints/1` | `https` |
| `/traefik/entrypoints/api/address` | `:8081` |
| `/traefik/entrypoints/http/address` | `:80` |
| `/traefik/entrypoints/https/address` | `:443` |
| `/traefik/entrypoints/https/tls/certificates/0/certfile` | `integration/fixtures/https/snitest.com.cert` |
| `/traefik/entrypoints/https/tls/certificates/0/keyfile` | `integration/fixtures/https/snitest.com.key` |
| `/traefik/entrypoints/https/tls/certificates/1/certfile` | `--BEGIN CERTIFICATE--<cert file content>--END CERTIFICATE--` |
| `/traefik/entrypoints/https/tls/certificates/1/keyfile` | `--BEGIN CERTIFICATE--<key file content>--END CERTIFICATE--` |
| `/traefik/entrypoints/other-https/address` | `:4443` |
| `/traefik/consul/endpoint` | `127.0.0.1:8500` |
| `/traefik/consul/watch` | `true` |
| `/traefik/consul/prefix` | `traefik` |
| `/traefik/api/entrypoint` | `api` |
In case you are setting key values manually:
- Remember to specify the indexes (`0`,`1`, `2`, ... ) under prefixes `/traefik/defaultentrypoints/` and `/traefik/entrypoints/https/tls/certificates/` in order to match the global configuration structure.
- Be careful to give the correct IP address and port on the key `/traefik/consul/endpoint`.
Note that we can either give the path to a certificate file or the file content itself directly.
### Launch Træfik
We will now launch Træfik in a container.
We use CLI flags to setup the connection between Træfik and Consul.
All the rest of the global configuration is stored in Consul.
@@ -135,11 +146,57 @@ traefik:
- "8080:8080"
```
!!! warning
Be careful to give the correct IP address and port in the flag `--consul.endpoint`.
### Consul ACL Token support
To specify a Consul ACL token for Traefik, we have to set a System Environment variable named `CONSUL_HTTP_TOKEN` prior to starting Traefik.
This variable must be initialized with the ACL token value.
If Traefik is launched into a Docker container, the variable `CONSUL_HTTP_TOKEN` can be initialized with the `-e` Docker option: `-e "CONSUL_HTTP_TOKEN=[consul-acl-token-value]"`
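A minimal sketch with docker-compose, following the compose format used earlier in this guide (the token value is a placeholder):

```yaml
traefik:
  image: traefik:1.5
  command: --consul --consul.endpoint=127.0.0.1:8500
  environment:
    # placeholder ACL token; Træfik reads it from CONSUL_HTTP_TOKEN
    - "CONSUL_HTTP_TOKEN=3f8aaa14-ffe1-4a0a-8202-c79b4c9aa734"
  ports:
    - "80:80"
    - "8080:8080"
```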
If a Consul ACL is used to restrict Træfik read/write access, one of the following configurations is needed.
- HCL format :
```
key "traefik" {
policy = "write"
},
session "" {
policy = "write"
}
```
- JSON format :
```json
{
"key": {
"traefik": {
"policy": "write"
}
},
"session": {
"": {
"policy": "write"
}
}
}
```
### TLS support
To connect to a Consul endpoint using SSL, simply specify `https://` in the `consul.endpoint` property
- `--consul.endpoint=https://[consul-host]:[consul-ssl-port]`
### TLS support with client certificates
So far, only [Consul](https://consul.io) and [etcd](https://coreos.com/etcd/) support TLS connections with client certificates.
To set it up, we should enable [consul security](https://www.consul.io/docs/internals/security.html) (or [etcd security](https://coreos.com/etcd/docs/latest/security.html)).
Then, we have to provide CA, Cert and Key to Træfik using `consul` flags :
@@ -147,7 +204,7 @@ Then, we have to provide CA, Cert and Key to Træfik using `consul` flags :
- `--consul.tls`
- `--consul.tls.ca=path/to/the/file`
- `--consul.tls.cert=path/to/the/file`
- `--consul.tls.key=path/to/the/file`
Or etcd flags :
@@ -156,17 +213,21 @@ Or etcd flags :
- `--etcd.tls.cert=path/to/the/file`
- `--etcd.tls.key=path/to/the/file`
!!! note
We can either directly give the file content itself (instead of the path to the certificate) in a TOML file configuration.
Remember the command `traefik --help` to display the updated list of flags.
## Dynamic configuration in Key-value store
Following our example, we will provide backends/frontends rules and HTTPS certificates to Træfik.
!!! note
This section is independent of the way Træfik got its static configuration.
It means that the static configuration can either come from the same Key-value store or from any other source.
### Key-value storage structure
Here is the TOML configuration we would like to store in the store:
```toml
@@ -176,7 +237,7 @@ Here is the toml configuration we would like to store in the store :
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
@@ -185,10 +246,10 @@ Here is the toml configuration we would like to store in the store :
weight = 1
[backends.backend2]
[backends.backend1.maxconn]
amount = 10
extractorfunc = "request.host"
[backends.backend2.LoadBalancer]
method = "drr"
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
@@ -211,10 +272,25 @@ Here is the toml configuration we would like to store in the store :
[frontends.frontend3]
entrypoints = ["http", "https"] # overrides defaultEntryPoints
backend = "backend2"
rule = "Path:/test"
rule = "Path:/test"
[[tls]]
entryPoints = ["https"]
[tls.certificate]
certFile = "path/to/your.cert"
keyFile = "path/to/your.key"
[[tls]]
entryPoints = ["https","other-https"]
[tls.certificate]
certFile = """-----BEGIN CERTIFICATE-----
<cert file content>
-----END CERTIFICATE-----"""
keyFile = """-----BEGIN CERTIFICATE-----
<key file content>
-----END CERTIFICATE-----"""
```
And there, the same dynamic configuration in a KV Store (using `prefix = "traefik"`):
- backend 1
@@ -257,13 +333,37 @@ And there, the same dynamic configuration in a KV Store (using `prefix = "traefi
| `/traefik/frontends/frontend2/entrypoints` | `http,https` |
| `/traefik/frontends/frontend2/routes/test_2/rule` | `PathPrefix:/test` |
- certificate 1
| Key | Value |
|---------------------------------------|--------------------|
| `/traefik/tls/1/entrypoints` | `https` |
| `/traefik/tls/1/certificate/certfile` | `path/to/your.cert`|
| `/traefik/tls/1/certificate/keyfile` | `path/to/your.key` |
- certificate 2
| Key | Value |
|---------------------------------------|-----------------------|
| `/traefik/tls/2/entrypoints` | `https,other-https` |
| `/traefik/tls/2/certificate/certfile` | `<cert file content>` |
| `/traefik/tls/2/certificate/keyfile` | `<key file content>` |
### Atomic configuration changes
Træfik can watch the backends/frontends configuration changes and generate its configuration automatically.
!!! note
Only backends/frontends rules are dynamic, the rest of the Træfik configuration stays static.
The [Etcd](https://github.com/coreos/etcd/issues/860) and [Consul](https://github.com/hashicorp/consul/issues/886) backends do not support updating multiple keys atomically.
As a result, it may be possible for Træfik to read an intermediate configuration state despite judicious use of the `--providersThrottleDuration` flag.
To solve this problem, Træfik supports a special key called `/traefik/alias`.
If set, Træfik uses the value as an alternative key prefix.
!!! note
The field `useAPIV3` allows using Etcd V3 API which should support updating multiple keys atomically with Etcd.
Etcd API V2 is deprecated and, in the future, Træfik will support API V3 by default.
Given the key structure below, Træfik will use the `http://172.17.0.2:80` as its only backend (frontend keys have been omitted for brevity).
@@ -273,7 +373,9 @@ Given the key structure below, Træfik will use the `http://172.17.0.2:80` as it
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
When an atomic configuration change is required, you may write a new configuration at an alternative prefix.
Here, although the `/traefik_configurations/2/...` keys have been set, the old configuration is still active because the `/traefik/alias` key still points to `/traefik_configurations/1`:
| Key | Value |
|-------------------------------------------------------------------------|-----------------------------|
@@ -281,11 +383,13 @@ When an atomic configuration change is required, you may write a new configurati
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
| `/traefik_configurations/2/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/2/backends/backend1/servers/server1/weight` | `5` |
| `/traefik_configurations/2/backends/backend1/servers/server2/url` | `http://172.17.0.3:80` |
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5` |
Once the `/traefik/alias` key is updated, the new `/traefik_configurations/2` configuration becomes active atomically.
Here, we have a 50% balance between the `http://172.17.0.3:80` and the `http://172.17.0.4:80` hosts while no traffic is sent to the `172.17.0.2:80` host:
| Key | Value |
|-------------------------------------------------------------------------|-----------------------------|
@@ -293,26 +397,32 @@ Once the `/traefik/alias` key is updated, the new `/traefik_configurations/2` co
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
| `/traefik_configurations/2/backends/backend1/servers/server1/url` | `http://172.17.0.3:80` |
| `/traefik_configurations/2/backends/backend1/servers/server1/weight` | `5` |
| `/traefik_configurations/2/backends/backend1/servers/server2/url` | `http://172.17.0.4:80` |
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5` |
!!! note
Træfik *will not watch for key changes in the `/traefik_configurations` prefix*. It will only watch for changes in the `/traefik/alias`.
Further, if the `/traefik/alias` key is set, all other configuration with `/traefik/backends` or `/traefik/frontends` prefix are ignored.
## Store configuration in Key-value store
!!! note
Don't forget to [setup the connection between Træfik and Key-value store](/user-guide/kv-config/#launch-trfk).
The static Træfik configuration in a key-value store can be automatically created and updated, using the [`storeconfig` subcommand](/basics/#commands).
```bash
traefik storeconfig [flags] ...
```
This command is here only to automate the [process which uploads the configuration into the Key-value store](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store).
Træfik will not start but the [static configuration](/basics/#static-trfik-configuration) will be uploaded into the Key-value store.
If you configured ACME (Let's Encrypt), your registration account and your certificates will also be uploaded.
If you configured a file backend `[file]`, all your dynamic configuration (backends, frontends...) will be uploaded to the Key-value store.
To upload your ACME certificates to the KV store, get your Traefik TOML file and add the new `storage` option in the `acme` section:
```toml
[acme]
@@ -326,4 +436,4 @@ Then remove the line `storageFile = "acme.json"` from your TOML config file.
That's it!
![](https://i.giphy.com/ujUdrdpX7Ok5W.gif)


# Marathon
This guide explains how to integrate Marathon and operate the cluster in a reliable way from Traefik's standpoint.
## Host detection
Marathon offers multiple ways to run (Docker-containerized) applications, the most popular ones being
Traefik tries to detect the configured mode and route traffic to the right IP addresses. It is possible to force using task hosts with the `forceTaskHostname` option.
Given the complexity of the subject, it is possible that the heuristic fails.
Apart from filing an issue and waiting for the feature request / bug report to get addressed, one workaround for such situations is to customize the Marathon template file to the individual needs.
!!! note
    This does _not_ require rebuilding Traefik; it only requires pointing the `filename` configuration parameter to a customized version of the `marathon.tmpl` file on Traefik startup.
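As a sketch, assuming the Marathon provider is already enabled, such an override could look like this (the endpoint and template path are illustrative):

```toml
[marathon]
endpoint = "http://127.0.0.1:8080"
# Use a customized copy of the default template instead of the built-in one.
filename = "custom_marathon.tmpl"
```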
## Port detection
Traefik also attempts to determine the right port (which is a [non-trivial matter in Marathon](https://mesosphere.github.io/marathon/docs/ports.html)).
Following is the order by which Traefik tries to identify the port (the first one that yields a positive result will be used):
1. An arbitrary port specified through the `traefik.port` label.
1. The task port (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
1. The port from the application's `portDefinitions` field (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
1. The port from the application's `ipAddressPerTask` field (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
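For example, to select the second port declared by a task instead of the first one, a label such as the following could be set on the application (the index value is illustrative):

```
traefik.portIndex=1
```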
## Applications with multiple ports
Some Marathon applications may expose multiple ports. Traefik supports creating one so-called _service_ per port using [specific labels](/configuration/backends/marathon#service-level).
For instance, assume that a Marathon application exposes a web API on port 80 and an admin interface on port 8080. It would then be possible to make each service available by specifying the following Marathon labels:
```
traefik.web.port=80
traefik.admin.port=8080
```
(Note that the service names `web` and `admin` can be chosen arbitrarily.)
Technically, Traefik will create one pair of frontend and backend configurations for each service.
## Achieving high availability
### Scenarios
There are three scenarios where the availability of a Marathon application could be impaired along with the risk of losing or failing requests:
- During the startup phase when Traefik already routes requests to the backend even though it has not completed its startup sequence yet.
- During the shutdown phase when Traefik still routes requests to the backend while the backend is already terminating.
- During a failure of the application when Traefik has not yet identified the backend as being erroneous.
The first two scenarios are common with every rolling upgrade of an application (i.e. a new version release or configuration update).
The following sub-sections describe how to resolve or mitigate each scenario.
#### Startup
It is possible to define [readiness checks](https://mesosphere.github.io/marathon/docs/readiness-checks.html) (available since Marathon version 1.1) per application and have Marathon take these into account during the startup phase.
The idea is that each application provides an HTTP endpoint that Marathon queries periodically during an ongoing deployment in order to mark the associated readiness check result as successful if and only if the endpoint returns a response within the configured HTTP code range.
As long as the check keeps failing, Marathon will not proceed with the deployment (within the configured upgrade strategy bounds).
Beginning with version 1.4, Traefik respects readiness check results if the Traefik option is set and checks are configured on the applications accordingly.
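A minimal sketch of what enabling this looks like in the Marathon provider section (assuming readiness checks are defined on the Marathon applications themselves; the endpoint is illustrative):

```toml
[marathon]
endpoint = "http://127.0.0.1:8080"
# Only route traffic to tasks whose Marathon readiness checks have passed.
respectReadinessChecks = true
```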
!!! note
    Due to the way readiness check results are currently exposed by the Marathon API, ready tasks may be taken into rotation with a small delay.
    It is on the order of one readiness check timeout interval (as configured on the application specification) and guarantees that non-ready tasks do not receive traffic prematurely.
If readiness checks are not possible, a current mitigation strategy is to enable [retries](/configuration/commons#retry-configuration) and make sure that a sufficient number of healthy application tasks exist so that one retry will likely hit one of those.
Apart from its probabilistic nature, the workaround comes at the price of increased latency.
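As a sketch, retries are enabled through the static configuration; the number of attempts below is illustrative:

```toml
# Retry a failed request against another backend server.
[retry]
attempts = 2
```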
#### Shutdown
It is possible to install a [termination handler](https://mesosphere.github.io/marathon/docs/health-checks.html) (available since Marathon version 1.3) with each application whose responsibility it is to delay the shutdown process long enough until the backend has been taken out of load-balancing rotation with reasonable confidence (i.e., Traefik has received an update from the Marathon event bus, recomputes the available Marathon backends, and applies the new configuration).
Specifically, each termination handler should install a signal handler listening for a SIGTERM signal and implement the following steps on signal reception:
1. Disable Keep-Alive HTTP connections.
1. Keep accepting HTTP requests for a certain period of time.
Traefik already ignores Marathon tasks whose state does not match `TASK_RUNNING`; since terminating tasks transition into the `TASK_KILLING` and eventually `TASK_KILLED` state, there is nothing further that needs to be done on Traefik's end.
How long HTTP requests should continue to be accepted in step 2 depends on how long Traefik needs to receive and process the Marathon configuration update.
Under regular operational conditions, it should be on the order of seconds, with 10 seconds possibly being a good default value.
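One way to realize such a delay without touching the application code is a small entrypoint wrapper around the task. The sketch below is illustrative only (the binary name, the delay, and the signal handling are assumptions, and disabling Keep-Alive connections still has to happen inside the application itself):

```bash
#!/bin/sh
# Illustrative entrypoint wrapper for a Marathon task.
/usr/local/bin/my-app &      # start the real application in the background
APP_PID=$!

on_term() {
    # Keep the application serving while Traefik processes the Marathon
    # event and removes this task from rotation (cf. the ~10s above).
    sleep 10
    # Then forward the termination signal so the application can finish
    # in-flight requests and exit.
    kill -TERM "$APP_PID"
}

trap on_term TERM

# Wait for the application; wait again in case the first wait was
# interrupted by the trapped signal.
wait "$APP_PID"
wait "$APP_PID"
```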
Again, configuring Traefik to do retries (as discussed in the previous section) can serve as a decent workaround strategy.
Paired with termination handlers, they would cover for those cases where either the termination sequence or Traefik cannot complete their part of the orchestration process in time.
#### Failure
A failing application always happens unexpectedly, and hence, it is very difficult or even impossible to rule out the adverse effects categorically.
Failure reasons vary broadly, ranging from unacceptable slowness to a task crash or a network split.
There are two mitigation efforts:
1. Configure [Marathon health checks](https://mesosphere.github.io/marathon/docs/health-checks.html) on each application.
1. Configure Traefik health checks (possibly via the `traefik.backend.healthcheck.*` labels) and make sure they probe with proper frequency.
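For the second point, a minimal sketch of such labels on a Marathon application (the path and interval are illustrative):

```
traefik.backend.healthcheck.path=/health
traefik.backend.healthcheck.interval=10s
```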
The Marathon health check makes sure that applications once deemed dysfunctional are being rescheduled to different slaves.
However, they might take a while to get triggered and the follow-up processes to complete.
For that reason, the Traefik health check provides an additional check that responds more rapidly and does not require a configuration reload to happen.
Additionally, it protects from cases that the Marathon health check may not be able to cover, such as a network split.
### (Non-)Alternatives
There are a few alternatives of varying quality that are frequently asked for.
The remaining section is going to explore them along with a benefit/cost trade-off.
#### Reusing Marathon health checks
It may seem obvious to reuse the Marathon health checks as a signal to Traefik whether an application should be taken into load-balancing rotation or not.
Apart from the increased latency a failing health check may have, a major problem with this is that Marathon does not persist the health check results.
Consequently, if a master re-election occurs in the Marathon clusters, all health check results will revert to the _unknown_ state, effectively causing all applications inside the cluster to become unavailable and leading to a complete cluster failure.
Re-elections do not only happen during regular maintenance work (often requiring rolling upgrades of the Marathon nodes) but also when the Marathon leader fails spontaneously.
As such, there is no way to handle this situation deterministically.
Finally, Marathon health checks are not mandatory (the default is to use the task state as reported by Mesos), so requiring them for Traefik would raise the entry barrier for Marathon users.
Traefik used to use the health check results as a strict requirement but moved away from it as [users reported the dramatic consequences](https://github.com/containous/traefik/issues/653).
If health check results are known to exist, however, they will be used to signal task availability.
#### Draining
Another common approach is to let a proxy drain backends that are supposed to shut down.
That is, once a backend is supposed to shut down, Traefik would stop forwarding requests.
On the plus side, this would not require any modifications to the application in question.
However, implementing this fully within Traefik seems like a non-trivial undertaking.
Additionally, the approach is less flexible compared to a custom termination handler since only the latter allows for the implementation of custom termination sequences that go beyond simple request draining (e.g., persisting a snapshot state to disk prior to terminating).
The feature is currently not implemented; a request for draining in general is at [issue 41](https://github.com/containous/traefik/issues/41).
