Mirror of https://github.com/altlinux/gpupdate.git (synced 2025-03-14 16:58:25 +03:00)

Compare commits: 0.9.11.1-a...master, 526 commits
completions/gpoa (new file, 22 lines)
@@ -0,0 +1,22 @@
_gpoa()
{
    local cur prev words cword split
    _init_completion -s || return

    case $prev in
        --dc)
            _filedir
            return
            ;;
        --loglevel)
            COMPREPLY=($(compgen -W '0 1 2 3 4 5' -- "$cur"))
            return
            ;;
        *)
            COMPREPLY=($(compgen -W '--dc --nodomain --noupdate --noplugins --list-backends --loglevel --help --force' -- "$cur"))
            return
            ;;
    esac
}

complete -F _gpoa gpoa
completions/gpupdate (new file, 27 lines)
@@ -0,0 +1,27 @@
_gpupdate()
{
    local cur prev words cword split
    _init_completion -s || return

    case $prev in
        -u|--user)
            _filedir
            return
            ;;
        -t|--target)
            COMPREPLY=($(compgen -W 'ALL USER COMPUTER' -- "$cur"))
            return
            ;;
        -l|--loglevel)
            COMPREPLY=($(compgen -W '0 1 2 3 4 5' -- "$cur"))
            return
            ;;
        *)
            COMPREPLY=($(compgen -W '--user --target --loglevel --system --help --force' -- "$cur"))
            return
            ;;
    esac
}

complete -F _gpupdate gpupdate
completions/gpupdate-setup (new file, 18 lines)
@@ -0,0 +1,18 @@
_gpupdate-setup()
{
    local cur prev words cword split
    _init_completion -s || return

    case $prev in
        set-backend)
            COMPREPLY=($(compgen -W 'local samba' -- "$cur"))
            return
            ;;
        *)
            COMPREPLY=($(compgen -W 'list list-backends status enable disable update write set-backend default-policy active-policy active-backend' -- "$cur"))
            return
            ;;
    esac
}

complete -F _gpupdate-setup gpupdate-setup
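The three completion scripts above follow the usual bash-completion layout (they rely on _init_completion). A minimal sketch of trying one without packaging it, assuming the bash-completion package is installed; the system-wide path below is the bash-completion convention, not something this repository mandates:

    source completions/gpupdate          # load into the current shell
    gpupdate --t<TAB>                    # completes to --target
    # conventional system-wide location (assumption, adjust to your distro):
    install -m 0644 completions/gpupdate /usr/share/bash-completion/completions/gpupdate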
dist/gpupdate-scripts-run-user.service (vendored, 1 line changed)
@@ -1,6 +1,5 @@
[Unit]
Description=Run Group Policy scripts for a user
After=gpupdate-user.service

[Service]
Type=oneshot
dist/gpupdate-user.timer (vendored, 2 lines changed)
@@ -2,7 +2,7 @@
Description=Run gpupdate-user every hour

[Timer]
OnStartupSec=1
OnStartupSec=60min
OnUnitActiveSec=60min

[Install]
dist/gpupdate.timer (vendored, 2 lines changed)
@@ -2,7 +2,7 @@
Description=Run gpupdate every hour

[Timer]
OnStartupSec=1
OnStartupSec=60min
OnUnitActiveSec=60min

[Install]
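Both timers now wait 60 minutes after boot (OnStartupSec) and then repeat hourly (OnUnitActiveSec). A quick way to confirm the schedule on a running system, using only stock systemd tooling rather than anything project-specific:

    systemctl list-timers gpupdate.timer gpupdate-user.timer
    systemctl cat gpupdate.timer    # shows the [Timer] values actually in effect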
dist/system-policy-gpupdate (vendored, 9 lines changed)
@@ -2,11 +2,12 @@
session [success=2 perm_denied=ignore default=die] pam_localuser.so
session substack gpupdate-remote-policy
session [default=1] pam_permit.so
session [default=6] pam_permit.so
session [default=7] pam_permit.so
session [success=1 default=ignore] pam_succeed_if.so user ingroup users quiet
session [default=4] pam_permit.so
session [default=5] pam_permit.so
session [success=1 default=ignore] pam_succeed_if.so uid >= 500 quiet
session [default=2] pam_permit.so
session [default=3] pam_permit.so
session [success=1 default=ignore] pam_succeed_if.so service = systemd-user quiet
-session required pam_oddjob_gpupdate.so
session optional pam_env.so user_readenv=1 conffile=/etc/gpupdate/environment user_envfile=.gpupdate_environment
session required pam_permit.so
session required pam_permit.so
@@ -45,6 +45,9 @@ Don't run plugins.
.TP
\fB--loglevel \fILOGLEVEL\fP
Set logging verbosity from 0 to 5.
.TP
\fB--force\fP
Force GPT download.
.
.SH FILES
\fB/usr/sbin/gpoa\fR utility uses \fB/usr/share/local-policy/default\fR
@@ -55,8 +58,10 @@ All data is located in \fB/var/cache/gpupdate\fR. Also domain GPTs are
taken from Samba's \fB/var/cache/samba\fR.
.
The settings read from Samba are stored in
\fB/var/cache/gpupdate/registry.sqlite\fR and "Local Policy" settings
read from \fB/usr/local/share/local-policy/default\fR are converted
Dconf. Machine policies are stored in the \fB/etc/dconf/db/policy.d/policy.ini\fR file,
user policies are stored in the \fB/etc/dconf/db/policy<UID>.d/policy<UID>.ini\fR file
(where UID is the user ID in the system). "Local Policy" settings
read from \fB/usr/share/local-policy/\fR are converted
into GPT and stored as \fB/var/cache/gpupdate/local-policy\fR.
.SH "SEE ALSO"
gpupdate(1)
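The \fB--force\fP and \fB--loglevel\fP options documented above match what the gpoa completion script offers. A hedged sketch of how an administrator might combine them while debugging policy application (the host name is a placeholder):

    gpoa --loglevel 5 --force                  # verbose logging, force GPT download
    gpoa --loglevel 5 --dc dc0.example.test    # additionally pin a specific domain controller (example host)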
@@ -43,6 +43,9 @@ Show help.
.TP
\fB--user \fIusername\fR
Run \fBgpupdate\fP for \fIusername\fP.
.TP
\fB--force\fP
Force GPT download.
.
.SS "EXIT CODES"
.TP
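For gpupdate the same pattern applies; a sketch combining the options described above and in the completion script (the account name is a placeholder):

    gpupdate --force                           # force GPT download, as documented above
    gpupdate --user someuser --target USER     # apply only the user part of policy for one account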
@@ -22,6 +22,9 @@ from .samba_backend import samba_backend
from .nodomain_backend import nodomain_backend
from util.logging import log
from util.config import GPConfig
from util.util import get_uid_by_username, touch_file
from util.paths import get_dconf_config_file
from storage.dconf_registry import Dconf_registry, create_dconf_ini_file, add_preferences_to_global_registry_dict

def backend_factory(dc, username, is_machine, no_domain = False):
    '''
@@ -59,3 +62,14 @@ def backend_factory(dc, username, is_machine, no_domain = False):

    return back


def save_dconf(username, is_machine, nodomain=None):
    if is_machine:
        uid = None
    else:
        uid = get_uid_by_username(username) if not is_machine else None
    target_file = get_dconf_config_file(uid)
    touch_file(target_file)
    Dconf_registry.apply_template(uid)
    add_preferences_to_global_registry_dict(username, is_machine)
    Dconf_registry.update_dict_to_previous()
    create_dconf_ini_file(target_file, Dconf_registry.global_registry_dict, uid, nodomain)
@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -16,18 +16,14 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import logging
import os

from .applier_backend import applier_backend
from storage import registry_factory
from gpt.gpt import gpt, get_local_gpt
from gpt.gpt import get_local_gpt
from util.util import (
    get_machine_name
)
from util.sid import get_sid
import util.preg
from util.logging import slogm

class nodomain_backend(applier_backend):

@@ -35,7 +31,7 @@ class nodomain_backend(applier_backend):
        domain = None
        machine_name = get_machine_name()
        machine_sid = get_sid(domain, machine_name, True)
        self.storage = registry_factory('registry')
        self.storage = registry_factory()
        self.storage.set_info('domain', domain)
        self.storage.set_info('machine_name', machine_name)
        self.storage.set_info('machine_sid', machine_sid)
@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,32 +18,35 @@
import os
# Facility to determine GPTs for user
from samba.gpclass import check_safe_path
try:
    from samba.gpclass import check_safe_path
except ImportError:
    from samba.gp.gpclass import check_safe_path

from .applier_backend import applier_backend
from storage import cache_factory, registry_factory
from storage import registry_factory
from gpt.gpt import gpt, get_local_gpt
from gpt.gpo_dconf_mapping import GpoInfoDconf
from util.util import (
    get_machine_name,
    is_machine_name
    get_machine_name
)
from util.kerberos import (
    machine_kinit
    , machine_kdestroy
)
from util.sid import get_sid
import util.preg
from util.logging import log

class samba_backend(applier_backend):
    __user_policy_mode_key = 'Software\\Policies\\Microsoft\\Windows\\System\\UserPolicyMode'
    __user_policy_mode_key = '/SOFTWARE/Policies/Microsoft/Windows/System/UserPolicyMode'
    __user_policy_mode_key_win = '/Software/Policies/Microsoft/Windows/System/UserPolicyMode'

    def __init__(self, sambacreds, username, domain, is_machine):
        self.cache_path = '/var/cache/gpupdate/creds/krb5cc_{}'.format(os.getpid())
        self.__kinit_successful = machine_kinit(self.cache_path)
        if not self.__kinit_successful:
            raise Exception('kinit is not successful')
        self.storage = registry_factory('registry')
        self.storage = registry_factory()
        self.storage.set_info('domain', domain)
        machine_name = get_machine_name()
        machine_sid = get_sid(domain, machine_name, is_machine)
@@ -58,13 +61,13 @@ class samba_backend(applier_backend):
        else:
            self.sid = get_sid(self.storage.get_info('domain'), self.username)

        self.cache = cache_factory('regpol_cache')
        self.gpo_names = cache_factory('gpo_names')

        # Samba objects - LoadParm() and CredentialsOptions()
        self.sambacreds = sambacreds

        self.cache_dir = self.sambacreds.get_cache_dir()
        self.gpo_cache_part = 'gpo_cache'
        self._cached = False
        self.storage.set_info('cache_dir', os.path.join(self.cache_dir, self.gpo_cache_part))
        logdata = dict({'cachedir': self.cache_dir})
        log('D7', logdata)
@@ -78,9 +81,11 @@
        is possible to work with user's part of GPT. This value is
        checked only if working for user's SID.
        '''
        upm = self.storage.get_hklm_entry(self.__user_policy_mode_key)
        if upm and upm.data:
            upm = int(upm.data)
        upm_key = self.storage.get_key_value(self.__user_policy_mode_key)
        upm_win_key = self.storage.get_key_value(self.__user_policy_mode_key_win)
        upm = upm_key if upm_key else upm_win_key
        if upm:
            upm = int(upm)
            if upm < 0 or upm > 2:
                upm = 0
        else:
@@ -101,8 +106,6 @@ class samba_backend(applier_backend):
                raise exc

        if self._is_machine_username:
            self.storage.wipe_hklm()
            self.storage.wipe_user(self.storage.get_info('machine_sid'))
            for gptobj in machine_gpts:
                try:
                    gptobj.merge_machine()
@@ -120,7 +123,6 @@ class samba_backend(applier_backend):
                except Exception as exc:
                    log('F3')
                    raise exc
            self.storage.wipe_user(self.sid)

            # Merge user settings if UserPolicyMode set accordingly
            # and user settings (for HKCU) are exist.
@@ -140,6 +142,7 @@ class samba_backend(applier_backend):
            if policy_mode > 0:
                for gptobj in machine_gpts:
                    try:
                        gptobj.sid = self.sid
                        gptobj.merge_user()
                    except Exception as exc:
                        logdata = dict()
@@ -150,10 +153,15 @@ class samba_backend(applier_backend):
        '''
        Check if there is SYSVOL path for GPO assigned
        '''
        self._cached = False
        if not gpo.file_sys_path:
            # GPO named "Local Policy" has no entry by its nature so
            # no reason to print warning.
            if 'Local Policy' != gpo.name:
            if gpo.display_name in self.storage._dict_gpo_name_version_cache.keys():
                gpo.file_sys_path = self.storage._dict_gpo_name_version_cache.get(gpo.display_name, {}).get('correct_path')
                self._cached = True
                return True
            elif 'Local Policy' != gpo.name:
                logdata = dict({'gponame': gpo.name})
                log('W4', logdata)
            return False
@@ -168,11 +176,18 @@ class samba_backend(applier_backend):
        log('D46')
        for gpo in gpos:
            if self._check_sysvol_present(gpo):
                path = check_safe_path(gpo.file_sys_path).upper()
                slogdata = dict({'sysvol_path': gpo.file_sys_path, 'gpo_name': gpo.display_name, 'gpo_path': path})
                log('D30', slogdata)
                gpt_abspath = os.path.join(self.cache_dir, 'gpo_cache', path)
                obj = gpt(gpt_abspath, sid)
                if not self._cached:
                    path = check_safe_path(gpo.file_sys_path).upper()
                    slogdata = dict({'sysvol_path': gpo.file_sys_path, 'gpo_name': gpo.display_name, 'gpo_path': path})
                    log('D30', slogdata)
                    gpt_abspath = os.path.join(self.cache_dir, self.gpo_cache_part, path)
                else:
                    gpt_abspath = gpo.file_sys_path
                    log('D211', {'sysvol_path': gpo.file_sys_path, 'gpo_name': gpo.display_name})
                if self._is_machine_username:
                    obj = gpt(gpt_abspath, sid, None, GpoInfoDconf(gpo))
                else:
                    obj = gpt(gpt_abspath, sid, self.username, GpoInfoDconf(gpo))
                obj.set_name(gpo.display_name)
                gpts.append(obj)
            else:
@@ -188,9 +203,9 @@ def upm2str(upm_num):
    result = 'Not configured'

    if upm_num in [1, '1']:
        result = 'Replace'

    if upm_num in [2, '2']:
        result = 'Merge'

    if upm_num in [2, '2']:
        result = 'Replace'

    return result
@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,43 +18,41 @@
from abc import ABC

import logging
from util.logging import slogm

def check_experimental_enabled(storage):
    experimental_enable_flag = 'Software\\BaseALT\\Policies\\GPUpdate\\GlobalExperimental'
    flag = storage.get_hklm_entry(experimental_enable_flag)
    experimental_enable_flag = '/Software/BaseALT/Policies/GPUpdate/GlobalExperimental'
    flag = storage.get_key_value(experimental_enable_flag)

    result = False

    if flag and '1' == flag.data:
    if flag and '1' == str(flag):
        result = True

    return result

def check_windows_mapping_enabled(storage):
    windows_mapping_enable_flag = 'Software\\BaseALT\\Policies\\GPUpdate\\WindowsPoliciesMapping'
    flag = storage.get_hklm_entry(windows_mapping_enable_flag)
    windows_mapping_enable_flag = '/Software/BaseALT/Policies/GPUpdate/WindowsPoliciesMapping'
    flag = storage.get_key_value(windows_mapping_enable_flag)

    result = True

    if flag and '0' == flag.data:
    flag = str(flag)
    if flag and '0' == flag:
        result = False

    return result

def check_module_enabled(storage, module_name):
    gpupdate_module_enable_branch = 'Software\\BaseALT\\Policies\\GPUpdate'
    gpupdate_module_flag = '{}\\{}'.format(gpupdate_module_enable_branch, module_name)
    flag = storage.get_hklm_entry(gpupdate_module_flag)
    gpupdate_module_enable_branch = '/Software/BaseALT/Policies/GPUpdate'
    gpupdate_module_flag = '{}/{}'.format(gpupdate_module_enable_branch, module_name)
    flag = storage.get_key_value(gpupdate_module_flag)

    result = None

    if flag:
        if '1' == flag.data:
    flag = str(flag)
    if flag and flag != 'None':
        if '1' == flag:
            result = True
        if '0' == flag.data:
            result = False
        else:
            result = False

    return result
@@ -17,9 +17,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import subprocess
import threading
import logging
from util.logging import slogm, log
from util.logging import log

def control_subst(preg_name):
    '''
@@ -101,14 +99,14 @@ class control:
        if status == None:
            logdata = dict()
            logdata['control'] = self.control_name
            logdata['inpossible values'] = self.self.control_value
            logdata['inpossible values'] = self.control_value
            log('E42', logdata)
            return
        elif type(self.control_value) == str:
            if self.control_value not in self.possible_values:
                logdata = dict()
                logdata['control'] = self.control_name
                logdata['inpossible values'] = self.self.control_value
                logdata['inpossible values'] = self.control_value
                log('E59', logdata)
                return
            status = self.control_value
@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -17,27 +17,40 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from os.path import isfile
from util.logging import slogm
import logging

from gpt.envvars import (
from util.arguments import (
    FileAction
    , action_letter2enum
)
from util.windows import expand_windows_var
from util.util import (
    get_homedir,
    homedir_exists
)
from util.util import get_homedir
from util.logging import log

class Envvar:
    __envvar_file_path = '/etc/gpupdate/environment'
    __envvar_file_path_user = '/.gpupdate_environment'

    def __init__(self, envvars, username=''):
        self.username = username
        self.envvars = envvars
        if self.username == 'root':
            self.envvar_file_path = '/etc/gpupdate/environment'
            self.envvar_file_path = Envvar.__envvar_file_path
        else:
            self.envvar_file_path = get_homedir(self.username) + '/.gpupdate_environment'
            self.envvar_file_path = get_homedir(self.username) + Envvar.__envvar_file_path_user

    @staticmethod
    def clear_envvar_file(username = False):
        if username:
            file_path = get_homedir(username) + Envvar.__envvar_file_path_user
        else:
            file_path = Envvar.__envvar_file_path

        try:
            with open(file_path, 'w') as file:
                file.write('')
            log('D215', {'path': file_path})
        except Exception as exc:
            log('D216', {'path': file_path, 'exc': exc})

    def _open_envvar_file(self):
        fd = None
@@ -92,6 +105,8 @@ class Envvar:
            value = value.replace('\\', '/')
            exist_line = None
            for line in lines:
                if line == '\n':
                    continue
                if line.split()[0] == name:
                    exist_line = line
                    break
@@ -17,7 +17,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.


from gpt.folders import (
from util.arguments import (
    FileAction
    , action_letter2enum
)
@@ -28,88 +28,154 @@ from pathlib import Path
from util.windows import expand_windows_var
from util.util import get_homedir
from util.exceptions import NotUNCPathError
from util.paths import UNCPath
import fnmatch

class Files_cp:
    def __init__(self, file_obj, file_cache, username=None):
    def __init__(self, file_obj, file_cache, exe_check, username=None):
        self.file_cache = file_cache
        self.exe_check = exe_check
        targetPath = expand_windows_var(file_obj.targetPath, username).replace('\\', '/')
        self.targetPath = check_target_path(targetPath, username)
        if not self.targetPath:
            return
        self.fromPath = (expand_windows_var(file_obj.fromPath, username).replace('\\', '/')
                         if file_obj.fromPath else None)
        self.isTargetPathDirectory = False
        self.action = action_letter2enum(file_obj.action)
        self.readOnly = str2bool(file_obj.readOnly)
        self.archive = str2bool(file_obj.archive)
        self.hidden = str2bool(file_obj.hidden)
        self.suppress = str2bool(file_obj.suppress)
        self.executable = str2bool(file_obj.executable)
        self.username = username
        self.fromPathFiles = self.get_list_files()
        self.fromPathFiles = list()
        if self.fromPath:
            if targetPath[-1] == '/' or self.is_pattern(Path(self.fromPath).name):
                self.isTargetPathDirectory = True
            self.get_list_files()
        self.act()

    def get_target_file(self, targetPath, fromPath):
    def get_target_file(self, targetPath:Path, fromFile:str) -> Path:
        try:
            if fromPath and targetPath.is_dir():
                if self.hidden:
                    return targetPath.joinpath('.' + fromPath.name)
            if fromFile:
                fromFileName = Path(fromFile).name
                if self.isTargetPathDirectory:
                    targetPath.mkdir(parents = True, exist_ok = True)
                else:
                    return targetPath.joinpath(fromPath.name)
                    targetPath.parent.mkdir(parents = True, exist_ok = True)
                    targetPath = targetPath.parent
                    fromFileName = self.targetPath.name
                if self.hidden:
                    return targetPath.joinpath('.' + fromFileName)
                else:
                    return targetPath.joinpath(fromFileName)
            else:
                if not self.hidden:
                    return targetPath
                else:
                    return targetPath.parent.joinpath('.' + targetPath.name)

        except Exception as exc:
            logdata = dict({'exc': exc})
            logdata = dict()
            logdata['targetPath'] = targetPath
            logdata['fromFile'] = fromFile
            logdata['exc'] = exc
            log('D163', logdata)

    def set_read_only(self, targetFile):
        if self.readOnly:
            shutil.os.chmod(targetFile, int('444', base = 8))
        return None

    def copy_target_file(self, targetFile:Path, fromFile:str):
        try:
            uri_path = UNCPath(fromFile)
            self.file_cache.store(fromFile, targetFile)
        except NotUNCPathError as exc:
            fromFilePath = Path(fromFile)
            if fromFilePath.exists():
                targetFile.write_bytes(fromFilePath.read_bytes())
        except Exception as exc:
            logdata = dict()
            logdata['targetFile'] = targetFile
            logdata['fromFile'] = fromFile
            logdata['exc'] = exc
            log('W15', logdata)

    def set_exe_file(self, targetFile, fromFile):
        if self.executable:
            return True
        if Path(fromFile).suffix in self.exe_check.get_list_markers():
            targetPath = targetFile.parent
            for i in self.exe_check.get_list_paths():
                if targetPath == Path(i):
                    return True
        return False

    def set_mod_file(self, targetFile, fromFile):
        if not targetFile.is_file():
            return
        if self.set_exe_file(targetFile, fromFile):
            if self.readOnly:
                shutil.os.chmod(targetFile, 0o555)
            else:
                shutil.os.chmod(targetFile, 0o755)
        else:
            shutil.os.chmod(targetFile, int('664', base = 8))
            if self.readOnly:
                shutil.os.chmod(targetFile, 0o444)
            else:
                shutil.os.chmod(targetFile, 0o644)

    def _create_action(self):
        for fromPath in self.fromPathFiles:
        logdata = dict()
        for fromFile in self.fromPathFiles:
            targetFile = None

            try:
                targetFile = self.get_target_file(self.targetPath, fromPath)
                if not targetFile.exists():
                    targetFile.write_bytes(fromPath.read_bytes())
                targetFile = self.get_target_file(self.targetPath, fromFile)
                if targetFile and not targetFile.exists():
                    self.copy_target_file(targetFile, fromFile)
                    if self.username:
                        shutil.chown(targetFile, self.username)
                    self.set_read_only(targetFile)
                    self.set_mod_file(targetFile, fromFile)
                    logdata['File'] = targetFile
                    log('D191', logdata)
            except Exception as exc:
                logdata = dict()
                logdata['exc'] = exc
                logdata['fromPath'] = fromPath
                logdata['fromPath'] = fromFile
                logdata['targetPath'] = self.targetPath
                logdata['targetFile'] = targetFile
                log('D164', logdata)

    def _delete_action(self):
        targetFile = Path(self.targetPath)
        try:
            if targetFile.exists():
                targetFile.unlink()
        except Exception as exc:
            logdata = dict()
            logdata['exc'] = exc
            logdata['targetPath'] = self.targetPath
            logdata['targetFile'] = targetFile
            log('D165', logdata)
        list_target = [self.targetPath.name]
        if self.is_pattern(self.targetPath.name) and self.targetPath.parent.exists() and self.targetPath.parent.is_dir():
            list_target = fnmatch.filter([str(x.name) for x in self.targetPath.parent.iterdir() if x.is_file()], self.targetPath.name)
        logdata = dict()
        for targetFile in list_target:
            targetFile = self.targetPath.parent.joinpath(targetFile)
            try:
                if targetFile.exists():
                    targetFile.unlink()
                    logdata['File'] = targetFile
                    log('D193', logdata)

            except Exception as exc:
                logdata['exc'] = exc
                logdata['targetPath'] = self.targetPath
                logdata['targetFile'] = targetFile
                log('D165', logdata)

    def _update_action(self):
        for fromPath in self.fromPathFiles:
            targetFile = self.get_target_file(self.targetPath, fromPath)
        logdata = dict()
        for fromFile in self.fromPathFiles:
            targetFile = self.get_target_file(self.targetPath, fromFile)
            try:
                targetFile.write_bytes(fromPath.read_bytes())
                self.copy_target_file(targetFile, fromFile)
                if self.username:
                    shutil.chown(self.targetPath, self.username)
                self.set_read_only(targetFile)
                self.set_mod_file(targetFile, fromFile)
                logdata['File'] = targetFile
                log('D192', logdata)
            except Exception as exc:
                logdata = dict()
                logdata['exc'] = exc
                logdata['fromPath'] = self.fromPath
                logdata['targetPath'] = self.targetPath
@@ -127,70 +193,76 @@ class Files_cp:
            self._delete_action()
            self._create_action()

    def is_pattern(self, name):
        if name.find('*') != -1 or name.find('?') != -1:
            return True
        else:
            return False

    def get_list_files(self):
        ls_all_files = list()
        logdata = dict()
        logdata['targetPath'] = self.targetPath
        if self.fromPath and self.fromPath.split('/')[-1] != '*':
        logdata['targetPath'] = str(self.targetPath)
        fromFilePath = Path(self.fromPath)
        if not self.is_pattern(fromFilePath.name):
            self.fromPathFiles.append(self.fromPath)
        else:
            fromPathDir = self.fromPath[:self.fromPath.rfind('/')]

            try:
                self.file_cache.store(self.fromPath)
                fromPath = Path(self.file_cache.get(self.fromPath))
                ls_all_files.append(fromPath)
                uri_path = UNCPath(fromPathDir)
                ls_files = self.file_cache.get_ls_smbdir(fromPathDir)
                if ls_files:
                    filtered_ls_files = fnmatch.filter(ls_files, fromFilePath.name)
                    if filtered_ls_files:
                        self.fromPathFiles = [fromPathDir + '/' + file_s for file_s in filtered_ls_files]
            except NotUNCPathError as exc:
                fromPath = Path(self.fromPath)
                if fromPath.exists():
                    ls_all_files.append(fromPath)
            except Exception as exc:
                logdata['fromPath'] = self.fromPath
                logdata['exc'] = exc
                log('W13', logdata)
        elif self.fromPath and len(self.fromPath.split('/')) > 2:
            ls_files = self.file_cache.get_ls_smbdir(self.fromPath[:-1])
            if ls_files:
                ls_from_paths = [self.fromPath[:-1] + file_s for file_s in ls_files]
                for from_path in ls_from_paths:
                    try:
                        self.file_cache.store(from_path)
                        fromPath = Path(self.file_cache.get(from_path))
                        ls_all_files.append(fromPath)
                    except Exception as exc:
                        logdata['fromPath'] = self.fromPath
                        logdata['exc'] = exc
                        log('W13', logdata)
            else:
                try:
                    fromLocalPath = Path(self.fromPath[:-1])
                    if fromLocalPath.is_dir():
                        ls = [fromFile for fromFile in fromLocalPath.iterdir() if fromFile.is_file()]
                        for fromPath in ls:
                            ls_all_files.append(fromPath)
                    exact_path = Path(fromPathDir)
                    if exact_path.is_dir():
                        self.fromPathFiles = [str(fromFile) for fromFile in exact_path.iterdir() if fromFile.is_file()]
                except Exception as exc:
                    logdata['fromPath'] = self.fromPath
                    logdata['exc'] = exc
                    log('W13', logdata)
        else:
            fromPath = Path(self.fromPath) if self.fromPath else None
            ls_all_files.append(fromPath)
        return ls_all_files
                log('W3316', logdata)
            except Exception as exc:
                logdata['fromPath'] = self.fromPath
                logdata['exc'] = exc
                log('W3317', logdata)

def check_target_path(path_to_check, username = None):
    '''
    Function for checking the correctness of the path
    '''
    if not path_to_check:
        return None

    checking = Path(path_to_check)
    if checking.is_dir():
        if username and path_to_check == '/':
            return Path(get_homedir(username))
        return checking
    #Check for path directory without '/something' suffix
    elif (len(path_to_check.split('/')) > 2
        and Path(path_to_check.replace(path_to_check.split('/')[-1], '')).is_dir()):
        return checking
    elif username:
        target_path = Path(get_homedir(username))
        res = target_path.joinpath(path_to_check
            if path_to_check[0] != '/'
            else path_to_check[1:])
        return res
    else:
        return False
    rootpath = Path('/')
    if username:
        rootpath = Path(get_homedir(username))

    return rootpath.joinpath(checking)

class Execution_check():

    __etension_marker_key_name = 'ExtensionMarker'
    __marker_usage_path_key_name = 'MarkerUsagePath'
    __hklm_branch = 'Software\\BaseALT\\Policies\\GroupPolicies\\Files'

    def __init__(self, storage):
        etension_marker_branch = '{}\\{}%'.format(self.__hklm_branch, self.__etension_marker_key_name)
        marker_usage_path_branch = '{}\\{}%'.format(self.__hklm_branch, self.__marker_usage_path_key_name)
        self.etension_marker = storage.filter_hklm_entries(etension_marker_branch)
        self.marker_usage_path = storage.filter_hklm_entries(marker_usage_path_branch)
        self.list_paths = list()
        self.list_markers = list()
        for marker in self.etension_marker:
            self.list_markers.append(marker.data)
        for usage_path in self.marker_usage_path:
            self.list_paths.append(usage_path.data)

    def get_list_paths(self):
        return self.list_paths

    def get_list_markers(self):
        return self.list_markers
@@ -20,7 +20,7 @@
from pathlib import Path


from gpt.folders import (
from util.arguments import (
    FileAction
    , action_letter2enum
)
@@ -36,21 +36,26 @@ def remove_dir_tree(path, delete_files=False, delete_folder=False, delete_sub_folders=False):
            content.remove(entry)
        if entry.is_dir() and delete_sub_folders:
            content.remove(entry)
            remove_dir_tree(entry, delete_files, delete_folder, delete_sub_folders)
            content.extend(remove_dir_tree(entry, delete_files, delete_folder, delete_sub_folders))

    if delete_folder and not content:
        path.rmdir()

    return content

def str2bool(boolstr):
    if boolstr and boolstr.lower() in ['true', 'yes', '1']:
    if isinstance(boolstr, bool):
        return boolstr
    elif boolstr and boolstr.lower() in ['true', 'yes', '1']:
        return True
    return False


class Folder:
    def __init__(self, folder_object, username=None):
        folder_path = expand_windows_var(folder_object.path, username).replace('\\', '/')
        folder_path = expand_windows_var(folder_object.path, username).replace('\\', '/').replace('//', '/')
        if username:
            folder_path = folder_path.replace(get_homedir(username), '')
            self.folder_path = Path(get_homedir(username)).joinpath(folder_path if folder_path[0] != '/' else folder_path[1:])
        else:
            self.folder_path = Path(folder_path)
@@ -58,18 +63,26 @@ class Folder:
        self.delete_files = str2bool(folder_object.delete_files)
        self.delete_folder = str2bool(folder_object.delete_folder)
        self.delete_sub_folders = str2bool(folder_object.delete_sub_folders)
        self.hidden_folder = str2bool(folder_object.hidden_folder)

    def _create_action(self):
        self.folder_path.mkdir(parents=True, exist_ok=True)

    def _delete_action(self):
        if self.folder_path.exists():
            if self.action == FileAction.REPLACE:
                self.delete_folder = True
            remove_dir_tree(self.folder_path,
                self.delete_files,
                self.delete_folder,
                self.delete_sub_folders)

    def act(self):
        if self.hidden_folder == True and str(self.folder_path.name)[0] != '.':
            path_components = list(self.folder_path.parts)
            path_components[-1] = '.' + path_components[-1]
            new_folder_path = Path(*path_components)
            self.folder_path = new_folder_path
        if self.action == FileAction.CREATE:
            self._create_action()
        if self.action == FileAction.UPDATE:
@@ -18,10 +18,9 @@
import configparser
import os
import logging
from gi.repository import Gio, GLib

from util.logging import slogm, log
from util.logging import log

class system_gsetting:
    def __init__(self, schema, path, value, lock, helper_function=None):
@@ -18,16 +18,15 @@

from gpt.folders import (
from util.arguments import (
    FileAction
    , action_letter2enum
)
from util.logging import log
from pathlib import Path
import configparser
from util.windows import expand_windows_var
from util.util import get_homedir

from util.gpoa_ini_parsing import GpoaConfigObj


class Ini_file:
@@ -42,53 +41,55 @@ class Ini_file:
        self.action = action_letter2enum(ini_obj.action)
        self.key = ini_obj.property
        self.value = ini_obj.value
        self.config = configparser.ConfigParser()
        self.act()

    def _create_action(self):
        if self.section not in self.config:
            self.config[self.section] = dict()

        self.config[self.section][self.key] = self.value

        with self.path.open("w", encoding="utf-8") as configfile:
            self.config.write(configfile)

    def _delete_action(self):
        if not self.path.exists():
            return

        if not self.section:
            self.path.unlink()
            return
        if not self.key:
            self.config.remove_section(self.section)
        elif self.section in self.config:
            self.config.remove_option(self.section, self.key)

        with self.path.open("w", encoding="utf-8") as configfile:
            self.config.write(configfile)

    def act(self):
        try:
            self.config.read(self.path)
            self.config = GpoaConfigObj(str(self.path), unrepr=False)
        except Exception as exc:
            logdata = {'exc': exc}
            log('D176', logdata)
            return
        if self.action == FileAction.CREATE:
            self._create_action()
        if self.action == FileAction.UPDATE:
            self._delete_action()
            self._create_action()
        if self.action == FileAction.DELETE:
            self._delete_action()
        if self.action == FileAction.REPLACE:
            self._delete_action()
            self._create_action()

        self.act()

    def _create_action(self):
        if self.path.is_dir():
            return
        if self.section not in self.config:
            self.config[self.section] = dict()

        self.config[self.section][self.key] = self.value
        self.config.write()

    def _delete_action(self):
        if not self.path.exists() or self.path.is_dir():
            return
        if not self.section:
            self.path.unlink()
            return
        if self.section in self.config:
            if not self.key:
                self.config.pop(self.section)
            elif self.key in self.config[self.section]:
                self.config[self.section].pop(self.key)
            self.config.write()

    def act(self):
        try:
            if self.action == FileAction.CREATE:
                self._create_action()
            if self.action == FileAction.UPDATE:
                self._create_action()
            if self.action == FileAction.DELETE:
                self._delete_action()
            if self.action == FileAction.REPLACE:
                self._create_action()
        except Exception as exc:
            logdata = dict()
            logdata['action'] = self.action
            logdata['exc'] = exc
            log('W23', logdata)


def check_path(path_to_check, username = None):
    '''
gpoa/frontend/appliers/netshare.py (new file, 90 lines)
@@ -0,0 +1,90 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2022 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import subprocess

from util.arguments import (
    FileAction
    , action_letter2enum
)
from util.logging import log
from util.windows import expand_windows_var


class Networkshare:

    def __init__(self, networkshare_obj, username = None):
        self.net_full_cmd = ['/usr/bin/net', 'usershare']
        self.net_cmd_check = ['/usr/bin/net', 'usershare', 'list']
        self.cmd = list()
        self.name = networkshare_obj.name
        self.path = expand_windows_var(networkshare_obj.path, username).replace('\\', '/') if networkshare_obj.path else None

        self.action = action_letter2enum(networkshare_obj.action)
        self.allRegular = networkshare_obj.allRegular
        self.comment = networkshare_obj.comment
        self.limitUsers = networkshare_obj.limitUsers
        self.abe = networkshare_obj.abe
        self._guest = 'guest_ok=y'
        self.acl = 'Everyone:'
        self.act()

    def check_list_net(self):
        try:
            res = subprocess.check_output(self.net_cmd_check, encoding='utf-8')
            return res
        except Exception as exc:
            return exc

    def _run_net_full_cmd(self):
        logdata = dict()
        try:
            res = subprocess.check_output(self.net_full_cmd, stderr=subprocess.DEVNULL, encoding='utf-8')
            if res:
                logdata['cmd'] = self.net_full_cmd
                logdata['answer'] = res
                log('D190', logdata)
        except Exception as exc:
            logdata['cmd'] = self.net_full_cmd
            logdata['exc'] = exc
            log('D182', logdata)

    def _create_action(self):
        self.net_full_cmd.append('add')
        self.net_full_cmd.append(self.name)
        self.net_full_cmd.append(self.path)
        self.net_full_cmd.append(self.comment)
        self.net_full_cmd.append(self.acl + 'F')
        self.net_full_cmd.append(self._guest)
        self._run_net_full_cmd()

    def _delete_action(self):
        self.net_full_cmd.append('delete')
        self.net_full_cmd.append(self.name)
        self._run_net_full_cmd()

    def act(self):
        if self.action == FileAction.CREATE:
            self._create_action()
        if self.action == FileAction.UPDATE:
            self._create_action()
        if self.action == FileAction.DELETE:
            self._delete_action()
        if self.action == FileAction.REPLACE:
            self._create_action()
@@ -18,9 +18,8 @@
import os
import jinja2
import logging

from util.logging import slogm, log
from util.logging import log

class polkit:
    __template_path = '/usr/share/gpupdate/templates'
@@ -38,7 +37,19 @@ class polkit:
        else:
            self.outfile = os.path.join(self.__policy_dir, '{}.rules'.format(self.template_name))

    def _is_empty(self):
        for key, item in self.args.items():
            if key == 'User':
                continue
            elif item:
                return False
        return True

    def generate(self):
        if self._is_empty():
            if os.path.isfile(self.outfile):
                os.remove(self.outfile)
            return
        try:
            template = self.__template_environment.get_template(self.infilename)
            text = template.render(**self.args)
@@ -17,9 +17,8 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import dbus
import logging

from util.logging import slogm, log
from util.logging import log

class systemd_unit:
    def __init__(self, unit_name, state):
@@ -38,6 +37,9 @@ class systemd_unit:
        if self.desired_state == 1:
            self.manager.UnmaskUnitFiles([self.unit_name], dbus.Boolean(False))
            self.manager.EnableUnitFiles([self.unit_name], dbus.Boolean(False), dbus.Boolean(True))
            if self.unit_name == 'gpupdate.service':
                if self.manager.GetUnitFileState(dbus.String(self.unit_name)) == 'enabled':
                    return
            self.manager.StartUnit(self.unit_name, 'replace')
            logdata = dict()
            logdata['unit'] = self.unit_name
@@ -49,9 +51,13 @@ class systemd_unit:
            service_state = self._get_state()

            if not service_state in ['active', 'activating']:
                logdata = dict()
                logdata['unit'] = self.unit_name
                log('E46', logdata)
                service_timer_name = self.unit_name.replace(".service", ".timer")
                self.unit = self.manager.LoadUnit(dbus.String(service_timer_name))
                service_state = self._get_state()
                if not service_state in ['active', 'activating']:
                    logdata = dict()
                    logdata['unit'] = self.unit_name
                    log('E46', logdata)
        else:
            self.manager.StopUnit(self.unit_name, 'replace')
            self.manager.DisableUnitFiles([self.unit_name], dbus.Boolean(False))
@@ -62,7 +68,7 @@ class systemd_unit:

            service_state = self._get_state()

            if not service_state in ['stopped']:
            if not service_state in ['stopped', 'deactivating', 'inactive']:
                logdata = dict()
                logdata['unit'] = self.unit_name
                log('E46', logdata)
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -24,13 +24,13 @@ from .applier_frontend import (
|
||||
import json
|
||||
import os
|
||||
from util.logging import log
|
||||
from util.util import is_machine_name
|
||||
from util.util import is_machine_name, string_to_literal_eval
|
||||
|
||||
class chromium_applier(applier_frontend):
|
||||
__module_name = 'ChromiumApplier'
|
||||
__module_enabled = True
|
||||
__module_experimental = False
|
||||
__registry_branch = 'Software\\Policies\\Google\\Chrome'
|
||||
__registry_branch = 'Software/Policies/Google/Chrome'
|
||||
__managed_policies_path = '/etc/chromium/policies/managed'
|
||||
__recommended_policies_path = '/etc/chromium/policies/recommended'
|
||||
|
||||
@ -39,8 +39,7 @@ class chromium_applier(applier_frontend):
|
||||
self.sid = sid
|
||||
self.username = username
|
||||
self._is_machine_name = is_machine_name(self.username)
|
||||
chromium_filter = '{}%'.format(self.__registry_branch)
|
||||
self.chromium_keys = self.storage.filter_hklm_entries(chromium_filter)
|
||||
self.chromium_keys = self.storage.filter_hklm_entries(self.__registry_branch)
|
||||
|
||||
self.policies_json = dict()
|
||||
|
||||
@ -65,7 +64,7 @@ class chromium_applier(applier_frontend):
|
||||
#Replacing all nested dictionaries with a list
|
||||
dict_item_to_list = (
|
||||
lambda target_dict :
|
||||
{key:[*val.values()] if type(val) == dict else val for key,val in target_dict.items()}
|
||||
{key:[*val.values()] if type(val) == dict else string_to_literal_eval(val) for key,val in target_dict.items()}
|
||||
)
|
||||
os.makedirs(self.__managed_policies_path, exist_ok=True)
|
||||
with open(destfile, 'w') as f:
|
||||
@ -98,48 +97,73 @@ class chromium_applier(applier_frontend):
'''
List of keys resulting from parsing chrome.admx with parsing_chrom_admx_intvalues.py
'''
valuename_typeint = (['DefaultCookiesSetting',
'DefaultFileHandlingGuardSetting',
valuename_typeint = (['DefaultClipboardSetting',
'DefaultCookiesSetting',
'DefaultFileSystemReadGuardSetting',
'DefaultFileSystemWriteGuardSetting',
'DefaultGeolocationSetting',
'DefaultImagesSetting',
'DefaultInsecureContentSetting',
'DefaultJavaScriptJitSetting',
'DefaultJavaScriptSetting',
'DefaultPopupsSetting',
'DefaultLocalFontsSetting',
'DefaultNotificationsSetting',
'DefaultGeolocationSetting',
'DefaultPopupsSetting',
'DefaultSensorsSetting',
'DefaultWebBluetoothGuardSetting',
'DefaultWebUsbGuardSetting',
'DefaultSerialGuardSetting',
'LegacySameSiteCookieBehaviorEnabled',
'ProxyServerMode',
'DefaultThirdPartyStoragePartitioningSetting',
'DefaultWebBluetoothGuardSetting',
'DefaultWebHidGuardSetting',
'DefaultWebUsbGuardSetting',
'DefaultWindowManagementSetting',
'DefaultMediaStreamSetting',
'PrintRasterizationMode',
'DefaultPluginsSetting',
'DefaultKeygenSetting',
'ChromeFrameRendererSettings',
'SafeBrowsingProtectionLevel',
'PasswordProtectionWarningTrigger',
'SafeBrowsingProtectionLevel_recommended',
'RestoreOnStartup',
'RestoreOnStartup_recommended',
'DefaultWindowPlacementSetting',
'ProxyServerMode',
'ExtensionManifestV2Availability',
'ExtensionUnpublishedAvailability',
'CreateThemesSettings',
'DevToolsGenAiSettings',
'GenAILocalFoundationalModelSettings',
'HelpMeWriteSettings',
'TabOrganizerSettings',
'BrowserSwitcherParsingMode',
'CloudAPAuthEnabled',
'AdsSettingForIntrusiveAdsSites',
'AmbientAuthenticationInPrivateModesEnabled',
'BatterySaverModeAvailability',
'BrowserSignin',
'ChromeVariations',
'DeveloperToolsAvailability',
'DownloadRestrictions',
'DownloadRestrictions_recommended',
'ForceYouTubeRestrict',
'HeadlessMode',
'IncognitoModeAvailability',
'IntranetRedirectBehavior',
'LensOverlaySettings',
'MemorySaverModeSavings',
'NetworkPredictionOptions',
'NetworkPredictionOptions_recommended',
'ProfilePickerOnStartupAvailability',
'ProfileReauthPrompt',
'RelaunchNotification',
'SafeSitesFilterBehavior'])
'SafeSitesFilterBehavior',
'ToolbarAvatarLabelSettings',
'UserAgentReduction',
'BatterySaverModeAvailability_recommended',
'DownloadRestrictions_recommended',
'NetworkPredictionOptions_recommended',
'PrintPostScriptMode',
'PrintRasterizationMode',
'ChromeFrameRendererSettings',
'DefaultFileHandlingGuardSetting',
'DefaultKeygenSetting',
'DefaultPluginsSetting',
'LegacySameSiteCookieBehaviorEnabled',
'ForceMajorVersionToMinorPositionInUserAgent',
'PasswordProtectionWarningTrigger',
'SafeBrowsingProtectionLevel',
'SafeBrowsingProtectionLevel_recommended',
'RestoreOnStartup',
'RestoreOnStartup_recommended'])
return valuename_typeint

@ -152,7 +176,7 @@ class chromium_applier(applier_frontend):
'''
Parse registry path string and leave key parameters
'''
parts = hivekeyname.replace(self.__registry_branch, '').split('\\')
parts = hivekeyname.replace(self.__registry_branch, '').split('/')
return parts

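With the registry branch now stored in dconf style, the method above splits on '/' instead of '\\'. A standalone sketch of the same splitting; the key path below is hypothetical:

registry_branch = 'Software/Policies/Google/Chrome'
hivekeyname = 'Software/Policies/Google/Chrome/ExtensionInstallForcelist/1'  # hypothetical key

parts = hivekeyname.replace(registry_branch, '').split('/')
print(parts)      # ['', 'ExtensionInstallForcelist', '1'] - note the leading '' left by the branch prefix
print(parts[1:])  # ['ExtensionInstallForcelist', '1']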
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2022 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -16,20 +16,19 @@
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import fileinput
|
||||
import jinja2
|
||||
import os
|
||||
import pwd
|
||||
import subprocess
|
||||
import logging
|
||||
from pathlib import Path
|
||||
import string
|
||||
|
||||
from .applier_frontend import (
|
||||
applier_frontend
|
||||
, check_enabled
|
||||
)
|
||||
from gpt.drives import json2drive
|
||||
from util.util import get_homedir
|
||||
from util.logging import slogm, log
|
||||
from util.util import get_homedir, get_uid_by_username
|
||||
from util.logging import log
|
||||
|
||||
def storage_get_drives(storage, sid):
|
||||
drives = storage.get_drives(sid)
|
||||
@ -50,45 +49,210 @@ def add_line_if_missing(filename, ins_line):
|
||||
f.write(ins_line + '\n')
|
||||
f.flush()
|
||||
|
||||
def remove_chars_before_colon(input_string):
|
||||
if ":" in input_string:
|
||||
colon_index = input_string.index(":")
|
||||
result_string = input_string[colon_index + 1:]
|
||||
return result_string
|
||||
else:
|
||||
return input_string
|
||||
|
||||
def remove_escaped_quotes(input_string):
|
||||
result_string = input_string.replace('"', '').replace("'", '')
|
||||
return result_string
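Taken together, the two helpers above normalize the drive-letter prefix and the quoting that come out of the GPO drive preferences. A small illustration, condensed from the code above so it runs on its own; the sample inputs are made up:

def remove_chars_before_colon(input_string):
    # Condensed copy of the helper above: drop everything up to and including ':'.
    if ":" in input_string:
        return input_string[input_string.index(":") + 1:]
    return input_string

def remove_escaped_quotes(input_string):
    # Condensed copy of the helper above: strip single and double quotes.
    return input_string.replace('"', '').replace("'", '')

print(remove_chars_before_colon('X://fileserver/share'))  # //fileserver/share
print(remove_escaped_quotes('"Department \'Share\'"'))    # Department Share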
|
||||
|
||||
|
||||
class Drive_list:
|
||||
__alphabet = string.ascii_uppercase
|
||||
def __init__(self):
|
||||
self.dict_drives = dict()
|
||||
|
||||
def __get_letter(self, letter):
|
||||
slice_letters = set(self.__alphabet[self.__alphabet.find(letter) + 1:]) - set(self.dict_drives.keys())
|
||||
free_letters = sorted(slice_letters)
|
||||
if free_letters:
|
||||
return free_letters[0]
|
||||
else:
|
||||
return None
|
||||
|
||||
def append(self, drive:dict):
|
||||
cur_dir = drive['dir']
|
||||
if cur_dir not in set(self.dict_drives.keys()):
|
||||
if drive['action'] == 'D':
|
||||
return
|
||||
self.dict_drives[cur_dir] = drive
|
||||
return
|
||||
|
||||
else:
|
||||
if drive['action'] == 'C':
|
||||
if drive['useLetter'] == '1':
|
||||
return
|
||||
else:
|
||||
new_dir = self.__get_letter(cur_dir)
|
||||
if not new_dir:
|
||||
return
|
||||
drive['dir'] = new_dir
|
||||
self.dict_drives[new_dir] = drive
|
||||
return
|
||||
|
||||
if drive['action'] == 'U':
|
||||
self.dict_drives[cur_dir]['thisDrive'] = drive['thisDrive']
|
||||
self.dict_drives[cur_dir]['allDrives'] = drive['allDrives']
|
||||
self.dict_drives[cur_dir]['label'] = drive['label']
|
||||
self.dict_drives[cur_dir]['persistent'] = drive['persistent']
|
||||
self.dict_drives[cur_dir]['useLetter'] = drive['useLetter']
|
||||
return
|
||||
|
||||
if drive['action'] == 'R':
|
||||
self.dict_drives[cur_dir] = drive
|
||||
return
|
||||
if drive['action'] == 'D':
|
||||
if drive['useLetter'] == '1':
|
||||
self.dict_drives.pop(cur_dir, None)
|
||||
else:
|
||||
keys_set = set(self.dict_drives.keys())
|
||||
slice_letters = set(self.__alphabet[self.__alphabet.find(cur_dir):])
|
||||
for letter_dir in (keys_set & slice_letters):
|
||||
self.dict_drives.pop(letter_dir, None)
|
||||
|
||||
def __call__(self):
|
||||
return list(self.dict_drives.values())
|
||||
|
||||
def len(self):
|
||||
return len(self.dict_drives)
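A quick usage sketch against the Drive_list class above; the drive dicts are illustrative and only carry the fields these particular actions read ('dir', 'action', 'useLetter'):

drives = Drive_list()
# Create X: -> stored as is.
drives.append({'dir': 'X', 'action': 'C', 'useLetter': '1'})
# Second Create for X: with useLetter == '0' -> remapped to the next free letter, Y.
drives.append({'dir': 'X', 'action': 'C', 'useLetter': '0'})
# Delete X: with useLetter == '1' -> removes only the X entry, Y survives.
drives.append({'dir': 'X', 'action': 'D', 'useLetter': '1'})

print([d['dir'] for d in drives()])  # ['Y']
print(drives.len())                  # 1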
|
||||
|
||||
class cifs_applier(applier_frontend):
|
||||
def __init__(self, storage):
|
||||
pass
|
||||
__module_name = 'CIFSApplier'
|
||||
__module_enabled = True
|
||||
__module_experimental = False
|
||||
__dir4clean = '/etc/auto.master.gpupdate.d'
|
||||
|
||||
def __init__(self, storage, sid):
|
||||
self.clear_directory_auto_dir()
|
||||
self.applier_cifs = cifs_applier_user(storage, sid, None)
|
||||
self.__module_enabled = check_enabled(
|
||||
storage
|
||||
, self.__module_name
|
||||
, self.__module_experimental
|
||||
)
|
||||
def clear_directory_auto_dir(self):
|
||||
path = Path(self.__dir4clean)
|
||||
|
||||
for item in path.iterdir():
|
||||
try:
|
||||
if item.is_file() or item.is_symlink():
|
||||
item.unlink()
|
||||
except Exception as exc:
|
||||
log('W37', {'exc': exc})
|
||||
log('D231')
|
||||
|
||||
def apply(self):
|
||||
pass
|
||||
if self.__module_enabled:
|
||||
log('D179')
|
||||
self.applier_cifs._admin_context_apply()
|
||||
else:
|
||||
log('D180')
|
||||
|
||||
class cifs_applier_user(applier_frontend):
|
||||
__module_name = 'CIFSApplierUser'
|
||||
__module_enabled = False
|
||||
__module_experimental = True
|
||||
__module_enabled = True
|
||||
__module_experimental = False
|
||||
__auto_file = '/etc/auto.master'
|
||||
__auto_dir = '/etc/auto.master.gpupdate.d'
|
||||
__template_path = '/usr/share/gpupdate/templates'
|
||||
__template_mountpoints = 'autofs_mountpoints.j2'
|
||||
__template_identity = 'autofs_identity.j2'
|
||||
__template_auto = 'autofs_auto.j2'
|
||||
__template_mountpoints_hide = 'autofs_mountpoints_hide.j2'
|
||||
__template_auto_hide = 'autofs_auto_hide.j2'
|
||||
__enable_home_link = '/Software/BaseALT/Policies/GPUpdate/DriveMapsHome'
|
||||
__enable_home_link_user = '/Software/BaseALT/Policies/GPUpdate/DriveMapsHomeUser'
|
||||
__name_dir = '/Software/BaseALT/Policies/GPUpdate'
|
||||
__name_link_prefix = '/Software/BaseALT/Policies/GPUpdate/DriveMapsHomeDisableNet'
|
||||
__name_link_prefix_user = '/Software/BaseALT/Policies/GPUpdate/DriveMapsHomeDisableNetUser'
|
||||
__key_link_prefix = 'DriveMapsHomeDisableNet'
|
||||
__key_link_prefix_user = 'DriveMapsHomeDisableNetUser'
|
||||
__timeout_user_key = '/Software/BaseALT/Policies/GPUpdate/TimeoutAutofsUser'
|
||||
__timeout_key = '/Software/BaseALT/Policies/GPUpdate/TimeoutAutofs'
|
||||
__cifsacl_key = '/Software/BaseALT/Policies/GPUpdate/CifsaclDisable'
|
||||
__target_mountpoint = '/media/gpupdate'
|
||||
__target_mountpoint_user = '/run/media'
|
||||
__mountpoint_dirname = 'drives.system'
|
||||
__mountpoint_dirname_user = 'drives'
|
||||
__key_cifs_previous_value = 'Previous/Software/BaseALT/Policies/GPUpdate'
|
||||
__key_preferences = 'Software/BaseALT/Policies/Preferences/'
|
||||
__key_preferences_previous = 'Previous/Software/BaseALT/Policies/Preferences/'
|
||||
__name_value = 'DriveMapsName'
|
||||
__name_value_user = 'DriveMapsNameUser'
|
||||
|
||||
def __init__(self, storage, sid, username):
|
||||
self.storage = storage
|
||||
self.sid = sid
|
||||
self.username = username
|
||||
self.state_home_link = False
|
||||
self.state_home_link_user = False
|
||||
self.dict_registry_machine = self.storage.get_dictionary_from_dconf_file_db()
|
||||
self.homedir = ''
|
||||
name_dir = self.__name_dir[1:]
|
||||
|
||||
self.home = get_homedir(username)
|
||||
if username:
|
||||
self.dict_registry_user = self.storage.get_dictionary_from_dconf_file_db(get_uid_by_username(username))
|
||||
self.home = self.__target_mountpoint_user + '/' + username
|
||||
self.state_home_link = self.storage.check_enable_key(self.__enable_home_link)
|
||||
self.state_home_link_disable_net = self.storage.check_enable_key(self.__name_link_prefix)
|
||||
self.state_home_link_disable_net_user = self.storage.check_enable_key(self.__name_link_prefix_user)
|
||||
|
||||
self.state_home_link_user = self.storage.check_enable_key(self.__enable_home_link_user)
|
||||
self.timeout = self.storage.get_entry(self.__timeout_user_key)
|
||||
dirname = self.storage.get_entry(self.__name_dir + '/' + self.__name_value_user)
|
||||
dirname_system_from_machine = self.dict_registry_machine.get(name_dir, dict()).get(self.__name_value, None)
|
||||
self.__mountpoint_dirname_user = dirname.data if dirname and dirname.data else self.__mountpoint_dirname_user
|
||||
self.__mountpoint_dirname = dirname_system_from_machine if dirname_system_from_machine else self.__mountpoint_dirname
|
||||
mntTarget = self.__mountpoint_dirname_user
|
||||
|
||||
self.keys_cifs_previous_values_user = self.dict_registry_user.get(self.__key_cifs_previous_value,dict())
|
||||
self.keys_cifs_values_user = self.dict_registry_user.get(name_dir,dict())
|
||||
self.keys_the_preferences_previous_values_user = self.dict_registry_user.get((self.__key_preferences_previous+self.username),dict()).get('Drives', dict())
|
||||
self.keys_the_preferences_values_user = self.dict_registry_user.get((self.__key_preferences+self.username),dict()).get('Drives', dict())
|
||||
|
||||
else:
|
||||
self.home = self.__target_mountpoint
|
||||
self.timeout = self.storage.get_entry(self.__timeout_key)
|
||||
dirname_system = self.storage.get_entry(self.__name_dir + '/' + self.__name_value)
|
||||
self.__mountpoint_dirname = dirname_system.data if dirname_system and dirname_system.data else self.__mountpoint_dirname
|
||||
mntTarget = self.__mountpoint_dirname
|
||||
|
||||
self.keys_cifs_previous_values_machine = self.dict_registry_machine.get(self.__key_cifs_previous_value,dict())
|
||||
self.keys_cifs_values_machine = self.dict_registry_machine.get(name_dir,dict())
|
||||
self.keys_the_preferences_previous_values = self.dict_registry_machine.get((self.__key_preferences_previous+'Machine'),dict()).get('Drives', dict())
|
||||
self.keys_the_preferences_values = self.dict_registry_machine.get((self.__key_preferences+'Machine'),dict()).get('Drives', dict())
|
||||
self.cifsacl_disable = self.storage.get_entry(self.__cifsacl_key, preg=False)
|
||||
|
||||
self.mntTarget = mntTarget.translate(str.maketrans({" ": r"\ "}))
|
||||
conf_file = '{}.conf'.format(sid)
|
||||
conf_hide_file = '{}_hide.conf'.format(sid)
|
||||
autofs_file = '{}.autofs'.format(sid)
|
||||
autofs_hide_file = '{}_hide.autofs'.format(sid)
|
||||
cred_file = '{}.creds'.format(sid)
|
||||
|
||||
self.auto_master_d = Path(self.__auto_dir)
|
||||
|
||||
self.user_config = self.auto_master_d / conf_file
|
||||
self.user_config_hide = self.auto_master_d / conf_hide_file
|
||||
if os.path.exists(self.user_config.resolve()):
|
||||
self.user_config.unlink()
|
||||
if os.path.exists(self.user_config_hide.resolve()):
|
||||
self.user_config_hide.unlink()
|
||||
self.user_autofs = self.auto_master_d / autofs_file
|
||||
self.user_autofs_hide = self.auto_master_d / autofs_hide_file
|
||||
if os.path.exists(self.user_autofs.resolve()):
|
||||
self.user_autofs.unlink()
|
||||
if os.path.exists(self.user_autofs_hide.resolve()):
|
||||
self.user_autofs_hide.unlink()
|
||||
self.user_creds = self.auto_master_d / cred_file
|
||||
|
||||
self.mount_dir = Path(os.path.join(self.home, 'net'))
|
||||
|
||||
self.mount_dir = Path(os.path.join(self.home))
|
||||
self.drives = storage_get_drives(self.storage, self.sid)
|
||||
|
||||
self.template_loader = jinja2.FileSystemLoader(searchpath=self.__template_path)
|
||||
@ -98,6 +262,9 @@ class cifs_applier_user(applier_frontend):
|
||||
self.template_indentity = self.template_env.get_template(self.__template_identity)
|
||||
self.template_auto = self.template_env.get_template(self.__template_auto)
|
||||
|
||||
self.template_mountpoints_hide = self.template_env.get_template(self.__template_mountpoints_hide)
|
||||
self.template_auto_hide = self.template_env.get_template(self.__template_auto_hide)
|
||||
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage
|
||||
, self.__module_name
|
||||
@ -105,46 +272,82 @@ class cifs_applier_user(applier_frontend):
|
||||
)
|
||||
|
||||
|
||||
def is_mount_point_dirname(self):
|
||||
if self.username:
|
||||
return self.mount_dir.joinpath(self.__mountpoint_dirname_user).is_mount()
|
||||
else:
|
||||
return self.mount_dir.joinpath(self.__mountpoint_dirname).is_mount()
|
||||
|
||||
def is_changed_keys(self):
|
||||
if self.username:
|
||||
return (self.keys_cifs_previous_values_user.get(self.__name_value_user) != self.keys_cifs_values_user.get(self.__name_value_user) or
|
||||
self.keys_the_preferences_previous_values_user != self.keys_the_preferences_values_user)
|
||||
else:
|
||||
return (self.keys_cifs_previous_values_machine.get(self.__name_value) != self.keys_cifs_values_machine.get(self.__name_value) or
|
||||
self.keys_the_preferences_previous_values != self.keys_the_preferences_values)
|
||||
|
||||
def user_context_apply(self):
|
||||
'''
|
||||
Nothing to implement.
|
||||
'''
|
||||
pass
|
||||
|
||||
def __admin_context_apply(self):
|
||||
def _admin_context_apply(self):
|
||||
# Create /etc/auto.master.gpupdate.d directory
|
||||
self.auto_master_d.mkdir(parents=True, exist_ok=True)
|
||||
# Create user's destination mount directory
|
||||
self.mount_dir.mkdir(parents=True, exist_ok=True)
|
||||
uid = pwd.getpwnam(self.username).pw_uid if self.username else None
|
||||
if uid:
|
||||
os.chown(self.mount_dir, uid=uid, gid=-1)
|
||||
self.mount_dir.chmod(0o700)
|
||||
|
||||
# Add pointer to /etc/auto.master.gpupdate.d in /etc/auto.master
|
||||
auto_destdir = '+dir:{}'.format(self.__auto_dir)
|
||||
add_line_if_missing(self.__auto_file, auto_destdir)
|
||||
|
||||
# Collect data for drive settings
|
||||
drive_list = list()
|
||||
drive_list = Drive_list()
|
||||
for drv in self.drives:
|
||||
drive_settings = dict()
|
||||
drive_settings['dir'] = drv.dir
|
||||
drive_settings['login'] = drv.login
|
||||
drive_settings['password'] = drv.password
|
||||
drive_settings['path'] = drv.path.replace('\\', '/')
|
||||
drive_settings['path'] = remove_chars_before_colon(drv.path.replace('\\', '/'))
|
||||
drive_settings['action'] = drv.action
|
||||
drive_settings['thisDrive'] = drv.thisDrive
|
||||
drive_settings['allDrives'] = drv.allDrives
|
||||
drive_settings['label'] = remove_escaped_quotes(drv.label)
|
||||
drive_settings['persistent'] = drv.persistent
|
||||
drive_settings['useLetter'] = drv.useLetter
|
||||
drive_settings['username'] = self.username
|
||||
drive_settings['cifsacl'] = False if self.cifsacl_disable else True
|
||||
|
||||
drive_list.append(drive_settings)
|
||||
|
||||
if len(drive_list) > 0:
|
||||
if drive_list.len() > 0:
|
||||
mount_settings = dict()
|
||||
mount_settings['drives'] = drive_list
|
||||
mount_settings['drives'] = drive_list()
|
||||
mount_text = self.template_mountpoints.render(**mount_settings)
|
||||
|
||||
mount_text_hide = self.template_mountpoints_hide.render(**mount_settings)
|
||||
|
||||
with open(self.user_config.resolve(), 'w') as f:
|
||||
f.truncate()
|
||||
f.write(mount_text)
|
||||
f.flush()
|
||||
|
||||
with open(self.user_config_hide.resolve(), 'w') as f:
|
||||
f.truncate()
|
||||
f.write(mount_text_hide)
|
||||
f.flush()
|
||||
|
||||
autofs_settings = dict()
|
||||
autofs_settings['home_dir'] = self.home
|
||||
autofs_settings['mntTarget'] = self.mntTarget
|
||||
autofs_settings['mount_file'] = self.user_config.resolve()
|
||||
autofs_settings['timeout'] = self.timeout.data if self.timeout and self.timeout.data else 120
|
||||
|
||||
autofs_text = self.template_auto.render(**autofs_settings)
|
||||
|
||||
with open(self.user_autofs.resolve(), 'w') as f:
|
||||
@ -152,13 +355,123 @@ class cifs_applier_user(applier_frontend):
|
||||
f.write(autofs_text)
|
||||
f.flush()
|
||||
|
||||
autofs_settings['mount_file'] = self.user_config_hide.resolve()
|
||||
autofs_text = self.template_auto_hide.render(**autofs_settings)
|
||||
with open(self.user_autofs_hide.resolve(), 'w') as f:
|
||||
f.truncate()
|
||||
f.write(autofs_text)
|
||||
f.flush()
|
||||
|
||||
if self.is_changed_keys() or (self.drives and not self.is_mount_point_dirname()):
|
||||
self.restart_autofs()
|
||||
|
||||
if self.username:
|
||||
self.update_drivemaps_home_links()
|
||||
|
||||
def restart_autofs(self):
|
||||
try:
|
||||
subprocess.check_call(['/bin/systemctl', 'restart', 'autofs'])
|
||||
except Exception as exc:
|
||||
log('E74', {'exc': exc})
|
||||
|
||||
|
||||
def unlink_symlink(self, symlink:Path, previous=None):
|
||||
try:
|
||||
if symlink.exists() and symlink.is_symlink() and symlink.owner() == 'root':
|
||||
symlink.unlink()
|
||||
elif symlink.is_symlink() and not symlink.exists():
|
||||
symlink.unlink()
|
||||
elif previous:
|
||||
symlink.unlink()
|
||||
except:
|
||||
pass
|
||||
|
||||
def del_previous_link(self, previous_value_link , mountpoint_dirname, prefix):
|
||||
d_previous = Path(self.homedir + ('/' if prefix else '/net.') + previous_value_link)
|
||||
if d_previous.name != mountpoint_dirname:
|
||||
dHide_previous = Path(self.homedir + ('/.' if prefix else '/.net.') + previous_value_link)
|
||||
self.unlink_symlink(d_previous, True)
|
||||
self.unlink_symlink(dHide_previous, True)
|
||||
|
||||
def update_drivemaps_home_links(self):
|
||||
if self.state_home_link_disable_net:
|
||||
prefix = ''
|
||||
else:
|
||||
prefix = 'net.'
|
||||
if self.state_home_link_disable_net_user:
|
||||
prefix_user = ''
|
||||
else:
|
||||
prefix_user = 'net.'
|
||||
|
||||
previous_value_link = self.keys_cifs_previous_values_machine.get(self.__name_value, self.__mountpoint_dirname)
|
||||
previous_state_home_link_disable_net_user = self.keys_cifs_previous_values_user.get(self.__key_link_prefix_user)
|
||||
previous_state_home_link_disable_net = self.keys_cifs_previous_values_user.get(self.__key_link_prefix)
|
||||
previous_value_link_user = self.keys_cifs_previous_values_user.get(self.__name_value_user, self.__mountpoint_dirname_user)
|
||||
|
||||
self.homedir = get_homedir(self.username)
|
||||
|
||||
dUser = Path(self.homedir + '/' + prefix_user + self.__mountpoint_dirname_user)
|
||||
dUserHide = Path(self.homedir + '/.' + prefix_user + self.__mountpoint_dirname_user)
|
||||
dMachine = Path(self.homedir+'/' + prefix + self.__mountpoint_dirname)
|
||||
dMachineHide = Path(self.homedir+'/.' + prefix + self.__mountpoint_dirname)
|
||||
|
||||
if self.state_home_link_user:
|
||||
dUserMountpoint = Path(self.home).joinpath(self.__mountpoint_dirname_user)
|
||||
dUserMountpointHide = Path(self.home).joinpath('.' + self.__mountpoint_dirname_user)
|
||||
self.del_previous_link(previous_value_link_user, dUser.name, previous_state_home_link_disable_net_user)
|
||||
if not dUser.exists() and dUserMountpoint.exists():
|
||||
try:
|
||||
os.symlink(dUserMountpoint, dUser, True)
|
||||
except Exception as exc:
|
||||
log('D194', {'exc': exc})
|
||||
elif dUser.is_symlink() and not dUserMountpoint.exists():
|
||||
self.unlink_symlink(dUser)
|
||||
|
||||
if not dUserHide.exists() and dUserMountpointHide.exists():
|
||||
try:
|
||||
os.symlink(dUserMountpointHide, dUserHide, True)
|
||||
except Exception as exc:
|
||||
log('D196', {'exc': exc})
|
||||
elif dUserHide.is_symlink() and not dUserMountpointHide.exists():
|
||||
self.unlink_symlink(dUserHide)
|
||||
else:
|
||||
self.del_previous_link(previous_value_link_user, dUser.name, previous_state_home_link_disable_net_user)
|
||||
self.unlink_symlink(dUser)
|
||||
self.unlink_symlink(dUserHide)
|
||||
|
||||
|
||||
if self.state_home_link:
|
||||
dMachineMountpoint = Path(self.__target_mountpoint).joinpath(self.__mountpoint_dirname)
|
||||
dMachineMountpointHide = Path(self.__target_mountpoint).joinpath('.' + self.__mountpoint_dirname)
|
||||
self.del_previous_link(previous_value_link, dMachine.name, previous_state_home_link_disable_net)
|
||||
|
||||
if not dMachine.exists() and dMachineMountpoint.exists():
|
||||
try:
|
||||
os.symlink(dMachineMountpoint, dMachine, True)
|
||||
except Exception as exc:
|
||||
log('D195', {'exc': exc})
|
||||
elif dMachine.is_symlink() and not dMachineMountpoint.exists():
|
||||
self.unlink_symlink(dMachine)
|
||||
|
||||
if not dMachineHide.exists() and dMachineMountpointHide.exists():
|
||||
try:
|
||||
os.symlink(dMachineMountpointHide, dMachineHide, True)
|
||||
except Exception as exc:
|
||||
log('D197', {'exc': exc})
|
||||
elif dMachineHide.is_symlink() and not dMachineMountpointHide.exists():
|
||||
self.unlink_symlink(dMachineHide)
|
||||
else:
|
||||
self.del_previous_link(previous_value_link, dMachine.name, previous_state_home_link_disable_net)
|
||||
self.unlink_symlink(dMachine)
|
||||
self.unlink_symlink(dMachineHide)
|
||||
|
||||
|
||||
|
||||
|
||||
def admin_context_apply(self):
|
||||
if self.__module_enabled:
|
||||
log('D146')
|
||||
self.__admin_context_apply()
|
||||
self._admin_context_apply()
|
||||
else:
|
||||
log('D147')
|
||||
|
||||
|
@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

@ -21,19 +21,18 @@ from .applier_frontend import (
, check_enabled
)
from .appliers.control import control
from util.logging import slogm, log
from util.logging import log

import logging

class control_applier(applier_frontend):
__module_name = 'ControlApplier'
__module_experimental = False
__module_enabled = True
_registry_branch = 'Software\\BaseALT\\Policies\\Control'
_registry_branch = 'Software/BaseALT/Policies/Control'

def __init__(self, storage):
self.storage = storage
self.control_settings = self.storage.filter_hklm_entries('Software\\BaseALT\\Policies\\Control%')
self.control_settings = self.storage.filter_hklm_entries(self._registry_branch)
self.controls = list()
self.__module_enabled = check_enabled(
self.storage
@ -43,7 +42,7 @@ class control_applier(applier_frontend):

def run(self):
for setting in self.control_settings:
valuename = setting.hive_key.rpartition('\\')[2]
valuename = setting.hive_key.rpartition('/')[2]
try:
self.controls.append(control(valuename, int(setting.data)))
logdata = dict()
@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

@ -16,19 +16,15 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import logging
import os
import json

import cups

from .applier_frontend import (
applier_frontend
, check_enabled
)
from gpt.printers import json2printer
from util.rpm import is_rpm_installed
from util.logging import slogm, log
from util.logging import log

def storage_get_printers(storage, sid):
'''
@ -81,8 +77,12 @@ class cups_applier(applier_frontend):
if not is_rpm_installed('cups'):
log('W9')
return

self.cups_connection = cups.Connection()
try:
self.cups_connection = cups.Connection()
except Exception as exc:
logdata = dict()
logdata['exc'] = exc
log('W20', logdata)
self.printers = storage_get_printers(self.storage, self.storage.get_info('machine_sid'))

if self.printers:
@ -111,7 +111,7 @@ class cups_applier_user(applier_frontend):
self.__module_enabled = check_enabled(
self.storage
, self.__module_name
, self.__module_enabled
, self.__module_experimental
)

def user_context_apply(self):
@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

@ -21,9 +21,8 @@ from .applier_frontend import (
, check_enabled
)
from .appliers.envvar import Envvar
from util.logging import slogm, log
from util.logging import log

import logging

class envvar_applier(applier_frontend):
__module_name = 'EnvvarsApplier'
@ -34,7 +33,8 @@ class envvar_applier(applier_frontend):
self.storage = storage
self.sid = sid
self.envvars = self.storage.get_envvars(self.sid)
#self.__module_enabled = check_enabled(self.storage, self.__module_name, self.__module_enabled)
Envvar.clear_envvar_file()
self.__module_enabled = check_enabled(self.storage, self.__module_name, self.__module_experimental)

def apply(self):
if self.__module_enabled:
@ -54,12 +54,10 @@ class envvar_applier_user(applier_frontend):
self.sid = sid
self.username = username
self.envvars = self.storage.get_envvars(self.sid)
#self.__module_enabled = check_enabled(self.storage, self.__module_name, self.__module_experimental)
Envvar.clear_envvar_file(username)
self.__module_enabled = check_enabled(self.storage, self.__module_name, self.__module_experimental)

def admin_context_apply(self):
pass

def user_context_apply(self):
if self.__module_enabled:
log('D136')
ev = Envvar(self.envvars, self.username)
@ -67,3 +65,6 @@ class envvar_applier_user(applier_frontend):
else:
log('D137')

def user_context_apply(self):
pass

@ -17,7 +17,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from .appliers.file_cp import Files_cp
from .appliers.file_cp import Files_cp, Execution_check
from .applier_frontend import (
applier_frontend
, check_enabled
@ -33,6 +33,7 @@ class file_applier(applier_frontend):

def __init__(self, storage, file_cache, sid):
self.storage = storage
self.exe_check = Execution_check(storage)
self.sid = sid
self.file_cache = file_cache
self.files = self.storage.get_files(self.sid)
@ -40,7 +41,7 @@ class file_applier(applier_frontend):

def run(self):
for file in self.files:
Files_cp(file, self.file_cache)
Files_cp(file, self.file_cache, self.exe_check)

def apply(self):
if self.__module_enabled:
@ -59,6 +60,7 @@ class file_applier_user(applier_frontend):
self.file_cache = file_cache
self.sid = sid
self.username = username
self.exe_check = Execution_check(storage)
self.files = self.storage.get_files(self.sid)
self.__module_enabled = check_enabled(
self.storage
@ -68,7 +70,7 @@ class file_applier_user(applier_frontend):

def run(self):
for file in self.files:
Files_cp(file, self.file_cache, self.username)
Files_cp(file, self.file_cache, self.exe_check, self.username)

def admin_context_apply(self):
if self.__module_enabled:
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -33,15 +33,14 @@ from .applier_frontend import (
|
||||
, check_enabled
|
||||
)
|
||||
from util.logging import log
|
||||
from util.util import is_machine_name
|
||||
from util.util import is_machine_name, try_dict_to_literal_eval
|
||||
|
||||
class firefox_applier(applier_frontend):
|
||||
__module_name = 'FirefoxApplier'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
__registry_branch = 'Software\\Policies\\Mozilla\\Firefox\\'
|
||||
__firefox_installdir1 = '/usr/lib64/firefox/distribution'
|
||||
__firefox_installdir2 = '/etc/firefox/policies'
|
||||
__registry_branch = 'Software/Policies/Mozilla/Firefox'
|
||||
__firefox_policies = '/etc/firefox/policies'
|
||||
|
||||
def __init__(self, storage, sid, username):
|
||||
self.storage = storage
|
||||
@ -50,8 +49,7 @@ class firefox_applier(applier_frontend):
|
||||
self._is_machine_name = is_machine_name(self.username)
|
||||
self.policies = dict()
|
||||
self.policies_json = dict({ 'policies': self.policies })
|
||||
firefox_filter = '{}%'.format(self.__registry_branch)
|
||||
self.firefox_keys = self.storage.filter_hklm_entries(firefox_filter)
|
||||
self.firefox_keys = self.storage.filter_hklm_entries(self.__registry_branch)
|
||||
self.policies_gen = dict()
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage
|
||||
@ -59,78 +57,16 @@ class firefox_applier(applier_frontend):
|
||||
, self.__module_experimental
|
||||
)
|
||||
|
||||
def get_boolean(self,data):
|
||||
if data in ['0', 'false', None, 'none', 0]:
|
||||
return False
|
||||
if data in ['1', 'true', 1]:
|
||||
return True
|
||||
|
||||
def get_parts(self, hivekeyname):
|
||||
'''
|
||||
Parse registry path string and leave key parameters
|
||||
'''
|
||||
parts = hivekeyname.replace(self.__registry_branch, '').split('\\')
|
||||
return parts
|
||||
|
||||
def create_dict(self, firefox_keys):
|
||||
'''
|
||||
Collect dictionaries from registry keys into a general dictionary
|
||||
'''
|
||||
counts = dict()
|
||||
for it_data in firefox_keys:
|
||||
branch = counts
|
||||
try:
|
||||
if type(it_data.data) is bytes:
|
||||
it_data.data = it_data.data.decode(encoding='utf-16').replace('\x00','')
|
||||
#Cases when it is necessary to create nested dictionaries
|
||||
if it_data.valuename != it_data.data:
|
||||
parts = self.get_parts(it_data.hive_key)
|
||||
#creating a nested dictionary from elements
|
||||
for part in parts[:-1]:
|
||||
branch = branch.setdefault(part, {})
|
||||
#dictionary key value initialization
|
||||
if it_data.type == 4:
|
||||
branch[parts[-1]] = self.get_boolean(it_data.data)
|
||||
else:
|
||||
branch[parts[-1]] = str(it_data.data).replace('\\', '/')
|
||||
#Cases when it is necessary to create lists in a dictionary
|
||||
else:
|
||||
parts = self.get_parts(it_data.keyname)
|
||||
for part in parts[:-1]:
|
||||
branch = branch.setdefault(part, {})
|
||||
if branch.get(parts[-1]) is None:
|
||||
branch[parts[-1]] = list()
|
||||
if it_data.type == 4:
|
||||
branch[parts[-1]].append(self.get_boolean(it_data.data))
|
||||
else:
|
||||
if os.path.isdir(str(it_data.data).replace('\\', '/')):
|
||||
branch[parts[-1]].append(str(it_data.data).replace('\\', '/'))
|
||||
else:
|
||||
branch[parts[-1]].append(str(it_data.data))
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['Exception'] = exc
|
||||
logdata['keyname'] = it_data.keyname
|
||||
log('W14', logdata)
|
||||
|
||||
self.policies_json = {'policies': dict_item_to_list(counts)}
|
||||
|
||||
def machine_apply(self):
|
||||
'''
|
||||
Write policies.json to Firefox installdir.
|
||||
Write policies.json to Firefox.
|
||||
'''
|
||||
self.create_dict(self.firefox_keys)
|
||||
destfile = os.path.join(self.__firefox_installdir1, 'policies.json')
|
||||
excp = ['SOCKSVersion']
|
||||
self.policies_json = create_dict(self.firefox_keys, self.__registry_branch, excp)
|
||||
|
||||
os.makedirs(self.__firefox_installdir1, exist_ok=True)
|
||||
with open(destfile, 'w') as f:
|
||||
json.dump(self.policies_json, f)
|
||||
logdata = dict()
|
||||
logdata['destfile'] = destfile
|
||||
log('D91', logdata)
|
||||
|
||||
destfile = os.path.join(self.__firefox_installdir2, 'policies.json')
|
||||
os.makedirs(self.__firefox_installdir2, exist_ok=True)
|
||||
destfile = os.path.join(self.__firefox_policies, 'policies.json')
|
||||
os.makedirs(self.__firefox_policies, exist_ok=True)
|
||||
with open(destfile, 'w') as f:
|
||||
json.dump(self.policies_json, f)
|
||||
logdata = dict()
|
||||
@ -160,6 +96,9 @@ def dict_item_to_list(dictionary:dict) -> dict:
|
||||
'''
|
||||
Replacing dictionaries with numeric keys with a List
|
||||
'''
|
||||
if '' in dictionary:
|
||||
dictionary = dictionary.pop('')
|
||||
|
||||
for key,val in dictionary.items():
|
||||
if type(val) == dict:
|
||||
if key_dict_is_digit(val):
|
||||
@ -167,3 +106,65 @@ def dict_item_to_list(dictionary:dict) -> dict:
|
||||
else:
|
||||
dict_item_to_list(dictionary[key])
|
||||
return dictionary
|
||||
|
||||
def clean_data_firefox(data):
|
||||
return data.replace("'", '\"')
|
||||
|
||||
|
||||
|
||||
def create_dict(firefox_keys, registry_branch, excp=list()):
|
||||
'''
|
||||
Collect dictionaries from registry keys into a general dictionary
|
||||
'''
|
||||
get_boolean = lambda data: data in ['1', 'true', 'True', True, 1] if isinstance(data, (str, int)) else False
|
||||
get_parts = lambda hivekey, registry: hivekey.replace(registry, '').split('/')
|
||||
counts = dict()
|
||||
for it_data in firefox_keys:
|
||||
branch = counts
|
||||
try:
|
||||
if type(it_data.data) is bytes:
|
||||
it_data.data = it_data.data.decode(encoding='utf-16').replace('\x00','')
|
||||
json_data = try_dict_to_literal_eval(it_data.data)
|
||||
if json_data:
|
||||
it_data.data = json_data
|
||||
it_data.type = 7
|
||||
else:
|
||||
if it_data.type == 1:
|
||||
it_data.data = clean_data_firefox(it_data.data)
|
||||
#Cases when it is necessary to create nested dictionaries
|
||||
if it_data.valuename != it_data.data:
|
||||
parts = get_parts(it_data.hive_key, registry_branch)
|
||||
#creating a nested dictionary from elements
|
||||
for part in parts[:-1]:
|
||||
branch = branch.setdefault(part, {})
|
||||
#dictionary key value initialization
|
||||
if it_data.type == 4:
|
||||
if it_data.valuename in excp:
|
||||
branch[parts[-1]] = int(it_data.data)
|
||||
else:
|
||||
branch[parts[-1]] = get_boolean(it_data.data)
|
||||
elif it_data.type == 7:
|
||||
branch[parts[-1]] = it_data.data
|
||||
else:
|
||||
branch[parts[-1]] = str(it_data.data).replace('\\', '/')
|
||||
#Cases when it is necessary to create lists in a dictionary
|
||||
else:
|
||||
parts = get_parts(it_data.keyname, registry_branch)
|
||||
for part in parts[:-1]:
|
||||
branch = branch.setdefault(part, {})
|
||||
if branch.get(parts[-1]) is None:
|
||||
branch[parts[-1]] = list()
|
||||
if it_data.type == 4:
|
||||
branch[parts[-1]].append(get_boolean(it_data.data))
|
||||
else:
|
||||
if os.path.isdir(str(it_data.data).replace('\\', '/')):
|
||||
branch[parts[-1]].append(str(it_data.data).replace('\\', '/'))
|
||||
else:
|
||||
branch[parts[-1]].append(str(it_data.data))
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['Exception'] = exc
|
||||
logdata['keyname'] = it_data.keyname
|
||||
log('W14', logdata)
|
||||
|
||||
return {'policies': dict_item_to_list(counts)}
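The heart of create_dict above is the setdefault walk that turns flat dconf-style registry keys into nested policies JSON. A stripped-down sketch of just that technique; the keys and values below are made up, and type coercion, the SOCKSVersion exception list and the literal-eval handling are only in the full function above:

branch = 'Software/Policies/Mozilla/Firefox'
entries = {  # hypothetical flattened registry rows: full key path -> data
    'Software/Policies/Mozilla/Firefox/Homepage/URL': 'https://example.org',
    'Software/Policies/Mozilla/Firefox/DisableAppUpdate': '1',
}

policies = {}
for hive_key, data in entries.items():
    # Strip the branch prefix, then build one nested dict level per path segment.
    parts = hive_key.replace(branch, '').strip('/').split('/')
    node = policies
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = data

print({'policies': policies})
# {'policies': {'Homepage': {'URL': 'https://example.org'}, 'DisableAppUpdate': '1'}}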
|
||||
|
@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

@ -17,10 +17,9 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import logging
import subprocess

from util.logging import slogm, log
from util.logging import log
from .applier_frontend import (
applier_frontend
, check_enabled

@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

@ -16,7 +16,6 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from pathlib import Path

from .applier_frontend import (
applier_frontend
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -26,6 +26,7 @@ from .polkit_applier import (
|
||||
)
|
||||
from .systemd_applier import systemd_applier
|
||||
from .firefox_applier import firefox_applier
|
||||
from .thunderbird_applier import thunderbird_applier
|
||||
from .chromium_applier import chromium_applier
|
||||
from .cups_applier import cups_applier
|
||||
from .package_applier import (
|
||||
@ -45,7 +46,9 @@ from .folder_applier import (
|
||||
folder_applier
|
||||
, folder_applier_user
|
||||
)
|
||||
from .cifs_applier import cifs_applier_user
|
||||
from .cifs_applier import (
|
||||
cifs_applier_user
|
||||
, cifs_applier)
|
||||
from .ntp_applier import ntp_applier
|
||||
from .envvar_applier import (
|
||||
envvar_applier
|
||||
@ -66,6 +69,15 @@ from .ini_applier import (
|
||||
, ini_applier_user
|
||||
)
|
||||
|
||||
from .kde_applier import (
|
||||
kde_applier
|
||||
, kde_applier_user
|
||||
)
|
||||
from .laps_applier import laps_applier
|
||||
|
||||
from .networkshare_applier import networkshare_applier
|
||||
from .yandex_browser_applier import yandex_browser_applier
|
||||
|
||||
from util.sid import get_sid
|
||||
from util.users import (
|
||||
is_root,
|
||||
@ -118,12 +130,12 @@ class frontend_manager:
|
||||
'''
|
||||
|
||||
def __init__(self, username, is_machine):
|
||||
self.storage = registry_factory('registry')
|
||||
self.username = determine_username(username)
|
||||
self.storage = registry_factory('dconf', username=self.username)
|
||||
self.is_machine = is_machine
|
||||
self.process_uname = get_process_user()
|
||||
self.sid = get_sid(self.storage.get_info('domain'), self.username, is_machine)
|
||||
self.file_cache = fs_file_cache('file_cache')
|
||||
self.file_cache = fs_file_cache('file_cache', self.username)
|
||||
|
||||
self.machine_appliers = dict()
|
||||
self.user_appliers = dict()
|
||||
@ -133,22 +145,34 @@ class frontend_manager:
|
||||
self._init_user_appliers()
|
||||
|
||||
def _init_machine_appliers(self):
|
||||
self.machine_appliers['laps_applier'] = laps_applier(self.storage)
|
||||
self.machine_appliers['control'] = control_applier(self.storage)
|
||||
self.machine_appliers['polkit'] = polkit_applier(self.storage)
|
||||
self.machine_appliers['systemd'] = systemd_applier(self.storage)
|
||||
self.machine_appliers['firefox'] = firefox_applier(self.storage, self.sid, self.username)
|
||||
self.machine_appliers['thunderbird'] = thunderbird_applier(self.storage, self.sid, self.username)
|
||||
self.machine_appliers['chromium'] = chromium_applier(self.storage, self.sid, self.username)
|
||||
self.machine_appliers['yandex_browser'] = yandex_browser_applier(self.storage, self.sid, self.username)
|
||||
self.machine_appliers['shortcuts'] = shortcut_applier(self.storage)
|
||||
self.machine_appliers['gsettings'] = gsettings_applier(self.storage, self.file_cache)
|
||||
try:
|
||||
self.machine_appliers['cifs'] = cifs_applier(self.storage, self.sid)
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['applier_name'] = 'cifs'
|
||||
logdata['msg'] = str(exc)
|
||||
log('E24', logdata)
|
||||
self.machine_appliers['cups'] = cups_applier(self.storage)
|
||||
self.machine_appliers['firewall'] = firewall_applier(self.storage)
|
||||
self.machine_appliers['folders'] = folder_applier(self.storage, self.sid)
|
||||
self.machine_appliers['package'] = package_applier(self.storage)
|
||||
self.machine_appliers['ntp'] = ntp_applier(self.storage)
|
||||
self.machine_appliers['envvar'] = envvar_applier(self.storage, self.sid)
|
||||
self.machine_appliers['networkshare'] = networkshare_applier(self.storage, self.sid)
|
||||
self.machine_appliers['scripts'] = scripts_applier(self.storage, self.sid)
|
||||
self.machine_appliers['files'] = file_applier(self.storage, self.file_cache, self.sid)
|
||||
self.machine_appliers['ini'] = ini_applier(self.storage, self.sid)
|
||||
self.machine_appliers['kde'] = kde_applier(self.storage)
|
||||
self.machine_appliers['package'] = package_applier(self.storage)
|
||||
|
||||
def _init_user_appliers(self):
|
||||
# User appliers are expected to work with user-writable
|
||||
@ -163,12 +187,14 @@ class frontend_manager:
|
||||
logdata['applier_name'] = 'cifs'
|
||||
logdata['msg'] = str(exc)
|
||||
log('E25', logdata)
|
||||
self.user_appliers['package'] = package_applier_user(self.storage, self.sid, self.username)
|
||||
self.user_appliers['polkit'] = polkit_applier_user(self.storage, self.sid, self.username)
|
||||
self.user_appliers['envvar'] = envvar_applier_user(self.storage, self.sid, self.username)
|
||||
self.user_appliers['networkshare'] = networkshare_applier(self.storage, self.sid, self.username)
|
||||
self.user_appliers['scripts'] = scripts_applier_user(self.storage, self.sid, self.username)
|
||||
self.user_appliers['files'] = file_applier_user(self.storage, self.file_cache, self.sid, self.username)
|
||||
self.user_appliers['ini'] = ini_applier_user(self.storage, self.sid, self.username)
|
||||
self.user_appliers['kde'] = kde_applier_user(self.storage, self.sid, self.username, self.file_cache)
|
||||
self.user_appliers['package'] = package_applier_user(self.storage, self.sid, self.username)
|
||||
|
||||
def machine_apply(self):
|
||||
'''
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2021 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -16,15 +16,13 @@
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import logging
|
||||
from util.exceptions import NotUNCPathError
|
||||
import os
|
||||
import pwd
|
||||
import subprocess
|
||||
|
||||
from gi.repository import (
|
||||
Gio
|
||||
, GLib
|
||||
)
|
||||
from gi.repository import Gio
|
||||
from storage.dconf_registry import Dconf_registry
|
||||
|
||||
from .applier_frontend import (
|
||||
applier_frontend
|
||||
@ -35,7 +33,7 @@ from .appliers.gsettings import (
|
||||
system_gsettings,
|
||||
user_gsettings
|
||||
)
|
||||
from util.logging import slogm ,log
|
||||
from util.logging import log
|
||||
|
||||
def uri_fetch(schema, path, value, cache):
|
||||
'''
|
||||
@ -48,6 +46,8 @@ def uri_fetch(schema, path, value, cache):
|
||||
logdata['src'] = value
|
||||
try:
|
||||
retval = cache.get(value)
|
||||
if not retval:
|
||||
retval = ''
|
||||
logdata['dst'] = retval
|
||||
log('D90', logdata)
|
||||
except Exception as exc:
|
||||
@ -59,14 +59,14 @@ class gsettings_applier(applier_frontend):
|
||||
__module_name = 'GSettingsApplier'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
__registry_branch = 'Software\\BaseALT\\Policies\\GSettings\\'
|
||||
__registry_branch = 'Software\\BaseALT\\Policies\\gsettings\\'
|
||||
__registry_locks_branch = 'Software\\BaseALT\\Policies\\GSettingsLocks\\'
|
||||
__wallpaper_entry = 'Software\\BaseALT\\Policies\\GSettings\\org.mate.background.picture-filename'
|
||||
__vino_authentication_methods_entry = 'Software\\BaseALT\\Policies\\GSettings\\org.gnome.Vino.authentication-methods'
|
||||
__wallpaper_entry = 'Software/BaseALT/Policies/gsettings/org.mate.background.picture-filename'
|
||||
__vino_authentication_methods_entry = 'Software/BaseALT/Policies/gsettings/org.gnome.Vino.authentication-methods'
|
||||
__global_schema = '/usr/share/glib-2.0/schemas'
|
||||
__override_priority_file = 'zzz_policy.gschema.override'
|
||||
__override_old_file = '0_policy.gschema.override'
|
||||
__windows_settings = dict()
|
||||
|
||||
|
||||
def __init__(self, storage, file_cache):
|
||||
self.storage = storage
|
||||
@ -108,13 +108,13 @@ class gsettings_applier(applier_frontend):
|
||||
|
||||
# Get all configured gsettings locks
|
||||
for lock in self.gsettings_locks:
|
||||
valuename = lock.hive_key.rpartition('\\')[2]
|
||||
valuename = lock.hive_key.rpartition('/')[2]
|
||||
self.locks[valuename] = int(lock.data)
|
||||
|
||||
# Calculate all configured gsettings
|
||||
for setting in self.gsettings_keys:
|
||||
helper = None
|
||||
valuename = setting.hive_key.rpartition('\\')[2]
|
||||
valuename = setting.hive_key.rpartition('/')[2]
|
||||
rp = valuename.rpartition('.')
|
||||
schema = rp[0]
|
||||
path = rp[2]
|
||||
@ -137,10 +137,7 @@ class gsettings_applier(applier_frontend):
|
||||
log('E48')
|
||||
|
||||
# Update desktop configuration system backend
|
||||
try:
|
||||
proc = subprocess.run(args=['/usr/bin/dconf', "update"], capture_output=True, check=True)
|
||||
except Exception as exc:
|
||||
log('E49')
|
||||
Dconf_registry.dconf_update()
|
||||
|
||||
def apply(self):
|
||||
if self.__module_enabled:
|
||||
@ -184,9 +181,9 @@ class gsettings_applier_user(applier_frontend):
|
||||
__module_name = 'GSettingsApplierUser'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
__registry_branch = 'Software\\BaseALT\\Policies\\GSettings\\'
|
||||
__wallpaper_entry = 'Software\\BaseALT\\Policies\\GSettings\\org.mate.background.picture-filename'
|
||||
__vino_authentication_methods_entry = 'Software\\BaseALT\\Policies\\GSettings\\org.gnome.Vino.authentication-methods'
|
||||
__registry_branch = 'Software\\BaseALT\\Policies\\gsettings\\'
|
||||
__wallpaper_entry = 'Software/BaseALT/Policies/gsettings/org.mate.background.picture-filename'
|
||||
__vino_authentication_methods_entry = 'Software/BaseALT/Policies/gsettings/org.gnome.Vino.authentication-methods'
|
||||
|
||||
def __init__(self, storage, file_cache, sid, username):
|
||||
self.storage = storage
|
||||
@ -204,25 +201,25 @@ class gsettings_applier_user(applier_frontend):
|
||||
mapping = [
|
||||
# Disable or enable screen saver
|
||||
GSettingsMapping(
|
||||
'Software\\Policies\\Microsoft\\Windows\\Control Panel\\Desktop\\ScreenSaveActive'
|
||||
'Software/Policies/Microsoft/Windows/Control Panel/Desktop/ScreenSaveActive'
|
||||
, 'org.mate.screensaver'
|
||||
, 'idle-activation-enabled'
|
||||
)
|
||||
# Timeout in seconds for screen saver activation. The value of zero effectively disables screensaver start
|
||||
, GSettingsMapping(
|
||||
'Software\\Policies\\Microsoft\\Windows\\Control Panel\\Desktop\\ScreenSaveTimeOut'
|
||||
'Software/Policies/Microsoft/Windows/Control Panel/Desktop/ScreenSaveTimeOut'
|
||||
, 'org.mate.session'
|
||||
, 'idle-delay'
|
||||
)
|
||||
# Enable or disable password protection for screen saver
|
||||
, GSettingsMapping(
|
||||
'Software\\Policies\\Microsoft\\Windows\\Control Panel\\Desktop\\ScreenSaverIsSecure'
|
||||
'Software/Policies/Microsoft/Windows/Control Panel/Desktop/ScreenSaverIsSecure'
|
||||
, 'org.mate.screensaver'
|
||||
, 'lock-enabled'
|
||||
)
|
||||
# Specify image which will be used as a wallpaper
|
||||
, GSettingsMapping(
|
||||
'Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\System\\Wallpaper'
|
||||
'Software/Microsoft/Windows/CurrentVersion/Policies/System/Wallpaper'
|
||||
, 'org.mate.background'
|
||||
, 'picture-filename'
|
||||
)
|
||||
@ -251,15 +248,6 @@ class gsettings_applier_user(applier_frontend):
|
||||
return uri_fetch(schema, path, value, self.file_cache)
|
||||
|
||||
def run(self):
|
||||
#for setting in self.gsettings_keys:
|
||||
# valuename = setting.hive_key.rpartition('\\')[2]
|
||||
# rp = valuename.rpartition('.')
|
||||
# schema = rp[0]
|
||||
# path = rp[2]
|
||||
# self.gsettings.append(user_gsetting(schema, path, setting.data))
|
||||
|
||||
|
||||
# Calculate all mapped gsettings if mapping enabled
|
||||
if self.__windows_mapping_enabled:
|
||||
log('D83')
|
||||
self.windows_mapping_append()
|
||||
@ -268,7 +256,7 @@ class gsettings_applier_user(applier_frontend):
|
||||
|
||||
# Calculate all configured gsettings
|
||||
for setting in self.gsettings_keys:
|
||||
valuename = setting.hive_key.rpartition('\\')[2]
|
||||
valuename = setting.hive_key.rpartition('/')[2]
|
||||
rp = valuename.rpartition('.')
|
||||
schema = rp[0]
|
||||
path = rp[2]
|
||||
@ -293,8 +281,10 @@ class gsettings_applier_user(applier_frontend):
|
||||
try:
|
||||
entry = self.__wallpaper_entry
|
||||
filter_result = self.storage.get_hkcu_entry(self.sid, entry)
|
||||
if filter_result:
|
||||
if filter_result and filter_result.data:
|
||||
self.file_cache.store(filter_result.data)
|
||||
except NotUNCPathError:
|
||||
...
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['exception'] = str(exc)
|
||||
|
@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2022 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

@ -16,7 +16,6 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from pathlib import Path

from .appliers.ini_file import Ini_file
from .applier_frontend import (
@ -68,11 +67,11 @@ class ini_applier_user(applier_frontend):
Ini_file(inifile, self.username)

def admin_context_apply(self):
pass

def user_context_apply(self):
if self.__module_enabled:
log('D173')
self.run()
else:
log('D174')

def user_context_apply(self):
pass

367
gpoa/frontend/kde_applier.py
Normal file
@ -0,0 +1,367 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from .applier_frontend import applier_frontend, check_enabled
|
||||
from util.logging import log
|
||||
from util.util import get_homedir
|
||||
from util.exceptions import NotUNCPathError
|
||||
import os
|
||||
import subprocess
|
||||
import re
|
||||
import dbus
|
||||
import shutil
|
||||
|
||||
class kde_applier(applier_frontend):
|
||||
__module_name = 'KdeApplier'
|
||||
__module_experimental = True
|
||||
__module_enabled = False
|
||||
__hklm_branch = 'Software/BaseALT/Policies/KDE/'
|
||||
__hklm_lock_branch = 'Software/BaseALT/Policies/KDELocks/'
|
||||
|
||||
def __init__(self, storage):
|
||||
self.storage = storage
|
||||
self.locks_dict = {}
|
||||
self.locks_data_dict = {}
|
||||
self.all_kde_settings = {}
|
||||
kde_filter = '{}%'.format(self.__hklm_branch)
|
||||
locks_filter = '{}%'.format(self.__hklm_lock_branch)
|
||||
self.locks_settings = self.storage.filter_hklm_entries(locks_filter)
|
||||
self.kde_settings = self.storage.filter_hklm_entries(kde_filter)
|
||||
self.all_kde_settings = {}
|
||||
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage,
|
||||
self.__module_name,
|
||||
self.__module_experimental
|
||||
)
|
||||
|
||||
def apply(self):
|
||||
if self.__module_enabled:
|
||||
log('D198')
|
||||
create_dict(self.kde_settings, self.all_kde_settings, self.locks_settings, self.locks_dict)
|
||||
apply(self.all_kde_settings, self.locks_dict)
|
||||
else:
|
||||
log('D199')
|
||||
|
||||
class kde_applier_user(applier_frontend):
|
||||
__module_name = 'KdeApplierUser'
|
||||
__module_experimental = True
|
||||
__module_enabled = False
|
||||
kde_version = None
|
||||
__hkcu_branch = 'Software/BaseALT/Policies/KDE'
|
||||
__hkcu_lock_branch = 'Software/BaseALT/Policies/KDELocks'
|
||||
__plasma_update_entry = 'Software/BaseALT/Policies/KDE/Plasma/Update'
|
||||
|
||||
def __init__(self, storage, sid=None, username=None, file_cache = None):
|
||||
self.storage = storage
|
||||
self.username = username
|
||||
self.sid = sid
|
||||
self.file_cache = file_cache
|
||||
self.locks_dict = {}
|
||||
self.locks_data_dict = {}
|
||||
self.all_kde_settings = {}
|
||||
kde_applier_user.kde_version = get_kde_version()
|
||||
kde_filter = '{}%'.format(self.__hkcu_branch)
|
||||
locks_filter = '{}%'.format(self.__hkcu_lock_branch)
|
||||
self.locks_settings = self.storage.filter_hkcu_entries(self.sid, locks_filter)
|
||||
self.plasma_update = self.storage.get_entry(self.__plasma_update_entry)
|
||||
self.plasma_update_flag = self.plasma_update.data if self.plasma_update is not None else 0
|
||||
self.kde_settings = self.storage.filter_hkcu_entries(self.sid, kde_filter)
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage,
|
||||
self.__module_name,
|
||||
self.__module_experimental
|
||||
)
|
||||
|
||||
def admin_context_apply(self):
|
||||
try:
|
||||
for setting in self.kde_settings:
|
||||
file_name = setting.keyname.split("/")[-2]
|
||||
if file_name == 'wallpaper':
|
||||
data = setting.data
|
||||
break
|
||||
self.file_cache.store(data)
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['exc'] = exc
|
||||
|
||||
def user_context_apply(self):
|
||||
'''
|
||||
Change settings applied in user context
|
||||
'''
|
||||
if self.__module_enabled:
|
||||
log('D200')
|
||||
create_dict(self.kde_settings, self.all_kde_settings, self.locks_settings, self.locks_dict, self.file_cache, self.username, self.plasma_update_flag)
|
||||
apply(self.all_kde_settings, self.locks_dict, self.username)
|
||||
else:
|
||||
log('D201')
|
||||
|
||||
dbus_methods_mapping = {
|
||||
'kscreenlockerrc': {
|
||||
'dbus_service': 'org.kde.screensaver',
|
||||
'dbus_path': '/ScreenSaver',
|
||||
'dbus_interface': 'org.kde.screensaver',
|
||||
'dbus_method': 'configure'
|
||||
},
|
||||
'wallpaper': {
|
||||
'dbus_service': 'org.freedesktop.systemd1',
|
||||
'dbus_path': '/org/freedesktop/systemd1',
|
||||
'dbus_interface': 'org.freedesktop.systemd1.Manager',
|
||||
'dbus_method': 'RestartUnit',
|
||||
'dbus_args': ['plasma-plasmashell.service', 'replace']
|
||||
}
|
||||
}
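# Each entry above maps a configuration file written by this applier to the D-Bus
# call that makes the running session re-read it; call_dbus_method() below looks
# files up in this mapping after they have been modified.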
|
||||
|
||||
def get_kde_version():
|
||||
try:
|
||||
kinfo_path = shutil.which("kinfo", path="/usr/lib/kf5/bin:/usr/bin")
|
||||
if not kinfo_path:
|
||||
raise FileNotFoundError("Unable to find kinfo")
|
||||
output = subprocess.check_output([kinfo_path], text=True, env={'LANG':'C'})
|
||||
for line in output.splitlines():
|
||||
if "KDE Frameworks Version" in line:
|
||||
frameworks_version = line.split(":", 1)[1].strip()
|
||||
major_frameworks_version = int(frameworks_version.split(".")[0])
|
||||
return major_frameworks_version
|
||||
except:
|
||||
return None
|
||||
|
||||
|
||||
def create_dict(kde_settings, all_kde_settings, locks_settings, locks_dict, file_cache = None, username = None, plasmaupdate = False):
|
||||
for locks in locks_settings:
|
||||
locks_dict[locks.valuename] = locks.data
|
||||
for setting in kde_settings:
|
||||
try:
|
||||
file_name, section, value = setting.keyname.split("/")[-2], setting.keyname.split("/")[-1], setting.valuename
|
||||
data = setting.data
|
||||
if file_name == 'wallpaper':
|
||||
apply_for_wallpaper(data, file_cache, username, plasmaupdate)
|
||||
else:
|
||||
all_kde_settings.setdefault(file_name, {}).setdefault(section, {})[value] = data
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['file_name'] = file_name
|
||||
logdata['section'] = section
|
||||
logdata['value'] = value
|
||||
logdata['data'] = data
|
||||
logdata['exc'] = exc
|
||||
log('W16', logdata)
|
||||
|
||||
def apply(all_kde_settings, locks_dict, username = None):
|
||||
logdata = dict()
|
||||
modified_files = set()
|
||||
if username is None:
|
||||
system_path_settings = '/etc/xdg/'
|
||||
system_files = [
|
||||
"baloofilerc",
|
||||
"kcminputrc",
|
||||
"kded_device_automounterrc",
|
||||
"kdeglobals",
|
||||
"ksplashrc",
|
||||
"kwinrc",
|
||||
"plasma-localerc",
|
||||
"plasmarc",
|
||||
"powermanagementprofilesrc"
|
||||
]
|
||||
for file in system_files:
|
||||
file_to_remove = f'{system_path_settings}{file}'
|
||||
if os.path.exists(file_to_remove):
|
||||
os.remove(file_to_remove)
|
||||
for file_name, sections in all_kde_settings.items():
|
||||
file_path = f'{system_path_settings}{file_name}'
|
||||
with open(file_path, 'w') as file:
|
||||
for section, keys in sections.items():
|
||||
section = section.replace(')(', '][')
|
||||
file.write(f'[{section}]\n')
|
||||
for key, value in keys.items():
|
||||
lock = f"{file_name}.{section}.{key}".replace('][', ')(')
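                        # KDE config files mark locked (immutable) entries with the "[$i]" suffix used below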
|
||||
if locks_dict.get(lock) == 1:
|
||||
file.write(f'{key}[$i]={value}\n')
|
||||
else:
|
||||
file.write(f'{key}={value}\n')
|
||||
file.write('\n')
|
||||
modified_files.add(file_name)
|
||||
else:
|
||||
for file_name, sections in all_kde_settings.items():
|
||||
path = f'{get_homedir(username)}/.config/{file_name}'
|
||||
            if not os.path.exists(path):
                open(path, 'a').close()
|
||||
for section, keys in sections.items():
|
||||
for key, value in keys.items():
|
||||
value = str(value)
|
||||
lock = f"{file_name}.{section}.{key}"
|
||||
if lock in locks_dict and locks_dict[lock] == 1:
|
||||
command = [
|
||||
f'kwriteconfig{kde_applier_user.kde_version}',
|
||||
'--file', file_name,
|
||||
'--group', section,
|
||||
                        '--key', key + '/$i/',
|
||||
'--type', 'string',
|
||||
value
|
||||
]
|
||||
else:
|
||||
command = [
|
||||
f'kwriteconfig{kde_applier_user.kde_version}',
|
||||
'--file', file_name,
|
||||
'--group', section,
|
||||
'--key', key,
|
||||
'--type', 'string',
|
||||
value
|
||||
]
|
||||
try:
|
||||
clear_locks_settings(username, file_name, key)
|
||||
env_path = dict(os.environ)
|
||||
env_path["PATH"] = "/usr/lib/kf5/bin:/usr/bin"
|
||||
subprocess.run(command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env_path)
|
||||
except:
|
||||
logdata['command'] = command
|
||||
log('W22', logdata)
|
||||
new_content = []
|
||||
file_path = f'{get_homedir(username)}/.config/{file_name}'
|
||||
try:
|
||||
with open(file_path, 'r') as file:
|
||||
for line in file:
|
||||
line = line.replace('/$i/', '[$i]').replace(')(', '][')
|
||||
new_content.append(line)
|
||||
with open(file_path, 'w') as file:
|
||||
file.writelines(new_content)
|
||||
logdata['file'] = file_name
|
||||
log('D202', logdata)
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('W19', logdata)
|
||||
modified_files.add(file_name)
|
||||
for file_name in modified_files:
|
||||
call_dbus_method(file_name)
|
||||
|
||||
def clear_locks_settings(username, file_name, key):
|
||||
'''
|
||||
Method to remove old locked settings
|
||||
'''
|
||||
file_path = f'{get_homedir(username)}/.config/{file_name}'
|
||||
with open(file_path, 'r') as file:
|
||||
lines = file.readlines()
|
||||
with open(file_path, 'w') as file:
|
||||
for line in lines:
|
||||
if f'{key}[$i]=' not in line:
|
||||
file.write(line)
|
||||
for line in lines:
|
||||
if f'{key}[$i]=' in line:
|
||||
logdata = dict()
|
||||
logdata['line'] = line.strip()
|
||||
log('I10', logdata)
|
||||
|
||||
def apply_for_wallpaper(data, file_cache, username, plasmaupdate):
|
||||
'''
|
||||
Method to change wallpaper
|
||||
'''
|
||||
logdata = dict()
|
||||
path_to_wallpaper = f'{get_homedir(username)}/.config/plasma-org.kde.plasma.desktop-appletsrc'
|
||||
id_desktop = get_id_desktop(path_to_wallpaper)
|
||||
try:
|
||||
try:
|
||||
data = str(file_cache.get(data))
|
||||
except NotUNCPathError:
|
||||
data = str(data)
|
||||
|
||||
with open(path_to_wallpaper, 'r') as file:
|
||||
current_wallpaper = file.read()
|
||||
match = re.search(rf'\[Containments\]\[{id_desktop}\]\[Wallpaper\]\[org\.kde\.image\]\[General\]\s+Image=(.*)', current_wallpaper)
|
||||
if match:
|
||||
current_wallpaper_path = match.group(1)
|
||||
flag = (current_wallpaper_path == data)
|
||||
else:
|
||||
flag = False
|
||||
|
||||
os.environ["LANGUAGE"] = os.environ["LANG"].split(".")[0]
|
||||
os.environ["XDG_DATA_DIRS"] = "/usr/share/kf5:"
|
||||
        # Needed so the system can locate directories containing .colors scheme files
|
||||
os.environ["DISPLAY"] = ":0"
|
||||
        # Needed to run plasma-apply-colorscheme
|
||||
os.environ["XDG_RUNTIME_DIR"] = f"/run/user/{os.getuid()}"
|
||||
os.environ["PATH"] = "/usr/lib/kf5/bin:"
|
||||
os.environ["DBUS_SESSION_BUS_ADDRESS"] = f"unix:path=/run/user/{os.getuid()}/bus"#plasma-apply-wallpaperimage
|
||||
env_path = dict(os.environ)
|
||||
env_path["PATH"] = "/usr/lib/kf5/bin:/usr/bin"
|
||||
        # PATH override so binaries from /usr/lib/kf5/bin can be run without hard-coded paths
|
||||
if not flag:
|
||||
if os.path.isfile(path_to_wallpaper):
|
||||
command = [
|
||||
f'kwriteconfig{kde_applier_user.kde_version}',
|
||||
'--file', 'plasma-org.kde.plasma.desktop-appletsrc',
|
||||
'--group', 'Containments',
|
||||
'--group', id_desktop,
|
||||
'--group', 'Wallpaper',
|
||||
'--group', 'org.kde.image',
|
||||
'--group', 'General',
|
||||
'--key', 'Image',
|
||||
'--type', 'string',
|
||||
data
|
||||
]
|
||||
try:
|
||||
subprocess.run(command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env_path)
|
||||
except:
|
||||
logdata['command'] = command
|
||||
log('E68', logdata)
|
||||
if plasmaupdate == 1:
|
||||
call_dbus_method("wallpaper")
|
||||
else:
|
||||
logdata['file'] = path_to_wallpaper
|
||||
log('W21', logdata)
|
||||
except OSError as exc:
|
||||
logdata['exc'] = exc
|
||||
log('W17', logdata)
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('E67', logdata)
|
||||
|
||||
def get_id_desktop(path_to_wallpaper):
|
||||
'''
|
||||
    Get the desktop id: the [Containments] section number in the configuration file whose entry carries an activityId.
|
||||
'''
|
||||
pattern = r'\[Containments\]\[(\d+)\][^\[]*activityId=([^\s]+)'
|
||||
try:
|
||||
with open(path_to_wallpaper, 'r') as file:
|
||||
file_content = file.read()
|
||||
match = re.search(pattern, file_content)
|
||||
return match.group(1) if match else None
|
||||
except:
|
||||
return None
|
||||
|
||||
def call_dbus_method(file_name):
|
||||
'''
|
||||
Method to call D-Bus method based on the file name
|
||||
'''
|
||||
os.environ["DBUS_SESSION_BUS_ADDRESS"] = f"unix:path=/run/user/{os.getuid()}/bus"
|
||||
if file_name in dbus_methods_mapping:
|
||||
config = dbus_methods_mapping[file_name]
|
||||
try:
|
||||
session_bus = dbus.SessionBus()
|
||||
dbus_object = session_bus.get_object(config['dbus_service'], config['dbus_path'])
|
||||
dbus_iface = dbus.Interface(dbus_object, config['dbus_interface'])
|
||||
if 'dbus_args' in config:
|
||||
getattr(dbus_iface, config['dbus_method'])(*config['dbus_args'])
|
||||
else:
|
||||
getattr(dbus_iface, config['dbus_method'])()
|
||||
        except dbus.exceptions.DBusException as exc:
|
||||
logdata = dict({'error': str(exc)})
|
||||
log('E31', logdata)
|
||||
|
695
gpoa/frontend/laps_applier.py
Normal file
@ -0,0 +1,695 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2025 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from .applier_frontend import (
|
||||
applier_frontend,
|
||||
check_enabled
|
||||
)
|
||||
import struct
|
||||
from datetime import datetime, timedelta
|
||||
import dpapi_ng
|
||||
from util.util import remove_prefix_from_keys, check_local_user_exists
|
||||
from util.sid import WellKnown21RID
|
||||
import subprocess
|
||||
import ldb
|
||||
import string
|
||||
import secrets
|
||||
import os
|
||||
import psutil
|
||||
from util.logging import log
|
||||
import logging
|
||||
|
||||
class laps_applier(applier_frontend):
|
||||
"""
|
||||
LAPS (Local Administrator Password Solution) implementation for managing
|
||||
and automatically rotating administrator passwords.
|
||||
"""
|
||||
|
||||
# Time calculation constants
|
||||
|
||||
# Number of seconds between the Windows epoch (1601-01-01 00:00:00 UTC)
|
||||
# and the Unix epoch (1970-01-01 00:00:00 UTC).
|
||||
# Used to convert between Unix timestamps and Windows FileTime.
|
||||
_EPOCH_TIMESTAMP = 11644473600
|
||||
# Number of 100-nanosecond intervals per second.
|
||||
# Used to convert seconds to Windows FileTime format.
|
||||
_HUNDREDS_OF_NANOSECONDS = 10000000
|
||||
# Number of 100-nanosecond intervals in one day
|
||||
_DAY_FLOAT = 8.64e11
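    # Worked example (illustrative): the Unix epoch 1970-01-01 00:00:00 UTC
    # corresponds to FileTime 11644473600 * 10000000 = 116444736000000000
    # 100-ns ticks counted from 1601-01-01.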
|
||||
|
||||
# Module configuration
|
||||
__module_name = 'LapsApplier'
|
||||
__module_experimental = True
|
||||
__module_enabled = False
|
||||
|
||||
# Registry paths
|
||||
_WINDOWS_REGISTRY_PATH = 'SOFTWARE/Microsoft/Windows/CurrentVersion/Policies/LAPS/'
|
||||
_ALT_REGISTRY_PATH = 'Software/BaseALT/Policies/Laps/'
|
||||
|
||||
# LDAP attributes
|
||||
_ATTR_ENCRYPTED_PASSWORD = 'msLAPS-EncryptedPassword'
|
||||
_ATTR_PASSWORD_EXPIRATION_TIME = 'msLAPS-PasswordExpirationTime'
|
||||
|
||||
# dconf key for password modification time
|
||||
_KEY_PASSWORD_LAST_MODIFIED = '/Software/BaseALT/Policies/Laps/PasswordLastModified/'
|
||||
|
||||
# Password complexity levels
|
||||
_PASSWORD_COMPLEXITY = {
|
||||
1: string.ascii_uppercase,
|
||||
2: string.ascii_letters,
|
||||
3: string.ascii_letters + string.digits,
|
||||
4: string.ascii_letters + string.digits + string.punctuation
|
||||
}
|
||||
|
||||
# Post-authentication actions
|
||||
_ACTION_NONE = 0
|
||||
_ACTION_CHANGE_PASSWORD = 1
|
||||
_ACTION_TERMINATE_SESSIONS = 3
|
||||
_ACTION_REBOOT = 5
|
||||
|
||||
def __init__(self, storage):
|
||||
"""
|
||||
Initialize the LAPS applier with configuration from registry.
|
||||
|
||||
Args:
|
||||
storage: Storage object containing registry entries and system information
|
||||
"""
|
||||
self.storage = storage
|
||||
|
||||
# Load registry configuration
|
||||
if not self._load_configuration():
|
||||
self.__module_enabled = False
|
||||
return
|
||||
|
||||
if not self._check_requirements():
|
||||
log('W29')
|
||||
self.__module_enabled = False
|
||||
return
|
||||
|
||||
# Initialize system connections and parameters
|
||||
self._initialize_system_parameters()
|
||||
|
||||
# Check if module is enabled in configuration
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage,
|
||||
self.__module_name,
|
||||
self.__module_experimental
|
||||
)
|
||||
|
||||
def _load_configuration(self):
|
||||
"""Load configuration settings from registry."""
|
||||
alt_keys = remove_prefix_from_keys(
|
||||
self.storage.filter_entries(self._ALT_REGISTRY_PATH),
|
||||
self._ALT_REGISTRY_PATH
|
||||
)
|
||||
windows_keys = remove_prefix_from_keys(
|
||||
self.storage.filter_entries(self._WINDOWS_REGISTRY_PATH),
|
||||
self._WINDOWS_REGISTRY_PATH
|
||||
)
|
||||
|
||||
# Combine configurations with BaseALT taking precedence
|
||||
self.config = windows_keys
|
||||
self.config.update(alt_keys)
|
||||
|
||||
# Extract commonly used configuration parameters
|
||||
self.backup_directory = self.config.get('BackupDirectory', None)
|
||||
self.encryption_enabled = self.config.get('ADPasswordEncryptionEnabled', 1)
|
||||
self.password_expiration_protection = self.config.get('PasswordExpirationProtectionEnabled', 1)
|
||||
self.password_age_days = self.config.get('PasswordAgeDays', 30)
|
||||
self.post_authentication_actions = self.config.get('PostAuthenticationActions', 3)
|
||||
self.post_authentication_reset_delay = self.config.get('PostAuthenticationResetDelay', 24)
|
||||
name = self.config.get('AdministratorAccountName', 'root')
|
||||
if check_local_user_exists(name):
|
||||
self.target_user = name
|
||||
else:
|
||||
log('W36')
|
||||
return False
|
||||
return True
|
||||
|
||||
def _check_requirements(self):
|
||||
"""
|
||||
Check if the necessary requirements are met for the module to operate.
|
||||
|
||||
Returns:
|
||||
bool: True if requirements are met, False otherwise
|
||||
"""
|
||||
if self.backup_directory != 2 and self.encryption_enabled == 1:
|
||||
logdata = dict()
|
||||
logdata['backup_directory'] = self.backup_directory
|
||||
logdata['encryption_enabled'] = self.encryption_enabled
|
||||
log('D223', logdata)
|
||||
return False
|
||||
return True
|
||||
|
||||
def _initialize_system_parameters(self):
|
||||
"""Initialize system parameters and connections."""
|
||||
# Set up LDAP connections
|
||||
self.samdb = self.storage.get_info('samdb')
|
||||
self.domain_sid = self.samdb.get_domain_sid()
|
||||
self.domain_dn = self.samdb.domain_dn()
|
||||
self.computer_dn = self._get_computer_dn()
|
||||
self.admin_group_sid = f'{self.domain_sid}-{WellKnown21RID.DOMAIN_ADMINS.value}'
|
||||
|
||||
# Set up time parameters
|
||||
self.expiration_date = self._get_expiration_date()
|
||||
self.expiration_date_int = self._convert_to_filetime(self.expiration_date)
|
||||
self.current_time_int = self._convert_to_filetime(datetime.now())
|
||||
|
||||
# Get current system state
|
||||
self.expiration_time_attr = self._get_expiration_time_attr()
|
||||
self.pass_last_mod_int = self._read_dconf_pass_last_mod()
|
||||
self.encryption_principal = self._get_encryption_principal()
|
||||
self.last_login_hours_ago = self._get_last_login_hours_ago()
|
||||
|
||||
def _get_computer_dn(self):
|
||||
"""
|
||||
Get the Distinguished Name of the computer account.
|
||||
|
||||
Returns:
|
||||
str: Computer's distinguished name in LDAP
|
||||
"""
|
||||
machine_name = self.storage.get_info('machine_name')
|
||||
search_filter = f'(sAMAccountName={machine_name})'
|
||||
results = self.samdb.search(base=self.domain_dn, expression=search_filter, attrs=['dn'])
|
||||
return results[0]['dn']
|
||||
|
||||
def _get_encryption_principal(self):
|
||||
"""
|
||||
Get the encryption principal for password encryption.
|
||||
|
||||
Returns:
|
||||
str: SID of the encryption principal
|
||||
"""
|
||||
encryption_principal = self.config.get('ADPasswordEncryptionPrincipal', None)
|
||||
if not encryption_principal:
|
||||
return self.admin_group_sid
|
||||
|
||||
return self._verify_encryption_principal(encryption_principal)
|
||||
|
||||
def _verify_encryption_principal(self, principal_name):
|
||||
"""
|
||||
Verify the encryption principal exists and get its SID.
|
||||
|
||||
Args:
|
||||
principal_name: Principal name to verify
|
||||
|
||||
Returns:
|
||||
str: SID of the encryption principal if found, or admin group SID as fallback
|
||||
"""
|
||||
try:
|
||||
# Try to resolve as domain\\user format
|
||||
domain = self.storage.get_info('domain')
|
||||
username = f'{domain}\\{principal_name}'
|
||||
output = subprocess.check_output(['wbinfo', '-n', username])
|
||||
sid = output.split()[0].decode('utf-8')
|
||||
return sid
|
||||
except subprocess.CalledProcessError:
|
||||
# Try to resolve directly as SID
|
||||
try:
|
||||
output = subprocess.check_output(['wbinfo', '-s', principal_name])
|
||||
return principal_name
|
||||
except subprocess.CalledProcessError:
|
||||
# Fallback to admin group SID
|
||||
logdata = dict()
|
||||
logdata['principal_name'] = principal_name
|
||||
log('W30', logdata)
|
||||
return self.admin_group_sid
|
||||
|
||||
def _get_expiration_date(self, base_time=None):
|
||||
"""
|
||||
Calculate the password expiration date.
|
||||
|
||||
Args:
|
||||
base_time: Optional datetime to base calculation on, defaults to now
|
||||
|
||||
Returns:
|
||||
datetime: Password expiration date
|
||||
"""
|
||||
base = base_time or datetime.now()
|
||||
# Set to beginning of day and add password age
|
||||
return (base.replace(hour=0, minute=0, second=0, microsecond=0) +
|
||||
timedelta(days=int(self.password_age_days)))
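        # Example (illustrative): with PasswordAgeDays = 30 and a base time of
        # 2025-01-15 10:30, the resulting expiration date is 2025-02-14 00:00.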
|
||||
|
||||
def _convert_to_filetime(self, dt):
|
||||
"""
|
||||
Convert datetime to Windows filetime format (100ns intervals since 1601-01-01).
|
||||
|
||||
Args:
|
||||
dt: Datetime to convert
|
||||
|
||||
Returns:
|
||||
int: Windows filetime integer
|
||||
"""
|
||||
epoch_timedelta = timedelta(seconds=self._EPOCH_TIMESTAMP)
|
||||
new_dt = dt + epoch_timedelta
|
||||
return int(new_dt.timestamp() * self._HUNDREDS_OF_NANOSECONDS)
|
||||
|
||||
def _get_expiration_time_attr(self):
|
||||
"""
|
||||
Get the current password expiration time from LDAP.
|
||||
|
||||
Returns:
|
||||
int: Password expiration time as integer, or 0 if not found
|
||||
"""
|
||||
try:
|
||||
res = self.samdb.search(
|
||||
base=self.computer_dn,
|
||||
scope=ldb.SCOPE_BASE,
|
||||
expression="(objectClass=*)",
|
||||
attrs=[self._ATTR_PASSWORD_EXPIRATION_TIME]
|
||||
)
|
||||
return int(res[0].get(self._ATTR_PASSWORD_EXPIRATION_TIME, 0)[0])
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['exc'] = exc
|
||||
log('W31', logdata)
|
||||
return 0
|
||||
|
||||
def _read_dconf_pass_last_mod(self):
|
||||
"""
|
||||
Read the password last modified time from dconf.
|
||||
|
||||
Returns:
|
||||
int: Timestamp of last password modification or current time if not found
|
||||
"""
|
||||
try:
|
||||
key_path = self._KEY_PASSWORD_LAST_MODIFIED + self.target_user
|
||||
last_modified = subprocess.check_output(
|
||||
['dconf', 'read', key_path],
|
||||
text=True
|
||||
).strip().strip("'\"")
|
||||
return int(last_modified)
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['exc'] = exc
|
||||
log('W32', logdata)
|
||||
return self.current_time_int
|
||||
|
||||
def _write_dconf_pass_last_mod(self):
|
||||
"""
|
||||
Write the password last modified time to dconf.
|
||||
"""
|
||||
try:
|
||||
# Ensure dbus session is available
|
||||
self._ensure_dbus_session()
|
||||
|
||||
# Write current time to dconf
|
||||
key_path = self._KEY_PASSWORD_LAST_MODIFIED + self.target_user
|
||||
last_modified = f'"{self.current_time_int}"'
|
||||
subprocess.check_output(['dconf', 'write', key_path, last_modified])
|
||||
log('D222')
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['exc'] = exc
|
||||
log('W28', logdata)
|
||||
|
||||
def _ensure_dbus_session(self):
|
||||
"""Ensure a D-Bus session is available for dconf operations."""
|
||||
dbus_address = os.getenv("DBUS_SESSION_BUS_ADDRESS")
|
||||
if not dbus_address:
|
||||
result = subprocess.run(
|
||||
["dbus-daemon", "--fork", "--session", "--print-address"],
|
||||
capture_output=True,
|
||||
text=True
|
||||
)
|
||||
dbus_address = result.stdout.strip()
|
||||
os.environ["DBUS_SESSION_BUS_ADDRESS"] = dbus_address
|
||||
|
||||
def _get_last_login_hours_ago(self):
|
||||
"""
|
||||
Get the number of hours since the user's last login.
|
||||
|
||||
Returns:
|
||||
int: Hours since last login, or 0 if error or no login found
|
||||
"""
|
||||
logdata = dict()
|
||||
logdata['target_user'] = self.target_user
|
||||
try:
|
||||
output = subprocess.check_output(
|
||||
["last", "-n", "1", self.target_user],
|
||||
env={'LANG':'C'},
|
||||
text=True
|
||||
).split("\n")[0]
|
||||
|
||||
parts = output.split()
|
||||
if len(parts) < 7:
|
||||
return 0
|
||||
|
||||
# Parse login time
|
||||
login_str = f"{parts[4]} {parts[5]} {parts[6]}"
|
||||
last_login_time = datetime.strptime(login_str, "%b %d %H:%M")
|
||||
last_login_time = last_login_time.replace(year=datetime.now().year)
|
||||
|
||||
# Calculate hours difference
|
||||
time_diff = datetime.now() - last_login_time
|
||||
hours_ago = int(time_diff.total_seconds() // 3600)
|
||||
logdata['hours_ago'] = hours_ago
|
||||
log('D224', logdata)
|
||||
return hours_ago
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('W33', logdata)
|
||||
return 0
|
||||
|
||||
def _get_changed_password_hours_ago(self):
|
||||
"""
|
||||
Calculate how many hours ago the password was last changed.
|
||||
|
||||
Returns:
|
||||
int: Hours since password was last changed, or 0 if error
|
||||
"""
|
||||
logdata = dict()
|
||||
logdata['target_user'] = self.target_user
|
||||
try:
|
||||
diff_time = self.current_time_int - self.pass_last_mod_int
|
||||
hours_difference = diff_time // 3.6e10
|
||||
hours_ago = int(hours_difference)
|
||||
logdata['hours_ago'] = hours_ago
|
||||
log('D225', logdata)
|
||||
return hours_ago
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('W34', logdata)
|
||||
return 0
|
||||
|
||||
def _generate_password(self):
|
||||
"""
|
||||
Generate a secure password based on policy settings.
|
||||
|
||||
Returns:
|
||||
str: Generated password meeting complexity requirements
|
||||
"""
|
||||
# Get password length from config
|
||||
password_length = self.config.get('PasswordLength', 14)
|
||||
if not isinstance(password_length, int) or not (8 <= password_length <= 64):
|
||||
password_length = 14
|
||||
|
||||
# Get password complexity from config
|
||||
password_complexity = self.config.get('PasswordComplexity', 4)
|
||||
if not isinstance(password_complexity, int) or not (1 <= password_complexity <= 4):
|
||||
password_complexity = 4
|
||||
|
||||
# Get character set based on complexity
|
||||
char_set = self._PASSWORD_COMPLEXITY.get(password_complexity, self._PASSWORD_COMPLEXITY[4])
|
||||
|
||||
# Generate initial password
|
||||
password = ''.join(secrets.choice(char_set) for _ in range(password_length))
|
||||
|
||||
# Ensure password meets complexity requirements
|
||||
if password_complexity >= 3 and not any(c.isdigit() for c in password):
|
||||
# Add a digit if required but missing
|
||||
digit = secrets.choice(string.digits)
|
||||
position = secrets.randbelow(len(password))
|
||||
password = password[:position] + digit + password[position:]
|
||||
|
||||
if password_complexity == 4 and not any(c in string.punctuation for c in password):
|
||||
# Add a special character if required but missing
|
||||
special_char = secrets.choice(string.punctuation)
|
||||
position = secrets.randbelow(len(password))
|
||||
password = password[:position] + special_char + password[position:]
|
||||
|
||||
return password
|
||||
|
||||
def _get_json_password_data(self, password):
|
||||
"""
|
||||
Format password information as JSON.
|
||||
|
||||
Args:
|
||||
password: The password
|
||||
|
||||
Returns:
|
||||
str: JSON formatted password information
|
||||
"""
|
||||
return f'{{"n":"{self.target_user}","t":"{self.expiration_date_int}","p":"{password}"}}'
|
||||
|
||||
def _create_password_blob(self, password):
|
||||
"""
|
||||
Create encrypted password blob for LDAP storage.
|
||||
|
||||
Args:
|
||||
password: Password to encrypt
|
||||
|
||||
Returns:
|
||||
bytes: Encrypted password blob
|
||||
"""
|
||||
# Create JSON data and encode as UTF-16LE with null terminator
|
||||
json_data = self._get_json_password_data(password)
|
||||
password_bytes = json_data.encode("utf-16-le") + b"\x00\x00"
|
||||
# Save and change loglevel
|
||||
logger = logging.getLogger()
|
||||
old_level = logger.level
|
||||
logger.setLevel(logging.ERROR)
|
||||
# Encrypt the password
|
||||
dpapi_blob = dpapi_ng.ncrypt_protect_secret(
|
||||
password_bytes,
|
||||
self.encryption_principal,
|
||||
auth_protocol='kerberos'
|
||||
)
|
||||
        # Restore the previous log level
|
||||
logger.setLevel(old_level)
|
||||
# Create full blob with metadata
|
||||
return self._add_blob_metadata(dpapi_blob)
|
||||
|
||||
def _add_blob_metadata(self, dpapi_blob):
|
||||
"""
|
||||
Add metadata to the encrypted password blob.
|
||||
|
||||
Args:
|
||||
dpapi_blob: Encrypted password blob
|
||||
|
||||
Returns:
|
||||
bytes: Complete blob with metadata
|
||||
"""
|
||||
# Convert timestamp to correct format
|
||||
left, right = struct.unpack('<LL', struct.pack('Q', self.current_time_int))
|
||||
packed = struct.pack('<LL', right, left)
|
||||
|
||||
# Add blob length and padding
|
||||
prefix = packed + struct.pack('<i', len(dpapi_blob)) + b'\x00\x00\x00\x00'
|
||||
|
||||
# Combine metadata and encrypted blob
|
||||
return prefix + dpapi_blob
|
||||
|
||||
def _change_user_password(self, new_password):
|
||||
"""
|
||||
Change the password for the target user.
|
||||
|
||||
Args:
|
||||
new_password: New password to set
|
||||
|
||||
Returns:
|
||||
bool: True if password was changed successfully, False otherwise
|
||||
"""
|
||||
logdata = dict()
|
||||
        logdata['target_user'] = self.target_user
|
||||
try:
|
||||
# Use chpasswd to change the password
|
||||
process = subprocess.Popen(
|
||||
["chpasswd"],
|
||||
stdin=subprocess.PIPE,
|
||||
text=True
|
||||
)
|
||||
process.communicate(f"{self.target_user}:{new_password}")
|
||||
|
||||
# Record the time of change
|
||||
self._write_dconf_pass_last_mod()
|
||||
log('D221', logdata)
|
||||
return True
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('W27', logdata)
|
||||
return False
|
||||
|
||||
def _update_ldap_password(self, encrypted_blob):
|
||||
"""
|
||||
Update the encrypted password and expiration time in LDAP.
|
||||
|
||||
Args:
|
||||
encrypted_blob: Encrypted password blob
|
||||
|
||||
Returns:
|
||||
bool: True if LDAP was updated successfully, False otherwise
|
||||
"""
|
||||
logdata = dict()
|
||||
logdata['computer_dn'] = self.computer_dn
|
||||
try:
|
||||
# Create LDAP modification message
|
||||
mod_msg = ldb.Message()
|
||||
mod_msg.dn = self.computer_dn
|
||||
|
||||
# Update password blob
|
||||
mod_msg[self._ATTR_ENCRYPTED_PASSWORD] = ldb.MessageElement(
|
||||
encrypted_blob,
|
||||
ldb.FLAG_MOD_REPLACE,
|
||||
self._ATTR_ENCRYPTED_PASSWORD
|
||||
)
|
||||
|
||||
# Update expiration time
|
||||
mod_msg[self._ATTR_PASSWORD_EXPIRATION_TIME] = ldb.MessageElement(
|
||||
str(self.expiration_date_int),
|
||||
ldb.FLAG_MOD_REPLACE,
|
||||
self._ATTR_PASSWORD_EXPIRATION_TIME
|
||||
)
|
||||
|
||||
# Perform the LDAP modification
|
||||
self.samdb.modify(mod_msg)
|
||||
log('D226', logdata)
|
||||
return True
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('E75', logdata)
|
||||
return False
|
||||
|
||||
def _should_update_password(self):
|
||||
"""
|
||||
Determine if the password should be updated based on policy.
|
||||
|
||||
Returns:
|
||||
tuple: (bool: update needed, bool: perform post-action)
|
||||
"""
|
||||
# Check if password has expired
|
||||
if not self._is_password_expired():
|
||||
# Password not expired, check if post-login action needed
|
||||
return self._check_post_login_action()
|
||||
|
||||
# Password has expired, update needed
|
||||
return True, False
|
||||
|
||||
def _is_password_expired(self):
|
||||
"""
|
||||
Check if the password has expired according to policy.
|
||||
|
||||
Returns:
|
||||
bool: True if password has expired, False otherwise
|
||||
"""
|
||||
# Case 1: No expiration protection, check LDAP attribute
|
||||
if not self.password_expiration_protection:
|
||||
if self.expiration_time_attr > self.current_time_int:
|
||||
return False
|
||||
# Case 2: With expiration protection, check both policy and LDAP
|
||||
elif self.password_expiration_protection:
|
||||
policy_expiry = self.pass_last_mod_int + (self.password_age_days * int(self._DAY_FLOAT))
|
||||
if policy_expiry > self.current_time_int and self.expiration_time_attr > self.current_time_int:
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
def _check_post_login_action(self):
|
||||
"""
|
||||
Check if a post-login password change action should be performed.
|
||||
|
||||
Returns:
|
||||
tuple: (bool: update needed, bool: perform post-action)
|
||||
"""
|
||||
# Check if password was changed after last login
|
||||
if self._get_changed_password_hours_ago() < self.last_login_hours_ago:
|
||||
return False, False
|
||||
|
||||
# Check if enough time has passed since login
|
||||
if self.last_login_hours_ago < self.post_authentication_reset_delay:
|
||||
return False, False
|
||||
|
||||
# Check if action is configured
|
||||
if self.post_authentication_actions == self._ACTION_NONE:
|
||||
return False, False
|
||||
|
||||
# Update needed, determine if post-action required
|
||||
return True, self.post_authentication_actions > self._ACTION_CHANGE_PASSWORD
|
||||
|
||||
def _perform_post_action(self):
|
||||
"""
|
||||
Perform post-password-change action based on configuration.
|
||||
"""
|
||||
if self.post_authentication_actions == self._ACTION_TERMINATE_SESSIONS:
|
||||
self._terminate_user_sessions()
|
||||
elif self.post_authentication_actions == self._ACTION_REBOOT:
|
||||
log('D220')
|
||||
subprocess.run(["reboot"])
|
||||
|
||||
def _terminate_user_sessions(self):
|
||||
"""
|
||||
Terminates all processes associated with the active sessions of the target user.
|
||||
"""
|
||||
# Get active sessions for the target user
|
||||
user_sessions = [user for user in psutil.users() if user.name == self.target_user]
|
||||
logdata = dict()
|
||||
logdata['target_user'] = self.target_user
|
||||
if not user_sessions:
|
||||
log('D227', logdata)
|
||||
return
|
||||
|
||||
# Terminate each session
|
||||
for session in user_sessions:
|
||||
try:
|
||||
# Get the process and terminate it
|
||||
proc = psutil.Process(session.pid)
|
||||
proc.kill() # Send SIGKILL
|
||||
logdata['pid'] = session.pid
|
||||
                log('D228', logdata)
|
||||
except (psutil.NoSuchProcess, psutil.AccessDenied) as exc:
|
||||
logdata['pid'] = session.pid
|
||||
logdata['exc'] = exc
|
||||
log('W35', logdata)
|
||||
|
||||
def update_laps_password(self):
|
||||
"""
|
||||
Update the LAPS password if needed based on policy.
|
||||
Checks expiration and login times to determine if update is needed.
|
||||
"""
|
||||
# Check if password update is needed
|
||||
update_needed, perform_post_action = self._should_update_password()
|
||||
|
||||
if not update_needed:
|
||||
log('D229')
|
||||
return False
|
||||
|
||||
# Generate new password
|
||||
password = self._generate_password()
|
||||
|
||||
# Create encrypted password blob
|
||||
encrypted_blob = self._create_password_blob(password)
|
||||
|
||||
# Update password in LDAP
|
||||
ldap_success = self._update_ldap_password(encrypted_blob)
|
||||
|
||||
if not ldap_success:
|
||||
return False
|
||||
|
||||
# Change local user password
|
||||
local_success = self._change_user_password(password)
|
||||
|
||||
if not local_success:
|
||||
log('E76')
|
||||
return False
|
||||
|
||||
log('D230')
|
||||
|
||||
# Perform post-action if configured
|
||||
if perform_post_action:
|
||||
self._perform_post_action()
|
||||
|
||||
|
||||
def apply(self):
|
||||
"""
|
||||
Main entry point for the LAPS applier.
|
||||
"""
|
||||
if self.__module_enabled:
|
||||
log('D218')
|
||||
self.update_laps_password()
|
||||
else:
|
||||
log('D219')
|
58
gpoa/frontend/networkshare_applier.py
Normal file
@ -0,0 +1,58 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2022 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from .appliers.netshare import Networkshare
|
||||
from .applier_frontend import (
|
||||
applier_frontend
|
||||
, check_enabled
|
||||
)
|
||||
from util.logging import log
|
||||
|
||||
class networkshare_applier(applier_frontend):
|
||||
__module_name = 'NetworksharesApplier'
|
||||
__module_name_user = 'NetworksharesApplierUser'
|
||||
__module_experimental = True
|
||||
__module_enabled = False
|
||||
|
||||
def __init__(self, storage, sid, username = None):
|
||||
self.storage = storage
|
||||
self.sid = sid
|
||||
self.username = username
|
||||
self.networkshare_info = self.storage.get_networkshare(self.sid)
|
||||
self.__module_enabled = check_enabled(self.storage, self.__module_name, self.__module_experimental)
|
||||
self.__module_enabled_user = check_enabled(self.storage, self.__module_name_user, self.__module_experimental)
|
||||
|
||||
def run(self):
|
||||
for networkshare in self.networkshare_info:
|
||||
Networkshare(networkshare, self.username)
|
||||
|
||||
def apply(self):
|
||||
if self.__module_enabled:
|
||||
log('D187')
|
||||
self.run()
|
||||
else:
|
||||
log('D181')
|
||||
def admin_context_apply(self):
|
||||
pass
|
||||
|
||||
def user_context_apply(self):
|
||||
if self.__module_enabled_user:
|
||||
log('D188')
|
||||
self.run()
|
||||
else:
|
||||
log('D189')
|
@ -17,7 +17,7 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
|
||||
import logging
|
||||
|
||||
import subprocess
|
||||
from enum import Enum
|
||||
|
||||
@ -26,7 +26,7 @@ from .applier_frontend import (
|
||||
applier_frontend
|
||||
, check_enabled
|
||||
)
|
||||
from util.logging import slogm, log
|
||||
from util.logging import log
|
||||
|
||||
|
||||
class NTPServerType(Enum):
|
||||
@ -117,30 +117,33 @@ class ntp_applier(applier_frontend):
|
||||
ntp_server_enabled = self.storage.get_hklm_entry(self.ntp_server_enabled)
|
||||
ntp_client_enabled = self.storage.get_hklm_entry(self.ntp_client_enabled)
|
||||
|
||||
if NTPServerType.NTP.value != server_type.data:
|
||||
logdata = dict()
|
||||
logdata['server_type'] = server_type
|
||||
log('W10', logdata)
|
||||
else:
|
||||
log('D126')
|
||||
if '1' == ntp_server_enabled.data:
|
||||
log('D127')
|
||||
self._start_chrony_client(server_address)
|
||||
self._chrony_as_server()
|
||||
elif '0' == ntp_server_enabled.data:
|
||||
log('D128')
|
||||
self._chrony_as_client()
|
||||
if server_type and server_type.data:
|
||||
if NTPServerType.NTP.value != server_type.data:
|
||||
logdata = dict()
|
||||
logdata['server_type'] = server_type
|
||||
log('W10', logdata)
|
||||
else:
|
||||
log('D129')
|
||||
log('D126')
|
||||
if ntp_server_enabled:
|
||||
if '1' == ntp_server_enabled.data and server_address:
|
||||
log('D127')
|
||||
self._start_chrony_client(server_address)
|
||||
self._chrony_as_server()
|
||||
elif '0' == ntp_server_enabled.data:
|
||||
log('D128')
|
||||
self._chrony_as_client()
|
||||
else:
|
||||
log('D129')
|
||||
|
||||
if '1' == ntp_client_enabled.data:
|
||||
log('D130')
|
||||
self._start_chrony_client()
|
||||
elif '0' == ntp_client_enabled.data:
|
||||
log('D131')
|
||||
self._stop_chrony_client()
|
||||
else:
|
||||
log('D132')
|
||||
elif ntp_client_enabled:
|
||||
if '1' == ntp_client_enabled.data:
|
||||
log('D130')
|
||||
self._start_chrony_client()
|
||||
elif '0' == ntp_client_enabled.data:
|
||||
log('D131')
|
||||
self._stop_chrony_client()
|
||||
else:
|
||||
log('D132')
|
||||
|
||||
def apply(self):
|
||||
if self.__module_enabled:
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -18,12 +18,7 @@
|
||||
|
||||
import logging
|
||||
import subprocess
|
||||
from util.logging import slogm, log
|
||||
from util.rpm import (
|
||||
update
|
||||
, install_rpm
|
||||
, remove_rpm
|
||||
)
|
||||
from util.logging import log
|
||||
|
||||
from .applier_frontend import (
|
||||
applier_frontend
|
||||
@ -62,8 +57,7 @@ class package_applier(applier_frontend):
|
||||
)
|
||||
def run(self):
|
||||
for flag in self.sync_packages_setting:
|
||||
if flag.data:
|
||||
self.flagSync = bool(int(flag.data))
|
||||
self.flagSync = bool(flag.data)
|
||||
|
||||
if 0 < self.install_packages_setting.count() or 0 < self.remove_packages_setting.count():
|
||||
if self.flagSync:
|
||||
@ -104,8 +98,8 @@ class package_applier_user(applier_frontend):
|
||||
self.username = username
|
||||
self.fulcmd = list()
|
||||
self.fulcmd.append('/usr/libexec/gpupdate/pkcon_runner')
|
||||
self.fulcmd.append('--sid')
|
||||
self.fulcmd.append(self.sid)
|
||||
self.fulcmd.append('--user')
|
||||
self.fulcmd.append(self.username)
|
||||
self.fulcmd.append('--loglevel')
|
||||
logger = logging.getLogger()
|
||||
self.fulcmd.append(str(logger.level))
|
||||
@ -119,7 +113,7 @@ class package_applier_user(applier_frontend):
|
||||
self.sync_packages_setting = self.storage.filter_hkcu_entries(self.sid, sync_branch)
|
||||
self.flagSync = False
|
||||
|
||||
self.__module_enabled = check_enabled(self.storage, self.__module_name, self.__module_enabled)
|
||||
self.__module_enabled = check_enabled(self.storage, self.__module_name, self.__module_experimental)
|
||||
|
||||
def user_context_apply(self):
|
||||
'''
|
||||
|
@ -19,36 +19,74 @@
|
||||
from .applier_frontend import (
|
||||
applier_frontend
|
||||
, check_enabled
|
||||
, check_windows_mapping_enabled
|
||||
)
|
||||
from .appliers.polkit import polkit
|
||||
from util.logging import slogm, log
|
||||
|
||||
import logging
|
||||
from util.logging import log
|
||||
|
||||
class polkit_applier(applier_frontend):
|
||||
__module_name = 'PolkitApplier'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
__deny_all = 'Software\\Policies\\Microsoft\\Windows\\RemovableStorageDevices\\Deny_All'
|
||||
__deny_all_win = 'Software\\Policies\\Microsoft\\Windows\\RemovableStorageDevices\\Deny_All'
|
||||
__registry_branch = 'Software\\BaseALT\\Policies\\Polkit\\'
|
||||
__registry_locks_branch = 'Software\\BaseALT\\Policies\\PolkitLocks\\'
|
||||
__polkit_map = {
|
||||
__deny_all: ['49-gpoa_disk_permissions', { 'Deny_All': 0 }]
|
||||
__deny_all_win: ['49-gpoa_disk_permissions', { 'Deny_All': 0 }],
|
||||
__registry_branch : ['49-alt_group_policy_permissions', {}],
|
||||
__registry_locks_branch : ['47-alt_group_policy_permissions', {}]
|
||||
}
|
||||
|
||||
def __init__(self, storage):
|
||||
self.storage = storage
|
||||
deny_all = storage.filter_hklm_entries(self.__deny_all).first()
|
||||
deny_all_win = None
|
||||
if check_windows_mapping_enabled(self.storage):
|
||||
deny_all_win = storage.filter_hklm_entries(self.__deny_all_win).first()
|
||||
# Deny_All hook: initialize defaults
|
||||
template_file = self.__polkit_map[self.__deny_all][0]
|
||||
template_vars = self.__polkit_map[self.__deny_all][1]
|
||||
if deny_all:
|
||||
polkit_filter = '{}%'.format(self.__registry_branch)
|
||||
polkit_locks_filter = '{}%'.format(self.__registry_locks_branch)
|
||||
self.polkit_keys = self.storage.filter_hklm_entries(polkit_filter)
|
||||
self.polkit_locks = self.storage.filter_hklm_entries(polkit_locks_filter)
|
||||
template_file = self.__polkit_map[self.__deny_all_win][0]
|
||||
template_vars = self.__polkit_map[self.__deny_all_win][1]
|
||||
template_file_all = self.__polkit_map[self.__registry_branch][0]
|
||||
template_vars_all = self.__polkit_map[self.__registry_branch][1]
|
||||
template_file_all_lock = self.__polkit_map[self.__registry_locks_branch][0]
|
||||
template_vars_all_lock = self.__polkit_map[self.__registry_locks_branch][1]
|
||||
locks = list()
|
||||
for lock in self.polkit_locks:
|
||||
if bool(int(lock.data)):
|
||||
locks.append(lock.valuename)
|
||||
|
||||
dict_lists_rules = {'No': [[], []],
|
||||
'Yes': [[], []],
|
||||
'Auth_self' : [[], []],
|
||||
'Auth_admin': [[], []],
|
||||
'Auth_self_keep': [[], []],
|
||||
'Auth_admin_keep': [[], []]}
|
||||
|
||||
check_and_add_to_list = (lambda it, act: dict_lists_rules[act][0].append(it.valuename)
|
||||
if it.valuename not in locks
|
||||
else dict_lists_rules[act][1].append(it.valuename))
|
||||
|
||||
for it_data in self.polkit_keys:
|
||||
check_and_add_to_list(it_data, it_data.data)
|
||||
|
||||
for key, item in dict_lists_rules.items():
|
||||
self.__polkit_map[self.__registry_branch][1][key] = item[0]
|
||||
self.__polkit_map[self.__registry_locks_branch][1][key] = item[1]
|
||||
|
||||
if deny_all_win:
|
||||
logdata = dict()
|
||||
logdata['Deny_All'] = deny_all.data
|
||||
logdata['Deny_All_win'] = deny_all_win.data
|
||||
log('D69', logdata)
|
||||
self.__polkit_map[self.__deny_all][1]['Deny_All'] = deny_all.data
|
||||
self.__polkit_map[self.__deny_all_win][1]['Deny_All'] = deny_all_win.data
|
||||
else:
|
||||
log('D71')
|
||||
self.policies = []
|
||||
self.policies.append(polkit(template_file, template_vars))
|
||||
self.policies.append(polkit(template_file_all, template_vars_all))
|
||||
self.policies.append(polkit(template_file_all_lock, template_vars_all_lock))
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage
|
||||
, self.__module_name
|
||||
@ -70,31 +108,55 @@ class polkit_applier_user(applier_frontend):
|
||||
__module_name = 'PolkitApplierUser'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
__deny_all = 'Software\\Policies\\Microsoft\\Windows\\RemovableStorageDevices\\Deny_All'
|
||||
__deny_all_win = 'Software\\Policies\\Microsoft\\Windows\\RemovableStorageDevices\\Deny_All'
|
||||
__registry_branch = 'Software\\BaseALT\\Policies\\Polkit\\'
|
||||
__polkit_map = {
|
||||
__deny_all: ['48-gpoa_disk_permissions_user', { 'Deny_All': 0, 'User': '' }]
|
||||
__deny_all_win: ['48-gpoa_disk_permissions_user', { 'Deny_All': 0, 'User': '' }],
|
||||
__registry_branch : ['48-alt_group_policy_permissions_user', {'User': ''}]
|
||||
}
|
||||
|
||||
def __init__(self, storage, sid, username):
|
||||
self.storage = storage
|
||||
self.sid = sid
|
||||
self.username = username
|
||||
|
||||
deny_all = storage.filter_hkcu_entries(self.sid, self.__deny_all).first()
|
||||
deny_all_win = None
|
||||
if check_windows_mapping_enabled(self.storage):
|
||||
deny_all_win = storage.filter_hkcu_entries(self.sid, self.__deny_all_win).first()
|
||||
polkit_filter = '{}%'.format(self.__registry_branch)
|
||||
self.polkit_keys = self.storage.filter_hkcu_entries(self.sid, polkit_filter)
|
||||
# Deny_All hook: initialize defaults
|
||||
template_file = self.__polkit_map[self.__deny_all][0]
|
||||
template_vars = self.__polkit_map[self.__deny_all][1]
|
||||
if deny_all:
|
||||
template_file = self.__polkit_map[self.__deny_all_win][0]
|
||||
template_vars = self.__polkit_map[self.__deny_all_win][1]
|
||||
template_file_all = self.__polkit_map[self.__registry_branch][0]
|
||||
template_vars_all = self.__polkit_map[self.__registry_branch][1]
|
||||
|
||||
dict_lists_rules = {'No': [],
|
||||
'Yes': [],
|
||||
'Auth_self': [],
|
||||
'Auth_admin': [],
|
||||
'Auth_self_keep': [],
|
||||
'Auth_admin_keep': []}
|
||||
|
||||
for it_data in self.polkit_keys:
|
||||
dict_lists_rules[it_data.data].append(it_data.valuename)
|
||||
|
||||
self.__polkit_map[self.__registry_branch][1]['User'] = self.username
|
||||
|
||||
for key, item in dict_lists_rules.items():
|
||||
self.__polkit_map[self.__registry_branch][1][key] = item
|
||||
|
||||
if deny_all_win:
|
||||
logdata = dict()
|
||||
logdata['user'] = self.username
|
||||
logdata['Deny_All'] = deny_all.data
|
||||
logdata['Deny_All_win'] = deny_all_win.data
|
||||
log('D70', logdata)
|
||||
self.__polkit_map[self.__deny_all][1]['Deny_All'] = deny_all.data
|
||||
self.__polkit_map[self.__deny_all][1]['User'] = self.username
|
||||
self.__polkit_map[self.__deny_all_win][1]['Deny_All'] = deny_all_win.data
|
||||
self.__polkit_map[self.__deny_all_win][1]['User'] = self.username
|
||||
else:
|
||||
log('D72')
|
||||
self.policies = []
|
||||
self.policies.append(polkit(template_file, template_vars, self.username))
|
||||
self.policies.append(polkit(template_file_all, template_vars_all, self.username))
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage
|
||||
, self.__module_name
|
||||
|
@ -19,9 +19,7 @@
|
||||
import os
|
||||
import shutil
|
||||
from pathlib import Path
|
||||
import pysss_nss_idmap
|
||||
|
||||
from django.template import base
|
||||
from util.logging import log
|
||||
from .appliers.folder import remove_dir_tree
|
||||
from .applier_frontend import (
|
||||
@ -97,7 +95,6 @@ class scripts_applier_user(applier_frontend):
|
||||
, self.__module_name
|
||||
, self.__module_experimental
|
||||
)
|
||||
self.filling_cache()
|
||||
|
||||
def cleaning_cache(self):
|
||||
log('D161')
|
||||
@ -145,15 +142,17 @@ def install_script(storage_script_entry, script_dir, access_permissions):
|
||||
'''
|
||||
dir_cr = Path(script_dir)
|
||||
dir_cr.mkdir(parents=True, exist_ok=True)
|
||||
script_name = str(int(storage_script_entry.number)).zfill(5) + '_' + os.path.basename(storage_script_entry.path)
|
||||
if storage_script_entry.number is None:
|
||||
return
|
||||
script_name = str(storage_script_entry.number).zfill(5) + '_' + os.path.basename(storage_script_entry.path)
|
||||
script_file = os.path.join(script_dir, script_name)
|
||||
shutil.copyfile(storage_script_entry.path, script_file)
|
||||
|
||||
os.chmod(script_file, int(access_permissions, base = 8))
|
||||
if storage_script_entry.arg:
|
||||
if storage_script_entry.args:
|
||||
dir_path = script_dir + '/' + script_name + '.arg'
|
||||
dir_arg = Path(dir_path)
|
||||
dir_arg.mkdir(parents=True, exist_ok=True)
|
||||
file_arg = open(dir_path + '/arg', 'w')
|
||||
file_arg.write(storage_script_entry.arg)
|
||||
file_arg.write(storage_script_entry.args)
|
||||
file_arg.close()
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -16,30 +16,31 @@
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import logging
|
||||
import subprocess
|
||||
|
||||
from .applier_frontend import (
|
||||
applier_frontend
|
||||
, check_enabled
|
||||
)
|
||||
from gpt.shortcuts import json2sc
|
||||
from util.windows import expand_windows_var
|
||||
from util.logging import slogm, log
|
||||
from util.logging import log
|
||||
from util.util import (
|
||||
get_homedir,
|
||||
homedir_exists
|
||||
homedir_exists,
|
||||
string_to_literal_eval
|
||||
)
|
||||
from gpt.shortcuts import shortcut, get_ttype
|
||||
|
||||
def storage_get_shortcuts(storage, sid, username=None):
|
||||
def storage_get_shortcuts(storage, sid, username=None, shortcuts_machine=None):
|
||||
'''
|
||||
Query storage for shortcuts' rows for specified SID.
|
||||
'''
|
||||
shortcut_objs = storage.get_shortcuts(sid)
|
||||
shortcuts = list()
|
||||
if username and shortcuts_machine:
|
||||
shortcut_objs += shortcuts_machine
|
||||
|
||||
for sc_obj in shortcut_objs:
|
||||
sc = json2sc(sc_obj.shortcut)
|
||||
for sc in shortcut_objs:
|
||||
if username:
|
||||
sc.set_expanded_path(expand_windows_var(sc.path, username))
|
||||
shortcuts.append(sc)
|
||||
@ -137,14 +138,46 @@ class shortcut_applier_user(applier_frontend):
|
||||
__module_name = 'ShortcutsApplierUser'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
    __REGISTRY_PATH_SHORTCATSMERGE = '/Software/BaseALT/Policies/GPUpdate/ShortcutsMerge'
|
||||
__DCONF_REGISTRY_PATH_PREFERENCES_MACHINE = 'Software/BaseALT/Policies/Preferences/Machine'
|
||||
|
||||
def __init__(self, storage, sid, username):
|
||||
self.storage = storage
|
||||
self.sid = sid
|
||||
self.username = username
|
||||
self.__module_enabled = check_enabled(self.storage, self.__module_name, self.__module_experimental)
|
||||
|
||||
def get_machine_shortcuts(self):
|
||||
result = list()
|
||||
try:
|
||||
storage_machine_dict = self.storage.get_dictionary_from_dconf_file_db()
|
||||
machine_shortcuts = storage_machine_dict.get(
|
||||
self.__DCONF_REGISTRY_PATH_PREFERENCES_MACHINE, dict()).get('Shortcuts')
|
||||
shortcut_objs = string_to_literal_eval(machine_shortcuts)
|
||||
for obj in shortcut_objs:
|
||||
                shortcut_machine = shortcut(
|
||||
obj.get('dest'),
|
||||
obj.get('path'),
|
||||
obj.get('arguments'),
|
||||
obj.get('name'),
|
||||
obj.get('action'),
|
||||
get_ttype(obj.get('target_type')))
|
||||
shortcut_machine.set_usercontext(1)
|
||||
result.append(shortcut_machine)
|
||||
except:
|
||||
return None
|
||||
return result
|
||||
|
||||
|
||||
|
||||
def check_enabled_shortcuts_merge(self):
|
||||
return self.storage.get_key_value(self.__REGISTRY_PATH_SHORTCATSMERGE)
|
||||
|
||||
def run(self, in_usercontext):
|
||||
shortcuts = storage_get_shortcuts(self.storage, self.sid, self.username)
|
||||
shortcuts_machine = None
|
||||
if self.check_enabled_shortcuts_merge():
|
||||
shortcuts_machine = self.get_machine_shortcuts()
|
||||
shortcuts = storage_get_shortcuts(self.storage, self.sid, self.username, shortcuts_machine)
|
||||
|
||||
if shortcuts:
|
||||
for sc in shortcuts:
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -21,19 +21,18 @@ from .applier_frontend import (
|
||||
, check_enabled
|
||||
)
|
||||
from .appliers.systemd import systemd_unit
|
||||
from util.logging import slogm, log
|
||||
from util.logging import log
|
||||
|
||||
import logging
|
||||
|
||||
class systemd_applier(applier_frontend):
|
||||
__module_name = 'SystemdApplier'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
__registry_branch = 'Software\\BaseALT\\Policies\\SystemdUnits'
|
||||
__registry_branch = 'Software/BaseALT/Policies/SystemdUnits'
|
||||
|
||||
def __init__(self, storage):
|
||||
self.storage = storage
|
||||
self.systemd_unit_settings = self.storage.filter_hklm_entries('Software\\BaseALT\\Policies\\SystemdUnits%')
|
||||
self.systemd_unit_settings = self.storage.filter_hklm_entries(self.__registry_branch)
|
||||
self.units = []
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage
|
||||
@ -43,15 +42,14 @@ class systemd_applier(applier_frontend):
|
||||
|
||||
def run(self):
|
||||
for setting in self.systemd_unit_settings:
|
||||
valuename = setting.hive_key.rpartition('\\')[2]
|
||||
try:
|
||||
self.units.append(systemd_unit(valuename, int(setting.data)))
|
||||
self.units.append(systemd_unit(setting.valuename, int(setting.data)))
|
||||
logdata = dict()
|
||||
logdata['unit'] = format(valuename)
|
||||
logdata['unit'] = format(setting.valuename)
|
||||
log('I4', logdata)
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['unit'] = format(valuename)
|
||||
logdata['unit'] = format(setting.valuename)
|
||||
logdata['exc'] = exc
|
||||
log('I5', logdata)
|
||||
for unit in self.units:
|
||||
@ -76,7 +74,7 @@ class systemd_applier_user(applier_frontend):
|
||||
__module_name = 'SystemdApplierUser'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
__registry_branch = 'Software\\BaseALT\\Policies\\SystemdUnits'
|
||||
__registry_branch = 'Software/BaseALT/Policies/SystemdUnits'
|
||||
|
||||
def __init__(self, storage, sid, username):
|
||||
self.storage = storage
|
||||
|
70
gpoa/frontend/thunderbird_applier.py
Normal file
@ -0,0 +1,70 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
|
||||
|
||||
import json
|
||||
import os
|
||||
|
||||
from .applier_frontend import (
|
||||
applier_frontend
|
||||
, check_enabled
|
||||
)
|
||||
from util.logging import log
|
||||
from util.util import is_machine_name
|
||||
from .firefox_applier import create_dict
|
||||
|
||||
class thunderbird_applier(applier_frontend):
|
||||
__module_name = 'ThunderbirdApplier'
|
||||
__module_experimental = False
|
||||
__module_enabled = True
|
||||
__registry_branch = 'Software/Policies/Mozilla/Thunderbird'
|
||||
__thunderbird_policies = '/etc/thunderbird/policies'
|
||||
|
||||
def __init__(self, storage, sid, username):
|
||||
self.storage = storage
|
||||
self.sid = sid
|
||||
self.username = username
|
||||
self._is_machine_name = is_machine_name(self.username)
|
||||
self.policies = dict()
|
||||
self.policies_json = dict({ 'policies': self.policies })
|
||||
self.thunderbird_keys = self.storage.filter_hklm_entries(self.__registry_branch)
|
||||
self.policies_gen = dict()
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage
|
||||
, self.__module_name
|
||||
, self.__module_experimental
|
||||
)
|
||||
|
||||
|
||||
def machine_apply(self):
|
||||
'''
|
||||
Write policies.json to Thunderbird.
|
||||
'''
|
||||
self.policies_json = create_dict(self.thunderbird_keys, self.__registry_branch)
|
||||
|
||||
destfile = os.path.join(self.__thunderbird_policies, 'policies.json')
|
||||
os.makedirs(self.__thunderbird_policies, exist_ok=True)
|
||||
with open(destfile, 'w') as f:
|
||||
json.dump(self.policies_json, f)
|
||||
logdata = dict()
|
||||
logdata['destfile'] = destfile
|
||||
log('D212', logdata)
|
||||
|
||||
def apply(self):
|
||||
if self.__module_enabled:
|
||||
log('D213')
|
||||
self.machine_apply()
|
||||
else:
|
||||
log('D214')
|
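A hedged sketch of the policies.json layout machine_apply() above writes; the temporary directory stands in for /etc/thunderbird/policies, and the policy names and values are illustrative rather than values read from the registry.

import json
import os
import tempfile

policies_json = {'policies': {'DisableTelemetry': True, 'ExtensionUpdate': False}}  # example values

outdir = tempfile.mkdtemp()                        # stand-in for /etc/thunderbird/policies
destfile = os.path.join(outdir, 'policies.json')
os.makedirs(outdir, exist_ok=True)
with open(destfile, 'w') as f:
    json.dump(policies_json, f)
print('wrote', destfile)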
197
gpoa/frontend/yandex_browser_applier.py
Normal file
@ -0,0 +1,197 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from .applier_frontend import (
|
||||
applier_frontend
|
||||
, check_enabled
|
||||
)
|
||||
|
||||
import json
|
||||
import os
|
||||
from util.logging import log
|
||||
from util.util import is_machine_name, string_to_literal_eval
|
||||
|
||||
class yandex_browser_applier(applier_frontend):
|
||||
__module_name = 'YandexBrowserApplier'
|
||||
__module_enabled = True
|
||||
__module_experimental = False
|
||||
__registry_branch = 'Software/Policies/YandexBrowser'
|
||||
__managed_policies_path = '/etc/opt/yandex/browser/policies/managed'
|
||||
__recommended_policies_path = '/etc/opt/yandex/browser/policies/recommended'
|
||||
|
||||
def __init__(self, storage, sid, username):
|
||||
self.storage = storage
|
||||
self.sid = sid
|
||||
self.username = username
|
||||
self._is_machine_name = is_machine_name(self.username)
|
||||
self.yandex_keys = self.storage.filter_hklm_entries(self.__registry_branch)
|
||||
|
||||
self.policies_json = dict()
|
||||
|
||||
self.__module_enabled = check_enabled(
|
||||
self.storage
|
||||
, self.__module_name
|
||||
, self.__module_experimental
|
||||
)
|
||||
|
||||
def machine_apply(self):
|
||||
'''
|
||||
Apply machine settings.
|
||||
'''
|
||||
|
||||
destfile = os.path.join(self.__managed_policies_path, 'policies.json')
|
||||
|
||||
try:
|
||||
recommended__json = self.policies_json.pop('Recommended')
|
||||
except:
|
||||
recommended__json = {}
|
||||
|
||||
#Replacing all nested dictionaries with a list
|
||||
dict_item_to_list = (
|
||||
lambda target_dict :
|
||||
{key:[*val.values()] if type(val) == dict else string_to_literal_eval(val) for key,val in target_dict.items()}
|
||||
)
|
||||
os.makedirs(self.__managed_policies_path, exist_ok=True)
|
||||
with open(destfile, 'w') as f:
|
||||
json.dump(dict_item_to_list(self.policies_json), f)
|
||||
logdata = dict()
|
||||
logdata['destfile'] = destfile
|
||||
log('D185', logdata)
|
||||
|
||||
destfilerec = os.path.join(self.__recommended_policies_path, 'policies.json')
|
||||
os.makedirs(self.__recommended_policies_path, exist_ok=True)
|
||||
with open(destfilerec, 'w') as f:
|
||||
json.dump(dict_item_to_list(recommended__json), f)
|
||||
logdata = dict()
|
||||
logdata['destfilerec'] = destfilerec
|
||||
log('D185', logdata)
|
||||
|
||||
|
||||
def apply(self):
|
||||
'''
|
||||
All the actual work is done here.
|
||||
'''
|
||||
if self.__module_enabled:
|
||||
log('D183')
|
||||
self.create_dict(self.yandex_keys)
|
||||
self.machine_apply()
|
||||
else:
|
||||
log('D184')
|
||||
|
||||
def get_valuename_typeint(self):
|
||||
'''
|
||||
List of keys resulting from parsing chrome.admx with parsing_chrom_admx_intvalues.py
|
||||
'''
|
||||
valuename_typeint = (['DefaultPageSaveSettings',
|
||||
'DefaultUploadSetting',
|
||||
'YandexAutoLaunchMode',
|
||||
'DefaultClipboardSetting',
|
||||
'DefaultFileSystemReadGuardSetting',
|
||||
'DefaultFileSystemWriteGuardSetting',
|
||||
'DefaultImagesSetting',
|
||||
'DefaultJavaScriptJitSetting',
|
||||
'DefaultJavaScriptSetting',
|
||||
'DefaultLocalFontsSetting',
|
||||
'DefaultPopupsSetting',
|
||||
'DefaultSensorsSetting',
|
||||
'DefaultSerialGuardSetting',
|
||||
'DefaultWebBluetoothGuardSetting',
|
||||
'DefaultWebHidGuardSetting',
|
||||
'DefaultWebUsbGuardSetting',
|
||||
'DefaultWindowManagementSetting',
|
||||
'SafeSitesFilterBehavior',
|
||||
'YandexUserFeedbackMode',
|
||||
'TurboSettings',
|
||||
'SidePanelMode',
|
||||
'RestoreOnStartup',
|
||||
'RestoreOnStartup_recommended',
|
||||
'BrowserSwitcherParsingMode',
|
||||
'DefaultNotificationsSetting',
|
||||
'YandexPowerSavingMode',
|
||||
'ChromeVariations',
|
||||
'DeveloperToolsAvailability',
|
||||
'DownloadRestrictions',
|
||||
'NetworkPredictionOptions',
|
||||
'DownloadRestrictions_recommended',
|
||||
'NetworkPredictionOptions_recommended',
|
||||
'DefaultCookiesSetting',
|
||||
'DefaultGeolocationSetting',
|
||||
'IncognitoModeAvailability',
|
||||
'DefaultPrintingSettings',
|
||||
'DefaultPluginsSetting',
|
||||
'DefaultInsecureContentSetting',
|
||||
'PasswordProtectionWarningTrigger',
|
||||
'SafeBrowsingProtectionLevel',
|
||||
'SafeBrowsingProtectionLevel_recommended',
|
||||
'DiskCacheSize'])
|
||||
return valuename_typeint
|
||||
|
||||
|
||||
def get_boolean(self,data):
|
||||
if data in ['0', 'false', None, 'none', 0]:
|
||||
return False
|
||||
if data in ['1', 'true', 1]:
|
||||
return True
|
||||
def get_parts(self, hivekeyname):
|
||||
'''
|
||||
Parse the registry path string and keep only the key components below the policy branch
|
||||
'''
|
||||
parts = hivekeyname.replace(self.__registry_branch, '').split('/')
|
||||
return parts
|
||||
|
||||
|
||||
def create_dict(self, yandex_keys):
|
||||
'''
|
||||
Collect dictionaries from registry keys into a general dictionary
|
||||
'''
|
||||
counts = dict()
|
||||
#getting the list of keys to read as an integer
|
||||
valuename_typeint = self.get_valuename_typeint()
|
||||
for it_data in yandex_keys:
|
||||
branch = counts
|
||||
try:
|
||||
if type(it_data.data) is bytes:
|
||||
it_data.data = it_data.data.decode(encoding='utf-16').replace('\x00','')
|
||||
parts = self.get_parts(it_data.hive_key)
|
||||
#creating a nested dictionary from elements
|
||||
for part in parts[:-1]:
|
||||
branch = branch.setdefault(part, {})
|
||||
#dictionary key value initialization
|
||||
if it_data.type == 4:
|
||||
if it_data.valuename in valuename_typeint:
|
||||
branch[parts[-1]] = int(it_data.data)
|
||||
else:
|
||||
branch[parts[-1]] = self.get_boolean(it_data.data)
|
||||
else:
|
||||
if it_data.data[0] == '[' and it_data.data[-1] == ']':
|
||||
try:
|
||||
branch[parts[-1]] = json.loads(str(it_data.data))
|
||||
except:
|
||||
branch[parts[-1]] = str(it_data.data).replace('\\', '/')
|
||||
else:
|
||||
branch[parts[-1]] = str(it_data.data).replace('\\', '/')
|
||||
|
||||
except Exception as exc:
|
||||
logdata = dict()
|
||||
logdata['Exception'] = exc
|
||||
logdata['keyname'] = it_data.keyname
|
||||
log('D178', logdata)
|
||||
try:
|
||||
self.policies_json = counts['']
|
||||
except:
|
||||
self.policies_json = {}
|
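A hedged sketch of what create_dict() above does with registry-style paths, followed by the nested-dict-to-list flattening performed by the dict_item_to_list lambda in machine_apply(); the sample paths and values are invented.

def nest(entries, prefix):
    counts = {}
    for hive_key, value in entries:
        parts = hive_key.replace(prefix, '').split('/')
        branch = counts
        for part in parts[:-1]:
            branch = branch.setdefault(part, {})   # build nested dicts per path component
        branch[parts[-1]] = value
    return counts.get('', {})                      # the policies live under the '' root key

entries = [
    ('Software/Policies/YandexBrowser/HomepageLocation', 'https://example.org'),
    ('Software/Policies/YandexBrowser/URLBlocklist/1', 'bad.example'),
    ('Software/Policies/YandexBrowser/URLBlocklist/2', 'worse.example'),
]
policies = nest(entries, 'Software/Policies/YandexBrowser')
flat = {key: [*val.values()] if isinstance(val, dict) else val for key, val in policies.items()}
print(flat)   # {'HomepageLocation': 'https://example.org', 'URLBlocklist': ['bad.example', 'worse.example']}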
@ -23,10 +23,11 @@ import signal
|
||||
import gettext
|
||||
import locale
|
||||
|
||||
from backend import backend_factory
|
||||
from backend import backend_factory, save_dconf
|
||||
from frontend.frontend_manager import frontend_manager, determine_username
|
||||
from plugin import plugin_manager
|
||||
from messages import message_with_code
|
||||
from storage import Dconf_registry
|
||||
|
||||
from util.util import get_machine_name
|
||||
from util.users import (
|
||||
@ -61,6 +62,9 @@ def parse_arguments():
|
||||
arguments.add_argument('--list-backends',
|
||||
action='store_true',
|
||||
help='Show list of available backends')
|
||||
arguments.add_argument('--force',
|
||||
action='store_true',
|
||||
help='Force GPT download')
|
||||
arguments.add_argument('--loglevel',
|
||||
type=int,
|
||||
default=4,
|
||||
@ -120,6 +124,7 @@ class gpoa_controller:
|
||||
print('local')
|
||||
print('samba')
|
||||
return
|
||||
Dconf_registry._force = self.__args.force
|
||||
self.start_plugins()
|
||||
self.start_backend()
|
||||
|
||||
@ -149,6 +154,7 @@ class gpoa_controller:
|
||||
back.retrieve_and_store()
|
||||
# Start frontend only on successful backend finish
|
||||
self.start_frontend()
|
||||
save_dconf(self.username, self.is_machine, nodomain)
|
||||
except Exception as exc:
|
||||
logdata = dict({'message': str(exc)})
|
||||
# In case we're handling "E3" - it means that
|
||||
|
@ -19,7 +19,7 @@
|
||||
import json
|
||||
from base64 import b64decode
|
||||
from Crypto.Cipher import AES
|
||||
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
from util.xml import get_xml_root
|
||||
|
||||
def decrypt_pass(cpassword):
|
||||
@ -67,6 +67,12 @@ def read_drives(drives_file):
|
||||
drive_obj.set_pass(decrypt_pass(props.get('cpassword')))
|
||||
drive_obj.set_dir(props.get('letter'))
|
||||
drive_obj.set_path(props.get('path'))
|
||||
drive_obj.set_action(props.get('action'))
|
||||
drive_obj.set_thisDrive(props.get('thisDrive'))
|
||||
drive_obj.set_allDrives(props.get('allDrives'))
|
||||
drive_obj.set_label(props.get('label'))
|
||||
drive_obj.set_persistent(props.get('persistent'))
|
||||
drive_obj.set_useLetter(props.get('useLetter'))
|
||||
|
||||
drives.append(drive_obj)
|
||||
|
||||
@ -87,12 +93,18 @@ def json2drive(json_str):
|
||||
|
||||
return drive_obj
|
||||
|
||||
class drivemap:
|
||||
class drivemap(DynamicAttributes):
|
||||
def __init__(self):
|
||||
self.login = None
|
||||
self.password = None
|
||||
self.dir = None
|
||||
self.path = None
|
||||
self.action = None
|
||||
self.thisDrive = None
|
||||
self.allDrives = None
|
||||
self.label = None
|
||||
self.persistent = None
|
||||
self.useLetter = None
|
||||
|
||||
def set_login(self, username):
|
||||
self.login = username
|
||||
@ -110,6 +122,24 @@ class drivemap:
|
||||
def set_path(self, path):
|
||||
self.path = path
|
||||
|
||||
def set_action(self, action):
|
||||
self.action = action
|
||||
|
||||
def set_thisDrive(self, thisDrive):
|
||||
self.thisDrive = thisDrive
|
||||
|
||||
def set_allDrives(self, allDrives):
|
||||
self.allDrives = allDrives
|
||||
|
||||
def set_label(self, label):
|
||||
self.label = label
|
||||
|
||||
def set_persistent(self, persistent):
|
||||
self.persistent = persistent
|
||||
|
||||
def set_useLetter(self, useLetter):
|
||||
self.useLetter = useLetter
|
||||
|
||||
def to_json(self):
|
||||
drive = dict()
|
||||
drive['login'] = self.login
|
||||
|
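A hedged sketch of the kind of record to_json() above produces for a single drive mapping; the attribute names follow the setters in this class, while the share path, letter and flag values are invented examples.

import json

drive = {
    'login': 'ivanov',
    'dir': 'Z',
    'path': '\\\\server\\share',   # UNC path as delivered by drives.xml
    'action': 'U',
    'thisDrive': None,
    'allDrives': None,
    'label': 'Projects',
    'persistent': '1',
    'useLetter': '1',
}
print(json.dumps(drive))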
56
gpoa/gpt/dynamic_attributes.py
Normal file
@ -0,0 +1,56 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
from enum import Enum
|
||||
|
||||
class DynamicAttributes:
|
||||
def __init__(self, **kwargs):
|
||||
self.policy_name = None
|
||||
for key, value in kwargs.items():
|
||||
self.__setattr__(key, value)
|
||||
|
||||
def __setattr__(self, key, value):
|
||||
if isinstance(value, Enum):
|
||||
value = str(value)
|
||||
if isinstance(value, str):
|
||||
for q in ["'", "\""]:
|
||||
if any(q in ch for ch in value):
|
||||
value = value.replace(q, "″")
|
||||
self.__dict__[key] = value
|
||||
|
||||
def items(self):
|
||||
return self.__dict__.items()
|
||||
|
||||
def __iter__(self):
|
||||
return iter(self.__dict__.items())
|
||||
|
||||
def get_original_value(self, key):
|
||||
value = self.__dict__.get(key)
|
||||
if isinstance(value, str):
|
||||
value = value.replace("″", "'")
|
||||
return value
|
||||
|
||||
class RegistryKeyMetadata(DynamicAttributes):
|
||||
def __init__(self, policy_name, type, is_list=None, mod_previous_value=None):
|
||||
self.policy_name = policy_name
|
||||
self.type = type
|
||||
self.reloaded_with_policy_key = None
|
||||
self.is_list = is_list
|
||||
self.mod_previous_value = mod_previous_value
|
||||
|
||||
def __repr__(self):
|
||||
return str(dict(self))
|
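A hedged illustration of the quote-substitution rule __setattr__ applies above: both quote characters are mapped to ″ when a string attribute is stored, and get_original_value() maps ″ back to a single quote on read.

value = 'say "hello" to the \'world\''
stored = value
for q in ["'", '"']:
    if q in stored:
        stored = stored.replace(q, '″')            # what __setattr__ stores
print(stored)                                      # say ″hello″ to the ″world″
print(stored.replace('″', "'"))                    # what get_original_value() returns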
@ -17,24 +17,8 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from util.xml import get_xml_root
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
from enum import Enum
|
||||
|
||||
class FileAction(Enum):
|
||||
CREATE = 'C'
|
||||
REPLACE = 'R'
|
||||
UPDATE = 'U'
|
||||
DELETE = 'D'
|
||||
|
||||
|
||||
def action_letter2enum(letter):
|
||||
if letter in ['C', 'R', 'U', 'D']:
|
||||
if letter == 'C': return FileAction.CREATE
|
||||
if letter == 'R': return FileAction.REPLACE
|
||||
if letter == 'U': return FileAction.UPDATE
|
||||
if letter == 'D': return FileAction.DELETE
|
||||
|
||||
return FileAction.CREATE
|
||||
|
||||
def read_envvars(envvars_file):
|
||||
variables = list()
|
||||
@ -43,8 +27,8 @@ def read_envvars(envvars_file):
|
||||
props = var.find('Properties')
|
||||
name = props.get('name')
|
||||
value = props.get('value')
|
||||
var_obj = envvar(name, value)
|
||||
var_obj.set_action(action_letter2enum(props.get('action', default='C')))
|
||||
action = props.get('action', default='C')
|
||||
var_obj = envvar(name, value, action)
|
||||
|
||||
variables.append(var_obj)
|
||||
|
||||
@ -54,12 +38,9 @@ def merge_envvars(storage, sid, envvar_objects, policy_name):
|
||||
for envv in envvar_objects:
|
||||
storage.add_envvar(sid, envv, policy_name)
|
||||
|
||||
class envvar:
|
||||
def __init__(self, name, value):
|
||||
class envvar(DynamicAttributes):
|
||||
def __init__(self, name, value, action):
|
||||
self.name = name
|
||||
self.value = value
|
||||
self.action = FileAction.CREATE
|
||||
|
||||
def set_action(self, action):
|
||||
self.action = action
|
||||
|
||||
|
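A hedged sketch of how read_envvars() above pulls name, value and action out of an environment-variables preference file; the XML snippet is a made-up example, not a real EnvironmentVariables.xml.

import xml.etree.ElementTree as ET

xml_text = '''<EnvironmentVariables>
  <EnvironmentVariable>
    <Properties name="EDITOR" value="vim" action="U"/>
  </EnvironmentVariable>
</EnvironmentVariables>'''

for var in ET.fromstring(xml_text):
    props = var.find('Properties')
    # action defaults to 'C' (create) when the attribute is absent, as in read_envvars()
    print(props.get('name'), props.get('value'), props.get('action', 'C'))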
@ -17,6 +17,7 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from util.xml import get_xml_root
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
def read_files(filesxml):
|
||||
files = list()
|
||||
@ -30,6 +31,7 @@ def read_files(filesxml):
|
||||
fil_obj.set_archive(props.get('archive', default=None))
|
||||
fil_obj.set_hidden(props.get('hidden', default=None))
|
||||
fil_obj.set_suppress(props.get('suppress', default=None))
|
||||
fil_obj.set_executable(props.get('executable', default=None))
|
||||
files.append(fil_obj)
|
||||
|
||||
return files
|
||||
@ -38,7 +40,7 @@ def merge_files(storage, sid, file_objects, policy_name):
|
||||
for fileobj in file_objects:
|
||||
storage.add_file(sid, fileobj, policy_name)
|
||||
|
||||
class fileentry:
|
||||
class fileentry(DynamicAttributes):
|
||||
def __init__(self, fromPath):
|
||||
self.fromPath = fromPath
|
||||
|
||||
@ -54,3 +56,5 @@ class fileentry:
|
||||
self.hidden = hidden
|
||||
def set_suppress(self, suppress):
|
||||
self.suppress = suppress
|
||||
def set_executable(self, executable):
|
||||
self.executable = executable
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -17,28 +17,11 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
|
||||
from enum import Enum
|
||||
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
from util.xml import get_xml_root
|
||||
|
||||
|
||||
class FileAction(Enum):
|
||||
CREATE = 'C'
|
||||
REPLACE = 'R'
|
||||
UPDATE = 'U'
|
||||
DELETE = 'D'
|
||||
|
||||
|
||||
def action_letter2enum(letter):
|
||||
if letter in ['C', 'R', 'U', 'D']:
|
||||
if letter == 'C': return FileAction.CREATE
|
||||
if letter == 'R': return FileAction.REPLACE
|
||||
if letter == 'U': return FileAction.UPDATE
|
||||
if letter == 'D': return FileAction.DELETE
|
||||
|
||||
return FileAction.CREATE
|
||||
|
||||
|
||||
def action_enum2letter(enumitem):
|
||||
return enumitem.value
|
||||
@ -61,14 +44,17 @@ def read_folders(folders_file):
|
||||
|
||||
for fld in get_xml_root(folders_file):
|
||||
props = fld.find('Properties')
|
||||
fld_obj = folderentry(props.get('path'))
|
||||
fld_obj.set_action(action_letter2enum(props.get('action', default='C')))
|
||||
path = props.get('path')
|
||||
action = props.get('action', default='C')
|
||||
fld_obj = folderentry(path, action)
|
||||
fld_obj.set_delete_folder(folder_int2bool(props.get('deleteFolder', default=1)))
|
||||
fld_obj.set_delete_sub_folders(folder_int2bool(props.get('deleteSubFolders', default=1)))
|
||||
fld_obj.set_delete_files(folder_int2bool(props.get('deleteFiles', default=1)))
|
||||
fld_obj.set_hidden_folder(folder_int2bool(props.get('hidden', default=0)))
|
||||
|
||||
folders.append(fld_obj)
|
||||
|
||||
|
||||
return folders
|
||||
|
||||
def merge_folders(storage, sid, folder_objects, policy_name):
|
||||
@ -76,13 +62,14 @@ def merge_folders(storage, sid, folder_objects, policy_name):
|
||||
storage.add_folder(sid, folder, policy_name)
|
||||
|
||||
|
||||
class folderentry:
|
||||
def __init__(self, path):
|
||||
class folderentry(DynamicAttributes):
|
||||
def __init__(self, path, action):
|
||||
self.path = path
|
||||
self.action = FileAction.CREATE
|
||||
self.action = action
|
||||
self.delete_folder = False
|
||||
self.delete_sub_folders = False
|
||||
self.delete_files = False
|
||||
self.hidden_folder = False
|
||||
|
||||
def set_action(self, action):
|
||||
self.action = action
|
||||
@ -96,3 +83,5 @@ class folderentry:
|
||||
def set_delete_files(self, del_bool):
|
||||
self.delete_files = del_bool
|
||||
|
||||
def set_hidden_folder(self, hid_bool):
|
||||
self.hidden_folder = hid_bool
|
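The folder_int2bool() helper called above is not shown in this hunk; the sketch below is an assumed equivalent, illustrating how the deleteFolder/deleteSubFolders/deleteFiles/hidden defaults would coerce to booleans.

def folder_int2bool(value):
    # Assumed equivalent of the project's helper: '0'/0 -> False, non-zero -> True.
    if isinstance(value, str):
        value = int(value) if value.isdigit() else 0
    return bool(value)

defaults = {'deleteFolder': 1, 'deleteSubFolders': 1, 'deleteFiles': 1, 'hidden': 0}
print({key: folder_int2bool(val) for key, val in defaults.items()})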
48
gpoa/gpt/gpo_dconf_mapping.py
Normal file
@ -0,0 +1,48 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
class GpoInfoDconf(DynamicAttributes):
|
||||
_counter = 0
|
||||
def __init__(self, gpo) -> None:
|
||||
GpoInfoDconf._counter += 1
|
||||
self.counter = GpoInfoDconf._counter
|
||||
self.display_name = None
|
||||
self.name = None
|
||||
self.version = None
|
||||
self.link = None
|
||||
self._fill_attributes(gpo)
|
||||
|
||||
def _fill_attributes(self, gpo):
|
||||
try:
|
||||
self.display_name = gpo.display_name
|
||||
except:
|
||||
self.display_name = "Unknown"
|
||||
try:
|
||||
self.name = gpo.name
|
||||
except:
|
||||
self.name = "Unknown"
|
||||
try:
|
||||
self.version = gpo.version
|
||||
except:
|
||||
self.version = "Unknown"
|
||||
try:
|
||||
self.link = gpo.link
|
||||
except:
|
||||
self.link = "Unknown"
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -23,6 +23,7 @@ from enum import Enum, unique
|
||||
from samba.gp_parse.gp_pol import GPPolParser
|
||||
|
||||
from storage import registry_factory
|
||||
from storage.dconf_registry import add_to_dict
|
||||
|
||||
from .polfile import (
|
||||
read_polfile
|
||||
@ -68,6 +69,10 @@ from .scriptsini import (
|
||||
read_scripts
|
||||
, merge_scripts
|
||||
)
|
||||
from .networkshares import (
|
||||
read_networkshares
|
||||
, merge_networkshares
|
||||
)
|
||||
import util
|
||||
import util.preg
|
||||
from util.paths import (
|
||||
@ -91,6 +96,7 @@ class FileType(Enum):
|
||||
SERVICES = 'services.xml'
|
||||
PRINTERS = 'printers.xml'
|
||||
SCRIPTS = 'scripts.ini'
|
||||
NETWORKSHARES = 'networkshares.xml'
|
||||
|
||||
def get_preftype(path_to_file):
|
||||
fpath = Path(path_to_file)
|
||||
@ -117,6 +123,7 @@ def pref_parsers():
|
||||
parsers[FileType.SERVICES] = read_services
|
||||
parsers[FileType.PRINTERS] = read_printers
|
||||
parsers[FileType.SCRIPTS] = read_scripts
|
||||
parsers[FileType.NETWORKSHARES] = read_networkshares
|
||||
|
||||
return parsers
|
||||
|
||||
@ -138,6 +145,7 @@ def pref_mergers():
|
||||
mergers[FileType.SERVICES] = merge_services
|
||||
mergers[FileType.PRINTERS] = merge_printers
|
||||
mergers[FileType.SCRIPTS] = merge_scripts
|
||||
mergers[FileType.NETWORKSHARES] = merge_networkshares
|
||||
|
||||
return mergers
|
||||
|
||||
@ -146,10 +154,14 @@ def get_merger(preference_type):
|
||||
return mergers[preference_type]
|
||||
|
||||
class gpt:
|
||||
def __init__(self, gpt_path, sid):
|
||||
def __init__(self, gpt_path, sid, username='Machine', gpo_info=None):
|
||||
add_to_dict(gpt_path, username, gpo_info)
|
||||
self.path = gpt_path
|
||||
self.username = username
|
||||
self.sid = sid
|
||||
self.storage = registry_factory('registry')
|
||||
self.storage = registry_factory()
|
||||
self.storage._gpt_read_flag = True
|
||||
self.gpo_info = gpo_info
|
||||
self.name = ''
|
||||
self.guid = self.path.rpartition('/')[2]
|
||||
if 'default' == self.guid:
|
||||
@ -171,6 +183,7 @@ class gpt:
|
||||
, 'services'
|
||||
, 'scheduledtasks'
|
||||
, 'scripts'
|
||||
, 'networkshares'
|
||||
]
|
||||
self.settings = dict()
|
||||
self.settings['machine'] = dict()
|
||||
@ -206,7 +219,7 @@ class gpt:
|
||||
if self.settings['machine']['regpol']:
|
||||
mlogdata = dict({'polfile': self.settings['machine']['regpol']})
|
||||
log('D34', mlogdata)
|
||||
util.preg.merge_polfile(self.settings['machine']['regpol'], policy_name=self.name)
|
||||
util.preg.merge_polfile(self.settings['machine']['regpol'], policy_name=self.name, gpo_info=self.gpo_info)
|
||||
# Merge machine preferences to registry if possible
|
||||
for preference_name, preference_path in self.settings['machine'].items():
|
||||
if preference_path:
|
||||
@ -232,7 +245,11 @@ class gpt:
|
||||
if self.settings['user']['regpol']:
|
||||
mulogdata = dict({'polfile': self.settings['user']['regpol']})
|
||||
log('D35', mulogdata)
|
||||
util.preg.merge_polfile(self.settings['user']['regpol'], sid=self.sid, policy_name=self.name)
|
||||
util.preg.merge_polfile(self.settings['user']['regpol'],
|
||||
sid=self.sid,
|
||||
policy_name=self.name,
|
||||
username=self.username,
|
||||
gpo_info=self.gpo_info)
|
||||
# Merge user preferences to registry if possible
|
||||
for preference_name, preference_path in self.settings['user'].items():
|
||||
if preference_path:
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2022 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -17,6 +17,7 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from util.xml import get_xml_root
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
def read_inifiles(inifiles_file):
|
||||
inifiles = list()
|
||||
@ -27,7 +28,7 @@ def read_inifiles(inifiles_file):
|
||||
ini_obj.set_section(prors.get('section', default=None))
|
||||
ini_obj.set_property(prors.get('property', default=None))
|
||||
ini_obj.set_value(prors.get('value', default=None))
|
||||
ini_obj.set_action(prors.get('action'))
|
||||
ini_obj.set_action(prors.get('action', default='C'))
|
||||
|
||||
inifiles.append(ini_obj)
|
||||
|
||||
@ -37,7 +38,7 @@ def merge_inifiles(storage, sid, inifile_objects, policy_name):
|
||||
for iniobj in inifile_objects:
|
||||
storage.add_ini(sid, iniobj, policy_name)
|
||||
|
||||
class inifile:
|
||||
class inifile(DynamicAttributes):
|
||||
def __init__(self, path):
|
||||
self.path = path
|
||||
|
||||
|
57
gpoa/gpt/networkshares.py
Normal file
@ -0,0 +1,57 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from util.xml import get_xml_root
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
def read_networkshares(networksharesxml):
|
||||
networkshares = list()
|
||||
|
||||
for share in get_xml_root(networksharesxml):
|
||||
props = share.find('Properties')
|
||||
networkshare_obj = networkshare(props.get('name'))
|
||||
networkshare_obj.set_action(props.get('action', default='C'))
|
||||
networkshare_obj.set_path(props.get('path', default=None))
|
||||
networkshare_obj.set_all_regular(props.get('allRegular', default=None))
|
||||
networkshare_obj.set_comment(props.get('comment', default=None))
|
||||
networkshare_obj.set_limitUsers(props.get('limitUsers', default=None))
|
||||
networkshare_obj.set_abe(props.get('abe', default=None))
|
||||
networkshares.append(networkshare_obj)
|
||||
|
||||
return networkshares
|
||||
|
||||
def merge_networkshares(storage, sid, networkshares_objects, policy_name):
|
||||
for networkshareobj in networkshares_objects:
|
||||
storage.add_networkshare(sid, networkshareobj, policy_name)
|
||||
|
||||
class networkshare(DynamicAttributes):
|
||||
def __init__(self, name):
|
||||
self.name = name
|
||||
|
||||
def set_action(self, action):
|
||||
self.action = action
|
||||
def set_path(self, path):
|
||||
self.path = path
|
||||
def set_all_regular(self, allRegular):
|
||||
self.allRegular = allRegular
|
||||
def set_comment(self, comment):
|
||||
self.comment = comment
|
||||
def set_limitUsers(self, limitUsers):
|
||||
self.limitUsers = limitUsers
|
||||
def set_abe(self, abe):
|
||||
self.abe = abe
|
@ -24,9 +24,10 @@ def read_polfile(filename):
|
||||
return load_preg(filename).entries
|
||||
|
||||
def merge_polfile(storage, sid, policy_objects, policy_name):
|
||||
for entry in policy_objects:
|
||||
if not sid:
|
||||
storage.add_hklm_entry(entry, policy_name)
|
||||
else:
|
||||
storage.add_hkcu_entry(entry, sid, policy_name)
|
||||
pass
|
||||
# for entry in policy_objects:
|
||||
# if not sid:
|
||||
# storage.add_hklm_entry(entry, policy_name)
|
||||
# else:
|
||||
# storage.add_hkcu_entry(entry, sid, policy_name)
|
||||
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -17,6 +17,7 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import json
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
from util.xml import get_xml_root
|
||||
|
||||
@ -60,7 +61,7 @@ def json2printer(json_str):
|
||||
|
||||
return prn
|
||||
|
||||
class printer:
|
||||
class printer(DynamicAttributes):
|
||||
def __init__(self, ptype, name, status):
|
||||
'''
|
||||
ptype may be one of:
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -18,7 +18,7 @@
|
||||
|
||||
import configparser
|
||||
import os
|
||||
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
def read_scripts(scripts_file):
|
||||
scripts = Scripts_lists()
|
||||
@ -61,19 +61,19 @@ def read_scripts(scripts_file):
|
||||
section_scripts[key_index].set_args(config[act][key])
|
||||
if logon_scripts:
|
||||
for i in sorted(logon_scripts.keys()):
|
||||
scripts.add_script(act_upper, logon_scripts[i])
|
||||
scripts.add_script('LOGON', logon_scripts[i])
|
||||
|
||||
if logoff_scripts:
|
||||
for i in sorted(logoff_scripts.keys()):
|
||||
scripts.add_script(act_upper, logoff_scripts[i])
|
||||
scripts.add_script('LOGOFF', logoff_scripts[i])
|
||||
|
||||
if startup_scripts:
|
||||
for i in sorted(startup_scripts.keys()):
|
||||
scripts.add_script(act_upper, startup_scripts[i])
|
||||
scripts.add_script('STARTUP', startup_scripts[i])
|
||||
|
||||
if shutdown_scripts:
|
||||
for i in sorted(shutdown_scripts.keys()):
|
||||
scripts.add_script(act_upper, shutdown_scripts[i])
|
||||
scripts.add_script('SHUTDOWN', shutdown_scripts[i])
|
||||
|
||||
|
||||
return scripts
|
||||
@ -115,7 +115,7 @@ class Scripts_lists:
|
||||
self.get_shutdown_scripts().append(script)
|
||||
|
||||
|
||||
class Script:
|
||||
class Script(DynamicAttributes):
|
||||
__logon_counter = 0
|
||||
__logoff_counter = 0
|
||||
__startup_counter = 0
|
||||
@ -126,6 +126,7 @@ class Script:
|
||||
self.action = action_upper
|
||||
self.path = os.path.join(script_dir, action_upper, script_filename.upper())
|
||||
if not os.path.isfile(self.path):
|
||||
self.number = None
|
||||
return None
|
||||
self.args = None
|
||||
|
||||
|
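A hedged sketch of the ordering read_scripts() above enforces: scripts are grouped per phase and appended in ascending order of their numeric keys. The phase dictionaries and file names are invented examples.

phases = {
    'LOGON':  {1: 'map_drives.sh', 0: 'banner.sh'},
    'LOGOFF': {0: 'cleanup.sh'},
}
ordered = {phase: [scripts[i] for i in sorted(scripts)] for phase, scripts in phases.items()}
print(ordered)   # {'LOGON': ['banner.sh', 'map_drives.sh'], 'LOGOFF': ['cleanup.sh']}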
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -17,6 +17,7 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from util.xml import get_xml_root
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
def read_services(service_file):
|
||||
'''
|
||||
@ -43,7 +44,7 @@ def merge_services(storage, sid, service_objects, policy_name):
|
||||
for srv in service_objects:
|
||||
pass
|
||||
|
||||
class service:
|
||||
class service(DynamicAttributes):
|
||||
def __init__(self, name):
|
||||
self.unit = name
|
||||
self.servname = None
|
||||
|
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -18,20 +18,23 @@
|
||||
|
||||
from pathlib import Path
|
||||
import stat
|
||||
import logging
|
||||
from enum import Enum
|
||||
|
||||
from xml.etree import ElementTree
|
||||
from xdg.DesktopEntry import DesktopEntry
|
||||
import json
|
||||
|
||||
from util.windows import transform_windows_path
|
||||
from util.xml import get_xml_root
|
||||
from util.paths import get_desktop_files_directory
|
||||
from .dynamic_attributes import DynamicAttributes
|
||||
|
||||
class TargetType(Enum):
|
||||
FILESYSTEM = 'FILESYSTEM'
|
||||
URL = 'URL'
|
||||
|
||||
def __str__(self):
|
||||
return self.value
|
||||
|
||||
def get_ttype(targetstr):
|
||||
'''
|
||||
Validation function for targetType property
|
||||
@ -42,7 +45,7 @@ def get_ttype(targetstr):
|
||||
'''
|
||||
ttype = TargetType.FILESYSTEM
|
||||
|
||||
if targetstr == 'URL':
|
||||
if targetstr == 'URL' or targetstr == TargetType.URL:
|
||||
ttype = TargetType.URL
|
||||
|
||||
return ttype
|
||||
@ -85,6 +88,9 @@ def read_shortcuts(shortcuts_file):
|
||||
sc.set_guid(link.get('uid'))
|
||||
sc.set_usercontext(link.get('userContext', False))
|
||||
sc.set_icon(props.get('iconPath'))
|
||||
if props.get('comment'):
|
||||
sc.set_comment(props.get('comment'))
|
||||
|
||||
shortcuts.append(sc)
|
||||
|
||||
return shortcuts
|
||||
@ -93,24 +99,23 @@ def merge_shortcuts(storage, sid, shortcut_objects, policy_name):
|
||||
for shortcut in shortcut_objects:
|
||||
storage.add_shortcut(sid, shortcut, policy_name)
|
||||
|
||||
def json2sc(json_str):
|
||||
'''
|
||||
Build shortcut out of string-serialized JSON
|
||||
'''
|
||||
json_obj = json.loads(json_str)
|
||||
link_type = get_ttype(json_obj['type'])
|
||||
|
||||
sc = shortcut(json_obj['dest'], json_obj['path'], json_obj['arguments'], json_obj['name'], json_obj['action'], link_type)
|
||||
sc.set_changed(json_obj['changed'])
|
||||
sc.set_clsid(json_obj['clsid'])
|
||||
sc.set_guid(json_obj['guid'])
|
||||
sc.set_usercontext(json_obj['is_in_user_context'])
|
||||
if 'icon' in json_obj:
|
||||
sc.set_icon(json_obj['icon'])
|
||||
def find_desktop_entry(binary_path):
|
||||
desktop_dir = get_desktop_files_directory()
|
||||
binary_name = ''.join(binary_path.split('/')[-1])
|
||||
desktop_file_path = Path(f"{desktop_dir}/{binary_name}.desktop")
|
||||
|
||||
return sc
|
||||
if desktop_file_path.exists():
|
||||
desktop_entry = DesktopEntry()
|
||||
desktop_entry.parse(desktop_file_path)
|
||||
return desktop_entry
|
||||
|
||||
return None
|
||||
|
||||
|
||||
class shortcut(DynamicAttributes):
|
||||
_ignore_fields = {"desktop_file_template", "desktop_file"}
|
||||
|
||||
class shortcut:
|
||||
def __init__(self, dest, path, arguments, name=None, action=None, ttype=TargetType.FILESYSTEM):
|
||||
'''
|
||||
:param dest: Path to resulting file on file system
|
||||
@ -119,16 +124,42 @@ class shortcut:
|
||||
:param name: Name of the application
|
||||
:param type: Link type - FILESYSTEM or URL
|
||||
'''
|
||||
self.dest = dest
|
||||
self.dest = self.replace_slashes(dest)
|
||||
self.path = path
|
||||
self.expanded_path = None
|
||||
self.arguments = arguments
|
||||
self.name = name
|
||||
self.name = self.replace_name(name)
|
||||
self.action = action
|
||||
self.changed = ''
|
||||
self.icon = None
|
||||
self.comment = ''
|
||||
self.is_in_user_context = self.set_usercontext()
|
||||
self.type = ttype
|
||||
self.desktop_file_template = None
|
||||
|
||||
|
||||
def items(self):
|
||||
return ((k, v) for k, v in super().items() if k not in self._ignore_fields)
|
||||
|
||||
def __iter__(self):
|
||||
return iter(self.items())
|
||||
|
||||
|
||||
def replace_slashes(self, input_path):
|
||||
if input_path.startswith('%'):
|
||||
index = input_path.find('%', 1)
|
||||
if index != -1:
|
||||
replace_path = input_path[:index + 2] + input_path[index + 2:].replace('/','-')
|
||||
return replace_path
|
||||
return input_path.replace('/','-')
|
||||
|
||||
def replace_name(self, input_name):
|
||||
if input_name.startswith('%'):
|
||||
index = input_name.find('%', 1)
|
||||
if index != -1:
|
||||
replace_name = input_name[index + 2:]
|
||||
return replace_name
|
||||
return input_name
|
||||
|
||||
def __str__(self):
|
||||
result = self.to_json()
|
||||
@ -149,6 +180,9 @@ class shortcut:
|
||||
def set_icon(self, icon_name):
|
||||
self.icon = icon_name
|
||||
|
||||
def set_comment(self, comment):
|
||||
self.comment = comment
|
||||
|
||||
def set_type(self, ttype):
|
||||
'''
|
||||
Set type of the hyperlink - FILESYSTEM or URL
|
||||
@ -177,28 +211,6 @@ class shortcut:
|
||||
def is_usercontext(self):
|
||||
return self.is_in_user_context
|
||||
|
||||
def to_json(self):
|
||||
'''
|
||||
Return shortcut's JSON for further serialization.
|
||||
'''
|
||||
content = dict()
|
||||
content['dest'] = self.dest
|
||||
content['path'] = self.path
|
||||
content['name'] = self.name
|
||||
content['arguments'] = self.arguments
|
||||
content['clsid'] = self.clsid
|
||||
content['guid'] = self.guid
|
||||
content['changed'] = self.changed
|
||||
content['action'] = self.action
|
||||
content['is_in_user_context'] = self.is_in_user_context
|
||||
content['type'] = ttype2str(self.type)
|
||||
if self.icon:
|
||||
content['icon'] = self.icon
|
||||
result = self.desktop()
|
||||
result.content.update(content)
|
||||
|
||||
return json.dumps(result.content)
|
||||
|
||||
def desktop(self, dest=None):
|
||||
'''
|
||||
Returns desktop file object which may be written to disk.
|
||||
@ -206,6 +218,7 @@ class shortcut:
|
||||
if dest:
|
||||
self.desktop_file = DesktopEntry(dest)
|
||||
else:
|
||||
self.desktop_file_template = find_desktop_entry(self.path)
|
||||
self.desktop_file = DesktopEntry()
|
||||
self.desktop_file.addGroup('Desktop Entry')
|
||||
self.desktop_file.set('Version', '1.0')
|
||||
@ -217,7 +230,7 @@ class shortcut:
|
||||
'''
|
||||
Update desktop file object from internal data.
|
||||
'''
|
||||
if self.type == TargetType.URL:
|
||||
if get_ttype(self.type) == TargetType.URL:
|
||||
self.desktop_file.set('Type', 'Link')
|
||||
else:
|
||||
self.desktop_file.set('Type', 'Application')
|
||||
@ -227,14 +240,21 @@ class shortcut:
|
||||
desktop_path = self.path
|
||||
if self.expanded_path:
|
||||
desktop_path = self.expanded_path
|
||||
if self.type == TargetType.URL:
|
||||
if get_ttype(self.type) == TargetType.URL:
|
||||
self.desktop_file.set('URL', desktop_path)
|
||||
else:
|
||||
self.desktop_file.set('Terminal', 'false')
|
||||
self.desktop_file.set('Exec', '{} {}'.format(desktop_path, self.arguments))
|
||||
str2bool_lambda = (lambda boolstr: boolstr if isinstance(boolstr, bool)
|
||||
else boolstr and boolstr.lower() in ['True', 'true', 'yes', '1'])
|
||||
if self.desktop_file_template:
|
||||
terminal_state = str2bool_lambda(self.desktop_file_template.get('Terminal'))
|
||||
self.desktop_file.set('Terminal', 'true' if terminal_state else 'false')
|
||||
self.desktop_file.set('Exec', '{} {}'.format(desktop_path, self.get_original_value('arguments')))
|
||||
self.desktop_file.set('Comment', self.comment)
|
||||
|
||||
if self.icon:
|
||||
self.desktop_file.set('Icon', self.icon)
|
||||
elif self.desktop_file_template and self.desktop_file_template.get('Icon', False):
|
||||
self.desktop_file.set('Icon', self.desktop_file_template.get('Icon'))
|
||||
|
||||
def _write_desktop(self, dest, create_only=False, read_firstly=False):
|
||||
'''
|
||||
|
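A hedged, standalone copy of the %VARIABLE% handling introduced above: replace_slashes() keeps the %...% prefix and turns the remaining slashes into dashes, while replace_name() drops the prefix entirely. The sample paths are invented.

def replace_slashes(input_path):
    if input_path.startswith('%'):
        index = input_path.find('%', 1)
        if index != -1:
            return input_path[:index + 2] + input_path[index + 2:].replace('/', '-')
    return input_path.replace('/', '-')

def replace_name(input_name):
    if input_name.startswith('%'):
        index = input_name.find('%', 1)
        if index != -1:
            return input_name[index + 2:]
    return input_name

print(replace_slashes('%USERPROFILE%/Desktop/Editor'))  # %USERPROFILE%/Desktop-Editor
print(replace_name('%USERPROFILE%/Editor'))             # Editor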
@ -25,6 +25,7 @@ import os
|
||||
import sys
|
||||
import pwd
|
||||
import signal
|
||||
from storage import Dconf_registry
|
||||
|
||||
from util.users import (
|
||||
is_root
|
||||
@ -83,6 +84,11 @@ def parse_cli_arguments():
|
||||
type=int,
|
||||
default=5,
|
||||
help='Set logging verbosity level')
|
||||
argparser.add_argument('-f',
|
||||
'--force',
|
||||
action='store_true',
|
||||
default=False,
|
||||
help='Force GPT download')
|
||||
argparser.add_argument('-s',
|
||||
'--system',
|
||||
action='store_true',
|
||||
@ -165,6 +171,7 @@ def main():
|
||||
gettext.bindtextdomain('gpoa', '/usr/lib/python3/site-packages/gpoa/locale')
|
||||
gettext.textdomain('gpoa')
|
||||
set_loglevel(args.loglevel)
|
||||
Dconf_registry._force = args.force
|
||||
gpo_appliers = runner_factory(args, process_target(args.target))
|
||||
|
||||
if gpo_appliers:
|
||||
|
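A hedged sketch of the new -f/--force wiring shown above and how it ends up as a boolean on the parsed arguments; the parser program name is illustrative.

import argparse

argparser = argparse.ArgumentParser(prog='gpupdate')
argparser.add_argument('-f', '--force',
                       action='store_true',
                       default=False,
                       help='Force GPT download')
print(argparser.parse_args(['--force']).force)   # True
print(argparser.parse_args([]).force)            # False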
@ -2,7 +2,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -19,9 +19,7 @@
|
||||
|
||||
|
||||
import os
|
||||
import sys
|
||||
import argparse
|
||||
import subprocess
|
||||
|
||||
from util.util import (
|
||||
runcmd
|
||||
@ -146,10 +144,10 @@ def is_unit_enabled(unit_name, unit_global=False):
|
||||
|
||||
def get_status():
|
||||
'''
|
||||
Check that gpupdate.service and gpupdate-user.service are enabled.
|
||||
Check that gpupdate.timer and gpupdate-user.timer are enabled.
|
||||
'''
|
||||
is_gpupdate = is_unit_enabled('gpupdate.service')
|
||||
is_gpupdate_user = is_unit_enabled('gpupdate-user.service', unit_global=True)
|
||||
is_gpupdate = is_unit_enabled('gpupdate.timer')
|
||||
is_gpupdate_user = is_unit_enabled('gpupdate-user.timer', unit_global=True)
|
||||
|
||||
if is_gpupdate and is_gpupdate_user:
|
||||
return True
|
||||
@ -218,7 +216,7 @@ def enable_gp(policy_name, backend_type):
|
||||
cmd_set_gpupdate_policy = ['/usr/sbin/control', 'system-policy', 'gpupdate']
|
||||
cmd_gpoa_nodomain = ['/usr/sbin/gpoa', '--nodomain', '--loglevel', '5']
|
||||
cmd_enable_gpupdate_service = ['/bin/systemctl', 'enable', 'gpupdate.service']
|
||||
cmd_enable_gpupdate_user_service = ['/bin/systemctl', '--global', 'enable', 'gpupdate-user.service']
|
||||
cmd_enable_gpupdate_user_service = ['/bin/systemctl', '--global', 'disable', 'gpupdate-user.service']
|
||||
cmd_enable_gpupdate_timer = ['/bin/systemctl', 'enable', 'gpupdate.timer']
|
||||
cmd_enable_gpupdate_user_timer = ['/bin/systemctl', '--global', 'enable', 'gpupdate-user.timer']
|
||||
cmd_enable_gpupdate_scripts_service = ['/bin/systemctl', 'enable', 'gpupdate-scripts-run.service']
|
||||
@ -254,11 +252,7 @@ def enable_gp(policy_name, backend_type):
|
||||
# Enable gpupdate-setup.service for all users
|
||||
if not rollback_on_error(cmd_enable_gpupdate_user_service):
|
||||
return
|
||||
if not is_unit_enabled('gpupdate-user.service', unit_global=True):
|
||||
disable_gp()
|
||||
return
|
||||
|
||||
# Enable gpupdate-scripts-run.service
|
||||
# Enable gpupdate-scripts-run.service
|
||||
if not rollback_on_error(cmd_enable_gpupdate_scripts_service):
|
||||
return
|
||||
if not is_unit_enabled('gpupdate-scripts-run.service'):
|
||||
|
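A hedged sketch (not the project's own is_unit_enabled()) of how the get_status() check above can be expressed with systemctl is-enabled; --global is assumed for the per-user gpupdate-user.timer.

import subprocess

def unit_is_enabled(unit_name, unit_global=False):
    cmd = ['/bin/systemctl', 'is-enabled', unit_name]
    if unit_global:
        cmd.insert(1, '--global')
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0 and proc.stdout.strip() == 'enabled'

if __name__ == '__main__':
    print(unit_is_enabled('gpupdate.timer'),
          unit_is_enabled('gpupdate-user.timer', unit_global=True))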
@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@ -62,6 +62,13 @@ msgstr "Политика Chromium"
|
||||
msgid "Set user property to"
|
||||
msgstr "Установка свойств для пользователя"
|
||||
|
||||
msgid "The line in the configuration file was cleared"
|
||||
msgstr "В конфигурационном файле была очищена строка"
|
||||
|
||||
msgid "Found GPT in cache"
|
||||
msgstr "Найден GPT в кеше"
|
||||
|
||||
|
||||
# Error
|
||||
msgid "Insufficient permissions to run gpupdate"
|
||||
msgstr "Недостаточно прав для запуска gpupdate"
|
||||
@ -228,6 +235,39 @@ msgstr "Ошибка очистки каталога для машины"
|
||||
msgid "Error cleaning directory for user"
|
||||
msgstr "Ошибка очистки каталога для пользователя"
|
||||
|
||||
msgid "Error while executing command for widgets"
|
||||
msgstr "Ошибка при выполнении команды для виджетов"
|
||||
|
||||
msgid "Error creating environment variables"
|
||||
msgstr "Ошибка создания переменных среды"
|
||||
|
||||
msgid "Error running kwriteconfig5 command"
|
||||
msgstr "Ошибка выполнения команды kwriteconfig5"
|
||||
|
||||
msgid "Error getting list of keys"
|
||||
msgstr "Ошибка получения списка ключей"
|
||||
|
||||
msgid "Error getting key value"
|
||||
msgstr "Ошибка при получении значения ключей"
|
||||
|
||||
msgid "Failed to update dconf database"
|
||||
msgstr "Не удалось обновить базу данных dconf"
|
||||
|
||||
msgid "Exception occurred while updating dconf database"
|
||||
msgstr "Возникло исключение при обновлении базы данных dconf"
|
||||
|
||||
msgid "Failed to retrieve data from dconf database"
|
||||
msgstr "Не удалось получить данные из базы dconf"
|
||||
|
||||
msgid "Autofs restart failed"
|
||||
msgstr "Перезапуск Autofs не удался"
|
||||
|
||||
msgid "Failed to update LDAP with new password data"
|
||||
msgstr "Не удалось обновить LDAP новыми данными пароля"
|
||||
|
||||
msgid "Failed to change local user password"
|
||||
msgstr "Не удалось изменить пароль локального пользователя"
|
||||
|
||||
# Error_end
|
||||
|
||||
# Debug
|
||||
@ -603,11 +643,11 @@ msgstr "Запуск применение настроек Envvar для маш
|
||||
msgid "Envvar applier for machine will not be started"
|
||||
msgstr "Применение настроек Envvar для машины не запускается"
|
||||
|
||||
msgid "Running Envvar applier for user in user context"
|
||||
msgstr "Запуск применение настроек Envvar для пользователя в контексте пользователя"
|
||||
msgid "Running Envvar applier for user in admin context"
|
||||
msgstr "Запуск применение настроек Envvar для пользователя в контексте администратора"
|
||||
|
||||
msgid "Envvar applier for user in user context will not be started"
|
||||
msgstr "Применение настроек Envvar для пользователя в контексте пользователя не запускается"
|
||||
msgid "Envvar applier for user in admin context will not be started"
|
||||
msgstr "Применение настроек Envvar для пользователя в контексте администратора не запускается"
|
||||
|
||||
msgid "Running Package applier for machine"
|
||||
msgstr "Запуск установки пакетов для машины"
|
||||
@ -709,25 +749,25 @@ msgid "Running File copy applier for machine"
|
||||
msgstr "Запуск применение настроек копирования файлов для машины"
|
||||
|
||||
msgid "Running File copy applier for machine will not be started"
|
||||
msgstr "Запуск применение настроек копирования файлов для машины не будет запущено"
|
||||
msgstr "Применение настроек копирования файлов для машины не будет запущено"
|
||||
|
||||
msgid "Running File copy applier for user in administrator context"
|
||||
msgstr "Запуск применение настроек копирования файлов для пользователя в контексте администратора"
|
||||
|
||||
msgid "Running File copy applier for user in administrator context will not be started"
|
||||
msgstr "Запуск применение настроек копирования файлов для пользователя в контексте администратора не будет запущено"
|
||||
msgstr "Применение настроек копирования файлов для пользователя в контексте администратора не будет запущено"
|
||||
|
||||
msgid "Running ini applier for machine"
|
||||
msgstr "Запуск применение настроек ini файлов для машины"
|
||||
|
||||
msgid "Running ini applier for machine will not be started"
|
||||
msgstr "Запуск применение настроек ini файлов для машины не будет запущено"
|
||||
msgstr "Применение настроек ini файлов для машины не будет запущено"
|
||||
|
||||
msgid "Running ini applier for user in administrator context"
|
||||
msgstr "Запуск применение настроек ini файлов для пользователя в контексте администратора"
|
||||
msgid "Running ini applier for user in user context"
|
||||
msgstr "Запуск применение настроек ini файлов для пользователя в контексте пользователя"
|
||||
|
||||
msgid "Running ini applier for user in administrator context will not be started"
|
||||
msgstr "Запуск применение настроек ini файлов для пользователя в контексте администратора не будет запущено"
|
||||
msgid "Running ini applier for user in user context will not be started"
|
||||
msgstr "Применение настроек ini файлов для пользователя в контексте пользователя не будет запущено"
|
||||
|
||||
msgid "Ini-file path not recognized"
|
||||
msgstr "Путь к ini-файлу не распознан"
|
||||
@ -741,6 +781,174 @@ msgstr "Сохранение информации об ini-файле"
|
||||
msgid "Dictionary key generation failed"
|
||||
msgstr "Формирования ключа словаря не удалось"
|
||||
|
||||
msgid "Running CIFS applier for machine"
|
||||
msgstr "Запуск применение настроек CIFS для машины"
|
||||
|
||||
msgid "CIFS applier for machine will not be started"
|
||||
msgstr "Применение настроек CIFS для машины не будет запущено"
|
||||
|
||||
msgid "Saving information about network shares"
|
||||
msgstr "Сохранение информации о сетевых ресурсах"
|
||||
|
||||
msgid "Running networkshare applier for machine"
|
||||
msgstr "Запуск применение настроек сетевых каталогов для машины"
|
||||
|
||||
msgid "Running networkshare applier for machine will not be starte"
|
||||
msgstr "Применение настроек сетевых каталогов для машины не будет запущено"
|
||||
|
||||
msgid "Apply network share data action failed"
|
||||
msgstr "Не удалось применить действие с данными общего сетевого ресурса"
|
||||
|
||||
msgid "Running yandex_browser_applier for machine"
|
||||
msgstr "Запуск yandex_browser_applier для машины"
|
||||
|
||||
msgid "Yandex_browser_applier for machine will not be started"
|
||||
msgstr "Yandex_browser_applier для машины не запустится"
|
||||
|
||||
msgid "Wrote YandexBrowser preferences to"
|
||||
msgstr "Запись настройки Яндекс Браузера в"
|
||||
|
||||
msgid "Running networkshare applier for user"
|
||||
msgstr "Запуск применение настроек сетевых каталогов для пользователя"
|
||||
|
||||
msgid "File copy"
|
||||
msgstr "Копирование файла"
|
||||
|
||||
msgid "Running networkshare applier for user will not be started"
|
||||
msgstr "Применение настроек сетевых каталогов для пользователя не будет запущено"
|
||||
|
||||
msgid "File update"
|
||||
msgstr "Обновление файла"
|
||||
|
||||
msgid "Applying settings for network share"
|
||||
msgstr "Применение настроек для сетевой папки"
|
||||
|
||||
msgid "Deleting a file"
|
||||
msgstr "Удаление файла"
|
||||
|
||||
msgid "Running GPOA by root for user"
|
||||
msgstr "Запуск GPOA от root для пользователя"
|
||||
|
||||
msgid "The GPOA process was started for computer"
|
||||
msgstr "Процесс GPOA запущен для компьютера"
|
||||
|
||||
msgid "Running networkshare applier for machine will not be started"
|
||||
msgstr "Применение настроек сетевых каталогов для машины не будет запущено"
|
||||
|
||||
msgid "Failed to create a symlink to the network drives mountpoint"
|
||||
msgstr "Не удалось создать ссылку на точку монтирования сетевых дисков пользователя"
|
||||
|
||||
msgid "Failed to create a symlink to the system network drives mountpoint"
|
||||
msgstr "Не удалось создать ссылку на точку монтирования системных сетевых дисков"
|
||||
|
||||
msgid "Failed to create a symlink to the hidden network drives mountpoint"
|
||||
msgstr "Не удалось создать ссылку на точку монтирования скрытых сетевых дисков пользователя"
|
||||
|
||||
msgid "Failed to create a symlink to the hidden system network drives mountpoint"
|
||||
msgstr "Не удалось создать ссылку на точку монтирования скрытых системных сетевых дисков"
|
||||
|
||||
msgid "Running KDE applier for machine"
|
||||
msgstr "Запуск применения настроек KDE для машины"
|
||||
|
||||
msgid "KDE applier for machine will not be started"
|
||||
msgstr "Применение настроек KDE для машины не удалось"
|
||||
|
||||
msgid "Running KDE applier for user in user context"
|
||||
msgstr "Запуск применения настроек KDE в контексте пользователя"
|
||||
|
||||
msgid "KDE applier for user in user context will not be started"
|
||||
msgstr "KDE в контексте пользователя не запускается"
|
||||
|
||||
msgid "Changing the configuration file"
|
||||
msgstr "Изменение конфигурационного файла"
|
||||
|
||||
msgid "Widget command completed successfully"
|
||||
msgstr "Команда для виджетов выполнена успешно"
|
||||
|
||||
msgid "Getting a list of keys"
|
||||
msgstr "Получение списка ключей"
|
||||
|
||||
msgid "Getting the key value"
|
||||
msgstr "Получение значения ключа"
|
||||
|
||||
msgid "Successfully updated dconf database"
|
||||
msgstr "База данных dconf успешно обновлена"
|
||||
|
||||
msgid "Creating a dictionary with keys and values from the dconf database"
|
||||
msgstr "Формирование словаря с ключами и значениями из базы dconf"
|
||||
|
||||
msgid "No entry found for the specified path"
|
||||
msgstr "Не найдено записей по указанному пути"
|
||||
|
||||
msgid "Creating an ini file with policies for dconf"
|
||||
msgstr "Создание ini-файла с политиками для dconf"
|
||||
|
||||
msgid "GPO version was not found"
|
||||
msgstr "Версия GPO не найдена"
|
||||
|
||||
msgid "SYSVOL entry found in cache"
|
||||
msgstr "Запись SYSVOL найдена в кеше"
|
||||
|
||||
msgid "Wrote Thunderbird preferences to"
|
||||
msgstr "Настройки Thunderbird записаны в"
|
||||
|
||||
msgid "Running Thunderbird applier for machine"
|
||||
msgstr "Запуск применение настроек Thunderbird для машины"
|
||||
|
||||
msgid "Thunderbird applier for machine will not be started"
|
||||
msgstr "Применение настроек Thunderbird для компьютера не запускается"
|
||||
|
||||
msgid "The environment file has been cleaned"
|
||||
msgstr "Файл environment очищен"
|
||||
|
||||
msgid "Cleanup of file environment failed"
|
||||
msgstr "Очистка файла environment не удалась"
|
||||
|
||||
msgid "Failed to get dictionary"
|
||||
msgstr "Не удалось получить словарь"
|
||||
|
||||
msgid "LAPS applier started"
|
||||
msgstr "Запущен обработчик LAPS"
|
||||
|
||||
msgid "LAPS applier is disabled"
|
||||
msgstr "Обработчик LAPS отключен"
|
||||
|
||||
msgid "Rebooting system after password change"
|
||||
msgstr "Перезагрузка системы после смены пароля"
|
||||
|
||||
msgid "Password changed"
|
||||
msgstr "Пароль изменён"
|
||||
|
||||
msgid "Writing password changes time"
|
||||
msgstr "Запись времени изменения пароля"
|
||||
|
||||
msgid "Requirements not met"
|
||||
msgstr "Требования не выполнены"
|
||||
|
||||
msgid "The number of hours from the moment of the last user entrance"
|
||||
msgstr "Количество часов с момента последнего входа пользователя"
|
||||
|
||||
msgid "The number of hours since the password has last changed"
|
||||
msgstr "Количество часов с момента последнего изменения пароля"
|
||||
|
||||
msgid "LDAP updated with new password data"
|
||||
msgstr "LDAP обновлён новыми данными пароля"
|
||||
|
||||
msgid "No active sessions found"
|
||||
msgstr "Активные сеансы не найдены"
|
||||
|
||||
msgid "Process terminated"
|
||||
msgstr "Процесс завершён"
|
||||
|
||||
msgid "Password update not needed"
|
||||
msgstr "Обновление пароля не требуется"
|
||||
|
||||
msgid "Password successfully updated"
|
||||
msgstr "Пароль успешно обновлён"
|
||||
|
||||
msgid "Cleaning the autofs catalog"
|
||||
msgstr "Очистка каталога autofs"
|
||||
|
||||
# Debug_end
|
||||
|
||||
# Warning
|
||||
@ -787,6 +995,75 @@ msgstr "Не удалось кэшировать файл"
|
||||
msgid "Could not create a valid list of keys"
|
||||
msgstr "Не удалось создать допустимый список ключей"
|
||||
|
||||
msgid "Failed to copy file"
|
||||
msgstr "Не удалось скопировать файл"
|
||||
|
||||
msgid "Failed to create KDE settings list"
|
||||
msgstr "Не удалось создать список настроек KDE"
|
||||
|
||||
msgid "Could not find tools to configure KDE"
|
||||
msgstr "Не удалось найти инструменты для настройки KDE"
|
||||
|
||||
msgid "Failed to open KDE settings"
|
||||
msgstr "Не удалось открыть настройки KDE"
|
||||
|
||||
msgid "Failed to change KDE configuration file"
|
||||
msgstr "Не удалось изменить файл конфигурации KDE"
|
||||
|
||||
msgid "Error connecting to server"
|
||||
msgstr "Ошибка при подключении к серверу"
|
||||
|
||||
msgid "Wallpaper configuration file not found"
|
||||
msgstr "Конфигурационный файл для обоев не найден"
|
||||
|
||||
msgid "The user setting was not installed, conflict with computer setting"
|
||||
msgstr "Пользовательская настройка не была установлена, конфликт с настройкой компьютера"
|
||||
|
||||
msgid "Action for ini file failed"
|
||||
msgstr "Не удалось выполнить действие для INI-файла"
|
||||
|
||||
msgid "Couldn't get the uid"
|
||||
msgstr "Не удалось получить uid"
|
||||
|
||||
msgid "Failed to load content from remote host"
|
||||
msgstr "Не удалось загрузить контент с удаленного узла"
|
||||
|
||||
msgid "Force mode activated"
|
||||
msgstr "Режим force задействован"
|
||||
|
||||
msgid "Failed to change password"
|
||||
msgstr "Не удалось изменить пароль"
|
||||
|
||||
msgid "Failed to write password modification time"
|
||||
msgstr "Не удалось записать время изменения пароля"
|
||||
|
||||
msgid "LAPS requirements not met, module disabled"
|
||||
msgstr "Требования LAPS не выполнены, модуль отключён"
|
||||
|
||||
msgid "Could not resolve encryption principal name. Return admin group SID"
|
||||
msgstr "Не удалось определить имя шифрования. Возвращён SID группы администраторов"
|
||||
|
||||
msgid "Failed to get expiration time from LDAP"
|
||||
msgstr "Не удалось получить время истечения срока действия из LDAP"
|
||||
|
||||
msgid "Failed to read password modification time from dconf"
|
||||
msgstr "Не удалось прочитать время изменения пароля из dconf"
|
||||
|
||||
msgid "Failed to get last login time"
|
||||
msgstr "Не удалось получить время последнего входа"
|
||||
|
||||
msgid "Failed to calculate password age"
|
||||
msgstr "Не удалось вычислить возраст пароля"
|
||||
|
||||
msgid "Failed to terminate process"
|
||||
msgstr "Не удалось завершить процесс"
|
||||
|
||||
msgid "The user was not found to change the password"
|
||||
msgstr "Пользователь для изменения пароля не был найден"
|
||||
|
||||
msgid "Error while cleaning the autofs catalog"
|
||||
msgstr "Ошибка при очистке каталога autofs"
|
||||
|
||||
# Fatal
|
||||
msgid "Unable to refresh GPO list"
|
||||
msgstr "Невозможно обновить список объектов групповых политик"
|
||||
|
@@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2024 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@@ -30,6 +30,8 @@ def info_code(code):
|
||||
info_ids[7] = 'Firefox policy'
|
||||
info_ids[8] = 'Chromium policy'
|
||||
info_ids[9] = 'Set user property to'
|
||||
info_ids[10] = 'The line in the configuration file was cleared'
|
||||
info_ids[11] = 'Found GPT in cache'
|
||||
|
||||
return info_ids.get(code, 'Unknown info code')
|
||||
|
||||
@@ -99,8 +101,17 @@ def error_code(code):
|
||||
error_ids[63] = 'Error merging user GPT (from machine GPO)'
|
||||
error_ids[64] = 'Error to cleanup directory for machine'
|
||||
error_ids[65] = 'Error to cleanup directory for user'
|
||||
|
||||
|
||||
error_ids[66] = 'Error while executing command for widgets'
|
||||
error_ids[67] = 'Error creating environment variables'
|
||||
error_ids[68] = 'Error running kwriteconfig5 command'
|
||||
error_ids[69] = 'Error getting list of keys'
|
||||
error_ids[70] = 'Error getting key value'
|
||||
error_ids[71] = 'Failed to update dconf database'
|
||||
error_ids[72] = 'Exception occurred while updating dconf database'
|
||||
error_ids[73] = 'Failed to retrieve data from dconf database'
|
||||
error_ids[74] = 'Autofs restart failed'
|
||||
error_ids[75] = 'Failed to update LDAP with new password data'
|
||||
error_ids[76] = 'Failed to change local user password'
|
||||
return error_ids.get(code, 'Unknown error code')
|
||||
|
||||
def debug_code(code):
|
||||
@@ -240,8 +251,8 @@ def debug_code(code):
|
||||
debug_ids[133] = 'NTP applier for machine will not be started'
|
||||
debug_ids[134] = 'Running Envvar applier for machine'
|
||||
debug_ids[135] = 'Envvar applier for machine will not be started'
|
||||
debug_ids[136] = 'Running Envvar applier for user in user context'
|
||||
debug_ids[137] = 'Envvar applier for user in user context will not be started'
|
||||
debug_ids[136] = 'Running Envvar applier for user in admin context'
|
||||
debug_ids[137] = 'Envvar applier for user in admin context will not be started'
|
||||
debug_ids[138] = 'Running Package applier for machine'
|
||||
debug_ids[139] = 'Package applier for machine will not be started'
|
||||
debug_ids[140] = 'Running Package applier for user in administrator context'
|
||||
@@ -277,12 +288,64 @@ def debug_code(code):
|
||||
debug_ids[170] = 'Running File copy applier for user in administrator context will not be started'
|
||||
debug_ids[171] = 'Running ini applier for machine'
|
||||
debug_ids[172] = 'Running ini applier for machine will not be started'
|
||||
debug_ids[173] = 'Running ini applier for user in administrator context'
|
||||
debug_ids[174] = 'Running ini applier for user in administrator context will not be started'
|
||||
debug_ids[173] = 'Running ini applier for user in user context'
|
||||
debug_ids[174] = 'Running ini applier for user in user context will not be started'
|
||||
debug_ids[175] = 'Ini-file path not recognized'
|
||||
debug_ids[176] = 'Ini-file is not readable'
|
||||
debug_ids[177] = 'Saving information about ini-file'
|
||||
debug_ids[178] = 'Dictionary key generation failed'
|
||||
debug_ids[179] = 'Running CIFS applier for machine'
|
||||
debug_ids[180] = 'CIFS applier for machine will not be started'
|
||||
debug_ids[181] = 'Running networkshare applier for machine will not be started'
|
||||
debug_ids[182] = 'Apply network share data action failed'
|
||||
debug_ids[183] = 'Running yandex_browser_applier for machine'
|
||||
debug_ids[184] = 'Yandex_browser_applier for machine will not be started'
|
||||
debug_ids[185] = 'Wrote YandexBrowser preferences to'
|
||||
debug_ids[186] = 'Saving information about network shares'
|
||||
debug_ids[187] = 'Running networkshare applier for machine'
|
||||
debug_ids[188] = 'Running networkshare applier for user'
|
||||
debug_ids[189] = 'Running networkshare applier for user will not be started'
|
||||
debug_ids[190] = 'Applying settings for network share'
|
||||
debug_ids[191] = 'File copy'
|
||||
debug_ids[192] = 'File update'
|
||||
debug_ids[193] = 'Deleting a file'
|
||||
debug_ids[194] = 'Failed to create a symlink to the network drives mountpoint'
|
||||
debug_ids[195] = 'Failed to create a symlink to the system network drives mountpoint'
|
||||
debug_ids[196] = 'Failed to create a symlink to the hidden network drives mountpoint'
|
||||
debug_ids[197] = 'Failed to create a symlink to the hidden system network drives mountpoint'
|
||||
debug_ids[198] = 'Running KDE applier for machine'
|
||||
debug_ids[199] = 'KDE applier for machine will not be started'
|
||||
debug_ids[200] = 'Running KDE applier for user in user context'
|
||||
debug_ids[201] = 'KDE applier for user in user context will not be started'
|
||||
debug_ids[202] = 'Changing the configuration file'
|
||||
debug_ids[203] = 'Widget command completed successfully'
|
||||
debug_ids[204] = 'Getting a list of keys'
|
||||
debug_ids[205] = 'Getting the key value'
|
||||
debug_ids[206] = 'Successfully updated dconf database'
|
||||
debug_ids[207] = 'Creating a dictionary with keys and values from the dconf database'
|
||||
debug_ids[208] = 'No entry found for the specified path'
|
||||
debug_ids[209] = 'Creating an ini file with policies for dconf'
|
||||
debug_ids[211] = 'SYSVOL entry found in cache'
|
||||
debug_ids[212] = 'Wrote Thunderbird preferences to'
|
||||
debug_ids[213] = 'Running Thunderbird applier for machine'
|
||||
debug_ids[214] = 'Thunderbird applier for machine will not be started'
|
||||
debug_ids[215] = 'The environment file has been cleaned'
|
||||
debug_ids[216] = 'Cleanup of file environment failed'
|
||||
debug_ids[217] = 'Failed to get dictionary'
|
||||
debug_ids[218] = 'LAPS applier started'
|
||||
debug_ids[219] = 'LAPS applier is disabled'
|
||||
debug_ids[220] = 'Rebooting system after password change'
|
||||
debug_ids[221] = 'Password changed'
|
||||
debug_ids[222] = 'Writing password changes time'
|
||||
debug_ids[223] = 'Requirements not met'
|
||||
debug_ids[224] = 'The number of hours from the moment of the last user entrance'
|
||||
debug_ids[225] = 'The number of hours since the password has last changed'
|
||||
debug_ids[226] = 'LDAP updated with new password data'
|
||||
debug_ids[227] = 'No active sessions found'
|
||||
debug_ids[228] = 'Process terminated'
|
||||
debug_ids[229] = 'Password update not needed'
|
||||
debug_ids[230] = 'Password successfully updated'
|
||||
debug_ids[231] = 'Cleaning the autofs catalog'
|
||||
|
||||
return debug_ids.get(code, 'Unknown debug code')
|
||||
|
||||
@@ -308,6 +371,29 @@ def warning_code(code):
|
||||
warning_ids[12] = 'Failed to read the list of files'
|
||||
warning_ids[13] = 'Failed to caching the file'
|
||||
warning_ids[14] = 'Could not create a valid list of keys'
|
||||
warning_ids[15] = 'Failed to copy file'
|
||||
warning_ids[16] = 'Failed to create KDE settings list'
|
||||
warning_ids[17] = 'Could not find tools to configure KDE'
|
||||
warning_ids[18] = 'Failed to open KDE settings'
|
||||
warning_ids[19] = 'Failed to change KDE configuration file'
|
||||
warning_ids[20] = 'Error connecting to server'
|
||||
warning_ids[21] = 'Wallpaper configuration file not found'
|
||||
warning_ids[22] = 'The user setting was not installed, conflict with computer setting'
|
||||
warning_ids[23] = 'Action for ini file failed'
|
||||
warning_ids[24] = 'Couldn\'t get the uid'
|
||||
warning_ids[25] = 'Failed to load content from remote host'
|
||||
warning_ids[26] = 'Force mode activated'
|
||||
warning_ids[27] = 'Failed to change password'
|
||||
warning_ids[28] = 'Failed to write password modification time'
|
||||
warning_ids[29] = 'LAPS requirements not met, module disabled'
|
||||
warning_ids[30] = 'Could not resolve encryption principal name. Return admin group SID'
|
||||
warning_ids[31] = 'Failed to get expiration time from LDAP'
|
||||
warning_ids[32] = 'Failed to read password modification time from dconf'
|
||||
warning_ids[33] = 'Failed to get last login time'
|
||||
warning_ids[34] = 'Failed to calculate password age'
|
||||
warning_ids[35] = 'Failed to terminate process'
|
||||
warning_ids[36] = 'The user was not found to change the password'
|
||||
warning_ids[37] = 'Error while cleaning the autofs catalog'
|
||||
|
||||
return warning_ids.get(code, 'Unknown warning code')
|
||||
|
||||
|
@@ -20,15 +20,13 @@
|
||||
import rpm
|
||||
import subprocess
|
||||
from gpoa.storage import registry_factory
|
||||
from util.gpoa_ini_parsing import GpoaConfigObj
|
||||
from util.util import get_uid_by_username, string_to_literal_eval
|
||||
import logging
|
||||
from util.logging import log
|
||||
import argparse
|
||||
import gettext
|
||||
import locale
|
||||
from messages import message_with_code
|
||||
from util.arguments import (
|
||||
set_loglevel
|
||||
)
|
||||
|
||||
|
||||
def is_rpm_installed(rpm_name):
|
||||
@@ -44,35 +42,35 @@ def is_rpm_installed(rpm_name):
|
||||
|
||||
class Pkcon_applier:
|
||||
|
||||
def __init__(self, sid = None):
|
||||
self.__install_key_name = 'Install'
|
||||
self.__remove_key_name = 'Remove'
|
||||
self.__hkcu_branch = 'Software\\BaseALT\\Policies\\Packages'
|
||||
self.__hklm_branch = 'Software\\BaseALT\\Policies\\Packages'
|
||||
def __init__(self, user = None):
|
||||
install_key_name = 'Install'
|
||||
remove_key_name = 'Remove'
|
||||
hklm_branch = 'Software/BaseALT/Policies/Packages'
|
||||
self.__install_command = ['/usr/bin/pkcon', '-y', 'install']
|
||||
self.__remove_command = ['/usr/bin/pkcon', '-y', 'remove']
|
||||
self.__reinstall_command = ['/usr/bin/pkcon', '-y', 'reinstall']
|
||||
self.install_packages = set()
|
||||
self.remove_packages = set()
|
||||
self.storage = registry_factory('registry')
|
||||
if sid:
|
||||
install_branch_user = '{}\\{}%'.format(self.__hkcu_branch, self.__install_key_name)
|
||||
remove_branch_user = '{}\\{}%'.format(self.__hkcu_branch, self.__remove_key_name)
|
||||
self.install_packages_setting = self.storage.filter_hkcu_entries(sid, install_branch_user)
|
||||
self.remove_packages_setting = self.storage.filter_hkcu_entries(sid, remove_branch_user)
|
||||
self.storage = registry_factory()
|
||||
if user:
|
||||
uid = get_uid_by_username(user)
|
||||
dict_dconf_db = self.storage.get_dictionary_from_dconf_file_db(uid)
|
||||
else:
|
||||
install_branch = '{}\\{}%'.format(self.__hklm_branch, self.__install_key_name)
|
||||
remove_branch = '{}\\{}%'.format(self.__hklm_branch, self.__remove_key_name)
|
||||
self.install_packages_setting = self.storage.filter_hklm_entries(install_branch)
|
||||
self.remove_packages_setting = self.storage.filter_hklm_entries(remove_branch)
|
||||
dict_dconf_db = self.storage.get_dictionary_from_dconf_file_db()
|
||||
dict_packages = dict_dconf_db.get(hklm_branch,{})
|
||||
self.install_packages_setting = string_to_literal_eval(dict_packages.get(install_key_name,[]))
|
||||
self.remove_packages_setting = string_to_literal_eval(dict_packages.get(remove_key_name,[]))
|
||||
|
||||
for package in self.install_packages_setting:
|
||||
if not is_rpm_installed(package.data):
|
||||
self.install_packages.add(package.data)
|
||||
package = package.strip()
|
||||
if not is_rpm_installed(package):
|
||||
self.install_packages.add(package)
|
||||
for package in self.remove_packages_setting:
|
||||
if package.data in self.install_packages:
|
||||
self.install_packages.remove(package.data)
|
||||
if is_rpm_installed(package.data):
|
||||
self.remove_packages.add(package.data)
|
||||
package = package.strip()
|
||||
if package in self.install_packages:
|
||||
self.install_packages.remove(package)
|
||||
if is_rpm_installed(package):
|
||||
self.remove_packages.add(package)
|
||||
|
||||
def apply(self):
|
||||
log('D142')
|
||||
@@ -137,13 +135,13 @@ if __name__ == '__main__':
|
||||
gettext.textdomain('gpoa')
|
||||
logger = logging.getLogger()
|
||||
parser = argparse.ArgumentParser(description='Package applier')
|
||||
parser.add_argument('--sid', type = str, help = 'sid', nargs = '?', default = None)
|
||||
parser.add_argument('--user', type = str, help = 'user', nargs = '?', default = None)
|
||||
parser.add_argument('--loglevel', type = int, help = 'loglevel', nargs = '?', default = 30)
|
||||
|
||||
args = parser.parse_args()
|
||||
logger.setLevel(args.loglevel)
|
||||
if args.sid:
|
||||
applier = Pkcon_applier(args.sid)
|
||||
if args.user:
|
||||
applier = Pkcon_applier(args.user)
|
||||
else:
|
||||
applier = Pkcon_applier()
|
||||
applier.apply()
|
||||
|
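For orientation, a minimal sketch of the dconf-backed data the rewritten Pkcon_applier constructor reads; the branch and key names come from the code above, while the package lists and the literal-eval behaviour of string_to_literal_eval() are assumptions:

from util.util import string_to_literal_eval  # as imported at the top of this file

# Shape returned by storage.get_dictionary_from_dconf_file_db() (values assumed)
dict_dconf_db = {
    'Software/BaseALT/Policies/Packages': {
        'Install': "['vim', 'mc']",   # lists are stored as strings in dconf
        'Remove': "['nano']",
    }
}
packages = dict_dconf_db.get('Software/BaseALT/Policies/Packages', {})
install = string_to_literal_eval(packages.get('Install', []))  # -> ['vim', 'mc'] if it literal-evals strings
remove = string_to_literal_eval(packages.get('Remove', []))    # -> ['nano']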
@@ -35,6 +35,6 @@ class plugin_manager:
|
||||
logging.warning(slogm(str(exc)))
|
||||
|
||||
def run(self):
|
||||
self.plugins.get('adp', plugin('adp')).run()
|
||||
#self.plugins.get('adp', plugin('adp')).run()
|
||||
self.plugins.get('roles', plugin('roles')).run()
|
||||
|
||||
|
@@ -21,6 +21,8 @@ import subprocess
|
||||
import argparse
|
||||
import os
|
||||
from pathlib import Path
|
||||
import psutil
|
||||
import time
|
||||
|
||||
class Scripts_runner:
|
||||
'''
|
||||
@@ -104,12 +106,39 @@ class Scripts_runner:
|
||||
|
||||
def run_cmd_subprocess(self, cmd):
|
||||
try:
|
||||
subprocess.Popen(cmd)
|
||||
subprocess.run(cmd)
|
||||
return 'Script run: {}'.format(cmd)
|
||||
except Exception as exc:
|
||||
return exc
|
||||
|
||||
def find_process_by_name_and_script(name, script_path):
|
||||
|
||||
for proc in psutil.process_iter(['pid', 'name', 'cmdline']):
|
||||
try:
|
||||
# Check if the process name matches and the script path is in the command line arguments
|
||||
if proc.info['name'] == name and script_path in proc.info['cmdline']:
|
||||
return proc
|
||||
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
|
||||
continue
|
||||
return None
|
||||
|
||||
def wait_for_process(name, script_path, check_interval=1):
|
||||
|
||||
process = find_process_by_name_and_script(name, script_path)
|
||||
if not process:
|
||||
print(f"Process with name {name} and script path {script_path} not found.")
|
||||
return
|
||||
|
||||
try:
|
||||
# Loop to wait for the process to finish
|
||||
while process.is_running():
|
||||
print(f"Waiting for process {name} with PID {process.pid} to finish...")
|
||||
time.sleep(check_interval)
|
||||
print(f"Process {name} with PID {process.pid} has finished.")
|
||||
return
|
||||
except (psutil.NoSuchProcess, psutil.AccessDenied):
|
||||
print(f"Process {name} with PID {process.pid} is no longer accessible.")
|
||||
return
|
||||
|
||||
if __name__ == '__main__':
|
||||
parser = argparse.ArgumentParser(description='Scripts runner')
|
||||
@@ -117,6 +146,9 @@ if __name__ == '__main__':
|
||||
parser.add_argument('--user', type = str, help = 'User name ', nargs = '?', default = None)
|
||||
parser.add_argument('--action', type = str, help = 'MACHINE : [STARTUP or SHUTDOWN], USER : [LOGON or LOGOFF]', nargs = '?', default = None)
|
||||
|
||||
process_name = "python3"
|
||||
script_path = "/usr/sbin/gpoa"
|
||||
wait_for_process(process_name, script_path)
|
||||
args = parser.parse_args()
|
||||
try:
|
||||
Scripts_runner(args.mode, args.user, args.action)
|
||||
|
@@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
# Copyright (C) 2019-2023 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
@@ -16,12 +16,19 @@
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from .sqlite_registry import sqlite_registry
|
||||
from .sqlite_cache import sqlite_cache
|
||||
|
||||
def cache_factory(cache_name):
|
||||
return sqlite_cache(cache_name)
|
||||
from storage.dconf_registry import Dconf_registry
|
||||
|
||||
def registry_factory(registry_name='registry', registry_dir=None):
|
||||
return sqlite_registry(registry_name, registry_dir)
|
||||
def registry_factory(registry_name='', envprofile=None , username=None):
|
||||
if username:
|
||||
Dconf_registry._username = username
|
||||
else:
|
||||
Dconf_registry._envprofile = 'system'
|
||||
if envprofile:
|
||||
Dconf_registry._envprofile = envprofile
|
||||
|
||||
if registry_name == 'dconf':
|
||||
return Dconf_registry()
|
||||
else:
|
||||
return Dconf_registry
|
||||
|
||||
|
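A minimal usage sketch of the reworked factory, matching the call sites seen elsewhere in this diff (the username and uid below are illustrative assumptions):

from gpoa.storage import registry_factory   # import path as used in the package applier above

storage = registry_factory()                 # machine context: returns the Dconf_registry class
machine_dict = storage.get_dictionary_from_dconf_file_db()            # system policy database
user_storage = registry_factory(username='someuser')                  # per-user context (name assumed)
user_dict = user_storage.get_dictionary_from_dconf_file_db(uid=1000)  # uid assumed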
861 gpoa/storage/dconf_registry.py Normal file
@@ -0,0 +1,861 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2023 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import subprocess
|
||||
from pathlib import Path
|
||||
from util.util import (string_to_literal_eval,
|
||||
try_dict_to_literal_eval,
|
||||
touch_file, get_uid_by_username,
|
||||
add_prefix_to_keys,
|
||||
remove_keys_with_prefix,
|
||||
clean_data)
|
||||
from util.paths import get_dconf_config_path
|
||||
from util.logging import log
|
||||
import re
|
||||
from collections import OrderedDict
|
||||
import itertools
|
||||
from gpt.dynamic_attributes import RegistryKeyMetadata
|
||||
import gi
|
||||
gi.require_version("Gvdb", "1.0")
|
||||
gi.require_version("GLib", "2.0")
|
||||
from gi.repository import Gvdb, GLib
|
||||
|
||||
|
||||
class PregDconf():
|
||||
def __init__(self, keyname, valuename, type_preg, data):
|
||||
self.keyname = keyname
|
||||
self.valuename = valuename
|
||||
self.hive_key = '{}/{}'.format(self.keyname, self.valuename)
|
||||
self.type = type_preg
|
||||
self.data = data
|
||||
|
||||
|
||||
class gplist(list):
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
def first(self):
|
||||
if self:
|
||||
return self[0]
|
||||
else:
|
||||
return None
|
||||
|
||||
def count(self):
|
||||
return len(self)
|
||||
|
||||
class Dconf_registry():
|
||||
'''
|
||||
A class variable that represents a global registry dictionary shared among instances of the class
|
||||
'''
|
||||
_GpoPriority = 'Software/BaseALT/Policies/GpoPriority'
|
||||
_gpo_name = set()
|
||||
global_registry_dict = dict({_GpoPriority:{}})
|
||||
previous_global_registry_dict = dict()
|
||||
__template_file = '/usr/share/dconf/user_mandatory.template'
|
||||
_policies_path = 'Software/'
|
||||
_policies_win_path = 'SOFTWARE/'
|
||||
_gpt_read_flag = False
|
||||
_force = False
|
||||
__dconf_dict_flag = False
|
||||
__dconf_dict = dict()
|
||||
_dconf_db = dict()
|
||||
_dict_gpo_name_version_cache = dict()
|
||||
_username = None
|
||||
_uid = None
|
||||
_envprofile = None
|
||||
_path_bin_system = "/etc/dconf/db/policy"
|
||||
|
||||
list_keys = list()
|
||||
_info = dict()
|
||||
_counter_gpt = itertools.count(0)
|
||||
|
||||
shortcuts = list()
|
||||
folders = list()
|
||||
files = list()
|
||||
drives = list()
|
||||
scheduledtasks = list()
|
||||
environmentvariables = list()
|
||||
inifiles = list()
|
||||
services = list()
|
||||
printers = list()
|
||||
scripts = list()
|
||||
networkshares = list()
|
||||
|
||||
_true_strings = {
|
||||
"True",
|
||||
"true",
|
||||
"TRUE",
|
||||
"yes",
|
||||
"Yes",
|
||||
"enabled",
|
||||
"enable",
|
||||
"Enabled",
|
||||
"Enable",
|
||||
'1'
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def set_info(cls, key , data):
|
||||
cls._info[key] = data
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_info(cls, key):
|
||||
return cls._info.setdefault(key, None)
|
||||
|
||||
@staticmethod
|
||||
def get_next_number():
|
||||
return next(Dconf_registry._counter_gpt)
|
||||
|
||||
@staticmethod
|
||||
def get_matching_keys(path):
|
||||
if path[0] != '/':
|
||||
path = '/' + path
|
||||
logdata = dict()
|
||||
envprofile = get_dconf_envprofile()
|
||||
try:
|
||||
process = subprocess.Popen(['dconf', 'list', path],
|
||||
env=envprofile, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
||||
logdata['path'] = path
|
||||
log('D204', logdata)
|
||||
output, error = process.communicate()
|
||||
if not output and not error:
|
||||
return
|
||||
if not error:
|
||||
keys = output.strip().split('\n')
|
||||
for key in keys:
|
||||
Dconf_registry.get_matching_keys(f'{path}{key}')
|
||||
else:
|
||||
Dconf_registry.list_keys.append(path)
|
||||
return Dconf_registry.list_keys
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('E69', logdata)
|
||||
return None
|
||||
|
||||
@staticmethod
|
||||
def get_key_values(keys):
|
||||
key_values = {}
|
||||
for key in keys:
|
||||
key_values[key] = Dconf_registry.get_key_value(key)
|
||||
return key_values
|
||||
|
||||
@staticmethod
|
||||
def get_key_value(key):
|
||||
logdata = dict()
|
||||
envprofile = get_dconf_envprofile()
|
||||
try:
|
||||
process = subprocess.Popen(['dconf', 'read', key],
|
||||
env=envprofile, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
||||
logdata['key'] = key
|
||||
output, error = process.communicate()
|
||||
|
||||
if not error:
|
||||
return string_to_literal_eval(string_to_literal_eval(output))
|
||||
else:
|
||||
return None
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('E70', logdata)
|
||||
return None
|
||||
|
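# Editorial note on the helpers above: get_matching_keys() walks the dconf tree
# by invoking 'dconf list' recursively; a path that is not a directory makes
# dconf report an error, which is how leaf keys end up in list_keys. Each leaf
# is then read with 'dconf read' via get_key_value(). Illustrative (path assumed):
#
#   keys = Dconf_registry.get_matching_keys('Software/BaseALT/Policies/') or []
#   values = Dconf_registry.get_key_values(keys)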
||||
@staticmethod
|
||||
def dconf_update(uid=None):
|
||||
logdata = dict()
|
||||
path_dconf_config = get_dconf_config_path(uid)
|
||||
db_file = path_dconf_config[:-3]
|
||||
try:
|
||||
process = subprocess.Popen(['dconf', 'compile', db_file, path_dconf_config],
|
||||
stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
||||
output, error = process.communicate()
|
||||
|
||||
if error:
|
||||
logdata['error'] = error
|
||||
log('E71', logdata)
|
||||
else:
|
||||
logdata['output'] = output
|
||||
log('D206', logdata)
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
log('E72', logdata)
|
||||
|
||||
@classmethod
|
||||
def check_profile_template(cls):
|
||||
if Path(cls.__template_file).exists():
|
||||
return True
|
||||
else:
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def update_dict_to_previous(cls):
|
||||
dict_clean_previous = remove_keys_with_prefix(cls._dconf_db)
|
||||
dict_with_previous = add_prefix_to_keys(dict_clean_previous)
|
||||
cls.global_registry_dict.update(dict_with_previous)
|
||||
|
||||
@classmethod
|
||||
def apply_template(cls, uid):
|
||||
logdata = dict()
|
||||
if uid and cls.check_profile_template():
|
||||
with open(cls.__template_file, "r") as f:
|
||||
template = f.read()
|
||||
# Replace the "{uid}" placeholder with the actual UID value
|
||||
content = template.replace("{{uid}}", str(uid))
|
||||
|
||||
elif uid:
|
||||
content = f"user-db:user\n" \
|
||||
f"system-db:distr\n" \
|
||||
f"system-db:policy\n" \
|
||||
f"system-db:policy{uid}\n" \
|
||||
f"system-db:local\n" \
|
||||
f"system-db:default\n" \
|
||||
f"system-db:local\n" \
|
||||
f"system-db:policy{uid}\n" \
|
||||
f"system-db:policy\n" \
|
||||
f"system-db:distr\n"
|
||||
else:
|
||||
logdata['uid'] = uid
|
||||
log('W24', logdata)
|
||||
return
|
||||
|
||||
user_mandatory = f'/run/dconf/user/{uid}'
|
||||
touch_file(user_mandatory)
|
||||
|
||||
with open(user_mandatory, "w") as f:
|
||||
f.write(content)
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_policies_from_dconf(cls):
|
||||
return cls.get_dictionary_from_dconf(cls._policies_path, cls._policies_win_path)
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_dictionary_from_dconf(self, *startswith_list):
|
||||
output_dict = {}
|
||||
for startswith in startswith_list:
|
||||
dconf_dict = self.get_key_values(self.get_matching_keys(startswith))
|
||||
for key, value in dconf_dict.items():
|
||||
keys_tmp = key.split('/')
|
||||
update_dict(output_dict.setdefault('/'.join(keys_tmp[:-1])[1:], {}), {keys_tmp[-1]: str(value)})
|
||||
|
||||
log('D207')
|
||||
return output_dict
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_dictionary_from_dconf_file_db(self, uid=None, path_bin=None, save_dconf_db=False):
|
||||
logdata = dict()
|
||||
error_skip = None
|
||||
if path_bin:
|
||||
error_skip = True
|
||||
elif not uid:
|
||||
path_bin = self._path_bin_system
|
||||
else:
|
||||
path_bin = self._path_bin_system + str(uid)
|
||||
output_dict = {}
|
||||
try:
|
||||
if (GLib.file_get_contents(path_bin)[0]):
|
||||
bytes1 = GLib.Bytes.new(GLib.file_get_contents(path_bin)[1])
|
||||
table = Gvdb.Table.new_from_bytes(bytes1, True)
|
||||
|
||||
name_list = Gvdb.Table.get_names(table)
|
||||
for name in name_list:
|
||||
value = Gvdb.Table.get_value(table, name)
|
||||
if value is None:
|
||||
continue
|
||||
list_path = name.split('/')
|
||||
if value.is_of_type(GLib.VariantType('s')):
|
||||
part = output_dict.setdefault('/'.join(list_path[1:-1]), {})
|
||||
part[list_path[-1]] = value.get_string()
|
||||
elif value.is_of_type(GLib.VariantType('i')):
|
||||
part = output_dict.setdefault('/'.join(list_path[1:-1]), {})
|
||||
part[list_path[-1]] = value.get_int32()
|
||||
except Exception as exc:
|
||||
logdata['exc'] = exc
|
||||
logdata['path_bin'] = path_bin
|
||||
if not error_skip:
|
||||
log('E73', logdata)
|
||||
else:
|
||||
log('D217', logdata)
|
||||
if save_dconf_db:
|
||||
Dconf_registry._dconf_db = output_dict
|
||||
return output_dict
|
||||
|
||||
|
||||
@classmethod
|
||||
def filter_entries(cls, startswith, registry_dict = None):
|
||||
if not registry_dict:
|
||||
registry_dict = cls.global_registry_dict
|
||||
if startswith[-1] == '%':
|
||||
startswith = startswith[:-1]
|
||||
if startswith[-1] == '/' or startswith[-1] == '\\':
|
||||
startswith = startswith[:-1]
|
||||
return filter_dict_keys(startswith, flatten_dictionary(registry_dict))
|
||||
return filter_dict_keys(startswith, flatten_dictionary(registry_dict))
|
||||
|
||||
|
||||
@classmethod
|
||||
def filter_hklm_entries(cls, startswith):
|
||||
pregs = cls.filter_entries(startswith)
|
||||
list_entiers = list()
|
||||
for keyname, value in pregs.items():
|
||||
if isinstance(value, dict):
|
||||
for valuename, data in value.items():
|
||||
list_entiers.append(PregDconf(
|
||||
keyname, convert_string_dconf(valuename), find_preg_type(data), data))
|
||||
elif isinstance(value, list):
|
||||
for data in value:
|
||||
list_entiers.append(PregDconf(
|
||||
keyname, data, find_preg_type(data), data))
|
||||
else:
|
||||
list_entiers.append(PregDconf(
|
||||
'/'.join(keyname.split('/')[:-1]), convert_string_dconf(keyname.split('/')[-1]), find_preg_type(value), value))
|
||||
|
||||
|
||||
return gplist(list_entiers)
|
||||
|
||||
|
||||
@classmethod
|
||||
def filter_hkcu_entries(cls, sid, startswith):
|
||||
return cls.filter_hklm_entries(startswith)
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_storage(cls,dictionary = None):
|
||||
if dictionary:
|
||||
result = dictionary
|
||||
elif Dconf_registry._gpt_read_flag:
|
||||
result = Dconf_registry.global_registry_dict
|
||||
else:
|
||||
if Dconf_registry.__dconf_dict_flag:
|
||||
result = Dconf_registry.__dconf_dict
|
||||
else:
|
||||
Dconf_registry.__dconf_dict = Dconf_registry.get_policies_from_dconf()
|
||||
result = Dconf_registry.__dconf_dict
|
||||
Dconf_registry.__dconf_dict_flag = True
|
||||
return result
|
||||
|
||||
|
||||
@classmethod
|
||||
def filling_storage_from_dconf(cls):
|
||||
Dconf_registry.global_registry_dict = Dconf_registry.get_storage()
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_entry(cls, path, dictionary = None, preg = True):
|
||||
logdata = dict()
|
||||
result = Dconf_registry.get_storage(dictionary)
|
||||
|
||||
keys = path.split("\\") if "\\" in path else path.split("/")
|
||||
key = '/'.join(keys[:-1]) if keys[0] else '/'.join(keys[:-1])[1:]
|
||||
|
||||
if isinstance(result, dict) and key in result.keys():
|
||||
data = result.get(key).get(keys[-1])
|
||||
return PregDconf(
|
||||
key, convert_string_dconf(keys[-1]), find_preg_type(data), data) if preg else data
|
||||
else:
|
||||
logdata['path'] = path
|
||||
log('D208', logdata)
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def check_enable_key(cls ,key):
|
||||
data = cls.get_entry(key, preg = False)
|
||||
if data:
|
||||
if isinstance(data, str):
|
||||
return True if data in cls._true_strings else False
|
||||
elif isinstance(data, int):
|
||||
return bool(data)
|
||||
else:
|
||||
return False
|
||||
return False
|
||||
|
||||
@classmethod
|
||||
def get_hkcu_entry(cls, sid, hive_key, dictionary = None):
|
||||
return cls.get_hklm_entry(hive_key, dictionary)
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_hklm_entry(cls, hive_key, dictionary = None):
|
||||
return cls.get_entry(hive_key, dictionary)
|
||||
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_shortcut(cls, sid, sc_obj, policy_name):
|
||||
sc_obj.policy_name = policy_name
|
||||
cls.shortcuts.append(sc_obj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_printer(cls, sid, pobj, policy_name):
|
||||
pobj.policy_name = policy_name
|
||||
cls.printers.append(pobj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_drive(cls, sid, dobj, policy_name):
|
||||
dobj.policy_name = policy_name
|
||||
cls.drives.append(dobj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_folder(cls, sid, fobj, policy_name):
|
||||
fobj.policy_name = policy_name
|
||||
cls.folders.append(fobj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_envvar(self, sid, evobj, policy_name):
|
||||
evobj.policy_name = policy_name
|
||||
self.environmentvariables.append(evobj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_script(cls, sid, scrobj, policy_name):
|
||||
scrobj.policy_name = policy_name
|
||||
cls.scripts.append(scrobj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_file(cls, sid, fileobj, policy_name):
|
||||
fileobj.policy_name = policy_name
|
||||
cls.files.append(fileobj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_ini(cls, sid, iniobj, policy_name):
|
||||
iniobj.policy_name = policy_name
|
||||
cls.inifiles.append(iniobj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def add_networkshare(cls, sid, networkshareobj, policy_name):
|
||||
networkshareobj.policy_name = policy_name
|
||||
cls.networkshares.append(networkshareobj)
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_shortcuts(cls, sid):
|
||||
return cls.shortcuts
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_printers(cls, sid):
|
||||
return cls.printers
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_drives(cls, sid):
|
||||
return cls.drives
|
||||
|
||||
@classmethod
|
||||
def get_folders(cls, sid):
|
||||
return cls.folders
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_envvars(cls, sid):
|
||||
return cls.environmentvariables
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_scripts(cls, sid, action):
|
||||
action_scripts = list()
|
||||
for part in cls.scripts:
|
||||
if action == 'LOGON' and part.action == 'LOGON':
|
||||
action_scripts.append(part)
|
||||
elif action == 'LOGOFF' and part.action == 'LOGOFF':
|
||||
action_scripts.append(part)
|
||||
elif action == 'STARTUP' and part.action == 'STARTUP':
|
||||
action_scripts.append(part)
|
||||
elif action == 'SHUTDOWN' and part.action == 'SHUTDOWN':
|
||||
action_scripts.append(part)
|
||||
return action_scripts
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_files(cls, sid):
|
||||
return cls.files
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_networkshare(cls, sid):
|
||||
return cls.networkshares
|
||||
|
||||
|
||||
@classmethod
|
||||
def get_ini(cls, sid):
|
||||
return cls.inifiles
|
||||
|
||||
|
||||
@classmethod
|
||||
def wipe_user(cls, sid):
|
||||
cls.wipe_hklm()
|
||||
|
||||
|
||||
@classmethod
|
||||
def wipe_hklm(cls):
|
||||
cls.global_registry_dict = dict({cls._GpoPriority:{}})
|
||||
|
||||
|
||||
def filter_dict_keys(starting_string, input_dict):
|
||||
result = dict()
|
||||
for key in input_dict:
|
||||
key_list = remove_empty_values(re.split(r'\\|/', key))
|
||||
start_list = remove_empty_values(re.split(r'\\|/', starting_string))
|
||||
if key_list[:len(start_list)] == start_list:
|
||||
result[key] = input_dict.get(key)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def find_preg_type(argument):
|
||||
if isinstance(argument, int):
|
||||
return 4
|
||||
else:
|
||||
return 1
|
||||
|
||||
|
||||
def update_dict(dict1, dict2, save_key=None):
|
||||
'''
|
||||
Updates dict1 with the key-value pairs from dict2
|
||||
'''
|
||||
for key, value in dict2.items():
|
||||
if key in dict1:
|
||||
# If both values are dictionaries, recursively call the update_dict function
|
||||
if isinstance(dict1[key], dict) and isinstance(value, dict):
|
||||
save_key = key
|
||||
update_dict(dict1[key], value, save_key)
|
||||
# If the value in dict1 is a list, extend it with unique values from value
|
||||
elif isinstance(dict1[key], list):
|
||||
dict1[key].extend(set(value) - set(dict1[key]))
|
||||
else:
|
||||
# If the value in dict1 is not a dictionary or the value in dict2 is not a dictionary,
|
||||
# replace the value in dict1 with the value from dict2
|
||||
if save_key and save_key.startswith('Source'):
|
||||
value.reloaded_with_policy_key = [dict1[key].policy_name]
|
||||
if dict1[key].reloaded_with_policy_key:
|
||||
value.reloaded_with_policy_key += dict1[key].reloaded_with_policy_key
|
||||
dict1[key] = value
|
||||
else:
|
||||
dict1[key] = value
|
||||
else:
|
||||
# If the key does not exist in dict1, add the key-value pair from dict2 to dict1
|
||||
dict1[key] = value
|
||||
|
||||
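# Editorial illustration of update_dict() (values below are assumed, not from upstream):
#
#   d1 = {'a': {'x': 1}, 'b': [1, 2]}
#   d2 = {'a': {'y': 2}, 'b': [2, 3], 'c': 5}
#   update_dict(d1, d2)
#   # d1 == {'a': {'x': 1, 'y': 2}, 'b': [1, 2, 3], 'c': 5}
#
# Nested dictionaries are merged recursively, lists are extended only with
# values they do not already contain, and plain values from dict2 overwrite
# those in dict1 (with special bookkeeping for 'Source*' metadata keys).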
|
||||
def add_to_dict(string, username, gpo_info):
|
||||
if gpo_info:
|
||||
counter = gpo_info.counter
|
||||
display_name = gpo_info.display_name
|
||||
name = gpo_info.name
|
||||
version = gpo_info.version
|
||||
else:
|
||||
counter = 0
|
||||
display_name = 'Local Policy'
|
||||
name = None
|
||||
version = None
|
||||
|
||||
if username is None or username == 'Machine':
|
||||
machine= '{}/Machine/{}'.format(Dconf_registry._GpoPriority, counter)
|
||||
dictionary = Dconf_registry.global_registry_dict.setdefault(machine, dict())
|
||||
else:
|
||||
if name in Dconf_registry._gpo_name:
|
||||
return
|
||||
user = '{}/User/{}'.format(Dconf_registry._GpoPriority, counter)
|
||||
dictionary = Dconf_registry.global_registry_dict.setdefault(user, dict())
|
||||
Dconf_registry._gpo_name.add(name)
|
||||
|
||||
dictionary['display_name'] = display_name
|
||||
dictionary['name'] = name
|
||||
dictionary['version'] = str(version)
|
||||
dictionary['correct_path'] = string
|
||||
|
||||
def get_mod_previous_value(key_source, key_valuename):
|
||||
previous_sourc = try_dict_to_literal_eval(Dconf_registry._dconf_db
|
||||
.get(key_source, {})
|
||||
.get(key_valuename, {}))
|
||||
return previous_sourc.get('mod_previous_value') if previous_sourc else None
|
||||
|
||||
def get_previous_value(key_source, key_valuename):
|
||||
previous = key_source.replace('Source', 'Previous')
|
||||
return (Dconf_registry._dconf_db
|
||||
.get(previous, {})
|
||||
.get(key_valuename, None))
|
||||
|
||||
def load_preg_dconf(pregfile, pathfile, policy_name, username, gpo_info):
|
||||
'''
|
||||
Loads the configuration from preg registry into a dictionary
|
||||
'''
|
||||
# Prefix for storing key data
|
||||
source_pre = "Source"
|
||||
dd = dict()
|
||||
for i in pregfile.entries:
|
||||
# Skip this entry if the valuename starts with '**del'
|
||||
if i.valuename.lower().startswith('**del'):
|
||||
continue
|
||||
valuename = convert_string_dconf(i.valuename)
|
||||
data = check_data(i.data, i.type)
|
||||
if i.valuename != i.data and i.valuename:
|
||||
key_registry_source = f"{source_pre}/{i.keyname}".replace('\\', '/')
|
||||
key_registry = f"{i.keyname}".replace('\\', '/')
|
||||
key_valuename = valuename.replace('\\', '/')
|
||||
if i.keyname.replace('\\', '/') in dd:
|
||||
# If the key exists in dd, update its value with the new key-value pair
|
||||
dd[i.keyname.replace('\\', '/')].update({key_valuename:data})
|
||||
mod_previous_value = get_mod_previous_value(key_registry_source, key_valuename)
|
||||
previous_value = get_previous_value(key_registry, key_valuename)
|
||||
if previous_value != data:
|
||||
(dd[key_registry_source]
|
||||
.update({key_valuename:RegistryKeyMetadata(policy_name, i.type, mod_previous_value=previous_value)}))
|
||||
else:
|
||||
(dd[key_registry_source]
|
||||
.update({key_valuename:RegistryKeyMetadata(policy_name, i.type, mod_previous_value=mod_previous_value)}))
|
||||
else:
|
||||
# If the key does not exist in dd, create a new key-value pair
|
||||
dd[i.keyname.replace('\\', '/')] = {key_valuename:data}
|
||||
mod_previous_value = get_mod_previous_value(key_registry_source, key_valuename)
|
||||
previous_value = get_previous_value(key_registry, key_valuename)
|
||||
if previous_value != data:
|
||||
dd[key_registry_source] = {key_valuename:RegistryKeyMetadata(policy_name, i.type, mod_previous_value=previous_value)}
|
||||
else:
|
||||
dd[key_registry_source] = {key_valuename:RegistryKeyMetadata(policy_name, i.type, mod_previous_value=mod_previous_value)}
|
||||
|
||||
elif not i.valuename:
|
||||
keyname_tmp = i.keyname.replace('\\', '/').split('/')
|
||||
keyname = '/'.join(keyname_tmp[:-1])
|
||||
mod_previous_value = get_mod_previous_value(f"{source_pre}/{keyname}", keyname_tmp[-1])
|
||||
previous_value = get_previous_value(f"{keyname}", keyname_tmp[-1])
|
||||
if keyname in dd:
|
||||
# If the key exists in dd, update its value with the new key-value pair
|
||||
dd[keyname].update({keyname_tmp[-1]:data})
|
||||
if previous_value != data:
|
||||
dd[f"{source_pre}/{keyname}"].update({keyname_tmp[-1]:RegistryKeyMetadata(policy_name, i.type, mod_previous_value=previous_value)})
|
||||
else:
|
||||
dd[f"{source_pre}/{keyname}"].update({keyname_tmp[-1]:RegistryKeyMetadata(policy_name, i.type, mod_previous_value=mod_previous_value)})
|
||||
else:
|
||||
# If the key does not exist in dd, create a new key-value pair
|
||||
dd[keyname] = {keyname_tmp[-1]:data}
|
||||
if previous_value != data:
|
||||
dd[f"{source_pre}/{keyname}"] = {keyname_tmp[-1]:RegistryKeyMetadata(policy_name, i.type, mod_previous_value=previous_value)}
|
||||
else:
|
||||
dd[f"{source_pre}/{keyname}"] = {keyname_tmp[-1]:RegistryKeyMetadata(policy_name, i.type, mod_previous_value=mod_previous_value)}
|
||||
|
||||
else:
|
||||
# If the value name is the same as the data,
|
||||
# split the keyname and add the data to the appropriate location in dd.
|
||||
all_list_key = i.keyname.split('\\')
|
||||
key_d ='/'.join(all_list_key[:-1])
|
||||
dd_target = dd.setdefault(key_d,{})
|
||||
key_source = f"Source/{key_d}"
|
||||
dd_target_source = dd.setdefault(key_source, {})
|
||||
data_list = dd_target.setdefault(all_list_key[-1], []).append(data)
|
||||
mod_previous_value = get_mod_previous_value(key_source, all_list_key[-1])
|
||||
previous_value = get_previous_value(key_d, all_list_key[-1])
|
||||
if previous_value != str(data_list):
|
||||
dd_target_source[all_list_key[-1]] = RegistryKeyMetadata(policy_name, i.type, is_list=True, mod_previous_value=previous_value)
|
||||
else:
|
||||
dd_target_source[all_list_key[-1]] = RegistryKeyMetadata(policy_name, i.type, is_list=True, mod_previous_value=mod_previous_value)
|
||||
|
||||
# Update the global registry dictionary with the contents of dd
|
||||
update_dict(Dconf_registry.global_registry_dict, dd)
|
||||
|
||||
|
||||
def create_dconf_ini_file(filename, data, uid=None, nodomain=None):
|
||||
'''
|
||||
Create an ini-file based on a dictionary of dictionaries.
|
||||
Args:
|
||||
data (dict): The dictionary of dictionaries containing the data for the ini-file.
|
||||
filename (str): The filename to save the ini-file.
|
||||
Returns:
|
||||
None
|
||||
Raises:
|
||||
None
|
||||
'''
|
||||
with open(filename, 'a' if nodomain else 'w') as file:
|
||||
for section, section_data in data.items():
|
||||
file.write(f'[{section}]\n')
|
||||
for key, value in section_data.items():
|
||||
if isinstance(value, int):
|
||||
file.write(f'{key} = {value}\n')
|
||||
else:
|
||||
file.write(f'{key} = "{value}"\n')
|
||||
file.write('\n')
|
||||
logdata = dict()
|
||||
logdata['path'] = filename
|
||||
log('D209', logdata)
|
||||
create_dconf_file_locks(filename, data)
|
||||
Dconf_registry.dconf_update(uid)
|
||||
|
||||
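# Editorial illustration of the file produced above (section and values assumed):
#
#   data = {'Software/BaseALT/Policies/Packages': {'Install': "['vim']", 'Timeout': 30}}
#
# is rendered as:
#
#   [Software/BaseALT/Policies/Packages]
#   Install = "['vim']"
#   Timeout = 30
#
# Integer values are written bare, everything else is quoted; afterwards the
# matching locks file is generated and dconf_update() recompiles the binary db.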
|
||||
def create_dconf_file_locks(filename_ini, data):
|
||||
"""
|
||||
Creates a dconf lock file based on the provided filename and data.
|
||||
|
||||
:param filename_ini: Path to the ini file (str)
|
||||
:param data: Dictionary containing configuration data
|
||||
"""
|
||||
# Extract the path parts up to the directory of the ini file
|
||||
tmp_lock = filename_ini.split('/')[:-1]
|
||||
|
||||
# Construct the path to the lock file
|
||||
file_lock = '/'.join(tmp_lock + ['locks', tmp_lock[-1][:-1] + 'pol'])
|
||||
|
||||
# Create an empty lock file
|
||||
touch_file(file_lock)
|
||||
|
||||
# Open the lock file for writing
|
||||
with open(file_lock, 'w') as file:
|
||||
# Iterate over all lock keys obtained from the data
|
||||
for key_lock in get_keys_dconf_locks(data):
|
||||
# Remove the "lock/" prefix from the key and split into parts
|
||||
key = key_lock.split('/')[1:]
|
||||
# Write the cleaned key to the lock file
|
||||
file.write(f'{key}\n')
|
||||
|
||||
def get_keys_dconf_locks(data):
|
||||
"""
|
||||
Extracts keys from the provided data that start with "Locks/"
|
||||
and have a value of 1.
|
||||
|
||||
:param data: Dictionary containing configuration data
|
||||
:return: List of lock keys (str) without the "Locks/" prefix
|
||||
"""
|
||||
result = []
|
||||
# Flatten the nested dictionary into a single-level dictionary
|
||||
flatten_data = flatten_dictionary(data)
|
||||
|
||||
# Iterate through all keys in the flattened dictionary
|
||||
for key in flatten_data:
|
||||
# Check if the key starts with "Locks/" and its value is 1
|
||||
if key.startswith('Locks/') and flatten_data[key] == 1:
|
||||
# Remove the "Locks" prefix and append to the result
|
||||
result.append(key.removeprefix('Locks'))
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def check_data(data, t_data):
|
||||
if isinstance(data, bytes):
|
||||
if t_data == 7:
|
||||
return clean_data(data.decode('utf-16').replace('\x00',''))
|
||||
else:
|
||||
return None
|
||||
elif t_data == 4:
|
||||
return data
|
||||
return clean_data(data)
|
||||
|
||||
def convert_string_dconf(input_string):
|
||||
macros = {
|
||||
'#': '%sharp%',
|
||||
';': '%semicolon%',
|
||||
'//': '%doubleslash%',
|
||||
'/': '%oneslash%'
|
||||
}
|
||||
output_string = input_string
|
||||
for key, value in macros.items():
|
||||
if key in input_string:
|
||||
output_string = input_string.replace(key, value)
|
||||
elif value in input_string:
|
||||
output_string = input_string.replace(value, key)
|
||||
|
||||
return output_string
|
||||
|
||||
def remove_empty_values(input_list):
|
||||
return list(filter(None, input_list))
|
||||
|
||||
def flatten_dictionary(input_dict, result=None, current_key=''):
|
||||
if result is None:
|
||||
result = {}
|
||||
|
||||
for key, value in input_dict.items():
|
||||
new_key = f"{current_key}/{key}" if current_key else key
|
||||
|
||||
if isinstance(value, dict):
|
||||
flatten_dictionary(value, result, new_key)
|
||||
else:
|
||||
result[new_key] = value
|
||||
|
||||
return result
|
||||
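# Editorial illustration (keys assumed): flatten_dictionary() joins nested keys
# with '/', which is the form filter_dict_keys()/filter_entries() operate on:
#
#   flatten_dictionary({'Software': {'BaseALT': {'Enabled': 1}}})
#   # -> {'Software/BaseALT/Enabled': 1}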
|
||||
def get_dconf_envprofile():
|
||||
dconf_envprofile = {'default': {'DCONF_PROFILE': 'default'},
|
||||
'local': {'DCONF_PROFILE': 'local'},
|
||||
'system': {'DCONF_PROFILE': 'system'}
|
||||
}
|
||||
|
||||
if Dconf_registry._envprofile:
|
||||
return dconf_envprofile.get(Dconf_registry._envprofile, dconf_envprofile['system'])
|
||||
|
||||
if not Dconf_registry._username:
|
||||
return dconf_envprofile['system']
|
||||
|
||||
profile = '/run/dconf/user/{}'.format(get_uid_by_username(Dconf_registry._username))
|
||||
return {'DCONF_PROFILE': profile}
|
||||
|
||||
|
||||
def convert_elements_to_list_dicts(elements):
|
||||
return list(map(lambda x: dict(x), elements))
|
||||
|
||||
def remove_duplicate_dicts_in_list(list_dict):
|
||||
return convert_elements_to_list_dicts(list(OrderedDict((tuple(sorted(d.items())), d) for d in list_dict).values()))
|
||||
|
||||
def add_preferences_to_global_registry_dict(username, is_machine):
|
||||
if is_machine:
|
||||
prefix = 'Software/BaseALT/Policies/Preferences/Machine'
|
||||
else:
|
||||
prefix = f'Software/BaseALT/Policies/Preferences/{username}'
|
||||
|
||||
preferences_global = [('Shortcuts',remove_duplicate_dicts_in_list(Dconf_registry.shortcuts)),
|
||||
('Folders',remove_duplicate_dicts_in_list(Dconf_registry.folders)),
|
||||
('Files',remove_duplicate_dicts_in_list(Dconf_registry.files)),
|
||||
('Drives',remove_duplicate_dicts_in_list(Dconf_registry.drives)),
|
||||
('Scheduledtasks',remove_duplicate_dicts_in_list(Dconf_registry.scheduledtasks)),
|
||||
('Environmentvariables',remove_duplicate_dicts_in_list(Dconf_registry.environmentvariables)),
|
||||
('Inifiles',remove_duplicate_dicts_in_list(Dconf_registry.inifiles)),
|
||||
('Services',remove_duplicate_dicts_in_list(Dconf_registry.services)),
|
||||
('Printers',remove_duplicate_dicts_in_list(Dconf_registry.printers)),
|
||||
('Scripts',remove_duplicate_dicts_in_list(Dconf_registry.scripts)),
|
||||
('Networkshares',remove_duplicate_dicts_in_list(Dconf_registry.networkshares))]
|
||||
|
||||
preferences_global_dict = dict()
|
||||
preferences_global_dict[prefix] = dict()
|
||||
|
||||
for key, val in preferences_global:
|
||||
preferences_global_dict[prefix].update({key:clean_data(str(val))})
|
||||
|
||||
update_dict(Dconf_registry.global_registry_dict, preferences_global_dict)
|
||||
|
||||
def extract_display_name_version(data, username):
|
||||
policy_force = data.get('Software/BaseALT/Policies/GPUpdate', {}).get('Force', False)
|
||||
if Dconf_registry._force or policy_force:
|
||||
logdata = dict({'username': username})
|
||||
log('W26', logdata)
|
||||
return {}
|
||||
result = {}
|
||||
tmp = {}
|
||||
if isinstance(data, dict):
|
||||
for key in data.keys():
|
||||
if key.startswith(Dconf_registry._GpoPriority+'/'):
|
||||
tmp[key] = data[key]
|
||||
for value in tmp.values():
|
||||
if isinstance(value, dict) and value.get('version', 'None')!='None' and value.get('display_name'):
|
||||
result[value['display_name']] = {'version': value['version'], 'correct_path': value['correct_path']}
|
||||
Dconf_registry._dict_gpo_name_version_cache = result
|
||||
return result
|
@@ -1,7 +1,7 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2021 BaseALT Ltd. <org@basealt.ru>
|
||||
# Copyright (C) 2021-2024 BaseALT Ltd. <org@basealt.ru>
|
||||
# Copyright (C) 2021 Igor Chudov <nir@nir.org.ru>
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
@@ -19,35 +19,48 @@
|
||||
|
||||
import os
|
||||
import os.path
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
import smbc
|
||||
|
||||
|
||||
from util.logging import log
|
||||
from util.paths import file_cache_dir, UNCPath
|
||||
from util.paths import file_cache_dir, file_cache_path_home, UNCPath
|
||||
from util.exceptions import NotUNCPathError
|
||||
|
||||
from util.util import get_machine_name
|
||||
|
||||
class fs_file_cache:
|
||||
__read_blocksize = 4096
|
||||
|
||||
def __init__(self, cache_name):
|
||||
def __init__(self, cache_name, username = None):
|
||||
self.cache_name = cache_name
|
||||
self.storage_uri = file_cache_dir()
|
||||
self.username = username
|
||||
if username and username != get_machine_name():
|
||||
try:
|
||||
self.storage_uri = file_cache_path_home(username)
|
||||
except:
|
||||
self.storage_uri = file_cache_dir()
|
||||
else:
|
||||
self.storage_uri = file_cache_dir()
|
||||
logdata = dict({'cache_file': self.storage_uri})
|
||||
log('D20', logdata)
|
||||
self.samba_context = smbc.Context(use_kerberos=1)
|
||||
#, debug=10)
|
||||
|
||||
def store(self, uri):
|
||||
destdir = uri
|
||||
def store(self, uri, destfile = None):
|
||||
try:
|
||||
uri_path = UNCPath(uri)
|
||||
file_name = os.path.basename(uri_path.get_path())
|
||||
file_path = os.path.dirname(uri_path.get_path())
|
||||
destdir = Path('{}/{}/{}'.format(self.storage_uri,
|
||||
uri_path.get_domain(),
|
||||
file_path))
|
||||
if not destfile:
|
||||
file_name = os.path.basename(uri_path.get_path())
|
||||
file_path = os.path.dirname(uri_path.get_path())
|
||||
destdir = Path('{}/{}/{}'.format(self.storage_uri,
|
||||
uri_path.get_domain(),
|
||||
file_path))
|
||||
else:
|
||||
destdir = destfile.parent
|
||||
except NotUNCPathError:
|
||||
return None
|
||||
|
||||
except Exception as exc:
|
||||
logdata = dict({'exception': str(exc)})
|
||||
log('D144', logdata)
|
||||
@@ -56,20 +69,29 @@ class fs_file_cache:
|
||||
if not destdir.exists():
|
||||
destdir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
destfile = Path('{}/{}/{}'.format(self.storage_uri,
|
||||
uri_path.get_domain(),
|
||||
uri_path.get_path()))
|
||||
if not destfile:
|
||||
destfile = Path('{}/{}/{}'.format(self.storage_uri,
|
||||
uri_path.get_domain(),
|
||||
uri_path.get_path()))
|
||||
|
||||
with open(destfile, 'wb') as df:
|
||||
df.truncate()
|
||||
df.flush()
|
||||
try:
|
||||
fd, tmpfile = tempfile.mkstemp('', str(destfile))
|
||||
df = os.fdopen(fd, 'wb')
|
||||
file_handler = self.samba_context.open(str(uri_path), os.O_RDONLY)
|
||||
while True:
|
||||
data = file_handler.read(self.__read_blocksize)
|
||||
if not data:
|
||||
break
|
||||
df.write(data)
|
||||
df.flush()
|
||||
df.close()
|
||||
os.rename(tmpfile, destfile)
|
||||
os.chmod(destfile, 0o644)
|
||||
except Exception as exc:
|
||||
logdata = dict({'exception': str(exc)})
|
||||
log('W25', logdata)
|
||||
tmppath = Path(tmpfile)
|
||||
if tmppath.exists():
|
||||
tmppath.unlink()
|
||||
|
||||
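# Editorial summary of the rewritten store() above: the SMB payload is streamed
# into a tempfile.mkstemp() sibling of the destination and moved into place with
# os.rename(), so an interrupted download never overwrites a previously cached
# copy; on failure the temporary file is unlinked and a warning (W25) is logged.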
def get(self, uri):
|
||||
destfile = uri
|
||||
@@ -87,8 +109,10 @@ class fs_file_cache:
|
||||
logdata = dict({'exception': str(exc)})
|
||||
log('E36', logdata)
|
||||
raise exc
|
||||
|
||||
return str(destfile)
|
||||
if Path(destfile).exists():
|
||||
return str(destfile)
|
||||
else:
|
||||
return None
|
||||
|
||||
def get_ls_smbdir(self, uri):
|
||||
type_file_smb = 8
|
||||
|
@@ -1,260 +0,0 @@
|
||||
#
|
||||
# GPOA - GPO Applier for Linux
|
||||
#
|
||||
# Copyright (C) 2019-2020 BaseALT Ltd.
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
class samba_preg(object):
|
||||
'''
|
    Object mapping representing HKLM entry (registry key without SID)
    '''
    def __init__(self, preg_obj, policy_name):
        self.policy_name = policy_name
        self.keyname = preg_obj.keyname
        self.valuename = preg_obj.valuename
        self.hive_key = '{}\\{}'.format(self.keyname, self.valuename)
        self.type = preg_obj.type
        self.data = preg_obj.data

    def update_fields(self):
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['type'] = self.type
        fields['data'] = self.data

        return fields


class samba_hkcu_preg(object):
    '''
    Object mapping representing HKCU entry (registry key with SID)
    '''
    def __init__(self, sid, preg_obj, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.keyname = preg_obj.keyname
        self.valuename = preg_obj.valuename
        self.hive_key = '{}\\{}'.format(self.keyname, self.valuename)
        self.type = preg_obj.type
        self.data = preg_obj.data

    def update_fields(self):
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['type'] = self.type
        fields['data'] = self.data

        return fields


class ad_shortcut(object):
    '''
    Object mapping representing Windows shortcut.
    '''
    def __init__(self, sid, sc, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.path = sc.dest
        self.shortcut = sc.to_json()

    def update_fields(self):
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['path'] = self.path
        fields['shortcut'] = self.shortcut

        return fields


class info_entry(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def update_fields(self):
        fields = dict()
        fields['value'] = self.value

        return fields


class printer_entry(object):
    '''
    Object mapping representing Windows printer of some type.
    '''
    def __init__(self, sid, pobj, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.name = pobj.name
        self.printer = pobj.to_json()

    def update_fields(self):
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['name'] = self.name
        # self.printer already holds the JSON string produced in __init__
        fields['printer'] = self.printer

        return fields


class drive_entry(object):
    '''
    Object mapping representing Samba share bound to drive letter
    '''
    def __init__(self, sid, dobj, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.login = dobj.login
        self.password = dobj.password
        self.dir = dobj.dir
        self.path = dobj.path

    def update_fields(self):
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['login'] = self.login
        fields['password'] = self.password
        fields['dir'] = self.dir
        fields['path'] = self.path

        return fields


class folder_entry(object):
    '''
    Object mapping representing file system directory
    '''
    def __init__(self, sid, fobj, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.path = fobj.path
        self.action = fobj.action.value
        self.delete_folder = str(fobj.delete_folder)
        self.delete_sub_folders = str(fobj.delete_sub_folders)
        self.delete_files = str(fobj.delete_files)

    def update_fields(self):
        '''
        Return list of fields to update
        '''
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['action'] = self.action
        fields['delete_folder'] = self.delete_folder
        fields['delete_sub_folders'] = self.delete_sub_folders
        fields['delete_files'] = self.delete_files

        return fields


class envvar_entry(object):
    '''
    Object mapping representing environment variables
    '''
    def __init__(self, sid, evobj, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.name = evobj.name
        self.value = evobj.value
        self.action = evobj.action.value

    def update_fields(self):
        '''
        Return list of fields to update
        '''
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['action'] = self.action
        fields['value'] = self.value

        return fields


class script_entry(object):
    '''
    Object mapping representing scripts.ini
    '''
    def __init__(self, sid, scrobj, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.action = scrobj.action
        self.number = scrobj.number
        self.path = scrobj.path
        self.arg = scrobj.args

    def update_fields(self):
        '''
        Return list of fields to update
        '''
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['action'] = self.action
        fields['number'] = self.number
        fields['path'] = self.path
        fields['arg'] = self.arg

        return fields


class file_entry(object):
    '''
    Object mapping representing FILES.XML
    '''
    def __init__(self, sid, scrobj, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.action = scrobj.action
        self.fromPath = scrobj.fromPath
        self.targetPath = scrobj.targetPath
        self.readOnly = scrobj.readOnly
        self.archive = scrobj.archive
        self.hidden = scrobj.hidden
        self.suppress = scrobj.suppress

    def update_fields(self):
        '''
        Return list of fields to update
        '''
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['action'] = self.action
        fields['fromPath'] = self.fromPath
        fields['targetPath'] = self.targetPath
        fields['readOnly'] = self.readOnly
        fields['archive'] = self.archive
        fields['hidden'] = self.hidden
        fields['suppress'] = self.suppress

        return fields


class ini_entry(object):
    '''
    Object mapping representing INIFILES.XML
    '''
    def __init__(self, sid, iniobj, policy_name):
        self.sid = sid
        self.policy_name = policy_name
        self.action = iniobj.action
        self.path = iniobj.path
        self.section = iniobj.section
        self.property = iniobj.property
        self.value = iniobj.value

    def update_fields(self):
        '''
        Return list of fields to update
        '''
        fields = dict()
        fields['policy_name'] = self.policy_name
        fields['action'] = self.action
        fields['path'] = self.path
        fields['section'] = self.section
        fields['property'] = self.property
        fields['value'] = self.value

        return fields
@@ -1,101 +0,0 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from .cache import cache

import os

from sqlalchemy import (
    create_engine,
    Table,
    Column,
    Integer,
    String,
    MetaData
)
from sqlalchemy.orm import (
    mapper,
    sessionmaker
)

from util.logging import log
from util.paths import cache_dir


def mapping_factory(mapper_suffix):
    exec(
'''
class mapped_id_{}(object):
    def __init__(self, str_id, value):
        self.str_id = str_id
        self.value = str(value)
'''.format(mapper_suffix)
    )
    return eval('mapped_id_{}'.format(mapper_suffix))


class sqlite_cache(cache):
    def __init__(self, cache_name):
        self.cache_name = cache_name
        self.mapper_obj = mapping_factory(self.cache_name)
        self.storage_uri = os.path.join('sqlite:///{}/{}.sqlite'.format(cache_dir(), self.cache_name))
        logdata = dict({'cache_file': self.storage_uri})
        log('D20', logdata)
        self.db_cnt = create_engine(self.storage_uri, echo=False)
        self.__metadata = MetaData(self.db_cnt)
        self.cache_table = Table(
            self.cache_name,
            self.__metadata,
            Column('id', Integer, primary_key=True),
            Column('str_id', String(65536), unique=True),
            Column('value', String)
        )

        self.__metadata.create_all(self.db_cnt)
        Session = sessionmaker(bind=self.db_cnt)
        self.db_session = Session()
        mapper(self.mapper_obj, self.cache_table)

    def store(self, str_id, value):
        obj = self.mapper_obj(str_id, value)
        self._upsert(obj)

    def get(self, obj_id):
        result = self.db_session.query(self.mapper_obj).filter(self.mapper_obj.str_id == obj_id).first()
        return result

    def get_default(self, obj_id, default_value):
        result = self.get(obj_id)
        if result == None:
            logdata = dict()
            logdata['object'] = obj_id
            log('D43', logdata)
            self.store(obj_id, default_value)
            return str(default_value)
        return result.value

    def _upsert(self, obj):
        try:
            self.db_session.add(obj)
            self.db_session.commit()
        except Exception as exc:
            self.db_session.rollback()
            logdata = dict()
            logdata['msg'] = str(exc)
            log('D44', logdata)
            self.db_session.query(self.mapper_obj).filter(self.mapper_obj.str_id == obj.str_id).update({ 'value': obj.value })
            self.db_session.commit()
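For orientation, a minimal usage sketch of the cache API removed above; this is illustrative only and not part of the diff, and the cache name and values are made up.

# Illustrative sketch of the removed sqlite_cache API; 'sid_cache' is a hypothetical name.
cache = sqlite_cache('sid_cache')
cache.store('user1', 'S-1-5-21-0-0-0-1001')        # insert, or update on conflict
sid = cache.get_default('user2', 'unknown-sid')    # read, storing the default on a miss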
@@ -1,576 +0,0 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import os

from sqlalchemy import (
    create_engine,
    Table,
    Column,
    Integer,
    String,
    MetaData,
    UniqueConstraint
)
from sqlalchemy.orm import (
    mapper,
    sessionmaker
)

from util.logging import log
from util.paths import cache_dir
from .registry import registry
from .record_types import (
    samba_preg
    , samba_hkcu_preg
    , ad_shortcut
    , info_entry
    , printer_entry
    , drive_entry
    , folder_entry
    , envvar_entry
    , script_entry
    , file_entry
    , ini_entry
)


class sqlite_registry(registry):
    def __init__(self, db_name, registry_cache_dir=None):
        self.db_name = db_name
        cdir = registry_cache_dir
        if cdir == None:
            cdir = cache_dir()
        self.db_path = os.path.join('sqlite:///{}/{}.sqlite'.format(cdir, self.db_name))
        self.db_cnt = create_engine(self.db_path, echo=False)
        self.__metadata = MetaData(self.db_cnt)
        self.__info = Table(
            'info',
            self.__metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String(65536), unique=True),
            Column('value', String(65536))
        )
        self.__hklm = Table(
            'HKLM'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('hive_key', String(65536, collation='NOCASE'), unique=True)
            , Column('keyname', String(collation='NOCASE'))
            , Column('valuename', String(collation='NOCASE'))
            , Column('policy_name', String)
            , Column('type', Integer)
            , Column('data', String)
        )
        self.__hkcu = Table(
            'HKCU'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('hive_key', String(65536, collation='NOCASE'))
            , Column('keyname', String(collation='NOCASE'))
            , Column('valuename', String(collation='NOCASE'))
            , Column('policy_name', String)
            , Column('type', Integer)
            , Column('data', String)
            , UniqueConstraint('sid', 'hive_key')
        )
        self.__shortcuts = Table(
            'Shortcuts'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('path', String)
            , Column('policy_name', String)
            , Column('shortcut', String)
            , UniqueConstraint('sid', 'path')
        )
        self.__printers = Table(
            'Printers'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('name', String)
            , Column('policy_name', String)
            , Column('printer', String)
            , UniqueConstraint('sid', 'name')
        )
        self.__drives = Table(
            'Drives'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('login', String)
            , Column('password', String)
            , Column('dir', String)
            , Column('policy_name', String)
            , Column('path', String)
            , UniqueConstraint('sid', 'dir')
        )
        self.__folders = Table(
            'Folders'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('path', String)
            , Column('policy_name', String)
            , Column('action', String)
            , Column('delete_folder', String)
            , Column('delete_sub_folders', String)
            , Column('delete_files', String)
            , UniqueConstraint('sid', 'path')
        )
        self.__envvars = Table(
            'Envvars'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('name', String)
            , Column('policy_name', String)
            , Column('action', String)
            , Column('value', String)
            , UniqueConstraint('sid', 'name')
        )
        self.__scripts = Table(
            'Scripts'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('policy_name', String)
            , Column('number', String)
            , Column('action', String)
            , Column('path', String)
            , Column('arg', String)
            , UniqueConstraint('sid', 'path', 'arg')
        )
        self.__files = Table(
            'Files'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('policy_name', String)
            , Column('action', String)
            , Column('fromPath', String)
            , Column('targetPath', String)
            , Column('readOnly', String)
            , Column('archive', String)
            , Column('hidden', String)
            , Column('suppress', String)
            , UniqueConstraint('sid', 'policy_name', 'targetPath', 'fromPath')
        )
        self.__ini = Table(
            'Ini'
            , self.__metadata
            , Column('id', Integer, primary_key=True)
            , Column('sid', String)
            , Column('policy_name', String)
            , Column('action', String)
            , Column('path', String)
            , Column('section', String)
            , Column('property', String)
            , Column('value', String)
            , UniqueConstraint('sid', 'action', 'path', 'section', 'property', 'value')
        )

        self.__metadata.create_all(self.db_cnt)
        Session = sessionmaker(bind=self.db_cnt)
        self.db_session = Session()
        try:
            mapper(info_entry, self.__info)
            mapper(samba_preg, self.__hklm)
            mapper(samba_hkcu_preg, self.__hkcu)
            mapper(ad_shortcut, self.__shortcuts)
            mapper(printer_entry, self.__printers)
            mapper(drive_entry, self.__drives)
            mapper(folder_entry, self.__folders)
            mapper(envvar_entry, self.__envvars)
            mapper(script_entry, self.__scripts)
            mapper(file_entry, self.__files)
            mapper(ini_entry, self.__ini)
        except:
            pass
            #logging.error('Error creating mapper')

    def _add(self, row):
        try:
            self.db_session.add(row)
            self.db_session.commit()
        except Exception as exc:
            self.db_session.rollback()
            raise exc

    def _info_upsert(self, row):
        try:
            self._add(row)
        except:
            (self
             .db_session.query(info_entry)
             .filter(info_entry.name == row.name)
             .update(row.update_fields()))
            self.db_session.commit()

    def _hklm_upsert(self, row):
        try:
            self._add(row)
        except:
            (self
             .db_session
             .query(samba_preg)
             .filter(samba_preg.hive_key == row.hive_key)
             .update(row.update_fields()))
            self.db_session.commit()

    def _hkcu_upsert(self, row):
        try:
            self._add(row)
        except Exception as exc:
            (self
             .db_session
             .query(samba_hkcu_preg)
             .filter(samba_hkcu_preg.sid == row.sid)
             .filter(samba_hkcu_preg.hive_key == row.hive_key)
             .update(row.update_fields()))
            self.db_session.commit()

    def _shortcut_upsert(self, row):
        try:
            self._add(row)
        except:
            (self
             .db_session
             .query(ad_shortcut)
             .filter(ad_shortcut.sid == row.sid)
             .filter(ad_shortcut.path == row.path)
             .update(row.update_fields()))
            self.db_session.commit()

    def _printer_upsert(self, row):
        try:
            self._add(row)
        except:
            (self
             .db_session
             .query(printer_entry)
             .filter(printer_entry.sid == row.sid)
             .filter(printer_entry.name == row.name)
             .update(row.update_fields()))
            self.db_session.commit()

    def _drive_upsert(self, row):
        try:
            self._add(row)
        except:
            (self
             .db_session
             .query(drive_entry)
             .filter(drive_entry.sid == row.sid)
             .filter(drive_entry.dir == row.dir)
             .update(row.update_fields()))
            self.db_session.commit()

    def set_info(self, name, value):
        ientry = info_entry(name, value)
        logdata = dict()
        logdata['varname'] = name
        logdata['value'] = value
        log('D19', logdata)
        self._info_upsert(ientry)

    def _delete_hklm_keyname(self, keyname):
        '''
        Delete PReg hive_key from HKEY_LOCAL_MACHINE
        '''
        logdata = dict({'keyname': keyname})
        try:
            (self
             .db_session
             .query(samba_preg)
             .filter(samba_preg.keyname == keyname)
             .delete(synchronize_session=False))
            self.db_session.commit()
            log('D65', logdata)
        except Exception as exc:
            log('D63', logdata)

    def add_hklm_entry(self, preg_entry, policy_name):
        '''
        Write PReg entry to HKEY_LOCAL_MACHINE
        '''
        pentry = samba_preg(preg_entry, policy_name)
        if not pentry.valuename.startswith('**'):
            self._hklm_upsert(pentry)
        else:
            logdata = dict({'key': pentry.hive_key})
            if pentry.valuename.lower() == '**delvals.':
                self._delete_hklm_keyname(pentry.keyname)
            else:
                log('D27', logdata)

    def _delete_hkcu_keyname(self, keyname, sid):
        '''
        Delete PReg hive_key from HKEY_CURRENT_USER
        '''
        logdata = dict({'sid': sid, 'keyname': keyname})
        try:
            (self
             .db_session
             .query(samba_hkcu_preg)
             .filter(samba_hkcu_preg.sid == sid)
             .filter(samba_hkcu_preg.keyname == keyname)
             .delete(synchronize_session=False))
            self.db_session.commit()
            log('D66', logdata)
        except:
            log('D64', logdata)

    def add_hkcu_entry(self, preg_entry, sid, policy_name):
        '''
        Write PReg entry to HKEY_CURRENT_USER
        '''
        hkcu_pentry = samba_hkcu_preg(sid, preg_entry, policy_name)
        logdata = dict({'sid': sid, 'policy': policy_name, 'key': hkcu_pentry.hive_key})
        if not hkcu_pentry.valuename.startswith('**'):
            log('D26', logdata)
            self._hkcu_upsert(hkcu_pentry)
        else:
            if hkcu_pentry.valuename.lower() == '**delvals.':
                self._delete_hkcu_keyname(hkcu_pentry.keyname, sid)
            else:
                log('D51', logdata)

    def add_shortcut(self, sid, sc_obj, policy_name):
        '''
        Store shortcut information in the database
        '''
        sc_entry = ad_shortcut(sid, sc_obj, policy_name)
        logdata = dict()
        logdata['link'] = sc_entry.path
        logdata['sid'] = sid
        log('D41', logdata)
        self._shortcut_upsert(sc_entry)

    def add_printer(self, sid, pobj, policy_name):
        '''
        Store printer configuration in the database
        '''
        prn_entry = printer_entry(sid, pobj, policy_name)
        logdata = dict()
        logdata['printer'] = prn_entry.name
        logdata['sid'] = sid
        log('D40', logdata)
        self._printer_upsert(prn_entry)

    def add_drive(self, sid, dobj, policy_name):
        drv_entry = drive_entry(sid, dobj, policy_name)
        logdata = dict()
        logdata['uri'] = drv_entry.path
        logdata['sid'] = sid
        log('D39', logdata)
        self._drive_upsert(drv_entry)

    def add_folder(self, sid, fobj, policy_name):
        fld_entry = folder_entry(sid, fobj, policy_name)
        logdata = dict()
        logdata['folder'] = fld_entry.path
        logdata['sid'] = sid
        log('D42', logdata)
        try:
            self._add(fld_entry)
        except Exception as exc:
            (self
             ._filter_sid_obj(folder_entry, sid)
             .filter(folder_entry.path == fld_entry.path)
             .update(fld_entry.update_fields()))
            self.db_session.commit()

    def add_envvar(self, sid, evobj, policy_name):
        ev_entry = envvar_entry(sid, evobj, policy_name)
        logdata = dict()
        logdata['envvar'] = ev_entry.name
        logdata['sid'] = sid
        log('D53', logdata)
        try:
            self._add(ev_entry)
        except Exception as exc:
            (self
             ._filter_sid_obj(envvar_entry, sid)
             .filter(envvar_entry.name == ev_entry.name)
             .update(ev_entry.update_fields()))
            self.db_session.commit()

    def add_script(self, sid, scrobj, policy_name):
        scr_entry = script_entry(sid, scrobj, policy_name)
        logdata = dict()
        logdata['script path'] = scrobj.path
        logdata['sid'] = sid
        log('D153', logdata)
        try:
            self._add(scr_entry)
        except Exception as exc:
            (self
             ._filter_sid_obj(script_entry, sid)
             .filter(script_entry.path == scr_entry.path)
             .update(scr_entry.update_fields()))
            self.db_session.commit()

    def add_file(self, sid, fileobj, policy_name):
        f_entry = file_entry(sid, fileobj, policy_name)
        logdata = dict()
        logdata['targetPath'] = f_entry.targetPath
        logdata['fromPath'] = f_entry.fromPath
        log('D162', logdata)
        try:
            self._add(f_entry)
        except Exception as exc:
            (self
             ._filter_sid_obj(file_entry, sid)
             .filter(file_entry.targetPath == f_entry.targetPath)
             .update(f_entry.update_fields()))
            self.db_session.commit()

    def add_ini(self, sid, iniobj, policy_name):
        inientry = ini_entry(sid, iniobj, policy_name)
        logdata = dict()
        logdata['path'] = inientry.path
        logdata['action'] = inientry.action
        log('D177', logdata)
        try:
            self._add(inientry)
        except Exception as exc:
            (self
             ._filter_sid_obj(ini_entry, sid)
             .filter(ini_entry.path == inientry.path)
             .update(inientry.update_fields()))
            self.db_session.commit()

    def _filter_sid_obj(self, row_object, sid):
        res = (self
               .db_session
               .query(row_object)
               .filter(row_object.sid == sid))
        return res

    def _filter_sid_list(self, row_object, sid):
        res = (self
               .db_session
               .query(row_object)
               .filter(row_object.sid == sid)
               .order_by(row_object.id)
               .all())
        return res

    def get_shortcuts(self, sid):
        return self._filter_sid_list(ad_shortcut, sid)

    def get_printers(self, sid):
        return self._filter_sid_list(printer_entry, sid)

    def get_drives(self, sid):
        return self._filter_sid_list(drive_entry, sid)

    def get_folders(self, sid):
        return self._filter_sid_list(folder_entry, sid)

    def get_envvars(self, sid):
        return self._filter_sid_list(envvar_entry, sid)

    def _filter_scripts_list(self, row_object, sid, action):
        res = (self
               .db_session
               .query(row_object)
               .filter(row_object.sid == sid)
               .filter(row_object.action == action)
               .order_by(row_object.id)
               .all())
        return res

    def get_scripts(self, sid, action):
        return self._filter_scripts_list(script_entry, sid, action)

    def get_files(self, sid):
        return self._filter_sid_list(file_entry, sid)

    def get_ini(self, sid):
        return self._filter_sid_list(ini_entry, sid)

    def get_hkcu_entry(self, sid, hive_key):
        res = (self
               .db_session
               .query(samba_hkcu_preg)
               .filter(samba_hkcu_preg.sid == sid)
               .filter(samba_hkcu_preg.hive_key == hive_key)
               .first())
        # Try to get the value from machine SID as a default if no option is set.
        if not res:
            machine_sid = self.get_info('machine_sid')
            res = self.db_session.query(samba_hkcu_preg).filter(samba_hkcu_preg.sid == machine_sid).filter(samba_hkcu_preg.hive_key == hive_key).first()
        return res

    def filter_hkcu_entries(self, sid, startswith):
        res = (self
               .db_session
               .query(samba_hkcu_preg)
               .filter(samba_hkcu_preg.sid == sid)
               .filter(samba_hkcu_preg.hive_key.like(startswith)))
        return res

    def get_info(self, name):
        res = (self
               .db_session
               .query(info_entry)
               .filter(info_entry.name == name)
               .first())
        return res.value

    def get_hklm_entry(self, hive_key):
        res = (self
               .db_session
               .query(samba_preg)
               .filter(samba_preg.hive_key == hive_key)
               .first())
        return res

    def filter_hklm_entries(self, startswith):
        res = (self
               .db_session
               .query(samba_preg)
               .filter(samba_preg.hive_key.like(startswith)))
        return res

    def wipe_user(self, sid):
        self._wipe_sid(samba_hkcu_preg, sid)
        self._wipe_sid(ad_shortcut, sid)
        self._wipe_sid(printer_entry, sid)
        self._wipe_sid(drive_entry, sid)
        self._wipe_sid(script_entry, sid)
        self._wipe_sid(file_entry, sid)
        self._wipe_sid(ini_entry, sid)

    def _wipe_sid(self, row_object, sid):
        (self
         .db_session
         .query(row_object)
         .filter(row_object.sid == sid)
         .delete())
        self.db_session.commit()

    def wipe_hklm(self):
        self.db_session.query(samba_preg).delete()
        self.db_session.commit()
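As a rough sketch (not part of the diff), the upsert helpers above were driven by the appliers roughly like this; the PReg object and policy name are illustrative placeholders.

# Illustrative only: pentry stands for a parsed PReg entry with keyname/valuename/type/data.
registry = sqlite_registry('registry')
registry.add_hklm_entry(pentry, 'Default Domain Policy')          # insert, or update on conflict
row = registry.get_hklm_entry(pentry.keyname + '\\' + pentry.valuename)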
63  gpoa/templates/47-alt_group_policy_permissions.rules.j2  Normal file
@@ -0,0 +1,63 @@
{#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2022 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}

{% if No|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in No -%}
        action.id == "{{res}}"{% if No|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.NO;
    }
});
{% endif %}{% if Yes|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Yes -%}
        action.id == "{{res}}"{% if Yes|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.YES;
    }
});
{% endif %}{% if Auth_self|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_self -%}
        action.id == "{{res}}"{% if Auth_self|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.AUTH_SELF;
    }
});
{% endif %}{% if Auth_admin|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_admin -%}
        action.id == "{{res}}"{% if Auth_admin|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.AUTH_ADMIN;
    }
});
{% endif %}{% if Auth_self_keep|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_self_keep -%}
        action.id == "{{res}}"{% if Auth_self_keep|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.AUTH_SELF_KEEP;
    }
});
{% endif %}{% if Auth_admin_keep|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_admin_keep -%}
        action.id == "{{res}}"{% if Auth_admin_keep|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.AUTH_ADMIN_KEEP;
    }
});

{% endif %}
63  gpoa/templates/48-alt_group_policy_permissions_user.rules.j2  Normal file
@@ -0,0 +1,63 @@
{#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2022 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}

{% if No|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in No -%}
        action.id == "{{res}}" {% if No|length == loop.index %}&&{% else %}||{% endif %}
    {% endfor %}subject.user == "{{User}}") {
        return polkit.Result.NO;
    }
});{% endif %}{% if Yes|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Yes -%}
        action.id == "{{res}}" {% if Yes|length == loop.index %}&&{% else %}||{% endif %}
    {% endfor %}subject.user == "{{User}}") {
        return polkit.Result.YES;
    }
});{% endif %}{% if Auth_self|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_self -%}
        action.id == "{{res}}" {% if Auth_self|length == loop.index %}&&{% else %}||{% endif %}
    {% endfor %}subject.user == "{{User}}") {
        return polkit.Result.AUTH_SELF;
    }
});{% endif %}{% if Auth_admin|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_admin -%}
        action.id == "{{res}}" {% if Auth_admin|length == loop.index %}&&{% else %}||{% endif %}
    {% endfor %}subject.user == "{{User}}") {
        return polkit.Result.AUTH_ADMIN;
    }
});{% endif %}{% if Auth_self_keep|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_self_keep -%}
        action.id == "{{res}}" {% if Auth_self_keep|length == loop.index %}&&{% else %}||{% endif %}
    {% endfor %}subject.user == "{{User}}") {
        return polkit.Result.AUTH_SELF_KEEP;
    }
});{% endif %}{% if Auth_admin_keep|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_admin_keep -%}
        action.id == "{{res}}" {% if Auth_admin_keep|length == loop.index %}&&{% else %}||{% endif %}
    {% endfor %}subject.user == "{{User}}") {
        return polkit.Result.AUTH_ADMIN_KEEP;
    }
});

{% endif %}
@@ -17,7 +17,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}

{% if Deny_All == '1' %}
{% if Deny_All == 1 %}
polkit.addRule(function (action, subject) {
    if ((action.id == "org.freedesktop.udisks2.filesystem-mount" ||
        action.id == "org.freedesktop.udisks2.filesystem-mount-system" ||
63  gpoa/templates/49-alt_group_policy_permissions.rules.j2  Normal file
@@ -0,0 +1,63 @@
{#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2022 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}

{% if No|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in No -%}
        action.id == "{{res}}"{% if No|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.NO;
    }
});
{% endif %}{% if Yes|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Yes -%}
        action.id == "{{res}}"{% if Yes|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.YES;
    }
});
{% endif %}{% if Auth_self|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_self -%}
        action.id == "{{res}}"{% if Auth_self|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.AUTH_SELF;
    }
});
{% endif %}{% if Auth_admin|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_admin -%}
        action.id == "{{res}}"{% if Auth_admin|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.AUTH_ADMIN;
    }
});
{% endif %}{% if Auth_self_keep|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_self_keep -%}
        action.id == "{{res}}"{% if Auth_self_keep|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.AUTH_SELF_KEEP;
    }
});
{% endif %}{% if Auth_admin_keep|length %}
polkit.addRule(function (action, subject) {
    if ({% for res in Auth_admin_keep -%}
        action.id == "{{res}}"{% if Auth_admin_keep|length == loop.index %}){ {% else %} ||{% endif %}
    {% endfor %} return polkit.Result.AUTH_ADMIN_KEEP;
    }
});

{% endif %}
@@ -17,7 +17,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}

{% if Deny_All == '1' %}
{% if Deny_All == 1 %}
polkit.addRule(function (action, subject) {
    if (action.id == "org.freedesktop.udisks2.filesystem-mount" ||
        action.id == "org.freedesktop.udisks2.filesystem-mount-system" ||
@@ -16,5 +16,5 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}
{{ home_dir }}/net {{ mount_file }} -t 120 --browse
{{ home_dir }}/{{mntTarget}} {{ mount_file }} -t {{timeout}} --browse
20  gpoa/templates/autofs_auto_hide.j2  Normal file
@@ -0,0 +1,20 @@
{#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2022 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}
{{ home_dir }}/.{{mntTarget}} {{ mount_file }} -t {{timeout}}
@@ -1,7 +1,7 @@
{#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2022 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -17,5 +17,11 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}
{%- for drv in drives %}
{{ drv.dir }} -fstype=cifs,cruid=$USER,sec=krb5,noperm :{{ drv.path }}
{% endfor %}
{% if (drv.thisDrive != 'HIDE') %}
{% if drv.label %}
"{{ drv.label }}" -fstype=cifs,cruid=$USER,sec=krb5,noperm{% if drv.username %}{% else %},multiuser{% endif %}{% if drv.cifsacl %},cifsacl{% endif %} :{{ drv.path }}
{% else %}
"{{ drv.dir }}" -fstype=cifs,cruid=$USER,sec=krb5,noperm{% if drv.username %}{% else %},multiuser{% endif %}{% if drv.cifsacl %},cifsacl{% endif %} :{{ drv.path }}
{% endif %}
{% endif %}
{% endfor %}
27  gpoa/templates/autofs_mountpoints_hide.j2  Normal file
@@ -0,0 +1,27 @@
{#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2022 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#}
{%- for drv in drives %}
{% if (drv.thisDrive == 'HIDE') %}
{% if drv.label %}
"{{ drv.label }}" -fstype=cifs,cruid=$USER,sec=krb5,noperm{% if drv.username %}{% else %},multiuser{% endif %}{% if drv.cifsacl %},cifsacl{% endif %} :{{ drv.path }}
{% else %}
"{{ drv.dir }}" -fstype=cifs,cruid=$USER,sec=krb5,noperm{% if drv.username %}{% else %},multiuser{% endif %}{% if drv.cifsacl %},cifsacl{% endif %} :{{ drv.path }}
{% endif %}
{% endif %}
{% endfor %}
@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,10 +18,9 @@

import logging
import logging.handlers
from enum import IntEnum
from enum import IntEnum, Enum

from messages import message_with_code
from .logging import slogm
from .logging import log


def set_loglevel(loglevel_num=None):
@@ -70,7 +69,7 @@ def process_target(target_name=None):
        target = target_name

    logdata = dict({'target': target})
    logging.debug(slogm(message_with_code('D10'), logdata))
    log('D10', logdata)

    return target.upper()

@@ -84,3 +83,20 @@ class ExitCodeUpdater(IntEnum):
    FAIL_GPUPDATE_USER_NOREPLY = 3
    EXIT_SIGINT = 130

class FileAction(Enum):
    CREATE = 'C'
    REPLACE = 'R'
    UPDATE = 'U'
    DELETE = 'D'

    def __str__(self):
        return self.value

def action_letter2enum(letter):
    if letter in ['C', 'R', 'U', 'D']:
        if letter == 'C': return FileAction.CREATE
        if letter == 'R': return FileAction.REPLACE
        if letter == 'U': return FileAction.UPDATE
        if letter == 'D': return FileAction.DELETE

    return FileAction.CREATE
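A short sketch of how the FileAction mapping added above behaves; the assertions are illustrative and not part of the diff.

# Illustrative: Files.xml actions arrive as single letters and map onto FileAction.
assert action_letter2enum('R') == FileAction.REPLACE
assert str(action_letter2enum('D')) == 'D'
assert action_letter2enum('X') == FileAction.CREATE   # unknown letters fall back to CREATE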
@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -20,6 +20,7 @@ import dbus

from .logging import log
from .users import is_root
from storage import Dconf_registry


class dbus_runner:
@@ -72,6 +73,7 @@ class dbus_runner:
        if self.username:
            logdata = dict({'username': self.username})
            log('D6', logdata)
            gpupdate = 'gpupdate' if not Dconf_registry._force else 'gpupdate_force'
            if is_root():
                # oddjobd-gpupdate's ACL allows access to this method
                # only for superuser. This method is called via PAM
@@ -95,7 +97,7 @@ class dbus_runner:
                result = self.system_bus.call_blocking(self.bus_name,
                    self._object_path,
                    self.interface_name,
                    'gpupdate',
                    gpupdate,
                    None,
                    [],
                    timeout=self._synchronous_timeout)
@@ -106,11 +108,12 @@ class dbus_runner:
                raise exc
        else:
            log('D11')
            gpupdate_computer = 'gpupdate_computer' if not Dconf_registry._force else 'gpupdate_computer_force'
            try:
                result = self.system_bus.call_blocking(self.bus_name,
                    self._object_path,
                    self.interface_name,
                    'gpupdate_computer',
                    gpupdate_computer,
                    None,
                    # The following positional parameter is called "args".
                    # There is no official documentation for it.
@@ -118,7 +121,6 @@ class dbus_runner:
                    timeout=self._synchronous_timeout)
                print_dbus_result(result)
            except dbus.exceptions.DBusException as exc:
                print(exc)
                logdata = dict({'error': str(exc)})
                log('E22', logdata)
                raise exc
@@ -46,3 +46,10 @@ class NotUNCPathError(Exception):
    def __str__(self):
        return self.path

class GetGPOListFail(Exception):
    def __init__(self, exc):
        self.exc = exc

    def __str__(self):
        return self.exc

356  gpoa/util/gpoa_ini_parsing.py  Normal file
@@ -0,0 +1,356 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2023 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from configobj import (ConfigObj, NestingError, Section,
                       DuplicateError, ParseError, UnreprError,
                       UnknownType, UnreprError,
                       BOM_UTF8, DEFAULT_INDENT_TYPE, BOM_LIST,
                       match_utf8, unrepr)
import six
import re
import sys
import os

# Michael Foord: fuzzyman AT voidspace DOT org DOT uk
# Nicola Larosa: nico AT tekNico DOT net
# Rob Dennis: rdennis AT gmail DOT com
# Eli Courtwright: eli AT courtwright DOT org
# This class based on the ConfigObj module, distributed under the BSD-3-Clause license.
# This class includes modified code from the ConfigObj module mentioned above.
# The original authors and their contact information are listed in the comments above.
# For more information about ConfigObj, please visit the main repository:
# https://github.com/DiffSK/configobj


class GpoaConfigObj(ConfigObj):

    _sectionmarker = re.compile(r'''^
        (\s*)                     # 1: indentation
        ((?:\[\s*)+)              # 2: section marker open
        (                         # 3: section name open
            (?:"\s*\S.*?\s*")|    # at least one non-space with double quotes
            (?:'\s*\S.*?\s*')|    # at least one non-space with single quotes
            (?:[^'"\s].*?)        # at least one non-space unquoted
        )                         # section name close
        ((?:\s*\])+)              # 4: section marker close
        (\s*(?:[#;].*)?)?         # 5: optional comment
        $''',
        re.VERBOSE)

    _valueexp = re.compile(r'''^
        (?:
            (?:
                (
                    (?:
                        (?:
                            (?:".*?")|              # double quotes
                            (?:'.*?')|              # single quotes
                            (?:[^'",\#][^,\#]*?)    # unquoted
                        )
                        \s*,\s*                     # comma
                    )*      # match all list items ending in a comma (if any)
                )
                (
                    (?:".*?")|                      # double quotes
                    (?:'.*?')|                      # single quotes
                    (?:[^'",\#\s][^,]*?)|           # unquoted
                    (?:(?<!,))                      # Empty value
                )?      # last item in a list - or string value
            )|
            (,)         # alternatively a single comma - empty list
        )
        (\s*(?:[#;].*)?)?                           # optional comment
        $''',
        re.VERBOSE)

    COMMENT_MARKERS = ['#', ';']

    def _handle_comment(self, comment):
        """Deal with a comment."""
        if not comment:
            return ''
        start = self.indent_type
        if not comment.lstrip().startswith(tuple(self.COMMENT_MARKERS)):
            start += ' # '
        return start + comment.strip()

    def _parse(self, infile):
        """Actually parse the config file."""
        temp_list_values = self.list_values
        if self.unrepr:
            self.list_values = False

        comment_list = []
        done_start = False
        this_section = self
        maxline = len(infile) - 1
        cur_index = -1
        reset_comment = False
        comment_markers = tuple(self.COMMENT_MARKERS)

        while cur_index < maxline:
            if reset_comment:
                comment_list = []
            cur_index += 1
            line = infile[cur_index]
            sline = line.strip()
            # do we have anything on the line ?
            if not sline or sline.startswith(comment_markers):
                reset_comment = False
                comment_list.append(line)
                continue

            if not done_start:
                # preserve initial comment
                self.initial_comment = comment_list
                comment_list = []
                done_start = True

            reset_comment = True
            # first we check if it's a section marker
            mat = self._sectionmarker.match(line)
            if mat is not None:
                # is a section line
                (indent, sect_open, sect_name, sect_close, comment) = mat.groups()
                if indent and (self.indent_type is None):
                    self.indent_type = indent
                cur_depth = sect_open.count('[')
                if cur_depth != sect_close.count(']'):
                    self._handle_error("Cannot compute the section depth",
                                       NestingError, infile, cur_index)
                    continue

                if cur_depth < this_section.depth:
                    # the new section is dropping back to a previous level
                    try:
                        parent = self._match_depth(this_section,
                                                   cur_depth).parent
                    except SyntaxError:
                        self._handle_error("Cannot compute nesting level",
                                           NestingError, infile, cur_index)
                        continue
                elif cur_depth == this_section.depth:
                    # the new section is a sibling of the current section
                    parent = this_section.parent
                elif cur_depth == this_section.depth + 1:
                    # the new section is a child the current section
                    parent = this_section
                else:
                    self._handle_error("Section too nested",
                                       NestingError, infile, cur_index)
                    continue

                sect_name = self._unquote(sect_name)
                if sect_name in parent:
                    self._handle_error('Duplicate section name',
                                       DuplicateError, infile, cur_index)
                    continue

                # create the new section
                this_section = Section(
                    parent,
                    cur_depth,
                    self,
                    name=sect_name)
                parent[sect_name] = this_section
                parent.inline_comments[sect_name] = comment
                parent.comments[sect_name] = comment_list
                continue
            #
            # it's not a section marker,
            # so it should be a valid ``key = value`` line
            mat = self._keyword.match(line)
            if mat is None:
                self._handle_error(
                    'Invalid line ({!r}) (matched as neither section nor keyword)'.format(line),
                    ParseError, infile, cur_index)
            else:
                # is a keyword value
                # value will include any inline comment
                (indent, key, value) = mat.groups()
                if indent and (self.indent_type is None):
                    self.indent_type = indent
                # check for a multiline value
                if value[:3] in ['"""', "'''"]:
                    try:
                        value, comment, cur_index = self._multiline(
                            value, infile, cur_index, maxline)
                    except SyntaxError:
                        self._handle_error(
                            'Parse error in multiline value',
                            ParseError, infile, cur_index)
                        continue
                    else:
                        if self.unrepr:
                            comment = ''
                            try:
                                value = unrepr(value)
                            except Exception as cause:
                                if isinstance(cause, UnknownType):
                                    msg = 'Unknown name or type in value'
                                else:
                                    msg = 'Parse error from unrepr-ing multiline value'
                                self._handle_error(msg, UnreprError, infile, cur_index)
                                continue
                else:
                    if self.unrepr:
                        comment = ''
                        try:
                            value = unrepr(value)
                        except Exception as cause:
                            if isinstance(cause, UnknownType):
                                msg = 'Unknown name or type in value'
                            else:
                                msg = 'Parse error from unrepr-ing value'
                            self._handle_error(msg, UnreprError, infile, cur_index)
                            continue
                    else:
                        # extract comment and lists
                        try:
                            (value, comment) = self._handle_value(value)
                        except SyntaxError:
                            self._handle_error(
                                'Parse error in value',
                                ParseError, infile, cur_index)
                            continue
                #
                key = self._unquote(key)
                if key in this_section:
                    self._handle_error(
                        'Duplicate keyword name',
                        DuplicateError, infile, cur_index)
                    continue
                # add the key.
                # we set unrepr because if we have got this far we will never
                # be creating a new section
                this_section.__setitem__(key, value, unrepr=True)
                this_section.inline_comments[key] = comment
                this_section.comments[key] = comment_list
                continue
        #
        if self.indent_type is None:
            # no indentation used, set the type accordingly
            self.indent_type = ''

        # preserve the final comment
        if not self and not self.initial_comment:
            self.initial_comment = comment_list
        elif not reset_comment:
            self.final_comment = comment_list
        self.list_values = temp_list_values


    def write(self, outfile=None, section=None):
        if self.indent_type is None:
            # this can be true if initialised from a dictionary
            self.indent_type = DEFAULT_INDENT_TYPE

        out = []
        comment_markers = tuple(self.COMMENT_MARKERS)
        comment_marker_default = comment_markers[0] + ' '
        if section is None:
            int_val = self.interpolation
            self.interpolation = False
            section = self
            for line in self.initial_comment:
                line = self._decode_element(line)
                stripped_line = line.strip()
                if stripped_line and not stripped_line.startswith(comment_markers):
                    line = comment_marker_default + line
                out.append(line)

        indent_string = self.indent_type * section.depth
        for entry in (section.scalars + section.sections):
            if entry in section.defaults:
                # don't write out default values
                continue
            for comment_line in section.comments[entry]:
                comment_line = self._decode_element(comment_line.lstrip())
                if comment_line and not comment_line.startswith(comment_markers):
                    comment_line = comment_marker_default + comment_line
                out.append(indent_string + comment_line)
            this_entry = section[entry]
            comment = self._handle_comment(section.inline_comments[entry])

            if isinstance(this_entry, Section):
                # a section
                out.append(self._write_marker(
                    indent_string,
                    this_entry.depth,
                    entry,
                    comment))
                out.extend(self.write(section=this_entry))
            else:
                out.append(self._write_line(
                    indent_string,
                    entry,
                    this_entry,
                    comment))

        if section is self:
            for line in self.final_comment:
                line = self._decode_element(line)
                stripped_line = line.strip()
                if stripped_line and not stripped_line.startswith(comment_markers):
                    line = comment_marker_default + line
                out.append(line)
            self.interpolation = int_val

        if section is not self:
            return out

        if (self.filename is None) and (outfile is None):
            # output a list of lines
            # might need to encode
            # NOTE: This will *screw* UTF16, each line will start with the BOM
            if self.encoding:
                out = [l.encode(self.encoding) for l in out]
            if (self.BOM and ((self.encoding is None) or
                (BOM_LIST.get(self.encoding.lower()) == 'utf_8'))):
                # Add the UTF8 BOM
                if not out:
                    out.append('')
                out[0] = BOM_UTF8 + out[0]
            return out

        # Turn the list to a string, joined with correct newlines
        newline = self.newlines or os.linesep
        if (getattr(outfile, 'mode', None) is not None and outfile.mode == 'w'
                and sys.platform == 'win32' and newline == '\r\n'):
            # Windows specific hack to avoid writing '\r\r\n'
            newline = '\n'
        output = newline.join(out)
        if not output.endswith(newline):
            output += newline

        if isinstance(output, six.binary_type):
            output_bytes = output
        else:
            output_bytes = output.encode(self.encoding or
                                         self.default_encoding or
                                         'ascii')

        if self.BOM and ((self.encoding is None) or match_utf8(self.encoding)):
            # Add the UTF8 BOM
            output_bytes = BOM_UTF8 + output_bytes

        if outfile is not None:
            outfile.write(output_bytes)
        else:
            with open(self.filename, 'wb') as h:
                h.write(output_bytes)
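A minimal sketch of what the subclass above buys over stock ConfigObj: ';' is accepted as a comment marker when parsing and preserved when writing. The input lines and target path below are illustrative, not part of the diff.

# Illustrative: parse an INI fragment that uses ';' comments, then write it back.
lines = ['[Desktop Entry]', 'Name=Editor', '; standalone comment kept on write', 'Terminal=false']
config = GpoaConfigObj(lines)
config['Desktop Entry']['Terminal'] = 'true'
config.filename = '/tmp/example.ini'   # hypothetical output path
config.write()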
@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2021 BaseALT Ltd. <org@basealt.ru>
# Copyright (C) 2019-2024 BaseALT Ltd. <org@basealt.ru>
# Copyright (C) 2019-2021 Igor Chudov <nir@nir.org.ru>
#
# This program is free software: you can redistribute it and/or modify
@@ -21,6 +21,7 @@ import pathlib
import os
from pathlib import Path
from urllib.parse import urlparse
from util.util import get_homedir

from .config import GPConfig
from .exceptions import NotUNCPathError
@@ -67,12 +68,19 @@ def file_cache_dir():
    Returns path pointing to gpupdate's cache directory
    '''
    cachedir = pathlib.Path('/var/cache/gpupdate_file_cache')

    if not cachedir.exists():
        cachedir.mkdir(parents=True, exist_ok=True)

    return cachedir

def file_cache_path_home(username) -> str:
    '''
    Returns the path pointing to the gpupdate cache directory in the /home directory.
    '''
    cachedir = f'{get_homedir(username)}/.cache/gpupdate'

    return cachedir

def local_policy_cache():
    '''
    Returns path to directory where lies local policy settings cache
@@ -85,6 +93,22 @@ def local_policy_cache():

    return lpcache


def get_dconf_config_path(uid = None):
    if uid:
        return f'/etc/dconf/db/policy{uid}.d/'
    else:
        return '/etc/dconf/db/policy.d/'

def get_dconf_config_file(uid = None):
    if uid:
        return f'/etc/dconf/db/policy{uid}.d/policy{uid}.ini'
    else:
        return '/etc/dconf/db/policy.d/policy.ini'

def get_desktop_files_directory():
    return '/usr/share/applications'

class UNCPath:
    def __init__(self, path):
        self.path = path
@@ -99,7 +123,7 @@ class UNCPath:
    def get_uri(self):
        path = self.path
        if self.type == 'unc':
            path = self.path.replace('\\', '/')
            path = self.path.replace('\\\\', '/')
            path = path.replace('//', 'smb://')
        else:
            pass
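For reference, a small sketch of the dconf path helpers added above; the uid value is illustrative.

# Illustrative: machine-wide profile vs. per-user profile keyed by uid.
get_dconf_config_file()        # '/etc/dconf/db/policy.d/policy.ini'
get_dconf_config_file(1000)    # '/etc/dconf/db/policy1000.d/policy1000.ini'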
@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,7 +18,7 @@

from xml.etree import ElementTree
from storage import registry_factory
from storage.dconf_registry import load_preg_dconf

from samba.gp_parse.gp_pol import GPPolParser

@@ -80,16 +80,15 @@ def preg_keymap(preg):
    return keymap


def merge_polfile(preg, sid=None, reg_name='registry', reg_path=None, policy_name='Unknown'):
def merge_polfile(preg, sid=None, reg_name='registry', reg_path=None, policy_name='Unknown', username='Machine', gpo_info=None):
    pregfile = load_preg(preg)
    if sid is None and username == 'Machine':
        load_preg_dconf(pregfile, preg, policy_name, None, gpo_info)
    else:
        load_preg_dconf(pregfile, preg, policy_name, username, gpo_info)
    logdata = dict({'pregfile': preg})
    log('D32', logdata)
    storage = registry_factory(reg_name, reg_path)
    for entry in pregfile.entries:
        if not sid:
            storage.add_hklm_entry(entry, policy_name)
        else:
            storage.add_hkcu_entry(entry, sid, policy_name)



class entry:
@@ -2,7 +2,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2020 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -20,7 +20,6 @@
from enum import Enum

import pwd
import logging
import subprocess
import pysss_nss_idmap

@@ -1,7 +1,7 @@
#
# GPOA - GPO Applier for Linux
#
# Copyright (C) 2019-2021 BaseALT Ltd.
# Copyright (C) 2019-2024 BaseALT Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -19,8 +19,6 @@
import os
import pwd

from .logging import log


def is_root():
    '''
@@ -47,7 +45,6 @@ def username_match_uid(username):
    '''
    Check the passed username matches current process UID.
    '''
    uid = os.getuid()
    process_username = get_process_user()

    if process_username == username:
@@ -23,6 +23,7 @@ import subprocess
import re
from pathlib import Path
from .samba import smbopts
import ast


def get_machine_name():
@@ -105,20 +106,7 @@ def get_backends():
    '''
    Get the list of backends supported by GPOA
    '''
    command = ['/usr/sbin/gpoa', '--list-backends']
    backends = list()
    out = list()

    with subprocess.Popen(command, stdout=subprocess.PIPE) as proc:
        out = proc.stdout.read().decode('utf-8')
        proc.wait()
    out = out.split('\n')
    for line in out:
        tmpline = line.replace('\n', '')
        if tmpline != '':
            backends.append(tmpline)

    return backends
    return ['local', 'samba']

def get_default_policy_name():
    '''
@@ -180,3 +168,89 @@ def get_policy_variants():

    return general_listing

def string_to_literal_eval(string):
    try:
        literaleval = ast.literal_eval(string)
    except:
        literaleval = string
    return literaleval

def try_dict_to_literal_eval(string):
    try:
        literaleval = ast.literal_eval(string)
        if isinstance(literaleval, dict):
            return literaleval
        else:
            return None
    except:
        return None

def touch_file(filename):
    path = Path(filename)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.touch()

def get_uid_by_username(username):
    try:
        user_info = pwd.getpwnam(username)
        return user_info.pw_uid
    except KeyError:
        return None

def add_prefix_to_keys(dictionary: dict, prefix: str='Previous/') -> dict:
    """
    Adds a prefix to each key in the dictionary.
    Args: Input dictionary whose keys need to be modified
          prefix string to be added to each key. Defaults to 'Previous/'
    Returns: New dictionary with modified keys having the specified prefix
    """
    result = {}
    for key, value in dictionary.items():
        new_key = f'{prefix}{key}'
        if isinstance(value, dict):
            result[new_key] = {deep_key: clean_data(val) if isinstance(val, str) else val for deep_key, val in value.items()}
        else:
            result[new_key] = value
    return result


def remove_keys_with_prefix(dictionary: dict, prefix: tuple=('Previous/', 'Source/')) -> dict:
    """
    Removes all keys that start with the specified prefix from the dictionary.
    By default, removes keys starting with 'Previous/' and 'Source/' prefix.
    """
    return {key: value for key, value in dictionary.items() if not key.startswith(prefix)}

def remove_prefix_from_keys(dictionary: dict, prefix: str) -> dict:
    """
    Removes the specified prefix from the keys of the dictionary.
    If a key starts with the prefix, it is removed.
    """
    return {key[len(prefix):] if key.startswith(prefix) else key: value for key, value in dictionary.items()}


def get_trans_table():
    return str.maketrans({
        '\n': '',
        '\r': '',
        '"': "'",
        '\\': '\\\\'
    })

def clean_data(data):
    try:
        cleaned_string = data.translate(get_trans_table())
        return cleaned_string
    except:
        return None

def check_local_user_exists(username):
    """
    Checks if a local user with the given username exists on a Linux system.
    """
    try:
        # Try to get user information from the password database
        pwd.getpwnam(username)
        return True
    except:
        return False
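The 'Previous/' key handling added above is used to keep the prior state of applied keys next to the current one. A self-contained round-trip sketch of the same idea (re-implemented here for illustration, not imported from gpoa; the example key is arbitrary):

# Round-trip sketch of the prefix helpers shown above (standalone re-implementation).
def add_prefix(d, prefix='Previous/'):
    return {f'{prefix}{k}': v for k, v in d.items()}

def drop_prefixed(d, prefixes=('Previous/', 'Source/')):
    return {k: v for k, v in d.items() if not k.startswith(prefixes)}

def strip_prefix(d, prefix):
    return {(k[len(prefix):] if k.startswith(prefix) else k): v for k, v in d.items()}

current = {'Software/BaseALT/Policies/GPUpdate/ScrollSysvolDC': '1'}
merged = {**current, **add_prefix(current)}        # keep old values under 'Previous/'
assert drop_prefixed(merged) == current            # strip the saved copies again
assert strip_prefix(add_prefix(current), 'Previous/') == current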
@@ -18,22 +18,34 @@


import os
import subprocess
from pathlib import Path
from samba import getopt as options
from samba import NTSTATUSError
from samba.gpclass import get_dc_hostname, check_refresh_gpo_list

try:
    from samba.gpclass import get_dc_hostname, check_refresh_gpo_list
except ImportError:
    from samba.gp.gpclass import get_dc_hostname, check_refresh_gpo_list

from samba.netcmd.common import netcmd_get_domain_infos_via_cldap
from storage.dconf_registry import Dconf_registry, extract_display_name_version
import samba.gpo

from storage import cache_factory
from messages import message_with_code
from .xdg import (
    xdg_get_desktop
)
from .util import get_homedir
from .util import get_homedir, get_uid_by_username
from .exceptions import GetGPOListFail
from .logging import log
from .samba import smbopts

from gpoa.storage import registry_factory
from samba.samdb import SamDB
from samba.auth import system_session
import optparse
import ldb
import ipaddress
import netifaces
import random

class smbcreds (smbopts):

@@ -42,6 +54,13 @@ class smbcreds (smbopts):
        self.credopts = options.CredentialsOptions(self.parser)
        self.creds = self.credopts.get_credentials(self.lp, fallback_machine=True)
        self.set_dc(dc_fqdn)
        self.sDomain = SiteDomainScanner(self.creds, self.lp, self.selected_dc)
        self.dc_site_servers = self.sDomain.select_site_servers()
        self.all_servers = self.sDomain.select_all_servers()
        [self.all_servers.remove(element)
         for element in self.dc_site_servers
         if element in self.all_servers]
        self.pdc_emulator_server = self.sDomain.select_pdc_emulator_server()

    def get_dc(self):
        return self.selected_dc
@@ -91,7 +110,11 @@ class smbcreds (smbopts):
        hostname
        '''
        gpos = list()

        if Dconf_registry.get_info('machine_name') == username:
            dconf_dict = Dconf_registry.get_dictionary_from_dconf_file_db(save_dconf_db=True)
        else:
            dconf_dict = Dconf_registry.get_dictionary_from_dconf_file_db(get_uid_by_username(username), save_dconf_db=True)
        dict_gpo_name_version = extract_display_name_version(dconf_dict, username)
        try:
            log('D48')
            ads = samba.gpo.ADS_STRUCT(self.selected_dc, self.lp, self.creds)
@@ -103,21 +126,42 @@ class smbcreds (smbopts):
            for gpo in gpos:
                # These setters are taken from libgpo/pygpo.c
                # print(gpo.ds_path) # LDAP entry
                if gpo.display_name in dict_gpo_name_version.keys() and dict_gpo_name_version.get(gpo.display_name, {}).get('version') == str(getattr(gpo, 'version', None)):
                    if Path(dict_gpo_name_version.get(gpo.display_name, {}).get('correct_path')).exists():
                        gpo.file_sys_path = ''
                        ldata = dict({'gpo_name': gpo.display_name, 'gpo_uuid': gpo.name, 'file_sys_path_cache': True})
                        log('I11', ldata)
                        continue
                ldata = dict({'gpo_name': gpo.display_name, 'gpo_uuid': gpo.name, 'file_sys_path': gpo.file_sys_path})
                log('I2', ldata)

        except Exception as exc:
            logdata = dict({'username': username, 'dc': self.selected_dc})
            if self.selected_dc != self.pdc_emulator_server:
                raise GetGPOListFail(exc)
            logdata = dict({'username': username, 'dc': self.selected_dc, 'exc': exc})
            log('E17', logdata)

        return gpos

    def update_gpos(self, username):
        gpos = self.get_gpos(username)

        list_selected_dc = set()



        if self.dc_site_servers:
            self.selected_dc = self.dc_site_servers.pop()

        self.all_servers = [dc for dc in self.all_servers if dc != self.selected_dc]
        list_selected_dc.add(self.selected_dc)

        try:
            gpos = self.get_gpos(username)

        except GetGPOListFail:
            self.selected_dc = self.pdc_emulator_server
            gpos = self.get_gpos(username)

        while list_selected_dc:
            logdata = dict()
            logdata['username'] = username
@@ -129,20 +173,137 @@ class smbcreds (smbopts):
                list_selected_dc.clear()
            except NTSTATUSError as smb_exc:
                logdata['smb_exc'] = str(smb_exc)
                self.selected_dc = get_dc_hostname(self.creds, self.lp)
                if self.selected_dc not in list_selected_dc:
                    logdata['action'] = 'Search another dc'
                    log('W11', logdata)
                    list_selected_dc.add(self.selected_dc)
                if not check_scroll_enabled():
                    if self.pdc_emulator_server and self.selected_dc != self.pdc_emulator_server:
                        self.selected_dc = self.pdc_emulator_server
                        logdata['action'] = 'Selected pdc'
                        logdata['pdc'] = self.selected_dc
                        log('W11', logdata)
                    else:
                        log('F1', logdata)
                        raise smb_exc
                else:
                    log('F1', logdata)
                    raise smb_exc
                if self.dc_site_servers:
                    self.selected_dc = self.dc_site_servers.pop()
                elif self.all_servers:
                    self.selected_dc = self.all_servers.pop()
                else:
                    self.selected_dc = self.pdc_emulator_server


                if self.selected_dc not in list_selected_dc:
                    logdata['action'] = 'Search another dc'
                    logdata['another_dc'] = self.selected_dc
                    log('W11', logdata)
                    list_selected_dc.add(self.selected_dc)
                else:
                    log('F1', logdata)
                    raise smb_exc
            except Exception as exc:
                logdata['exc'] = str(exc)
                log('F1', logdata)
                raise exc
        return gpos

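update_gpos() above retries the GPO refresh against domain controllers in a fixed order: DCs from the client's own AD site first, then the remaining DCs, then the PDC emulator. A minimal standalone sketch of that selection order (the server names are made up):

# Standalone sketch of the DC fallback order used above (names are made up).
def next_dc(site_servers, other_servers, pdc_emulator):
    if site_servers:
        return site_servers.pop()
    if other_servers:
        return other_servers.pop()
    return pdc_emulator

site = ['dc1.site.example.org']
others = ['dc2.example.org']
pdc = 'pdc.example.org'
order = [next_dc(site, others, pdc) for _ in range(3)]
# -> ['dc1.site.example.org', 'dc2.example.org', 'pdc.example.org']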
class SiteDomainScanner:
    def __init__(self, smbcreds, lp, dc):
        self.samdb = SamDB(url='ldap://{}'.format(dc), session_info=system_session(), credentials=smbcreds, lp=lp)
        Dconf_registry.set_info('samdb', self.samdb)
        self.pdc_emulator = self._search_pdc_emulator()

    @staticmethod
    def _get_ldb_single_message_attr(ldb_message, attr_name, encoding='utf8'):
        if attr_name in ldb_message:
            return ldb_message[attr_name][0].decode(encoding)
        else:
            return None

    @staticmethod
    def _get_ldb_single_result_attr(ldb_result, attr_name, encoding='utf8'):
        if len(ldb_result) == 1 and attr_name in ldb_result[0]:
            return ldb_result[0][attr_name][0].decode(encoding)
        else:
            return None

    def _get_server_hostname(self, ds_service_name):
        ds_service_name_dn = ldb.Dn(self.samdb, ds_service_name)
        server_dn = ds_service_name_dn.parent()
        res = self.samdb.search(server_dn, scope=ldb.SCOPE_BASE)
        return self._get_ldb_single_result_attr(res, 'dNSHostName')

    def _search_pdc_emulator(self):
        res = self.samdb.search(self.samdb.domain_dn(), scope=ldb.SCOPE_BASE)
        pdc_settings_object = self._get_ldb_single_result_attr(res, 'fSMORoleOwner')
        return self._get_server_hostname(pdc_settings_object)

    def get_ip_addresses(self):
        interface_list = netifaces.interfaces()
        addresses = []
        for iface in interface_list:
            address_entry = netifaces.ifaddresses(iface)
            if netifaces.AF_INET in address_entry:
                addresses.extend(ipaddress.ip_address(ipv4_address_entry['addr']) for ipv4_address_entry in address_entry[netifaces.AF_INET])
            if netifaces.AF_INET6 in address_entry:
                addresses.extend(ipaddress.ip_address(ipv6_address_entry['addr']) for ipv6_address_entry in address_entry[netifaces.AF_INET6])
        return addresses

    def get_ad_subnets_sites(self):
        subnet_dn = ldb.Dn(self.samdb, "CN=Subnets,CN=Sites")
        config_dn = self.samdb.get_config_basedn()
        subnet_dn.add_base(config_dn)
        res = self.samdb.search(subnet_dn, ldb.SCOPE_ONELEVEL, expression='objectClass=subnet', attrs=['cn', 'siteObject'])
        subnets = {ipaddress.ip_network(self._get_ldb_single_message_attr(msg, 'cn')): self._get_ldb_single_message_attr(msg, 'siteObject') for msg in res}
        return subnets

    def get_ad_site_servers(self, site):
        servers_dn = ldb.Dn(self.samdb, "CN=Servers")
        site_dn = ldb.Dn(self.samdb, site)
        servers_dn.add_base(site_dn)
        res = self.samdb.search(servers_dn, ldb.SCOPE_ONELEVEL, expression='objectClass=server', attrs=['dNSHostName'])
        servers = [self._get_ldb_single_message_attr(msg, 'dNSHostName') for msg in res]
        random.shuffle(servers)
        return servers

    def get_ad_all_servers(self):
        sites_dn = ldb.Dn(self.samdb, "CN=Sites")
        config_dn = self.samdb.get_config_basedn()
        sites_dn.add_base(config_dn)
        res = self.samdb.search(sites_dn, ldb.SCOPE_SUBTREE, expression='objectClass=server', attrs=['dNSHostName'])
        servers = [self._get_ldb_single_message_attr(msg, 'dNSHostName') for msg in res]
        random.shuffle(servers)
        return servers

    def check_ip_in_subnets(self, ip_addresses, subnets_sites):
        return next((subnets_sites[subnet] for subnet in subnets_sites.keys()
                     if any(ip_address in subnet for ip_address in ip_addresses)), None)

    def select_site_servers(self):
        try:
            ip_addresses = self.get_ip_addresses()
            subnets_sites = self.get_ad_subnets_sites()

            our_site = self.check_ip_in_subnets(ip_addresses, subnets_sites)

            servers = []
            if our_site:
                servers = self.get_ad_site_servers(our_site)
                random.shuffle(servers)
            return servers
        except Exception as e:
            return []

    def select_all_servers(self):
        try:
            servers = self.get_ad_all_servers()
            random.shuffle(servers)
            return servers
        except Exception as e:
            return []

    def select_pdc_emulator_server(self):
        return self.pdc_emulator

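SiteDomainScanner maps the host to an AD site by checking whether any local IP address falls inside one of the site subnets published in the configuration partition; the core membership test is plain ipaddress containment. A small standalone sketch with example values (the subnets and site DNs below are invented):

# Sketch of the subnet -> site lookup used by check_ip_in_subnets() (example data).
import ipaddress

subnets_sites = {
    ipaddress.ip_network('192.168.10.0/24'): 'CN=Office,CN=Sites,...',
    ipaddress.ip_network('10.0.0.0/8'): 'CN=Datacenter,CN=Sites,...',
}
ip_addresses = [ipaddress.ip_address('192.168.10.15')]

our_site = next((site for subnet, site in subnets_sites.items()
                 if any(ip in subnet for ip in ip_addresses)), None)
# -> 'CN=Office,CN=Sites,...'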
def expand_windows_var(text, username=None):
    '''
    Scan the line for percent-encoded variables and expand them.
@@ -166,7 +327,9 @@ def expand_windows_var(text, username=None):

    result = text
    for var in variables.keys():
        result = result.replace('%{}%'.format(var), variables[var])
        result = result.replace('%{}%'.format(var),
                                variables[var] if variables[var][-1] == '/'
                                else variables[var] + '/')

    return result

@@ -182,3 +345,11 @@ def transform_windows_path(text):

    return result

def check_scroll_enabled():
    storage = registry_factory()
    enable_scroll = '/Software/BaseALT/Policies/GPUpdate/ScrollSysvolDC'
    if storage.get_key_value(enable_scroll):
        data = storage.get_hklm_entry(enable_scroll).data
        return bool(int(data))
    else:
        return False

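The change to expand_windows_var() above makes every substituted value end with a trailing slash, so paths built from %HOME%-style variables keep their separator. A standalone sketch of that normalization (the variable table here is an example, not the function's real table):

# Sketch of the trailing-slash normalization applied above (example variable table).
def expand(text, variables):
    result = text
    for var, value in variables.items():
        result = result.replace('%{}%'.format(var),
                                value if value.endswith('/') else value + '/')
    return result

print(expand('%HOME%Desktop', {'HOME': '/home/user'}))   # /home/user/Desktop
print(expand('%HOME%Desktop', {'HOME': '/home/user/'}))  # /home/user/Desktop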
254  gpupdate.spec
@@ -1,7 +1,42 @@
%define _unpackaged_files_terminate_build 1
#add_python3_self_prov_path %buildroot%python3_sitelibdir/gpoa

%add_python3_req_skip backend
%add_python3_req_skip frontend.frontend_manager
%add_python3_req_skip gpt.envvars
%add_python3_req_skip gpt.folders
%add_python3_req_skip gpt.gpt
%add_python3_req_skip gpt.printers
%add_python3_req_skip gpt.shortcuts
%add_python3_req_skip gpt.gpo_dconf_mapping
%add_python3_req_skip gpt.dynamic_attributes
%add_python3_req_skip messages
%add_python3_req_skip plugin
%add_python3_req_skip storage
%add_python3_req_skip storage.fs_file_cache
%add_python3_req_skip storage.dconf_registry
%add_python3_req_skip util
%add_python3_req_skip util.arguments
%add_python3_req_skip util.config
%add_python3_req_skip util.dbus
%add_python3_req_skip util.exceptions
%add_python3_req_skip util.kerberos
%add_python3_req_skip util.logging
%add_python3_req_skip util.paths
%add_python3_req_skip util.preg
%add_python3_req_skip util.roles
%add_python3_req_skip util.rpm
%add_python3_req_skip util.sid
%add_python3_req_skip util.signals
%add_python3_req_skip util.system
%add_python3_req_skip util.users
%add_python3_req_skip util.util
%add_python3_req_skip util.windows
%add_python3_req_skip util.xml
%add_python3_req_skip util.gpoa_ini_parsing

Name: gpupdate
Version: 0.9.11.1
Version: 0.13.0
Release: alt1

Summary: GPT applier
@@ -16,11 +51,18 @@ BuildRequires: rpm-build-python3
BuildRequires: gettext-tools
Requires: python3-module-rpm
Requires: python3-module-dbus
Requires: oddjob-%name >= 0.2.0
Requires: python3-module-configobj
Requires: python3-module-gssapi
Requires: python3-module-krb5
Requires: oddjob-%name >= 0.2.3
Requires: libnss-role >= 0.5.0
Requires: local-policy >= 0.4.9
Requires: pam-config >= 1.9.0
Requires: autofs
Requires: dconf-profile
Requires: packagekit
Requires: dconf
Requires: libgvdb-gir
# This is needed by shortcuts_applier
Requires: desktop-file-utils
# This is needed for smb file cache support
@@ -87,6 +129,9 @@ install -Dm0644 dist/%name-remote-policy %buildroot%_sysconfdir/pam.d/%name-remo
install -Dm0644 dist/%name.ini %buildroot%_sysconfdir/%name/%name.ini
install -Dm0644 doc/gpoa.1 %buildroot/%_man1dir/gpoa.1
install -Dm0644 doc/gpupdate.1 %buildroot/%_man1dir/gpupdate.1
install -Dm0644 completions/gpoa %buildroot/%_datadir/bash-completion/completions/gpoa
install -Dm0644 completions/gpupdate %buildroot/%_datadir/bash-completion/completions/gpupdate
install -Dm0644 completions/gpupdate-setup %buildroot/%_datadir/bash-completion/completions/gpupdate-setup

for i in gpupdate-localusers \
    gpupdate-group-users \
@@ -108,7 +153,7 @@ fi
# Remove storage in case we've lost compatibility between versions.
# The storage will be regenerated on GPOA start.
%define active_policy %_sysconfdir/local-policy/active
%triggerpostun -- %name < 0.9.10
%triggerpostun -- %name < 0.9.13.6
rm -f %_cachedir/%name/registry.sqlite
if test -L %active_policy; then
    sed -i "s|^\s*local-policy\s*=.*|local-policy = $(readlink -f %active_policy)|" \
@@ -133,9 +178,12 @@ fi
%_unitdir/%name.timer
%_man1dir/gpoa.1.*
%_man1dir/gpupdate.1.*
/usr/lib/systemd/user/%name-user.service
/usr/lib/systemd/user/%name-user.timer
/usr/lib/systemd/user/%name-scripts-run-user.service
%_datadir/bash-completion/completions/gpoa
%_datadir/bash-completion/completions/gpupdate
%_datadir/bash-completion/completions/gpupdate-setup
%_user_unitdir/%name-user.service
%_user_unitdir/%name-user.timer
%_user_unitdir/%name-scripts-run-user.service
%dir %_sysconfdir/%name
%_sysconfdir/control.d/facilities/*
%config(noreplace) %_sysconfdir/%name/environment
@@ -151,6 +199,200 @@ fi
%exclude %python3_sitelibdir/gpoa/test

%changelog
* Thu Mar 06 2025 Valery Sinelnikov <greh@altlinux.org> 0.13.0-alt1
- Implemented Local Administrator Password Solution (LAPS) functionality,
  including support for Group Policy Object (GPO) keys to
  configure LAPS settings
- Added support for disabling cifsacl in autofs mounts (closes: 52333)
- Implemented the ability to merge computer and user GPO shortcuts
- Added access restrictions to network directories of other users
- Added cleaning functionality for the autofs configuration directory
- Added ability to configure KDE 6 files

* Tue Jan 14 2025 Valery Sinelnikov <greh@altlinux.org> 0.12.2-alt1
- Fixed interpretation of boolean values (closes: 52683)

* Fri Jan 10 2025 Valery Sinelnikov <greh@altlinux.org> 0.12.1-alt1
- Fixed checking the path for existence (closes: 52597)

* Tue Dec 10 2024 Valery Sinelnikov <greh@altlinux.org> 0.12.0-alt1
- Special thanks to Andrey Belgorodtsev (andrey@net55.su)
  for valuable pre-release testing and feedback
- Added thunderbird applier
- Added environment file cleaning (closes: 51016)
- Added the ability to set the name of the directory to automount
- Added the ability to remove the prefix from a symlink
  to the directory in automount
- Added the ability to set the timeout in automount
- Added messages using the force mode
- Improved KDE update logic
- Added preservation of previous keys

* Fri Oct 11 2024 Valery Sinelnikov <greh@altlinux.org> 0.11.4-alt1
- Added skip plugin (closes: 51631)
- Fixed getting the network path (closes: 51606)
- The _appliers sequence has been changed:
  package_applier has been moved to the end

* Fri Sep 06 2024 Valery Sinelnikov <greh@altlinux.org> 0.11.3-alt1
- Optimized string cleaning using str.translate()

* Wed Sep 04 2024 Valery Sinelnikov <greh@altlinux.org> 0.11.2-alt1
- Fixed data type handling in kde_applier
- Removed legacy unused code
- Added saving policy data without polfile
- Added escaping of special characters in data (closes: 51201)

* Tue Aug 27 2024 Valery Sinelnikov <greh@altlinux.org> 0.11.1-alt1
- Fixed setting links in shortcuts (closes: 51275)

* Fri Aug 09 2024 Valery Sinelnikov <greh@altlinux.org> 0.11.0-alt1
- Added saving preferences in dconf
- Added versioning support for gpt
- Added the ability to force gpt download
- Added completions for --force
- Added new exceptions for Chromium 126
- Added information to the man pages
- Fixed handling of incorrect valuename

* Mon Jul 08 2024 Valery Sinelnikov <greh@altlinux.org> 0.10.6-alt1
- Fixed firefox_applier errors

* Fri Jun 28 2024 Valery Sinelnikov <greh@altlinux.org> 0.10.5-alt1
- Corrected missing entries with upper case
- Fixed string processing in date (closes: 50782)
- Fixed getting correct data for the user for pkcon_runner

* Thu Jun 27 2024 Valery Sinelnikov <greh@altlinux.org> 0.10.4-alt1
- Fixed the definition of the module activation check (closes: 50755)
- Fixed sorting of scripts (closes: 50756)
- Fixed reading key values from dconf
- Changed the method for getting the list of packages for pkcon_runner

* Wed Jun 19 2024 Valery Sinelnikov <greh@altlinux.org> 0.10.3-alt1
- Added autocompletion for gpoa, gpupdate, gpupdate-setup
- Added correct work with json data in keys for the Firefox browser
- Polkit_appliers changed to non-experimental
- Fixed bug of not clearing kde applier settings (closes: 50336)
- Fixed registry key reading (closes: 50553)
- Added waiting for data generation for scripts (closes: 50667)

* Fri Jun 07 2024 Valery Sinelnikov <greh@altlinux.org> 0.10.2-alt1
- Added some fixes to dconf_registry and scripts
- Fixed windows registry key reading for loopback

* Tue Jun 04 2024 Valery Sinelnikov <greh@altlinux.org> 0.10.1-alt1
- Added handling of unexpected data types when writing to dconf

* Mon May 13 2024 Valery Sinelnikov <greh@altlinux.org> 0.10.0-alt1
- The method for storing registry keys obtained from GPOs (Group Policy Objects)
  has undergone a significant rework: we have switched from SQLite
  to Dconf to improve data storage efficiency

* Wed Mar 13 2024 Valery Sinelnikov <greh@altlinux.org> 0.9.13.9-alt1
- Fixed premature removal of double slash

* Thu Feb 22 2024 Valery Sinelnikov <greh@altlinux.org> 0.9.13.8-alt1
- Added search for dc on the site
- Added compatibility support for the oldest versions of SQLAlchemy

* Mon Feb 05 2024 Valery Sinelnikov <greh@altlinux.org> 0.9.13.7-alt1
- Restored editing of the cache size in the Yandex browser (closes: 44621)
- Removed unnecessary calls to subprocess

* Wed Jan 31 2024 Valery Sinelnikov <greh@altlinux.org> 0.9.13.6-alt1
- Added support for hidden attribute for folders (closes: 48964)
- Added support for Cyrillic and spaces for mounting disks (closes: 49229)

* Fri Jan 12 2024 Valery Sinelnikov <greh@altlinux.org> 0.9.13.5-alt1
- Fixed blocking check for machine policies with multiple sections (closes: 48971)
- Extended the valuename_typeint list for admx-chromium 120.0
- Extended the valuename_typeint list for admx-yandex 118.0
- Changed PAM logic to prevent re-call (closes: 48973)
- Changed timer option OnStartupSec to prevent re-call

* Mon Dec 18 2023 Valery Sinelnikov <greh@altlinux.org> 0.9.13.4-alt1
- Fixed regular expression to search for wallpaper management section (closes: 48828)

* Wed Dec 13 2023 Valery Sinelnikov <greh@altlinux.org> 0.9.13.3-alt1
- Fixed handling of an invalid username
  when requesting cache (closes: 48310)

* Tue Nov 28 2023 Valery Sinelnikov <greh@altlinux.org> 0.9.13.2-alt1
- Fixed kde_applier bug (closes: 47995)

* Wed Oct 18 2023 Valery Sinelnikov <greh@altlinux.org> 0.9.13.1-alt1
- Fixed kde_applier bug (closes: 47995)
- Fixed kde_applier bug (closes: 47996)
- Fixed kde_applier bug (closes: 47998)
- Fixed kde_applier bug (closes: 47820)
- Fixed shortcut_applier bug (closes: 47638)
- Fixed shortcut_applier bug (closes: 47641)
- Fixed systemd_applier bug (closes: 47652)

* Tue Sep 19 2023 Valery Sinelnikov <greh@altlinux.org> 0.9.13.0-alt1
- Added KDE applier
- Fixed loopback policy processing
- Fixed appliers exception for some chromium policies
- Fixed ntp error
- cifs_appliers, polkit_appliers changed to non-experimental

* Wed Jun 14 2023 Valery Sinelnikov <greh@altlinux.org> 0.9.12.6-alt1
- Added support for dictionaries as policy values for
  yandex_browser_applier and chromium_applier
- Extended functionality of ConfigObj to save comments ';'
- Added support for SQLAlchemy2 in storage
- Added 'cifsacl' option to mount templates

* Fri May 26 2023 Valery Sinelnikov <greh@altlinux.org> 0.9.12.5-alt1
- Fixed editing cache volume (DiskCacheSize) in Yandex browser (closes: 44621)
- Fixed access to cache files

* Sun Mar 19 2023 Evgeny Sinelnikov <sin@altlinux.org> 0.9.12.4-alt1
- Fixed the implementation of the replace action in the folder applier
- Improved file cache store() by copying to a temporary file before saving
- Added implementation of using executable bit in file copy applier
- Fixed debug message typos in the file copy applier

* Tue Feb 28 2023 Evgeny Sinelnikov <sin@altlinux.org> 0.9.12.3-alt1
- Add support for making copied files executable by paths and suffixes (extensions).
- Add support for saving comments in ini files.
- Add support for the samba-4.17 python interface (gp.gpclass instead of gpclass).

* Thu Dec 29 2022 Valery Sinelnikov <greh@altlinux.org> 0.9.12.2-alt2
- Fixed a typo in cifs_applier.py

* Thu Dec 29 2022 Evgeny Sinelnikov <sin@altlinux.org> 0.9.12.2-alt1
- Add support for creating and deleting symlinks in the user home directory for mapped
  network drives in the cifs applier
- Fix file copy applier support for deleting files with substitution

* Tue Dec 13 2022 Evgeny Sinelnikov <sin@altlinux.org> 0.9.12.1-alt1
- Update file copy applier with substitution support
- Update translations for several logs

* Mon Dec 12 2022 Evgeny Sinelnikov <sin@altlinux.org> 0.9.12-alt2
- Update release with forgotten changes

* Sun Dec 11 2022 Evgeny Sinelnikov <sin@altlinux.org> 0.9.12-alt1
- Fixed mapped drive maps for user and added support for machine
  + Added label option support
  + Fixed letter collisions and assignment as in Windows
- Replaced cifs applier mountpoints with shown gvfs directories:
  + /media/gpupdate/Drive - for system shares
  + /media/gpupdate/.Drive - for system hidden shares
  + /run/media/USERNAME/DriveUser - for user shares
  + /run/media/USERNAME/.DriveUser - for user hidden shares
- Added network shares support for user
- Fixed bug (closes: 44026) for chromium applier
- Added keylist handling when generating firefox settings (closes: 44209)
- Added a check of the need to scroll DC (scrolling DCs disabled by default!)
- Added the ability to generate rules for all polkit actions
- Added applier for Yandex.Browser

* Fri Sep 30 2022 Valery Sinelnikov <greh@altlinux.org> 0.9.11.2-alt1
- Fixed formation of the correct path for creating a user directory

* Tue Sep 27 2022 Valery Sinelnikov <greh@altlinux.org> 0.9.11.1-alt1
- Fixed merge for nodomain_backend
- Added support for complex types in chromium_applier
0  tools/parsing_chrom_admx_intvalues.py  (Normal file → Executable file)