[PATCH] split udev into pieces
On Thu, Jan 22, 2004 at 01:27:45AM +0100, Kay Sievers wrote:
> On Wed, Jan 21, 2004 at 02:38:25PM +0100, Kay Sievers wrote:
> > On Thu, Jan 15, 2004 at 01:45:10PM -0800, Greg KH wrote:
> > > On Thu, Jan 15, 2004 at 10:36:25PM +0800, Ling, Xiaofeng wrote:
> > > > Hi, Greg
> > > > I wrote a simple implementation for the two pieces
> > > > that send and receive the hotplug event,
> > > > using a message queue and a list for the out-of-order
> > > > hotplug events. It also has a timeout timer of 3 seconds.
> > > > They are now separate programs. The file nseq is the test script.
> > > > Could you have a look to see whether it is feasible?
> > > > If so, I'll continue to merge it with udev.
> > >
> > > Yes, very nice start. Please continue on.
> > >
> > > One minor comment, please stick with the kernel coding style when you
> > > are writing new code for udev.
> >
> > I took the code from Xiaofeng, cleaned the whitespace, renamed some bits,
> > tweaked the debugging, added the udev exec and created a patch for the current tree.
> >
> > It seems functional now, simply executing our current udev (dirty hack).
> > It reorders the incoming events, and if one is missing it delays the
> > execution of the following ones up to a maximum of 10 seconds.
> >
> > A test script is included, but you can't mix hotplug sequence numbers and
> > test script numbers; that would result in waiting for the missing numbers :)
>
> Hey, nobody wants to play with me?
> So here I'm chatting with myself :)
>
> This is the next version with signal handling for resetting the expected
> sequence number. I changed the behaviour of the timeout to skip all
> missing events at once and to proceed with the next event in the queue.
>
> So it's now possible to use the test script at any time, because it resets
> the daemon. If real hotplug events come in later, all missing numbers will
> be skipped after a timeout of 10 seconds and the queued events are applied.
Here is the next updated version to apply to the latest udev.
I've added infrastructure for getting the state of the IPC queue in the
sender and set the program to be exec'd by the daemon. Also, the magic key id
is replaced by the usual key generation by path/nr.
It looks promising. I use it on my machine, and connecting/disconnecting my
4-in-1 USB flash reader emits the events "randomly", but udevd is able to
reorder them and call our normal udev in the right order.
2004-01-23 11:28:57 +03:00
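The reordering scheme described above boils down to: queue incoming events keyed by their kernel sequence number, dispatch them strictly in order, and skip over missing numbers once the queued events are older than the timeout. Below is a minimal, self-contained C sketch of that idea; all names (struct event, queue_event(), EVENT_TIMEOUT_SEC) are invented for illustration and are not udevd's actual structures.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

struct event {
	unsigned long long seqnum;
	time_t queued;          /* when the event arrived */
	struct event *next;     /* simple sorted singly-linked list */
};

static struct event *queue;                 /* sorted by seqnum */
static unsigned long long expected_seqnum;  /* next number we want to exec */
#define EVENT_TIMEOUT_SEC 10

/* insert an incoming event into the queue, sorted by sequence number */
static void queue_event(unsigned long long seqnum)
{
	struct event *ev = malloc(sizeof(struct event));
	struct event **pos = &queue;

	ev->seqnum = seqnum;
	ev->queued = time(NULL);
	while (*pos && (*pos)->seqnum < seqnum)
		pos = &(*pos)->next;
	ev->next = *pos;
	*pos = ev;
}

/* exec every event that is in order, or too old to keep waiting for */
static void dispatch_events(void)
{
	while (queue) {
		struct event *ev = queue;

		if (ev->seqnum != expected_seqnum &&
		    time(NULL) - ev->queued < EVENT_TIMEOUT_SEC)
			break;  /* still hoping the missing number shows up */
		if (ev->seqnum > expected_seqnum)
			printf("timeout, skip missing %llu-%llu\n",
			       expected_seqnum, ev->seqnum - 1);
		printf("exec udev for sequence %llu\n", ev->seqnum);
		expected_seqnum = ev->seqnum + 1;
		queue = ev->next;
		free(ev);
	}
}

int main(void)
{
	queue_event(1);         /* events arrive out of order, 0 is "lost" */
	queue_event(2);
	dispatch_events();      /* nothing runs yet, 0 is still expected */
	sleep(EVENT_TIMEOUT_SEC + 1);
	dispatch_events();      /* skips 0, then execs 1 and 2 in order */
	return 0;
}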
/*
[PATCH] udevd - cleanup and better timeout handling
On Thu, Jan 29, 2004 at 04:55:11PM +0100, Kay Sievers wrote:
> On Thu, Jan 29, 2004 at 02:56:25AM +0100, Kay Sievers wrote:
> > On Wed, Jan 28, 2004 at 10:47:36PM +0100, Kay Sievers wrote:
> > > Oh, I couldn't resist trying threads.
> > > It's a multithreaded udevd that communicates through a localhost socket.
> > > The message includes a magic with the udev version, so we don't accept
> > > older udevsends.
> > >
> > > No need for locking, because we can't bind two sockets on the same address.
> > > The sender tries to connect, and if it fails it starts the daemon.
> > >
> > > We create a thread for every incoming connection, hand over the socket,
> > > sort the message into the global message queue and exit the thread.
> > > Huh, that was easy with threads :)
> > >
> > > With the addition of a message we wake up the queue manager thread and
> > > handle timeouts or move the message to the global exec list. This wakes
> > > up the exec list manager, which checks whether a process is already running
> > > for this device path.
> > > If so, the exec is delayed; otherwise we create a thread that execs udev
> > > in the background. When udev returns, we free the message and wake up
> > > the exec list manager to check whether something is pending.
> > >
> > > It is just a quick shot, because I couldn't solve the problems with fork
> > > and scheduling, and I wanted to see if I'm too stupid :)
> > > But if anybody has a better idea or more experience with I/O scheduling,
> > > we may go another way. The remaining problem is that klibc doesn't support
> > > threads.
> > >
> > > For now, we don't exec anything; it's just a sleep 3 for every exec,
> > > but you can see the queue management by watching syslog and doing:
> > >
> > > DEVPATH=/abc ACTION=add SEQNUM=0 ./udevsend /abc
>
> Next version, switched to unix domain sockets.
Next cleaned-up version. Hey, nobody wants to try it :)
Works for me. It's funny if I connect/disconnect my 4-in-1 USB flash reader
every two seconds. The 2.6 USB stack rocks! I can connect/disconnect a hub with 3
devices plugged in every second and don't run into any problem except a _very_
big udevd queue.
2004-02-01 20:12:36 +03:00
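The "bind as lock" trick from the thread above can be shown in a few lines: whoever manages to bind() the socket address is the daemon, everyone else just connects, and a failed connect() is the cue to start the daemon. Here is a rough, compilable C sketch under the assumption of a Linux abstract-namespace unix socket; the name "udevd-sketch" and the helper functions are made up, and this is not the real udevsend/udevd code.

#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCK_NAME "udevd-sketch"   /* abstract socket name, illustration only */

static socklen_t fill_addr(struct sockaddr_un *addr)
{
	memset(addr, 0, sizeof(struct sockaddr_un));
	addr->sun_family = AF_UNIX;
	/* sun_path[0] stays '\0': abstract namespace, vanishes with the socket */
	strcpy(&addr->sun_path[1], SOCK_NAME);
	return offsetof(struct sockaddr_un, sun_path) + 1 + strlen(SOCK_NAME);
}

/* daemon side: a successful bind() means we are the only instance */
static int daemon_init_socket(void)
{
	struct sockaddr_un addr;
	socklen_t addrlen = fill_addr(&addr);
	int sock = socket(AF_UNIX, SOCK_STREAM, 0);

	if (sock < 0)
		return -1;
	if (bind(sock, (struct sockaddr *) &addr, addrlen) < 0 ||
	    listen(sock, 16) < 0) {
		close(sock);
		return -1;      /* somebody else already owns the address */
	}
	return sock;
}

/* sender side: try to connect; if nobody listens, the daemon must be started */
static int sender_connect(void)
{
	struct sockaddr_un addr;
	socklen_t addrlen = fill_addr(&addr);
	int sock = socket(AF_UNIX, SOCK_STREAM, 0);

	if (sock < 0)
		return -1;
	if (connect(sock, (struct sockaddr *) &addr, addrlen) < 0) {
		close(sock);
		return -1;      /* caller would fork/exec the daemon and retry */
	}
	return sock;
}

int main(void)
{
	if (daemon_init_socket() >= 0)
		printf("bound the address, acting as the daemon\n");
	else if (sender_connect() >= 0)
		printf("connected to the running daemon\n");
	return 0;
}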
 * udevd.c - hotplug event serializer
 *
 * Copyright (C) 2004-2005 Kay Sievers <kay.sievers@vrfy.org>
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags;
> all the work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace, and a few names to match
the whole thing. Please recheck it. The test script is switched to work on
subsystem 'test' to let udev ignore it.
2004-02-07 09:21:15 +03:00
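The shape of the single-threaded design in that patch can be condensed as follows: the signal handlers only set volatile flags, and the main loop does all the work after select() on the datagram socket returns or gets interrupted. Below is a stripped-down C sketch of that pattern; handle_timeout(), the 10-second value and the other names are placeholders, not the actual udevd code.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>

static volatile sig_atomic_t timeout_pending;
static volatile sig_atomic_t child_exited;

static void sig_alrm(int signum) { timeout_pending = 1; }
static void sig_chld(int signum) { child_exited = 1; }

/* arm a one-shot timeout; SIGALRM will only raise the flag */
static void arm_timeout(int seconds)
{
	struct itimerval itv;

	memset(&itv, 0, sizeof(itv));
	itv.it_value.tv_sec = seconds;
	setitimer(ITIMER_REAL, &itv, NULL);
}

static void handle_timeout(void)
{
	printf("timeout: skip missing sequence numbers, flush the queue\n");
}

static void reap_children(void)
{
	while (waitpid(-1, NULL, WNOHANG) > 0)
		;       /* mark the matching running event as finished */
}

int main(void)
{
	int udevsendsock = socket(AF_UNIX, SOCK_DGRAM, 0);  /* stand-in fd */
	struct sigaction act;

	memset(&act, 0, sizeof(act));
	act.sa_handler = sig_alrm;
	sigaction(SIGALRM, &act, NULL);
	act.sa_handler = sig_chld;
	sigaction(SIGCHLD, &act, NULL);
	arm_timeout(10);

	for (;;) {
		fd_set readfds;

		FD_ZERO(&readfds);
		FD_SET(udevsendsock, &readfds);
		/* EINTR is fine: the signal only set a flag, handled below */
		if (select(udevsendsock + 1, &readfds, NULL, NULL, NULL) > 0 &&
		    FD_ISSET(udevsendsock, &readfds))
			printf("recv() one datagram and queue it by seqnum\n");

		if (timeout_pending) {
			timeout_pending = 0;
			handle_timeout();
		}
		if (child_exited) {
			child_exited = 0;
			reap_children();
		}
	}
	return 0;
}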
 * Copyright (C) 2004 Chris Friesen <chris_friesen@sympatico.ca>
 *
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 675 Mass Ave, Cambridge, MA 02139, USA.
 *
 */
#include <stddef.h>
#include <signal.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <dirent.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/wait.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/sysinfo.h>
#include <sys/stat.h>
# include "list.h"
2005-03-07 06:29:43 +03:00
# include "udev_libc_wrapper.h"
# include "udev.h"
[PATCH] udevd - next round of fixes
Here is the next round. We have three queues now. All incoming messages
are queued in msg_list, and if nothing is missing we move them to the
running_list and exec in the background.
When the exec comes back, it removes the message from the running_list and
frees it.
Before we exec, we check the running_list for a udev already running on
the same device path. If there is one, we move the message to the delay_list.
When the former exec comes back, we move the message to the running_list and
exec it.
The very first event is delayed to catch possible earlier sequences;
every following event is executed without delay if no sequence is missing.
The daemon doesn't exit by itself any longer, because we don't want to
delay every first exec.
I've put a $(PWD) in the Makefile for now for testing this beast. Only
the local binaries are executed, not /sbin/udev. We can change it
when we are ready for real testing.
And SIGKILL can't be caught, so I removed it from the handler :)
(A simplified sketch of this queue flow follows the log excerpt below.)
06:58:36 sig_handler: caught signal 15
06:58:36 main: using ipc queue 0x2d548
06:58:37 message is still in the ipc queue, starting daemon...
06:58:37 work: received sequence 3, expected sequence 0
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 set_timeout: set timeout in 1 seconds
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 1, expected sequence 1
06:58:37 msg_dump_queue: sequence 1 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 1, 'add', '/block/sda', 'block'
06:58:37 msg_exec: child [8038] created
06:58:37 running_moveto_queue: move sequence 1 [8038] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 2, expected sequence 2
06:58:37 msg_dump_queue: sequence 2 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 2, 'add', '/block/sdb', 'block'
06:58:37 msg_exec: child [8039] created
06:58:37 running_moveto_queue: move sequence 2 [8039] to running queue '/block/sdb'
06:58:37 msg_dump: sequence 3, 'add', '/block/sdc', 'block'
06:58:37 msg_exec: child [8040] created
06:58:37 running_moveto_queue: move sequence 3 [8040] to running queue '/block/sdc'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 4, expected sequence 4
06:58:37 msg_dump_queue: sequence 4 in queue
06:58:37 msg_dump: sequence 4, 'remove', '/block/sdc', 'block'
06:58:37 msg_exec: delay exec of sequence 4, [8040] already working on '/block/sdc'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:37 msg_exec: child [8043] created
06:58:37 running_moveto_queue: move sequence 4 [8043] to running queue '/block/sdc'
06:58:37 work: received sequence 5, expected sequence 5
06:58:37 msg_dump_queue: sequence 5 in queue
06:58:37 msg_dump: sequence 5, 'remove', '/block/sdb', 'block'
06:58:37 msg_exec: delay exec of sequence 5, [8039] already working on '/block/sdb'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:37 msg_exec: child [8044] created
06:58:37 running_moveto_queue: move sequence 5 [8044] to running queue '/block/sdb'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 8, expected sequence 6
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 6, expected sequence 6
06:58:37 msg_dump_queue: sequence 6 in queue
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 msg_dump: sequence 6, 'remove', '/block/sda', 'block'
06:58:37 msg_exec: delay exec of sequence 6, [8038] already working on '/block/sda'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:37 msg_exec: child [8047] created
06:58:37 running_moveto_queue: move sequence 6 [8047] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8038
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8039
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8040
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8043
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8044
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8047
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:39 main: using ipc queue 0x2d548
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 9, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 work: received sequence 11, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 10, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 13, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 14, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
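The three-queue flow and the per-devpath serialization described in the message above can be sketched roughly as follows, reusing the list helpers this file already includes; struct sketch_msg and the function names are invented for illustration and do not match udevd's real structures.

#include <string.h>
#include <sys/types.h>
#include "list.h"               /* the same list helpers used by this file */

struct sketch_msg {
	struct list_head node;
	unsigned long long seqnum;
	char devpath[512];
	pid_t pid;              /* child currently working on this event */
};

static LIST_HEAD(sketch_running);       /* events currently being exec'd */
static LIST_HEAD(sketch_delayed);       /* events parked behind a busy devpath */

/* is another udev already busy with this device path? */
static struct sketch_msg *running_with_devpath(const char *devpath)
{
	struct sketch_msg *msg;

	list_for_each_entry(msg, &sketch_running, node)
		if (strcmp(msg->devpath, devpath) == 0)
			return msg;
	return NULL;
}

/* called when an event is ready to run (in order, or its timeout expired) */
void try_exec(struct sketch_msg *msg)
{
	if (running_with_devpath(msg->devpath) != NULL) {
		/* same device still busy: park the event until that child exits */
		list_add_tail(&msg->node, &sketch_delayed);
		return;
	}
	/* here the real code would fork(), exec udev and store the child pid */
	list_add_tail(&msg->node, &sketch_running);
}

/* called from the SIGCHLD path when the child with this pid has finished */
void child_returned(pid_t pid)
{
	struct sketch_msg *msg, *tmp;

	list_for_each_entry(msg, &sketch_running, node)
		if (msg->pid == pid) {
			list_del(&msg->node);
			break;  /* the real code frees the message here */
		}
	/* give delayed events another chance; their devpath may be free now */
	list_for_each_entry_safe(msg, tmp, &sketch_delayed, node) {
		list_del(&msg->node);
		try_exec(msg);
	}
}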
# include "udev_version.h"
2004-11-25 04:44:38 +03:00
# include "udev_utils.h"
# include "udevd.h"
# include "logging.h"
/* global variables */
static int udevsendsock;

static pid_t sid;

static int pipefds[2];
static long startup_time;
static unsigned long long expected_seqnum = 0;
static volatile int sigchilds_waiting;
static volatile int run_msg_q;
static volatile int sig_flag;
static int run_exec_q;
static LIST_HEAD(msg_list);
static LIST_HEAD(exec_list);
static LIST_HEAD(running_list);
static void exec_queue_manager(void);
static void msg_queue_manager(void);
static void user_sighandler(void);
static void reap_sigchilds(void);
char *udev_bin;
#ifdef USE_LOG
void log_message(int level, const char *format, ...)
{
	va_list args;

	va_start(args, format);
	vsyslog(level, format, args);
	va_end(args);
}
#endif

#define msg_dump(msg) \
	dbg("msg_dump: sequence %llu, '%s', '%s', '%s'", \
	    msg->seqnum, msg->action, msg->devpath, msg->subsystem);
static void msg_dump_queue(void)
{
#ifdef DEBUG
	struct hotplug_msg *msg;
	list_for_each_entry(msg, &msg_list, node)
		dbg("sequence %llu in queue", msg->seqnum);
#endif
06:58:39 work: received sequence 11, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 10, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 13, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 14, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
}
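For illustration, here is a minimal standalone sketch of the device-path check
described in the "next round of fixes" message above. It is not the udevd code
itself; the fixed-size table and the names devpath_busy/MAX_RUNNING are made up.
It only shows the decision "exec now or move to the delay queue":

/* illustrative only: is a udev already running for this device path? */
#include <stdio.h>
#include <string.h>

#define MAX_RUNNING 8

static const char *running[MAX_RUNNING];	/* devpaths with a udev still running */

static int devpath_busy(const char *devpath)
{
	int i;

	for (i = 0; i < MAX_RUNNING; i++)
		if (running[i] && strcmp(running[i], devpath) == 0)
			return 1;
	return 0;
}

int main(void)
{
	running[0] = "/block/sda";

	/* a second event for the same device path has to go to the delay queue */
	printf("/block/sda: %s\n", devpath_busy("/block/sda") ? "delay" : "exec");
	/* a different device path can be handled right away */
	printf("/block/sdb: %s\n", devpath_busy("/block/sdb") ? "delay" : "exec");
	return 0;
}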
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags;
> all the work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace, and a few names to
match the whole thing. Please recheck it. The test script is switched to
the subsystem 'test' so that udev ignores it. (A standalone sketch of the
flag-raising signal-handler pattern follows after the next function.)
2004-02-07 09:21:15 +03:00
static void run_queue_delete(struct hotplug_msg *msg)
2004-02-02 19:00:07 +03:00
{
2005-02-24 22:13:25 +03:00
	list_del(&msg->node);
	free(msg);
2004-02-02 19:00:07 +03:00
}
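For illustration, a minimal standalone sketch of the signal strategy described
in the DGRAM/single-threaded message above. This is not the actual udevd main
loop; the flag names and the one-second timer are made up. The SIGALRM and
SIGCHLD handlers only raise volatile flags, and all real work happens in the
main loop:

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t timeout_flag;
static volatile sig_atomic_t child_flag;

static void sig_handler(int signum)
{
	/* do nothing here but raise a flag */
	if (signum == SIGALRM)
		timeout_flag = 1;
	else if (signum == SIGCHLD)
		child_flag = 1;
}

int main(void)
{
	struct itimerval itv = { .it_value = { .tv_sec = 1, .tv_usec = 0 } };

	signal(SIGALRM, sig_handler);
	signal(SIGCHLD, sig_handler);
	setitimer(ITIMER_REAL, &itv, NULL);

	while (1) {
		pause();	/* sleep until any signal arrives */
		if (timeout_flag) {
			timeout_flag = 0;
			printf("timeout: check the queue for expired events\n");
		}
		if (child_flag) {
			child_flag = 0;
			while (waitpid(-1, NULL, WNOHANG) > 0)
				printf("reaped a finished udev child\n");
		}
	}
	return 0;
}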
/* orders the message in the queue by sequence number */
static void msg_queue_insert(struct hotplug_msg *msg)
{
	struct hotplug_msg *loop_msg;
2004-04-01 11:03:46 +04:00
	struct sysinfo info;
2005-01-05 07:37:50 +03:00
	if (msg->seqnum == 0) {
		dbg("no SEQNUM, move straight to the exec queue");
2005-02-24 22:13:25 +03:00
		list_add(&msg->node, &exec_list);
2005-01-05 07:37:50 +03:00
		run_exec_q = 1;
		return;
	}
/* sort message by sequence number into list */
2005-02-24 22:13:25 +03:00
	list_for_each_entry_reverse(loop_msg, &msg_list, node) {
2004-04-01 11:03:07 +04:00
		if (loop_msg->seqnum < msg->seqnum)
			break;
2005-01-16 06:08:54 +03:00
		if (loop_msg->seqnum == msg->seqnum) {
			dbg("ignoring duplicate message seq %llu", msg->seqnum);
			return;
		}
	}
/* store timestamp of queuing */
2004-04-01 11:03:46 +04:00
	sysinfo(&info);
	msg->queue_time = info.uptime;
2005-02-24 22:13:25 +03:00
	list_add(&msg->node, &loop_msg->node);
2004-09-16 09:36:31 +04:00
	dbg("queued message seq %llu", msg->seqnum);
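For illustration, a self-contained sketch of the ordered insert above. It is
not udevd itself; it uses a simple singly linked list with made-up names, and
it walks from the head instead of using list_for_each_entry_reverse, which
ends up at the same insert position:

#include <stdio.h>
#include <stdlib.h>

struct node {
	unsigned long long seqnum;
	struct node *next;
};

static struct node *queue;	/* kept sorted by ascending seqnum */

static void queue_insert(unsigned long long seqnum)
{
	struct node **pos = &queue;
	struct node *n;

	/* find the first entry with an equal or larger sequence number */
	while (*pos && (*pos)->seqnum < seqnum)
		pos = &(*pos)->next;
	if (*pos && (*pos)->seqnum == seqnum) {
		printf("ignoring duplicate message seq %llu\n", seqnum);
		return;
	}
	n = malloc(sizeof(*n));
	if (n == NULL)
		return;
	n->seqnum = seqnum;
	n->next = *pos;
	*pos = n;
}

int main(void)
{
	unsigned long long test[] = { 3, 1, 2, 2, 5 };
	unsigned int i;
	struct node *n;

	for (i = 0; i < sizeof(test) / sizeof(test[0]); i++)
		queue_insert(test[i]);
	for (n = queue; n != NULL; n = n->next)
		printf("sequence %llu in queue\n", n->seqnum);
	return 0;
}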
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
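To make the queue handling described above concrete, here is a minimal sketch of the exec-or-delay decision. It is not the actual udevd source: struct hotplug_msg, the kernel-style list helpers, running_list and delay_list are assumed from this file and the message above, <string.h> provides strcmp(), and msg is assumed to still be linked on a queue.

#include <string.h>

/* sketch only: run the event unless a udev for the same devpath is still
 * active, otherwise park it on the delay list */
static void exec_or_delay(struct hotplug_msg *msg)
{
	struct hotplug_msg *running;

	list_for_each_entry(running, &running_list, node) {
		if (strcmp(running->devpath, msg->devpath) == 0) {
			/* same device path still being handled, delay this event */
			list_del(&msg->node);
			list_add(&msg->node, &delay_list);
			return;
		}
	}

	/* nothing conflicting is running, track the event and hand it to udev */
	list_del(&msg->node);
	list_add(&msg->node, &running_list);
	udev_run(msg);
}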
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags,
>    all the work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace and a few names to match
the whole thing. Please recheck it. Test script is switched to work on subsystem
'test' to let udev ignore it.
2004-02-07 09:21:15 +03:00
/* run msg queue manager */
2004-04-01 11:03:07 +04:00
run_msg_q = 1;
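A minimal sketch of the single-threaded pattern described above: the signal handlers only raise volatile flags, and all real work stays in the main loop. This is a plausible shape for the set_timeout() seen in the log above, not its actual source; the flag names are illustrative.

#include <signal.h>
#include <sys/time.h>

static volatile sig_atomic_t timeout_expired;
static volatile sig_atomic_t children_waiting;

static void sig_handler(int signum)
{
	switch (signum) {
	case SIGALRM:
		timeout_expired = 1;	/* queue timeout, handled in the main loop */
		break;
	case SIGCHLD:
		children_waiting = 1;	/* a udev child exited, reaped in the main loop */
		break;
	}
}

/* arm a one-shot timeout, e.g. for the oldest queued event */
static void set_timeout(int seconds)
{
	struct itimerval itv = { {0, 0}, {seconds, 0} };	/* no interval, single expiry */

	setitimer(ITIMER_REAL, &itv, NULL);
}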
[PATCH] spilt udev into pieces
On Thu, Jan 22, 2004 at 01:27:45AM +0100, Kay Sievers wrote:
> On Wed, Jan 21, 2004 at 02:38:25PM +0100, Kay Sievers wrote:
> > On Thu, Jan 15, 2004 at 01:45:10PM -0800, Greg KH wrote:
> > > On Thu, Jan 15, 2004 at 10:36:25PM +0800, Ling, Xiaofeng wrote:
> > > > Hi, Greg
> > > > I wrote a simple implementation for the two pieces
> > > > of send and receive hotplug event,
> > > > use a message queue and a list for the out of order
> > > > hotplug event. It also has a timeout timer of 3 seconds.
> > > > They are now separate programs. The file nseq is the test script.
> > > > Could you have a look to see whether it is feasible?
> > > > If so, I'll continue to merge with udev.
> > >
> > > Yes, very nice start. Please continue on.
> > >
> > > One minor comment, please stick with the kernel coding style when you
> > > are writing new code for udev.
> >
> > I took the code from Xiaofeng, cleaned the whitespace, renamed some bits,
> > tweaked the debugging, added the udev exec and created a patch for the current tree.
> >
> > It seems functional now, by simply executing our current udev (dirty hack).
> > It reorders the incoming events and if one is missing it delays the
> > execution of the following ones up to a maximum of 10 seconds.
> >
> > Test script is included, but you can't mix hotplug sequence numbers and
> > test script numbers, it will result in waiting for the missing numbers :)
>
> Hey, nobody wants to play with me?
> So here I'm chatting with myself :)
>
> This is the next version with signal handling for resetting the expected
> sequence number. I changed the behaviour of the timeout to skip all
> missing events at once and to proceed with the next event in the queue.
>
> So it's now possible to use the test script at any time, because it resets
> the daemon; if real hotplug events come in later, all missing numbers will
> be skipped after a timeout of 10 seconds and the queued events are applied.
Here is the next updated version to apply to the latest udev.
I've added infrastructure for getting the state of the IPC queue in the
sender and set the program to exec by the daemon. Also the magic key id
is replaced by the usual key generation by path/nr.
It looks promising; I use it on my machine, and my 4in1 USB-flash-reader's
connect/disconnect emits the events "randomly", but udevd is able to
reorder them and call our normal udev in the right order.
2004-01-23 11:28:57 +03:00
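As an illustration of the reordering described above, here is a sketch of keeping msg_list sorted by sequence number so events can be taken strictly in order. This is not the actual udevd code; it assumes msg_list is a kernel-style LIST_HEAD and that list_add() inserts after the given element.

/* sketch: insert an incoming message into msg_list ordered by seqnum */
static void msg_queue_insert_sorted(struct hotplug_msg *msg)
{
	struct hotplug_msg *loop_msg;

	list_for_each_entry(loop_msg, &msg_list, node) {
		if (loop_msg->seqnum > msg->seqnum) {
			/* place it right before the first larger sequence number */
			list_add(&msg->node, loop_msg->node.prev);
			return;
		}
	}

	/* largest sequence number so far, append at the tail */
	list_add(&msg->node, msg_list.prev);
}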
2005-01-16 06:08:54 +03:00
return;
2004-01-23 11:28:57 +03:00
}
[PATCH] udevd - cleanup and better timeout handling
On Thu, Jan 29, 2004 at 04:55:11PM +0100, Kay Sievers wrote:
> On Thu, Jan 29, 2004 at 02:56:25AM +0100, Kay Sievers wrote:
> > On Wed, Jan 28, 2004 at 10:47:36PM +0100, Kay Sievers wrote:
> > > Oh, couldn't resist to try threads.
> > > It's a multithreaded udevd that communicates through a localhost socket.
> > > The message includes a magic with the udev version, so we don't accept
> > > older udevsends.
> > >
> > > No need for locking, cause we can't bind two sockets on the same address.
> > > The daemon tries to connect and if it fails it starts the daemon.
> > >
> > > We create a thread for every incoming connection, hand over the socket,
> > > sort the messages in the global message queue and exit the thread.
> > > Huh, that was easy with threads :)
> > >
> > > With the addition of a message we wakeup the queue manager thread and
> > > handle timeouts or move the message to the global exec list. This wakes
> > > up the exec list manager who looks if a process is already running for this
> > > device path.
> > > If yes, the exec is delayed otherwise we create a thread that execs udev.
> > > in the background. With the return of udev we free the message and wake up
> > > the exec list manager to look if something is pending.
> > >
> > > It is just a quick shot, because I couldn't solve the problems with fork and
> > > scheduling and I wanted to see if I'm too stupid :)
> > > But if anybody with a better idea or more experience with I/O scheduling
> > > we may go another way. The remaining problem is that klibc doesn't support
> > > threads.
> > >
> > > By now, we don't exec anything, it's just a sleep 3 for every exec,
> > > but you can see the queue management by watching syslog and do:
> > >
> > > DEVPATH=/abc ACTION=add SEQNUM=0 ./udevsend /abc
>
> Next version, switched to unix domain sockets.
Next cleaned up version. Hey, nobody wants to try it :)
Works for me. It's funny if I connect/disconnect my 4in1-usb-flash-reader
every two seconds. The 2.6 usb rocks! I can connect/disconnect a hub with 3
devices plugged in every second and don't run into any problem but a _very_
big udevd queue.
2004-02-01 20:12:36 +03:00
/* forks event and removes event from run queue when finished */
2004-02-07 09:21:15 +03:00
static void udev_run(struct hotplug_msg *msg)
2004-01-23 11:28:57 +03:00
{
2004-11-28 15:56:22 +03:00
char *const argv[] = { "udev", msg->subsystem, NULL };
2004-01-23 15:01:09 +03:00
pid_t pid;
pid = fork();
switch (pid) {
case 0:
[PATCH] udev - next round of udev event order daemon
Here is the next round of udevd/udevsend:
udevsend - If the IPC message we send is not caught by a receiver, we fork
the udevd daemon to process this and the following events
udevd - We reorder the events we receive and execute our current udev for
every event. If one or more events are missing, we wait
10 seconds and then go ahead in the queue.
If the queue is empty and we don't receive any event for the next
30 seconds, the daemon exits.
The next incoming event will fork the daemon again.
config - The paths to the executables are specified in udevd.h
Now they are pointing to the current directory only.
I don't like daemons hiding secrets (and mem leaks :)) inside,
so I want to try this model. It should be enough logic to get all possible
hotplug events executed in the right order.
If no event, then no daemon! So everybody should be happy :)
Here we see:
1. the daemon fork,
2. the udev work,
3. the 10 sec timeout and the skipped events,
4. the udev work,
...,
5. and the 30 sec timeout and exit.
EVENTS:
pim:/home/kay/src/udev.kay# test/udevd_test.sh
pim:/home/kay/src/udev.kay# SEQNUM=15 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=16 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=17 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=18 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=20 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=21 ./udevsend block
LOG:
Jan 23 15:35:35 pim udev[11795]: message is still in the ipc queue, starting daemon...
Jan 23 15:35:35 pim udev[11799]: configured rule in '/etc/udev/udev.rules' at line 19 applied, 'sda' becomes '%k-flash'
Jan 23 15:35:35 pim udev[11799]: creating device node '/udev/sda-flash'
Jan 23 15:35:35 pim udev[11800]: creating device node '/udev/sdb'
Jan 23 15:35:35 pim udev[11804]: creating device node '/udev/sdc'
Jan 23 15:35:35 pim udev[11805]: removing device node '/udev/sda-flash'
Jan 23 15:35:35 pim udev[11808]: removing device node '/udev/sdb'
Jan 23 15:35:35 pim udev[11809]: removing device node '/udev/sdc'
Jan 23 15:35:45 pim udev[11797]: timeout reached, skip events 7 - 7
Jan 23 15:35:45 pim udev[11811]: creating device node '/udev/sdb'
Jan 23 15:35:45 pim udev[11812]: creating device node '/udev/sdc'
Jan 23 15:36:01 pim udev[11797]: timeout reached, skip events 10 - 14
Jan 23 15:36:01 pim udev[11814]: creating device node '/udev/sdc'
Jan 23 15:36:04 pim udev[11816]: creating device node '/udev/sdc'
Jan 23 15:36:12 pim udev[11818]: creating device node '/udev/sdc'
Jan 23 15:36:16 pim udev[11820]: creating device node '/udev/sdc'
Jan 23 15:36:38 pim udev[11797]: timeout reached, skip events 19 - 19
Jan 23 15:36:38 pim udev[11823]: creating device node '/udev/sdc'
Jan 23 15:36:38 pim udev[11824]: creating device node '/udev/sdc'
Jan 23 15:37:08 pim udev[11797]: we have nothing to do, so daemon exits...
2004-01-24 08:25:17 +03:00
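The message above describes udevsend starting the daemon itself when nothing picks the event up. Below is a rough sketch of that fallback; send_msg() and UDEVD_BIN are hypothetical stand-ins and the retry logic is simplified, so this is not the actual udevsend source.

#include <unistd.h>

/* sketch only: deliver the event, or fork/exec udevd and try again */
static int send_or_start_daemon(const struct hotplug_msg *msg)
{
	if (send_msg(msg) == 0)
		return 0;

	dbg("message is still in the ipc queue, starting daemon...");
	switch (fork()) {
	case 0:
		/* child: become the daemon that picks up this and later events */
		execl(UDEVD_BIN, "udevd", (char *) NULL);
		_exit(1);
	case -1:
		return -1;
	default:
		sleep(1);	/* give the daemon a moment to attach to the queue */
		return send_msg(msg);
	}
}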
/* child */
2004-11-12 08:18:28 +03:00
close(udevsendsock);
2004-11-23 05:30:13 +03:00
logging_close();
2005-01-17 02:53:08 +03:00
setpriority(PRIO_PROCESS, 0, UDEV_PRIORITY);
2004-11-28 15:56:22 +03:00
execve(udev_bin, argv, msg->envp);
2004-01-24 08:25:17 +03:00
dbg("exec of child failed");
2004-10-19 05:15:10 +04:00
_exit(1);
2004-01-23 15:01:09 +03:00
break;
case -1:
2004-01-24 08:25:17 +03:00
dbg("fork of child failed");
2004-02-07 09:21:15 +03:00
run_queue_delete(msg);
break;
2004-01-23 15:01:09 +03:00
default:
2004-02-07 09:21:15 +03:00
/* get SIGCHLD in main loop */
2004-09-16 09:36:31 +04:00
dbg("==> exec seq %llu [%d] working at '%s'", msg->seqnum, pid, msg->devpath);
2004-02-07 09:21:15 +03:00
msg->pid = pid;
2004-01-23 15:01:09 +03:00
}
2004-01-23 11:28:57 +03:00
}
2005-01-17 02:53:08 +03:00
static int running_processes(void)
{
	int f;
	static char buf[4096];
	int len;
	int running;
	const char *pos;

	f = open("/proc/stat", O_RDONLY);
	if (f == -1)
		return -1;

	/* leave room for the terminating '\0' */
	len = read(f, buf, sizeof(buf)-1);
	close(f);
	if (len <= 0)
		return -1;
	else
		buf[len] = '\0';

	pos = strstr(buf, "procs_running ");
	if (pos == NULL)
		return -1;

	if (sscanf(pos, "procs_running %d", &running) != 1)
		return -1;

	return running;
}
/* return the number of processes in our session, count only until limit is reached */
static int running_processes_in_session(pid_t session, int limit)
{
	DIR *dir;
	struct dirent *dent;
	int running = 0;

	dir = opendir("/proc");
	if (!dir)
		return -1;

	/* read process info from /proc */
	for (dent = readdir(dir); dent != NULL; dent = readdir(dir)) {
		int f;
		char procdir[64];
		char line[256];
		const char *pos;
		char state;
		pid_t ppid, pgrp, sess;
		int len;

		if (!isdigit(dent->d_name[0]))
			continue;

		snprintf(procdir, sizeof(procdir), "/proc/%s/stat", dent->d_name);
		procdir[sizeof(procdir)-1] = '\0';

		f = open(procdir, O_RDONLY);
		if (f == -1)
			continue;

		/* leave room for the terminating '\0' */
		len = read(f, line, sizeof(line)-1);
		close(f);
		if (len <= 0)
			continue;
		else
			line[len] = '\0';

		/* skip ugly program name, the fields continue after the closing ')' */
		pos = strrchr(line, ')');
		if (pos == NULL)
			continue;
		pos += 2;

		if (sscanf(pos, "%c %d %d %d", &state, &ppid, &pgrp, &sess) != 4)
			continue;

		/* count only processes in our session */
		if (sess != session)
			continue;

		/* count only running, not sleeping processes */
		if (state != 'R')
			continue;

		running++;
		if (limit > 0 && running >= limit)
			break;
	}
	closedir(dir);

	return running;
}
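For a quick sanity check of the two helpers above, a hypothetical standalone harness could look like the following. It is an illustration only, not part of udevd, and assumes the helpers are compiled into the same file together with the includes they need.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	printf("running processes system-wide: %d\n", running_processes());
	printf("running processes in this session: %d\n",
	       running_processes_in_session(getsid(0), 10));
	return 0;
}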
2005-01-05 07:35:24 +03:00
static int compare_devpath(const char *running, const char *waiting)
{
	int i;

2005-03-07 06:29:43 +03:00
	for (i = 0; i < PATH_SIZE; i++) {
2005-01-05 07:35:24 +03:00
		/* identical device event found */
		if (running[i] == '\0' && waiting[i] == '\0')
			return 1;

		/* parent device event found */
		if (running[i] == '\0' && waiting[i] == '/')
			return 2;

		/* child device event found */
		if (running[i] == '/' && waiting[i] == '\0')
			return 3;

		/* no matching event */
		if (running[i] != waiting[i])
			break;
	}

	return 0;
}
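To illustrate the return values of compare_devpath(), here are a few example calls with their expected results; this is an illustration only, needs <assert.h>, and is not part of udevd.

#include <assert.h>

static void compare_devpath_examples(void)
{
	assert(compare_devpath("/block/sda", "/block/sda") == 1);	/* identical device event */
	assert(compare_devpath("/block/sda", "/block/sda/sda1") == 2);	/* running parent, waiting child */
	assert(compare_devpath("/block/sda/sda1", "/block/sda") == 3);	/* running child, waiting parent */
	assert(compare_devpath("/block/sda", "/block/sdb") == 0);	/* unrelated devices */
}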
/* returns a still-running task for the same device, its parent or its physical device */
2004-02-01 20:12:36 +03:00
static struct hotplug_msg *running_with_devpath(struct hotplug_msg *msg)
2004-01-23 11:28:57 +03:00
{
2004-02-01 20:12:36 +03:00
struct hotplug_msg *loop_msg;
2005-01-05 07:35:24 +03:00
if (msg->devpath == NULL)
	return NULL;
[PATCH] udevd: serialization of the event sequence of a chain of devices
Currently udevd delays only events for the same DEVPATH.
Example of an "add" event sequence:
/block/sda
/block/sda/sda1
With this change, we make sure that the udev process handling
/block/sda has finished its work (waited for all attributes,
created the node) before we fork the udev event for /block/sda/sda1.
This way the event for sda1 can be sure, that the node for the
main device is already created (may be useful for disk labels).
It will not affect any parallel device handling; only the sequence
of a device's directory chain is serialized. The 10,000 disks
plugged in will still run as parallel events. :)
The main motivation to do this is the program execution of the
dev.d/ and hotplug.d/ directories. If we don't wait for the parent
event to exit, we can't be sure that the executed scripts are
run in the right order.
On Thu, Dec 09, 2004 at 09:18:28AM +0100, Kay Sievers wrote:
> On Wed, 2004-12-08 at 19:07 -0800, David Brownell wrote:
> > Could that argument apply to the underlying hardware, too?
> We now make sure that the sequence of events for a device
> is serialized for every device chain and the class/block
> devices which have a "device" link to a physical device are
> handled after the physical device is fully populated and
> notified to userspace. It will only work this way on kernels
> later than 2.6.10-rc1 cause it depends on the PHYSDEVPATH
> value in the hotplug environment.
2004-12-11 23:43:08 +03:00
2005-02-24 22:13:25 +03:00
list_for_each_entry(loop_msg, &running_list, node) {
2005-01-05 07:35:24 +03:00
	if (loop_msg->devpath == NULL)
2004-11-19 05:49:13 +03:00
		continue;
2005-01-05 07:35:24 +03:00
	/* return running parent/child device event */
	if (compare_devpath(loop_msg->devpath, msg->devpath) != 0)
		return loop_msg;
2004-12-11 23:43:08 +03:00
2005-01-05 07:35:24 +03:00
/* return running physical device event */
2004-12-11 23:43:08 +03:00
if (msg->physdevpath && msg->action && strcmp(msg->action, "add") == 0)
2005-01-05 07:35:24 +03:00
	if (compare_devpath(loop_msg->devpath, msg->physdevpath) != 0)
		return loop_msg;
2004-11-19 05:49:13 +03:00
}
2004-02-01 20:12:36 +03:00
return NULL;
2004-01-23 11:28:57 +03:00
}
2004-12-11 23:43:08 +03:00
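
The devpath comparison this serialization relies on can be pictured roughly like this (an illustrative sketch only; the real check is done by running_with_devpath(), whose source is not part of this fragment): an event has to wait if a udev process is still running for the same devpath or for a parent directory of it.

#include <string.h>

/* illustrative sketch: return 1 if the event for 'devpath' must wait for an
 * event that is still running on 'running_devpath', i.e. if both paths are
 * identical or running_devpath is a parent of devpath,
 * e.g. "/block/sda" blocks "/block/sda/sda1" */
static int event_must_wait(const char *devpath, const char *running_devpath)
{
	size_t len = strlen(running_devpath);

	if (strcmp(devpath, running_devpath) == 0)
		return 1;
	if (strncmp(devpath, running_devpath, len) == 0 && devpath[len] == '/')
		return 1;
	return 0;
}
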
/* exec queue management routine executes the events and serializes events in the same sequence */
static void exec_queue_manager(void)
{
[PATCH] udevd - cleanup and better timeout handling
On Thu, Jan 29, 2004 at 04:55:11PM +0100, Kay Sievers wrote:
> On Thu, Jan 29, 2004 at 02:56:25AM +0100, Kay Sievers wrote:
> > On Wed, Jan 28, 2004 at 10:47:36PM +0100, Kay Sievers wrote:
> > > Oh, couldn't resist to try threads.
> > > It's a multithreaded udevd that communicates through a localhost socket.
> > > The message includes a magic with the udev version, so we don't accept
> > > older udevsend's.
> > >
> > > No need for locking, cause we can't bind two sockets on the same address.
> > > The daemon tries to connect and if it fails it starts the daemon.
> > >
> > > We create a thread for every incoming connection, hand over the socket,
> > > sort the messages into the global message queue and exit the thread.
> > > Huh, that was easy with threads :)
> > >
> > > With the addition of a message we wakeup the queue manager thread and
> > > handle timeouts or move the message to the global exec list. This wakes
> > > up the exec list manager who looks if a process is already running for this
> > > device path.
> > > If yes, the exec is delayed, otherwise we create a thread that execs udev
> > > in the background. With the return of udev we free the message and wakeup
> > > the exec list manager to look if something is pending.
> > >
> > > It is just a quick shot, cause I couldn't solve the problems with fork and
> > > scheduling, and I wanted to see if I'm too stupid :)
> > > But if anybody with a better idea or more experience with I/O scheduling
> > > we may go another way. The remaining problem is that klibc doesn't support
> > > threads.
> > >
> > > By now, we don't exec anything, it's just a sleep 3 for every exec,
> > > but you can see the queue management by watching syslog and do:
> > >
> > > DEVPATH=/abc ACTION=add SEQNUM=0 ./udevsend /abc
>
> Next version, switched to unix domain sockets.
Next cleaned-up version. Hey, nobody wants to try it :)
Works for me. It's fun to connect/disconnect my 4in1 USB flash reader
every two seconds. The 2.6 USB stack rocks! I can connect/disconnect a hub with 3
devices plugged in every second and don't run into any problem except a _very_
big udevd queue.
2004-02-01 20:12:36 +03:00
	struct hotplug_msg *loop_msg;
	struct hotplug_msg *tmp_msg;
	struct hotplug_msg *msg;
	int running;

	running = running_processes();
	dbg("%d processes running on system", running);
	if (running < 0)
		running = THROTTLE_MAX_RUNNING_CHILDS;
	list_for_each_entry_safe(loop_msg, tmp_msg, &exec_list, node) {
		/* check running processes in our session and possibly throttle */
		if (running >= THROTTLE_MAX_RUNNING_CHILDS) {
			running = running_processes_in_session(sid, THROTTLE_MAX_RUNNING_CHILDS + 10);
			dbg("%d processes running in session", running);
			if (running >= THROTTLE_MAX_RUNNING_CHILDS) {
				dbg("delay seq %llu, cause too many processes already running",
				    loop_msg->seqnum);
				return;
			}
		}
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags; all the
>    work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace and a few names to match
the whole thing. Please recheck it. Test script is switched to work on subsystem
'test' to let udev ignore it.
2004-02-07 09:21:15 +03:00
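
The single-threaded model described above comes down to signal handlers that only raise flags, while all real work happens in the main loop. A minimal standalone sketch of that pattern (not udevd's actual code; the names and the pause() loop are illustrative):

#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t sigalrm_received;
static volatile sig_atomic_t sigchld_received;

static void sig_handler(int signum)
{
	/* only raise a flag; the main loop does the real work */
	if (signum == SIGALRM)
		sigalrm_received = 1;
	else if (signum == SIGCHLD)
		sigchld_received = 1;
}

int main(void)
{
	struct itimerval itv = {{0, 0}, {10, 0}};	/* one-shot 10 second timeout */
	struct sigaction act;
	pid_t pid;

	memset(&act, 0x00, sizeof(act));
	act.sa_handler = sig_handler;
	sigemptyset(&act.sa_mask);
	sigaction(SIGALRM, &act, NULL);
	sigaction(SIGCHLD, &act, NULL);
	setitimer(ITIMER_REAL, &itv, NULL);

	for (;;) {
		pause();	/* a real daemon would block in recvmsg() instead */

		if (sigalrm_received) {
			sigalrm_received = 0;
			/* move timed-out events from the queue to the exec list here */
		}
		if (sigchld_received) {
			sigchld_received = 0;
			/* reap finished children without blocking */
			while ((pid = waitpid(-1, NULL, WNOHANG)) > 0)
				;	/* remove pid from the running list here */
		}
	}
}
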
		msg = running_with_devpath(loop_msg);
		if (!msg) {
			/* move event to run list */
			list_move_tail(&loop_msg->node, &running_list);
			udev_run(loop_msg);
			running++;
			dbg("moved seq %llu to running list", loop_msg->seqnum);
		} else {
dbg ( " delay seq %llu (%s), cause seq %llu (%s) is still running " ,
loop_msg - > seqnum , loop_msg - > devpath , msg - > seqnum , msg - > devpath ) ;
		}
	}
}
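
running_processes() and running_processes_in_session(), used for the throttling at the top of exec_queue_manager(), are not part of this fragment. As a rough illustration of the kind of number they provide, the count of currently runnable processes can be read from the procs_running field of /proc/stat (a sketch assuming a mounted Linux /proc; error handling reduced to returning -1):

#include <stdio.h>
#include <string.h>

/* sketch: return the system-wide number of runnable processes, or -1 on error */
static int count_running_processes(void)
{
	FILE *f;
	char line[256];
	int running = -1;

	f = fopen("/proc/stat", "r");
	if (f == NULL)
		return -1;

	while (fgets(line, sizeof(line), f) != NULL) {
		if (strncmp(line, "procs_running ", 14) == 0) {
			if (sscanf(line + 14, "%d", &running) != 1)
				running = -1;
			break;
		}
	}

	fclose(f);
	return running;
}
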
static void msg_move_exec(struct hotplug_msg *msg)
{
	list_move_tail(&msg->node, &exec_list);
	run_exec_q = 1;
	expected_seqnum = msg->seqnum + 1;
	dbg("moved seq %llu to exec, next expected is %llu",
	    msg->seqnum, expected_seqnum);
}
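
Before the real msg_queue_manager() below, here is a tiny self-contained model of the policy it implements, dispatching from a queue that is assumed to be kept sorted by SEQNUM: run the head event when it is the expected one, or when it has waited longer than the timeout, which skips the missing numbers in between. Everything here (names, the plain array) is illustrative only.

#include <stdio.h>

#define TIMEOUT_SEC 10

/* toy event: only the fields the reordering policy cares about */
struct toy_event {
	unsigned long long seqnum;
	long queue_time;		/* uptime in seconds when the event was queued */
};

static unsigned long long expected;

static void toy_dispatch(struct toy_event *queue, int *count, long now)
{
	while (*count > 0 &&
	       (queue[0].seqnum == expected ||
	        now - queue[0].queue_time >= TIMEOUT_SEC)) {
		int i;

		printf("exec seq %llu\n", queue[0].seqnum);
		expected = queue[0].seqnum + 1;
		for (i = 0; i < *count - 1; i++)
			queue[i] = queue[i + 1];
		(*count)--;
	}
}

int main(void)
{
	/* seq 2 never arrives: 0 and 1 run immediately, 3 runs after the timeout */
	struct toy_event queue[] = { {0, 100}, {1, 100}, {3, 101} };
	int count = 3;

	toy_dispatch(queue, &count, 101);	/* -> exec seq 0, exec seq 1 */
	toy_dispatch(queue, &count, 112);	/* -> exec seq 3 */
	return 0;
}
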
/* msg queue management routine handles the timeouts and dispatches the events */
static void msg_queue_manager(void)
{
	struct hotplug_msg *loop_msg;
	struct hotplug_msg *tmp_msg;
	struct sysinfo info;
	long msg_age = 0;
	static int timeout = EVENT_INIT_TIMEOUT_SEC;
	static int init = 1;

	dbg("msg queue manager, next expected is %llu", expected_seqnum);
recheck:
	list_for_each_entry_safe(loop_msg, tmp_msg, &msg_list, node) {
		/* move event with expected sequence to the exec list */
		if (loop_msg->seqnum == expected_seqnum) {
			msg_move_exec(loop_msg);
			continue;
		}
[PATCH] udevd - next round of fixes
Here is the next round. We have three queues now. All incoming messages
are queued in msg_list, and if nothing is missing we move them to the
running_list and exec in the background.
When the exec comes back, it removes the message from the running_list and
frees the message.
Before we exec, we check the running_list for a udev already running on
the same device path. If there is one, we move the message to the delay_list.
When the former exec comes back, we move the message to the running_list and
exec it.
The very first event is delayed now to catch possible earlier sequences;
every following event is executed without delay if no sequence is missing.
The daemon doesn't exit by itself any longer, because we don't want to
delay every first exec.
I've put a $(PWD) in the Makefile for now for testing this beast. Only
the local binaries are executed, not /sbin/udev. We can change that
when we are ready for real testing.
And SIGKILL can't be caught, so I removed it from the handler :)
06:58:36 sig_handler: caught signal 15
06:58:36 main: using ipc queue 0x2d548
06:58:37 message is still in the ipc queue, starting daemon...
06:58:37 work: received sequence 3, expected sequence 0
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 set_timeout: set timeout in 1 seconds
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 1, expected sequence 1
06:58:37 msg_dump_queue: sequence 1 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 1, 'add', '/block/sda', 'block'
06:58:37 msg_exec: child [8038] created
06:58:37 running_moveto_queue: move sequence 1 [8038] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 2, expected sequence 2
06:58:37 msg_dump_queue: sequence 2 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 2, 'add', '/block/sdb', 'block'
06:58:37 msg_exec: child [8039] created
06:58:37 running_moveto_queue: move sequence 2 [8039] to running queue '/block/sdb'
06:58:37 msg_dump: sequence 3, 'add', '/block/sdc', 'block'
06:58:37 msg_exec: child [8040] created
06:58:37 running_moveto_queue: move sequence 3 [8040] to running queue '/block/sdc'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 4, expected sequence 4
06:58:37 msg_dump_queue: sequence 4 in queue
06:58:37 msg_dump: sequence 4, 'remove', '/block/sdc', 'block'
06:58:37 msg_exec: delay exec of sequence 4, [8040] already working on '/block/sdc'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:37 msg_exec: child [8043] created
06:58:37 running_moveto_queue: move sequence 4 [8043] to running queue '/block/sdc'
06:58:37 work: received sequence 5, expected sequence 5
06:58:37 msg_dump_queue: sequence 5 in queue
06:58:37 msg_dump: sequence 5, 'remove', '/block/sdb', 'block'
06:58:37 msg_exec: delay exec of sequence 5, [8039] already working on '/block/sdb'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:37 msg_exec: child [8044] created
06:58:37 running_moveto_queue: move sequence 5 [8044] to running queue '/block/sdb'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 8, expected sequence 6
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 6, expected sequence 6
06:58:37 msg_dump_queue: sequence 6 in queue
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 msg_dump: sequence 6, 'remove', '/block/sda', 'block'
06:58:37 msg_exec: delay exec of sequence 6, [8038] already working on '/block/sda'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:37 msg_exec: child [8047] created
06:58:37 running_moveto_queue: move sequence 6 [8047] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8038
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8039
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8040
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8043
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8044
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8047
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:39 main: using ipc queue 0x2d548
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 9, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 work: received sequence 11, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 10, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 13, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 14, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
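
Each "msg_exec: child [pid] created" line in the log corresponds to forking a child that execs udev for one event. A minimal sketch of that step (a hypothetical helper; the real msg_exec/udev_run also records the pid in the running queue, and the hotplug environment variables such as ACTION, DEVPATH and SEQNUM are assumed to be present in the environment already):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* sketch: fork a child and exec udev for one event;
 * returns the child pid, or -1 on error */
static pid_t exec_udev(const char *udev_bin, const char *subsystem)
{
	pid_t pid;

	pid = fork();
	switch (pid) {
	case 0:
		/* child */
		execl(udev_bin, "udev", subsystem, (char *) NULL);
		fprintf(stderr, "exec of %s failed\n", udev_bin);
		_exit(1);
	case -1:
		fprintf(stderr, "fork failed\n");
		return -1;
	default:
		/* parent: remember the pid, e.g. in the running queue */
		return pid;
	}
}
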
		/* see if we are in the initialization phase and wait for the very first events */
		if (init && (info.uptime - startup_time >= INIT_TIME_SEC)) {
			init = 0;
			timeout = EVENT_TIMEOUT_SEC;
			dbg("initialization phase passed, set timeout to %i seconds", EVENT_TIMEOUT_SEC);
		}
		/* move event with expired timeout to the exec list */
		sysinfo(&info);
		msg_age = info.uptime - loop_msg->queue_time;
		dbg("seq %llu is %li seconds old", loop_msg->seqnum, msg_age);
		if (msg_age >= timeout) {
			msg_move_exec(loop_msg);
			goto recheck;
		} else {
			break;
		}
	}

	msg_dump_queue();

	/* set timeout for remaining queued events */
	if (list_empty(&msg_list) == 0) {
		struct itimerval itv = {{0, 0}, {timeout - msg_age, 0}};
		dbg("next event expires in %li seconds", timeout - msg_age);
		setitimer(ITIMER_REAL, &itv, NULL);
	}
}
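
get_udevsend_msg() below reads the event together with the sender's credentials from the socket's ancillary data. As a standalone sketch of that mechanism (assuming a unix domain datagram socket on which SO_PASSCRED has already been enabled; the names are illustrative):

#define _GNU_SOURCE		/* struct ucred / SCM_CREDENTIALS on glibc */
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/socket.h>

/* sketch: receive one datagram into buf and report the sender's uid taken
 * from the SCM_CREDENTIALS control message; returns bytes read or -1 */
static ssize_t recv_with_cred(int sock, char *buf, size_t buflen, uid_t *uid)
{
	struct iovec iov = { buf, buflen };
	char cred_buf[CMSG_SPACE(sizeof(struct ucred))];
	struct msghdr msg;
	struct cmsghdr *cmsg;
	ssize_t size;

	memset(&msg, 0x00, sizeof(msg));
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cred_buf;
	msg.msg_controllen = sizeof(cred_buf);

	size = recvmsg(sock, &msg, 0);
	if (size < 0)
		return -1;

	for (cmsg = CMSG_FIRSTHDR(&msg); cmsg != NULL; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
		if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_CREDENTIALS) {
			struct ucred cred;

			memcpy(&cred, CMSG_DATA(cmsg), sizeof(cred));
			*uid = cred.uid;	/* the daemon would accept only uid 0 */
			break;
		}
	}
	return size;
}
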

/* receive the udevsend message and do some sanity checks */
static struct hotplug_msg *get_udevsend_msg(void)
{
	static struct udevsend_msg usend_msg;
	struct hotplug_msg *msg;
	int bufpos;
	int i;
	ssize_t size;
	struct msghdr smsg;
	struct cmsghdr *cmsg;
	struct iovec iov;
	struct ucred *cred;
	char cred_msg[CMSG_SPACE(sizeof(struct ucred))];
	int envbuf_size;

	memset(&usend_msg, 0x00, sizeof(struct udevsend_msg));
	iov.iov_base = &usend_msg;
	iov.iov_len = sizeof(struct udevsend_msg);

	memset(&smsg, 0x00, sizeof(struct msghdr));
	smsg.msg_iov = &iov;
	smsg.msg_iovlen = 1;
	smsg.msg_control = cred_msg;
	smsg.msg_controllen = sizeof(cred_msg);

	size = recvmsg(udevsendsock, &smsg, 0);
	if (size < 0) {
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags,
>    all the work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace and a few names to match
the whole thing. Please recheck it. Test script is switched to work on subsystem
'test' to let udev ignore it.
2004-02-07 09:21:15 +03:00
		if (errno != EINTR)
2005-01-05 07:37:50 +03:00
			dbg("unable to receive udevsend message");
		return NULL;
2004-02-01 20:12:36 +03:00
	}
2004-02-12 09:32:11 +03:00
	cmsg = CMSG_FIRSTHDR(&smsg);
	cred = (struct ucred *) CMSG_DATA(cmsg);
2004-02-12 12:23:59 +03:00
	if (cmsg == NULL || cmsg->cmsg_type != SCM_CREDENTIALS) {
		dbg("no sender credentials received, message ignored");
2005-01-05 07:37:50 +03:00
		return NULL;
2004-02-12 12:23:59 +03:00
	}
2004-02-12 09:32:11 +03:00
	if (cred->uid != 0) {
		dbg("sender uid=%i, message ignored", cred->uid);
2005-01-05 07:37:50 +03:00
		return NULL;
2004-11-06 16:30:15 +03:00
	}
	if (strncmp(usend_msg.magic, UDEV_MAGIC, sizeof(UDEV_MAGIC)) != 0) {
		dbg("message magic '%s' doesn't match, ignore it", usend_msg.magic);
2005-01-05 07:37:50 +03:00
		return NULL;
2004-02-12 09:32:11 +03:00
	}
2004-11-06 16:30:15 +03:00
	envbuf_size = size - offsetof(struct udevsend_msg, envbuf);
	dbg("envbuf_size=%i", envbuf_size);
	msg = malloc(sizeof(struct hotplug_msg) + envbuf_size);
2005-01-05 07:37:50 +03:00
	if (msg == NULL)
		return NULL;
2004-11-06 16:30:15 +03:00
	memset(msg, 0x00, sizeof(struct hotplug_msg) + envbuf_size);

	/* copy environment buffer and reconstruct envp */
	memcpy(msg->envbuf, usend_msg.envbuf, envbuf_size);
	bufpos = 0;
2004-11-23 08:14:21 +03:00
	for (i = 0; (bufpos < envbuf_size) && (i < HOTPLUG_NUM_ENVP-2); i++) {
2004-11-06 16:30:15 +03:00
		int keylen;
		char *key;

		key = &msg->envbuf[bufpos];
		keylen = strlen(key);
		msg->envp[i] = key;
		bufpos += keylen + 1;
		dbg("add '%s' to msg.envp[%i]", msg->envp[i], i);

		/* remember some keys for further processing */
		if (strncmp(key, "ACTION=", 7) == 0)
			msg->action = &key[7];
		if (strncmp(key, "DEVPATH=", 8) == 0)
			msg->devpath = &key[8];
		if (strncmp(key, "SUBSYSTEM=", 10) == 0)
			msg->subsystem = &key[10];
		if (strncmp(key, "SEQNUM=", 7) == 0)
			msg->seqnum = strtoull(&key[7], NULL, 10);
[PATCH] udevd: serialization of the event sequence of a chain of devices
Currently udevd delays only events for the same DEVPATH.
Example of an "add" event sequence:
/block/sda
/block/sda/sda1
With this change, we make sure that the udev process handling
/block/sda has finished its work (waited for all attributes,
created the node) before we fork the udev event for /block/sda/sda1.
This way the event for sda1 can be sure that the node for the
main device is already created (which may be useful for disk labels).
It will not affect any parallel device handling; only the sequence
within a device's directory chain is serialized. The 10,000 disks
plugged in will still run as parallel events. :)
The main motivation for doing this is the program execution in the
dev.d/ and hotplug.d/ directories. If we don't wait for the parent
event to exit, we can't be sure that the executed scripts are
run in the right order.
On Thu, Dec 09, 2004 at 09:18:28AM +0100, Kay Sievers wrote:
> On Wed, 2004-12-08 at 19:07 -0800, David Brownell wrote:
> > Could that argument apply to the underlying hardware, too?
> We now make sure that the sequence of events for a device
> is serialized for every device chain and the class/block
> devices which have a "device" link to a physical device are
> handled after the physical device is fully populated and
> notified to userspace. It will only work this way on kernels
> later than 2.6.10-rc1 because it depends on the PHYSDEVPATH
> value in the hotplug environment.
2004-12-11 23:43:08 +03:00
		if (strncmp(key, "PHYSDEVPATH=", 12) == 0)
			msg->physdevpath = &key[12];
2004-02-01 20:12:36 +03:00
	}
2004-12-20 10:57:31 +03:00
	msg->envp[i++] = "UDEVD_EVENT=1";
2004-11-06 16:30:15 +03:00
	msg->envp[i] = NULL;
2004-01-27 05:19:33 +03:00
2005-01-05 07:37:50 +03:00
	return msg;
2004-01-27 05:19:33 +03:00
}
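For reference, the envbuf handled above is simply the event environment flattened into a
sequence of NUL-terminated "KEY=value" strings. A minimal sketch of the sender-side packing;
fill_envbuf() is an illustrative name and not the actual udevsend code:

/* sketch only: pack "KEY=value" strings back to back into a flat buffer,
 * the counterpart of the unpacking loop above; returns the number of
 * bytes used, or -1 if the buffer would overflow */
static int fill_envbuf(char *envbuf, int buf_size, char *envp[])
{
	int bufpos = 0;
	int i;

	for (i = 0; envp[i] != NULL; i++) {
		int len = strlen(envp[i]) + 1;	/* include trailing '\0' */

		if (bufpos + len > buf_size)
			return -1;
		memcpy(&envbuf[bufpos], envp[i], len);
		bufpos += len;
	}
	return bufpos;
}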
2004-01-24 09:26:19 +03:00
2004-10-14 09:38:15 +04:00
static void asmlinkage sig_handler(int signum)
2004-01-23 11:28:57 +03:00
{
2004-04-01 11:03:07 +04:00
	int rc;
2004-06-07 13:56:47 +04:00
2004-02-01 20:12:36 +03:00
	switch (signum) {
	case SIGINT:
	case SIGTERM:
		exit(20 + signum);
		break;
2004-02-07 09:21:15 +03:00
	case SIGALRM:
2004-04-01 11:03:07 +04:00
		/* set flag, then write to pipe if needed */
		run_msg_q = 1;
		goto do_write;
2004-02-07 09:21:15 +03:00
		break;
	case SIGCHLD:
2004-04-01 11:03:07 +04:00
		/* set flag, then write to pipe if needed */
2004-10-19 15:37:30 +04:00
		sigchilds_waiting = 1;
2004-04-01 11:03:07 +04:00
		goto do_write;
2004-02-07 09:21:15 +03:00
		break;
2004-04-01 11:03:07 +04:00
	}
2004-11-05 15:16:32 +03:00
2004-04-01 11:03:07 +04:00
do_write:
	/* if pipe is empty, write to pipe to force select to return
	 * immediately when it gets called
	 */
	if (!sig_flag) {
		rc = write(pipefds[1], &signum, sizeof(signum));
2004-11-05 15:16:32 +03:00
		if (rc >= 0)
2004-04-01 11:03:07 +04:00
			sig_flag = 1;
2004-01-23 11:28:57 +03:00
	}
[PATCH] udev - next round of udev event order daemon
Here is the next round of udevd/udevsend:
udevsend - If the IPC message we send is not caught by a receiver, we fork
the udevd daemon to process this and the following events.
udevd - We reorder the events we receive and execute our current udev for
every event. If one or more events are missing, we wait
10 seconds and then go ahead in the queue.
If the queue is empty and we don't receive any event for the next
30 seconds, the daemon exits.
The next incoming event will fork the daemon again.
config - The paths to the executables are specified in udevd.h.
Now they are pointing to the current directory only.
I don't like daemons hiding secrets (and mem leaks :)) inside,
so I want to try this model. It should be enough logic to get all possible
hotplug events executed in the right order.
If no event, then no daemon! So everybody should be happy :)
Here we see:
1. the daemon fork,
2. the udev work,
3. the 10 sec timeout and the skipped events,
4. the udev work,
...,
5. and the 30 sec timeout and exit.
EVENTS:
pim:/home/kay/src/udev.kay# test/udevd_test.sh
pim:/home/kay/src/udev.kay# SEQNUM=15 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=16 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=17 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=18 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=20 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=21 ./udevsend block
LOG:
Jan 23 15:35:35 pim udev[11795]: message is still in the ipc queue, starting daemon...
Jan 23 15:35:35 pim udev[11799]: configured rule in '/etc/udev/udev.rules' at line 19 applied, 'sda' becomes '%k-flash'
Jan 23 15:35:35 pim udev[11799]: creating device node '/udev/sda-flash'
Jan 23 15:35:35 pim udev[11800]: creating device node '/udev/sdb'
Jan 23 15:35:35 pim udev[11804]: creating device node '/udev/sdc'
Jan 23 15:35:35 pim udev[11805]: removing device node '/udev/sda-flash'
Jan 23 15:35:35 pim udev[11808]: removing device node '/udev/sdb'
Jan 23 15:35:35 pim udev[11809]: removing device node '/udev/sdc'
Jan 23 15:35:45 pim udev[11797]: timeout reached, skip events 7 - 7
Jan 23 15:35:45 pim udev[11811]: creating device node '/udev/sdb'
Jan 23 15:35:45 pim udev[11812]: creating device node '/udev/sdc'
Jan 23 15:36:01 pim udev[11797]: timeout reached, skip events 10 - 14
Jan 23 15:36:01 pim udev[11814]: creating device node '/udev/sdc'
Jan 23 15:36:04 pim udev[11816]: creating device node '/udev/sdc'
Jan 23 15:36:12 pim udev[11818]: creating device node '/udev/sdc'
Jan 23 15:36:16 pim udev[11820]: creating device node '/udev/sdc'
Jan 23 15:36:38 pim udev[11797]: timeout reached, skip events 19 - 19
Jan 23 15:36:38 pim udev[11823]: creating device node '/udev/sdc'
Jan 23 15:36:38 pim udev[11824]: creating device node '/udev/sdc'
Jan 23 15:37:08 pim udev[11797]: we have nothing to do, so daemon exits...
2004-01-24 08:25:17 +03:00
}
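This handler follows the pattern described in the DGRAM conversion notes above: the signal
handlers only raise flags and poke a pipe so that select() in the main loop wakes up, and all
real work happens outside signal context. Stripped of the udevd specifics, the idea looks
roughly like this sketch; wakeup_handler, wake_pipe and work_pending are illustrative names,
not the actual code:

static volatile sig_atomic_t work_pending;
static int wake_pipe[2];	/* created with pipe(), both ends set O_NONBLOCK */

static void wakeup_handler(int signum)
{
	work_pending = 1;
	/* a single write is enough to make select() return; errors can be
	 * ignored, the flag alone is checked again in the main loop */
	write(wake_pipe[1], &signum, sizeof(signum));
}

The main loop then select()s on wake_pipe[0], drains it, and acts on work_pending, just as
udevd does with pipefds and its run_msg_q, run_exec_q and sigchilds_waiting flags.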
2004-01-23 11:28:57 +03:00
2004-02-07 09:21:15 +03:00
static void udev_done(int pid)
{
	/* find msg associated with pid and delete it */
	struct hotplug_msg *msg;
2005-02-24 22:13:25 +03:00
	list_for_each_entry(msg, &running_list, node) {
2004-02-07 09:21:15 +03:00
		if (msg->pid == pid) {
2004-09-16 09:36:31 +04:00
			dbg("<== exec seq %llu came back", msg->seqnum);
2004-02-07 09:21:15 +03:00
			run_queue_delete(msg);
2004-09-05 20:05:29 +04:00
2004-04-01 11:03:07 +04:00
			/* we want to run the exec queue manager since there may
			 * be events waiting with the devpath of the one that
			 * just finished
			 */
			run_exec_q = 1;
2004-02-07 09:21:15 +03:00
			return;
		}
	}
}
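udev_done() above removes a finished event from running_list and re-runs the exec queue
manager. The serialization rule from the commit message earlier (an event waits while its
parent devpath or its physical device is still being handled) then amounts to a check against
running_list before an event may be executed. A rough sketch; the prefix-match policy and the
helper name devpath_busy() are assumptions for illustration, not the exact udevd
implementation:

/* sketch only: return 1 if an already running event must finish first,
 * i.e. it handles the same devpath, a parent devpath, or the physical
 * device this event points to via PHYSDEVPATH */
static int devpath_busy(struct hotplug_msg *msg)
{
	struct hotplug_msg *loop_msg;

	list_for_each_entry(loop_msg, &running_list, node) {
		/* identical or parent devpath still being handled? */
		if (strncmp(loop_msg->devpath, msg->devpath,
			    strlen(loop_msg->devpath)) == 0)
			return 1;
		/* physical device of this class/block device still busy? */
		if (msg->physdevpath != NULL &&
		    strcmp(loop_msg->devpath, msg->physdevpath) == 0)
			return 1;
	}
	return 0;
}

The exec queue manager would skip such an event and retry after the blocking udev process
returns, which is why udev_done() sets run_exec_q.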
2004-10-19 15:37:30 +04:00
static void reap_sigchilds(void)
2004-04-01 11:03:07 +04:00
{
	while (1) {
2004-10-19 10:14:20 +04:00
		int pid = waitpid(-1, NULL, WNOHANG);
2004-04-01 11:03:07 +04:00
		if ((pid == -1) || (pid == 0))
			break;
		udev_done(pid);
	}
}
/* just read everything from the pipe and clear the flag;
2004-10-19 15:37:30 +04:00
 * the flag was set in the signal handler
2004-04-01 11:03:07 +04:00
 */
2004-10-19 10:14:20 +04:00
static void user_sighandler(void)
2004-04-01 11:03:07 +04:00
{
	int sig;
2005-01-16 06:39:02 +03:00
2004-04-01 11:03:07 +04:00
	while (1) {
2004-10-19 15:37:30 +04:00
		int rc = read(pipefds[0], &sig, sizeof(sig));
2004-04-01 11:03:07 +04:00
		if (rc < 0)
			break;
		sig_flag = 0;
	}
}
2005-01-16 06:08:54 +03:00
static int init_udevsend_socket(void)
2004-01-24 08:25:17 +03:00
{
2004-02-01 20:12:36 +03:00
	struct sockaddr_un saddr;
2004-02-06 11:11:24 +03:00
	socklen_t addrlen;
2004-10-19 15:37:30 +04:00
	const int feature_on = 1;
2005-01-16 06:08:54 +03:00
	int retval;

	memset(&saddr, 0x00, sizeof(saddr));
	saddr.sun_family = AF_LOCAL;
	/* use abstract namespace for socket path */
	strcpy(&saddr.sun_path[1], UDEVD_SOCK_PATH);
	addrlen = offsetof(struct sockaddr_un, sun_path) + strlen(saddr.sun_path+1) + 1;

	udevsendsock = socket(AF_LOCAL, SOCK_DGRAM, 0);
	if (udevsendsock == -1) {
		dbg("error getting socket, %s", strerror(errno));
		return -1;
	}

	/* the bind takes care of ensuring only one copy running */
	retval = bind(udevsendsock, (struct sockaddr *) &saddr, addrlen);
	if (retval < 0) {
		dbg("bind failed, %s", strerror(errno));
		close(udevsendsock);
		return -1;
	}

	/* enable receiving of the sender credentials */
	setsockopt(udevsendsock, SOL_SOCKET, SO_PASSCRED, &feature_on, sizeof(feature_on));

	return 0;
}
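For context, the udevsend side talks to this socket with the same abstract-namespace address.
Since the bind() above ensures a single daemon, a failed send also tells the sender that no
udevd is bound yet, and it can then start one, as described in the "next round" commit
message. A condensed sketch of that sending path; error handling is trimmed and
send_to_udevd() is an assumed name, not the actual udevsend code:

/* sketch only: build the same abstract AF_LOCAL address, send the
 * message as one datagram, and report failure so the caller may
 * fork/exec udevd and retry */
static int send_to_udevd(const struct udevsend_msg *usend_msg, size_t len)
{
	struct sockaddr_un saddr;
	socklen_t addrlen;
	int sock;
	int retval = 0;

	memset(&saddr, 0x00, sizeof(saddr));
	saddr.sun_family = AF_LOCAL;
	strcpy(&saddr.sun_path[1], UDEVD_SOCK_PATH);
	addrlen = offsetof(struct sockaddr_un, sun_path) + strlen(saddr.sun_path+1) + 1;

	sock = socket(AF_LOCAL, SOCK_DGRAM, 0);
	if (sock == -1)
		return -1;

	if (sendto(sock, usend_msg, len, 0, (struct sockaddr *)&saddr, addrlen) < 0)
		retval = -1;	/* ECONNREFUSED here means: no daemon bound yet */

	close(sock);
	return retval;
}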
int main(int argc, char *argv[], char *envp[])
{
	struct sysinfo info;
	int maxsockplus;
	int retval;
	int fd;
2004-02-12 09:29:15 +03:00
	struct sigaction act;
2004-04-01 11:03:07 +04:00
	fd_set readfds;
2005-01-16 07:53:29 +03:00
	const char *udevd_expected_seqnum;
2004-02-01 20:12:36 +03:00
2004-10-19 05:15:10 +04:00
	logging_init("udevd");
2004-02-17 09:31:15 +03:00
	dbg("version %s", UDEV_VERSION);
2004-02-02 19:19:41 +03:00
2004-02-12 12:23:59 +03:00
	if (getuid() != 0) {
		dbg("need to be root, exit");
2004-11-23 05:28:41 +03:00
		goto exit;
2004-02-12 12:23:59 +03:00
	}
2004-10-19 15:37:30 +04:00
2005-01-16 06:06:22 +03:00
	/* daemonize on request */
	if (argc == 2 && strcmp(argv[1], "-d") == 0) {
		pid_t pid;

		pid = fork();
		switch (pid) {
		case 0:
			dbg("daemonized fork running");
			break;
		case -1:
			dbg("fork of daemon failed");
			goto exit;
		default:
			logging_close();
			exit(0);
		}
	}
2005-01-17 02:53:08 +03:00
	/* become session leader */
	sid = setsid();
	dbg("our session is %d", sid);
2004-10-19 15:37:30 +04:00
	/* make sure we don't lock any path */
2004-10-06 11:48:10 +04:00
	chdir("/");
2004-10-19 15:37:30 +04:00
	umask(umask(077) | 022);
2005-01-17 02:53:08 +03:00
	/* set a reasonable scheduling priority for the daemon */
	setpriority(PRIO_PROCESS, 0, UDEVD_PRIORITY);
2004-10-06 11:48:10 +04:00
	/* set fds to /dev/null */
	fd = open("/dev/null", O_RDWR);
2005-01-16 06:39:02 +03:00
	if (fd >= 0) {
		dup2(fd, 0);
		dup2(fd, 1);
		dup2(fd, 2);
		if (fd > 2)
			close(fd);
	} else
2004-10-06 11:48:10 +04:00
		dbg("error opening /dev/null %s", strerror(errno));
2004-10-19 15:37:30 +04:00
2004-04-01 11:03:07 +04:00
	/* setup signal handler pipe */
2004-04-01 11:03:46 +04:00
	retval = pipe(pipefds);
	if (retval < 0) {
		dbg("error getting pipes: %s", strerror(errno));
2004-11-23 05:28:41 +03:00
		goto exit;
2004-04-01 11:03:46 +04:00
	}

	retval = fcntl(pipefds[0], F_SETFL, O_NONBLOCK);
2004-10-06 11:48:10 +04:00
	if (retval < 0) {
		dbg("error fcntl on read pipe: %s", strerror(errno));
2004-11-23 05:28:41 +03:00
		goto exit;
2004-10-06 11:48:10 +04:00
	}
	retval = fcntl(pipefds[0], F_SETFD, FD_CLOEXEC);
2005-01-16 06:39:02 +03:00
	if (retval < 0)
2004-04-01 11:03:46 +04:00
		dbg("error fcntl on read pipe: %s", strerror(errno));

	retval = fcntl(pipefds[1], F_SETFL, O_NONBLOCK);
	if (retval < 0) {
		dbg("error fcntl on write pipe: %s", strerror(errno));
2004-11-23 05:28:41 +03:00
		goto exit;
2004-04-01 11:03:46 +04:00
	}
2004-10-06 11:48:10 +04:00
	retval = fcntl(pipefds[1], F_SETFD, FD_CLOEXEC);
2005-01-16 06:39:02 +03:00
	if (retval < 0)
2004-10-06 11:48:10 +04:00
		dbg("error fcntl on write pipe: %s", strerror(errno));
2004-04-01 11:03:07 +04:00
	/* set signal handlers */
2005-02-06 02:09:34 +03:00
	memset(&act, 0x00, sizeof(struct sigaction));
2004-10-14 09:37:59 +04:00
	act.sa_handler = (void (*)(int)) sig_handler;
2004-04-01 11:03:07 +04:00
	sigemptyset(&act.sa_mask);
2004-02-12 09:29:15 +03:00
	act.sa_flags = SA_RESTART;
	sigaction(SIGINT, &act, NULL);
	sigaction(SIGTERM, &act, NULL);
	sigaction(SIGALRM, &act, NULL);
	sigaction(SIGCHLD, &act, NULL);
2004-01-23 11:28:57 +03:00
2005-01-16 06:08:54 +03:00
	if (init_udevsend_socket() < 0) {
		if (errno == EADDRINUSE)
2005-01-16 06:39:02 +03:00
			dbg("another udevd running, exit");
2005-01-16 06:08:54 +03:00
		else
			dbg("error initialising udevsend socket: %s", strerror(errno));
2004-02-01 20:12:36 +03:00
		goto exit;
	}
2004-04-17 10:58:05 +04:00
	/* possible override of udev binary, used for testing */
	udev_bin = getenv("UDEV_BIN");
	if (udev_bin != NULL)
		dbg("udev binary is set to '%s'", udev_bin);
	else
		udev_bin = UDEV_BIN;
2004-02-12 09:32:11 +03:00
2005-01-17 02:53:08 +03:00
	/* possible init of expected_seqnum value */
2005-01-16 07:53:29 +03:00
	udevd_expected_seqnum = getenv("UDEVD_EXPECTED_SEQNUM");
	if (udevd_expected_seqnum != NULL) {
		expected_seqnum = strtoull(udevd_expected_seqnum, NULL, 10);
		dbg("initialize expected_seqnum to %llu", expected_seqnum);
	}
2005-01-17 02:53:08 +03:00
	/* get current time to provide shorter timeout on startup */
2005-01-05 07:33:26 +03:00
	sysinfo(&info);
	startup_time = info.uptime;
2004-04-01 11:03:46 +04:00
	FD_ZERO(&readfds);
2004-11-12 08:18:28 +03:00
	FD_SET(udevsendsock, &readfds);
2004-04-01 11:03:46 +04:00
	FD_SET(pipefds[0], &readfds);
2004-11-12 08:18:28 +03:00
	maxsockplus = udevsendsock + 1;
2004-02-07 09:21:15 +03:00
	while (1) {
2005-01-05 07:37:50 +03:00
		struct hotplug_msg *msg;
2004-04-01 11:03:07 +04:00
		fd_set workreadfds = readfds;

		retval = select(maxsockplus, &workreadfds, NULL, NULL, NULL);
2004-04-01 11:03:46 +04:00
2004-04-01 11:03:07 +04:00
		if (retval < 0) {
2004-04-01 11:03:46 +04:00
			if (errno != EINTR)
				dbg("error in select: %s", strerror(errno));
2004-04-01 11:03:07 +04:00
			continue;
2004-02-07 09:21:15 +03:00
		}
2004-04-01 11:03:46 +04:00
2005-01-05 07:37:50 +03:00
		if (FD_ISSET(udevsendsock, &workreadfds)) {
			msg = get_udevsend_msg();
			if (msg)
				msg_queue_insert(msg);
		}
2004-04-01 11:03:46 +04:00
2004-04-01 11:03:07 +04:00
		if (FD_ISSET(pipefds[0], &workreadfds))
			user_sighandler();
2004-04-01 11:03:46 +04:00
2004-10-19 15:37:30 +04:00
		if (sigchilds_waiting) {
			sigchilds_waiting = 0;
			reap_sigchilds();
2004-04-01 11:03:07 +04:00
		}
2004-04-01 11:03:46 +04:00
2004-04-01 11:03:07 +04:00
		if (run_msg_q) {
			run_msg_q = 0;
			msg_queue_manager();
		}
2004-04-01 11:03:46 +04:00
2004-04-01 11:03:07 +04:00
		if (run_exec_q) {
2004-10-19 15:37:30 +04:00
			/* clean up running_list before calling exec_queue_manager() */
			if (sigchilds_waiting) {
				sigchilds_waiting = 0;
				reap_sigchilds();
2004-02-07 09:21:15 +03:00
			}
2004-04-01 11:03:07 +04:00
			run_exec_q = 0;
			exec_queue_manager();
		}
	}
exit:
	logging_close();
	return 1;
}