[PATCH] spilt udev into pieces
On Thu, Jan 22, 2004 at 01:27:45AM +0100, Kay Sievers wrote:
> On Wed, Jan 21, 2004 at 02:38:25PM +0100, Kay Sievers wrote:
> > On Thu, Jan 15, 2004 at 01:45:10PM -0800, Greg KH wrote:
> > > On Thu, Jan 15, 2004 at 10:36:25PM +0800, Ling, Xiaofeng wrote:
> > > > Hi, Greg
> > > > I wrote a simple implementation for the two pieces
> > > > that send and receive the hotplug event,
> > > > using a message queue and a list for the out-of-order
> > > > hotplug events. It also has a timeout timer of 3 seconds.
> > > > They are now separate programs. The file nseq is the test script.
> > > > Could you have a look to see whether it is feasible?
> > > > If so, I'll continue to merge it with udev.
> > >
> > > Yes, very nice start. Please continue on.
> > >
> > > One minor comment, please stick with the kernel coding style when you
> > > are writing new code for udev.
> >
> > I took the code from Xiaofeng, cleaned the whitespace, renamed some bits,
> > tweaked the debugging, added the udev exec and created a patch for the current tree.
> >
> > It seems functional now, simply executing our current udev (dirty hack).
> > It reorders the incoming events, and if one is missing it delays the
> > execution of the following ones up to a maximum of 10 seconds.
> >
> > A test script is included, but you can't mix hotplug sequence numbers and
> > test script numbers; that will result in waiting for the missing numbers :)
>
> Hey, nobody wants to play with me?
> So here I'm chatting with myself :)
>
> This is the next version, with signal handling for resetting the expected
> sequence number. I changed the behaviour of the timeout to skip all
> missing events at once and to proceed with the next event in the queue.
>
> So it's now possible to use the test script at any time, because it resets
> the daemon; if real hotplug events come in later, all missing numbers will
> be skipped after a timeout of 10 seconds and the queued events are applied.
Here is the next updated version to apply to the latest udev.
I've added infrastructure for getting the state of the IPC queue in the
sender, and for setting the program to be exec'd by the daemon. Also, the magic key id
is replaced by the usual key generation by path/nr.
It looks promising; I use it on my machine, and my 4in1 USB-flash-reader's
connect/disconnect emits the events "randomly", but udevd is able to
reorder them and calls our normal udev in the right order.
2004-01-23 11:28:57 +03:00
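The reordering idea described above can be sketched in a few lines of C. This is only an illustration of the principle, not udev's actual code: the list handling, the function names and the skip-on-timeout policy are simplified, and the real daemon drives the dispatch from its timeout timer rather than from a test loop.

/*
 * Minimal sketch (not the actual udevd code): events arrive with a
 * kernel-assigned SEQNUM, are kept in a list sorted by sequence number,
 * and are only dispatched once the expected number has arrived or a
 * timeout gives up on the missing ones. All names here are made up.
 */
#include <stdio.h>
#include <stdlib.h>

struct event {
	long seqnum;
	struct event *next;
};

static struct event *queue;		/* kept sorted by seqnum */
static long expected_seqnum;

/* insert an incoming event at its sorted position */
static void ev_queue_insert(long seqnum)
{
	struct event *ev = malloc(sizeof(*ev));
	struct event **pos = &queue;

	ev->seqnum = seqnum;
	while (*pos && (*pos)->seqnum < seqnum)
		pos = &(*pos)->next;
	ev->next = *pos;
	*pos = ev;
}

/* hand out events in order; if timed_out, skip over the missing numbers */
static void ev_queue_dispatch(int timed_out)
{
	while (queue) {
		struct event *ev = queue;

		if (ev->seqnum > expected_seqnum) {
			if (!timed_out)
				break;		/* wait for the missing one */
			/* give up on the missing numbers, as described above */
			expected_seqnum = ev->seqnum;
			timed_out = 0;
		}
		printf("exec udev for sequence %ld\n", ev->seqnum);
		expected_seqnum = ev->seqnum + 1;
		queue = ev->next;
		free(ev);
	}
}

int main(void)
{
	long out_of_order[] = { 2, 0, 1, 4 };

	for (unsigned i = 0; i < sizeof(out_of_order) / sizeof(long); i++) {
		ev_queue_insert(out_of_order[i]);
		ev_queue_dispatch(0);
	}
	ev_queue_dispatch(1);	/* timeout: give up on sequence 3 */
	return 0;
}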
/*
[PATCH] udevd - cleanup and better timeout handling
On Thu, Jan 29, 2004 at 04:55:11PM +0100, Kay Sievers wrote:
> On Thu, Jan 29, 2004 at 02:56:25AM +0100, Kay Sievers wrote:
> > On Wed, Jan 28, 2004 at 10:47:36PM +0100, Kay Sievers wrote:
> > > Oh, I couldn't resist trying threads.
> > > It's a multithreaded udevd that communicates through a localhost socket.
> > > The message includes a magic with the udev version, so we don't accept
> > > older udevsends.
> > >
> > > No need for locking, because we can't bind two sockets on the same address.
> > > The sender tries to connect, and if it fails it starts the daemon.
> > >
> > > We create a thread for every incoming connection, hand over the socket,
> > > sort the message into the global message queue and exit the thread.
> > > Huh, that was easy with threads :)
> > >
> > > With the addition of a message we wake up the queue manager thread to
> > > handle timeouts or move the message to the global exec list. This wakes
> > > up the exec list manager, who checks whether a process is already running for this
> > > device path.
> > > If yes, the exec is delayed; otherwise we create a thread that execs udev
> > > in the background. When udev returns, we free the message and wake up
> > > the exec list manager to check whether something is pending.
> > >
> > > It is just a quick shot, because I couldn't solve the problems with fork and
> > > scheduling and I wanted to see if I'm too stupid :)
> > > But if anybody has a better idea or more experience with I/O scheduling,
> > > we may go another way. The remaining problem is that klibc doesn't support
> > > threads.
> > >
> > > For now, we don't exec anything; it's just a sleep 3 for every exec,
> > > but you can see the queue management by watching syslog while you do:
> > >
> > > DEVPATH=/abc ACTION=add SEQNUM=0 ./udevsend /abc
>
> Next version, switched to unix domain sockets.
Next cleaned-up version. Hey, nobody wants to try it :)
Works for me. It's funny if I connect/disconnect my 4in1-usb-flash-reader
every two seconds. The 2.6 USB stack rocks! I can connect/disconnect a hub with 3
devices plugged in every second and don't run into any problem but a _very_
big udevd queue.
2004-02-01 20:12:36 +03:00
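The "no need for locking" point can be sketched with a plain unix domain socket. The socket path and program flow below are made up for this example and are not udev's actual values; the point is only that one process at a time can bind() the address, so a sender that fails to connect() knows it has to start the daemon first.

/*
 * Sketch only, not udev's real code: owning the bound address is the lock.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define EXAMPLE_SOCK_PATH "/tmp/example-udevd-socket"	/* made-up path */

static void fill_addr(struct sockaddr_un *addr)
{
	memset(addr, 0x00, sizeof(*addr));
	addr->sun_family = AF_LOCAL;
	strncpy(addr->sun_path, EXAMPLE_SOCK_PATH, sizeof(addr->sun_path) - 1);
}

/* daemon side: binding the address succeeds for one daemon only */
static int daemon_bind(void)
{
	struct sockaddr_un addr;
	int fd = socket(AF_LOCAL, SOCK_STREAM, 0);

	fill_addr(&addr);
	if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;	/* another daemon already holds the address */
	}
	listen(fd, 16);
	return fd;
}

/* sender side: probe for a running daemon */
static int sender_connect(void)
{
	struct sockaddr_un addr;
	int fd = socket(AF_LOCAL, SOCK_STREAM, 0);

	fill_addr(&addr);
	if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;	/* nobody listening: start the daemon first */
	}
	return fd;
}

int main(void)
{
	int fd = sender_connect();

	if (fd < 0) {
		printf("no daemon listening, starting one (sketch: just bind here)\n");
		fd = daemon_bind();
	}
	if (fd >= 0)
		close(fd);
	return 0;
}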
* udevd.c - hotplug event serializer
*
* Copyright (C) 2004 Kay Sievers <kay.sievers@vrfy.org>
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags,
> all the work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace and a few names to match
the whole thing. Please recheck it. The test script is switched to work on subsystem
'test' to let udev ignore it.
2004-02-07 09:21:15 +03:00
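Point 3 describes the classic single-threaded pattern where signal handlers only raise flags. A minimal sketch follows, with made-up names and an illustrative 10 second one-shot timer; a real daemon would block waiting on its socket instead of pause().

/*
 * Sketch of the single-threaded scheme described above: SIGALRM and
 * SIGCHLD handlers only raise volatile flags, and the main loop does the
 * actual work. Flag names and the timeout value are illustrative only.
 */
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t timeout_hit;
static volatile sig_atomic_t children_waiting;

static void sig_handler(int signum)
{
	switch (signum) {
	case SIGALRM:
		timeout_hit = 1;
		break;
	case SIGCHLD:
		children_waiting = 1;
		break;
	}
}

int main(void)
{
	struct itimerval itv = { .it_value = { .tv_sec = 10 } };

	signal(SIGALRM, sig_handler);
	signal(SIGCHLD, sig_handler);
	setitimer(ITIMER_REAL, &itv, NULL);	/* one-shot event timeout */

	while (!timeout_hit) {
		pause();	/* a real daemon would block in its socket receive */

		if (children_waiting) {
			children_waiting = 0;
			while (waitpid(-1, NULL, WNOHANG) > 0)
				;	/* reap finished udev children */
		}
	}
	printf("event timeout reached, skip missing sequence numbers\n");
	return 0;
}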
* Copyright (C) 2004 Chris Friesen <chris_friesen@sympatico.ca>
*
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation version 2 of the License.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*
*/
#include <stddef.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <fcntl.h>
#include "klibc_fixups.h"
#ifndef __KLIBC__
#include <sys/sysinfo.h>
#endif
# include "list.h"
# include "udev.h"
2004-03-23 09:22:20 +03:00
# include "udev_lib.h"
[PATCH] udevd - next round of fixes
Here is the next round. We have three queues now. All incoming messages
are queued in msg_list, and if nothing is missing we move them to the
running_list and exec in the background.
When the exec comes back, it removes the message from the running_list and
frees the message.
Before we exec, we check the running_list to see whether a udev is already running on
the same device path. If yes, we move the message to the delay_list. When
the former exec comes back, we move the message to the running_list and
exec it.
The very first event is now delayed to catch possibly earlier sequences;
every following event is executed without delay if no sequence is missing.
The daemon doesn't exit by itself any longer, because we don't want to
delay every first exec.
I've put a $(PWD) in the Makefile for now for testing this beast. Only
the local binaries are executed, not /sbin/udev. We can change it
if we are ready for real testing.
And SIGKILL can't be caught, so I removed it from the handler :)
06:58:36 sig_handler: caught signal 15
06:58:36 main: using ipc queue 0x2d548
06:58:37 message is still in the ipc queue, starting daemon...
06:58:37 work: received sequence 3, expected sequence 0
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 set_timeout: set timeout in 1 seconds
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 1, expected sequence 1
06:58:37 msg_dump_queue: sequence 1 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 1, 'add', '/block/sda', 'block'
06:58:37 msg_exec: child [8038] created
06:58:37 running_moveto_queue: move sequence 1 [8038] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 2, expected sequence 2
06:58:37 msg_dump_queue: sequence 2 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 2, 'add', '/block/sdb', 'block'
06:58:37 msg_exec: child [8039] created
06:58:37 running_moveto_queue: move sequence 2 [8039] to running queue '/block/sdb'
06:58:37 msg_dump: sequence 3, 'add', '/block/sdc', 'block'
06:58:37 msg_exec: child [8040] created
06:58:37 running_moveto_queue: move sequence 3 [8040] to running queue '/block/sdc'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 4, expected sequence 4
06:58:37 msg_dump_queue: sequence 4 in queue
06:58:37 msg_dump: sequence 4, 'remove', '/block/sdc', 'block'
06:58:37 msg_exec: delay exec of sequence 4, [8040] already working on '/block/sdc'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:37 msg_exec: child [8043] created
06:58:37 running_moveto_queue: move sequence 4 [8043] to running queue '/block/sdc'
06:58:37 work: received sequence 5, expected sequence 5
06:58:37 msg_dump_queue: sequence 5 in queue
06:58:37 msg_dump: sequence 5, 'remove', '/block/sdb', 'block'
06:58:37 msg_exec: delay exec of sequence 5, [8039] already working on '/block/sdb'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:37 msg_exec: child [8044] created
06:58:37 running_moveto_queue: move sequence 5 [8044] to running queue '/block/sdb'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 8, expected sequence 6
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 6, expected sequence 6
06:58:37 msg_dump_queue: sequence 6 in queue
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 msg_dump: sequence 6, 'remove', '/block/sda', 'block'
06:58:37 msg_exec: delay exec of sequence 6, [8038] already working on '/block/sda'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:37 msg_exec: child [8047] created
06:58:37 running_moveto_queue: move sequence 6 [8047] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8038
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8039
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8040
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8043
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8044
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8047
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:39 main: using ipc queue 0x2d548
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 9, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 work: received sequence 11, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 10, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 13, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 14, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
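The "delay exec ... already working on" lines in the log come from the device-path check described above. A rough sketch of that check follows, with a made-up fixed-size table and invented pids; the real daemon keeps the events in its running and delayed lists and drops the running entry again when the child exits.

/*
 * Rough sketch (not the real udevd code) of the devpath check: before
 * forking udev for an event, look whether a child is already handling
 * the same device path; if so, the event has to wait.
 */
#include <stdio.h>
#include <string.h>

#define MAX_RUNNING 8

struct running_slot {
	int pid;			/* 0 means the slot is free */
	char devpath[128];
};

static struct running_slot running[MAX_RUNNING];

/* returns the pid working on devpath, or 0 if none */
static int running_with_devpath(const char *devpath)
{
	for (int i = 0; i < MAX_RUNNING; i++)
		if (running[i].pid && strcmp(running[i].devpath, devpath) == 0)
			return running[i].pid;
	return 0;
}

static void exec_or_delay(const char *devpath, int new_pid)
{
	int pid = running_with_devpath(devpath);

	if (pid) {
		printf("delay exec, [%d] already working on '%s'\n", pid, devpath);
		return;	/* the real daemon would move the event to a delayed list */
	}
	for (int i = 0; i < MAX_RUNNING; i++) {
		if (running[i].pid == 0) {
			running[i].pid = new_pid;
			strncpy(running[i].devpath, devpath, sizeof(running[i].devpath) - 1);
			printf("move [%d] to running queue '%s'\n", new_pid, devpath);
			return;
		}
	}
}

int main(void)
{
	exec_or_delay("/block/sda", 101);
	exec_or_delay("/block/sdb", 102);
	exec_or_delay("/block/sda", 103);	/* delayed: sda is still busy */
	return 0;
}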
# include "udev_version.h"
# include "udevd.h"
# include "logging.h"
2004-04-01 11:03:07 +04:00
static int pipefds [ 2 ] ;
static int expected_seqnum = 0;
volatile static int children_waiting;
volatile static int run_msg_q;
volatile static int sig_flag;
static int run_exec_q;
static LIST_HEAD(msg_list);
static LIST_HEAD(exec_list);
static LIST_HEAD(running_list);
static void exec_queue_manager(void);
static void msg_queue_manager(void);
static void user_sighandler(void);
static void reap_kids(void);
#ifdef LOG
unsigned char logname[LOGNAME_SIZE];
void log_message(int level, const char *format, ...)
{
	va_list args;

	va_start(args, format);
	vsyslog(level, format, args);
	va_end(args);
}
#endif
static void msg_dump_queue(void)
{
#ifdef DEBUG
	struct hotplug_msg *msg;
	list_for_each_entry(msg, &msg_list, list)
		dbg("sequence %d in queue", msg->seqnum);
#endif
[PATCH] udevd - next round of fixes
Here is the next round. We have three queues now. All incoming messages
are queued in msg_list, and if nothing is missing we move them to the
running_list and exec in the background.
When the exec finishes, it removes the message from the running_list and
frees it.
Before we exec, we check the running_list for a udev already running on
the same device path. If there is one, we move the message to the delay_list;
when the earlier exec finishes, we move the delayed message to the
running_list and exec it.
The very first event is delayed now to catch possible earlier sequences;
every following event is executed without delay if no sequence is missing.
The daemon doesn't exit by itself any longer, because we don't want to
delay every first exec.
I've put a $(PWD) in the Makefile for now to test this beast. Only
the local binaries are executed, not /sbin/udev. We can change it
when we are ready for real testing.
And SIGKILL can't be caught, so I removed it from the handler :)
A minimal sketch of this queue handling follows the log below.
06:58:36 sig_handler: caught signal 15
06:58:36 main: using ipc queue 0x2d548
06:58:37 message is still in the ipc queue, starting daemon...
06:58:37 work: received sequence 3, expected sequence 0
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 set_timeout: set timeout in 1 seconds
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 1, expected sequence 1
06:58:37 msg_dump_queue: sequence 1 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 1, 'add', '/block/sda', 'block'
06:58:37 msg_exec: child [8038] created
06:58:37 running_moveto_queue: move sequence 1 [8038] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 2, expected sequence 2
06:58:37 msg_dump_queue: sequence 2 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 2, 'add', '/block/sdb', 'block'
06:58:37 msg_exec: child [8039] created
06:58:37 running_moveto_queue: move sequence 2 [8039] to running queue '/block/sdb'
06:58:37 msg_dump: sequence 3, 'add', '/block/sdc', 'block'
06:58:37 msg_exec: child [8040] created
06:58:37 running_moveto_queue: move sequence 3 [8040] to running queue '/block/sdc'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 4, expected sequence 4
06:58:37 msg_dump_queue: sequence 4 in queue
06:58:37 msg_dump: sequence 4, 'remove', '/block/sdc', 'block'
06:58:37 msg_exec: delay exec of sequence 4, [8040] already working on '/block/sdc'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:37 msg_exec: child [8043] created
06:58:37 running_moveto_queue: move sequence 4 [8043] to running queue '/block/sdc'
06:58:37 work: received sequence 5, expected sequence 5
06:58:37 msg_dump_queue: sequence 5 in queue
06:58:37 msg_dump: sequence 5, 'remove', '/block/sdb', 'block'
06:58:37 msg_exec: delay exec of sequence 5, [8039] already working on '/block/sdb'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:37 msg_exec: child [8044] created
06:58:37 running_moveto_queue: move sequence 5 [8044] to running queue '/block/sdb'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 8, expected sequence 6
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 6, expected sequence 6
06:58:37 msg_dump_queue: sequence 6 in queue
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 msg_dump: sequence 6, 'remove', '/block/sda', 'block'
06:58:37 msg_exec: delay exec of sequence 6, [8038] already working on '/block/sda'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:37 msg_exec: child [8047] created
06:58:37 running_moveto_queue: move sequence 6 [8047] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8038
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8039
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8040
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8043
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8044
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8047
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:39 main: using ipc queue 0x2d548
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 9, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 work: received sequence 11, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 10, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 13, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 14, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
}
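To make the running/delayed queue handling above concrete, here is a small self-contained C sketch of the devpath check. It only models the decision between executing an event and delaying it while the same devpath is busy; the real daemon forks and execs udev, uses the list.h helpers and the sorted msg_list, none of which is shown here. The helpers find_by_devpath(), push(), unlink_msg() and exec_finished() are invented for this example, and msg_exec() is only a simplified stand-in for the function of that name in udevd.

/* editor's sketch of the running_list/delay_list decision, not udevd code */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct hotplug_msg {
	int seqnum;
	char action[8];
	char devpath[64];
	struct hotplug_msg *next;
};

static struct hotplug_msg *running_list;	/* one udev per devpath at a time */
static struct hotplug_msg *delay_list;		/* waits until the devpath is free */

static struct hotplug_msg *find_by_devpath(struct hotplug_msg *head, const char *devpath)
{
	for (; head != NULL; head = head->next)
		if (strcmp(head->devpath, devpath) == 0)
			return head;
	return NULL;
}

static void push(struct hotplug_msg **head, struct hotplug_msg *msg)
{
	msg->next = *head;
	*head = msg;
}

static void unlink_msg(struct hotplug_msg **head, struct hotplug_msg *msg)
{
	for (; *head != NULL; head = &(*head)->next)
		if (*head == msg) {
			*head = msg->next;
			return;
		}
}

/* called for every message that leaves the sorted msg_list */
static void msg_exec(struct hotplug_msg *msg)
{
	if (find_by_devpath(running_list, msg->devpath)) {
		printf("delay exec of sequence %d, already working on '%s'\n",
		       msg->seqnum, msg->devpath);
		push(&delay_list, msg);
		return;
	}
	printf("exec sequence %d, '%s' on '%s'\n", msg->seqnum, msg->action, msg->devpath);
	push(&running_list, msg);	/* the real daemon forks udev here */
}

/* called when an exec finishes: free the message, then retry a delayed one */
static void exec_finished(struct hotplug_msg *msg)
{
	struct hotplug_msg *delayed;

	unlink_msg(&running_list, msg);
	delayed = find_by_devpath(delay_list, msg->devpath);
	free(msg);
	if (delayed) {
		unlink_msg(&delay_list, delayed);
		msg_exec(delayed);
	}
}

int main(void)
{
	struct hotplug_msg *add = calloc(1, sizeof(struct hotplug_msg));
	struct hotplug_msg *rem = calloc(1, sizeof(struct hotplug_msg));

	add->seqnum = 1;
	strcpy(add->action, "add");
	strcpy(add->devpath, "/block/sda");
	rem->seqnum = 2;
	strcpy(rem->action, "remove");
	strcpy(rem->devpath, "/block/sda");

	msg_exec(add);		/* runs immediately */
	msg_exec(rem);		/* same devpath still busy, gets delayed */
	exec_finished(add);	/* promotes the delayed remove */
	exec_finished(rem);
	return 0;
}
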
/* print one queued event: sequence number, action, devpath and subsystem */
static void msg_dump(struct hotplug_msg *msg)
{
	dbg("sequence %d, '%s', '%s', '%s'",
	    msg->seqnum, msg->action, msg->devpath, msg->subsystem);
}
static struct hotplug_msg *msg_create(void)
{
	struct hotplug_msg *new_msg;
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 10, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 13, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 14, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
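The per-devpath serialization described in the message above (check the running queue before exec, park the event on the delay queue while the same device path is still busy) can be sketched in a tiny self-contained example. The struct, the fixed-size array and dispatch() are illustrative assumptions, far simpler than the real udevd queues, which fork udev and clean up on SIGCHLD.

/* illustrative sketch only -- much simpler than the real udevd queues */
#include <stdio.h>
#include <string.h>

struct event {
	int seqnum;
	char action[8];
	char devpath[64];
};

#define MAX_RUNNING 16
static struct event running_list[MAX_RUNNING];
static int running_count;

/* is an exec still busy with this device path? */
static int devpath_busy(const char *devpath)
{
	int i;

	for (i = 0; i < running_count; i++)
		if (strcmp(running_list[i].devpath, devpath) == 0)
			return 1;
	return 0;
}

static void dispatch(const struct event *ev)
{
	if (devpath_busy(ev->devpath)) {
		/* the real udevd moves the message to the delay_list here */
		printf("delay seq %d, '%s' is still busy\n", ev->seqnum, ev->devpath);
		return;
	}
	/* the real udevd forks udev here and drops the entry when the child exits */
	running_list[running_count++] = *ev;
	printf("exec seq %d: %s %s\n", ev->seqnum, ev->action, ev->devpath);
}

int main(void)
{
	struct event add = { 1, "add", "/block/sda" };
	struct event rem = { 2, "remove", "/block/sda" };

	dispatch(&add);		/* runs immediately */
	dispatch(&rem);		/* delayed, sda is still being handled */
	return 0;
}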
new_msg = malloc(sizeof(struct hotplug_msg));
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags,
> all the work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace and a few names to match
the whole thing. Please recheck it. The test script is switched to work on subsystem
'test' to let udev ignore it.
2004-02-07 09:21:15 +03:00
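Points a) to c) above describe a common daemon pattern: the signal handlers only raise volatile flags and all the real work happens in the main loop. A minimal stand-alone sketch of that pattern follows; it is an illustration only, not udevd's actual main loop, which blocks receiving messages on its socket instead of calling pause().

/* illustrative sketch of flag-raising signal handlers, not udevd itself */
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t timeout_flag;
static volatile sig_atomic_t child_flag;

static void sig_alarm(int signum)
{
	timeout_flag = 1;	/* nothing else happens in signal context */
}

static void sig_child(int signum)
{
	child_flag = 1;
}

int main(void)
{
	struct itimerval timer = { .it_value = { .tv_sec = 5 } };

	signal(SIGALRM, sig_alarm);
	signal(SIGCHLD, sig_child);
	setitimer(ITIMER_REAL, &timer, NULL);	/* event timeout, see point a) */

	for (;;) {
		pause();	/* wait for a signal; a daemon would block in recv instead */

		if (child_flag) {
			child_flag = 0;
			/* reap finished children, as udevd does for its udev execs */
			while (waitpid(-1, NULL, WNOHANG) > 0)
				;
		}
		if (timeout_flag) {
			timeout_flag = 0;
			/* handle the event timeout: skip missing sequence numbers */
			printf("event timeout reached\n");
		}
	}
}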
if (new_msg == NULL)
dbg ( " error malloc " ) ;
return new_msg ;
}
static void run_queue_delete(struct hotplug_msg *msg)
2004-02-02 19:00:07 +03:00
{
list_del ( & msg - > list ) ;
free ( msg ) ;
2004-02-02 19:00:07 +03:00
}
/* orders the message in the queue by sequence number */
static void msg_queue_insert(struct hotplug_msg *msg)
{
struct hotplug_msg *loop_msg;
2004-04-01 11:03:46 +04:00
struct sysinfo info;
2004-04-01 11:03:07 +04:00
/* sort message by sequence number into list. events
 * will tend to come in order, so scan the list backwards
 */
list_for_each_entry_reverse(loop_msg, &msg_list, list)
if (loop_msg->seqnum < msg->seqnum)
break;
[PATCH] udevd - next round of fixes
Here is the next round. We have three queues now. All incoming messages
are queued in msg_list and if nothing is missing we move it to the
running_list and exec in the background.
If the exec comes back, it removes the message from the running_list and
frees the message.
Before we exec, we check the running_list to see whether a udev is already
running on the same device path. If so, we move the message to the delay_list.
When the earlier exec comes back, we move the message to the running_list and
exec it.
The very first event is delayed now to catch possible earlier sequences,
every following event is executed without delay if no sequence is missing.
The daemon doesn't exit by itself any longer, because we don't want to
delay every first exec.
I've put a $(PWD) for now in the Makefile for testing this beast. Only
the local binaries are executed, not the /sbin/udev. We can change it
if we are ready for real testing.
And SIGKILL can't be caught, so I removed it from the handler :)
06:58:36 sig_handler: caught signal 15
06:58:36 main: using ipc queue 0x2d548
06:58:37 message is still in the ipc queue, starting daemon...
06:58:37 work: received sequence 3, expected sequence 0
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 set_timeout: set timeout in 1 seconds
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 1, expected sequence 1
06:58:37 msg_dump_queue: sequence 1 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 1, 'add', '/block/sda', 'block'
06:58:37 msg_exec: child [8038] created
06:58:37 running_moveto_queue: move sequence 1 [8038] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 2, expected sequence 2
06:58:37 msg_dump_queue: sequence 2 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 2, 'add', '/block/sdb', 'block'
06:58:37 msg_exec: child [8039] created
06:58:37 running_moveto_queue: move sequence 2 [8039] to running queue '/block/sdb'
06:58:37 msg_dump: sequence 3, 'add', '/block/sdc', 'block'
06:58:37 msg_exec: child [8040] created
06:58:37 running_moveto_queue: move sequence 3 [8040] to running queue '/block/sdc'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 4, expected sequence 4
06:58:37 msg_dump_queue: sequence 4 in queue
06:58:37 msg_dump: sequence 4, 'remove', '/block/sdc', 'block'
06:58:37 msg_exec: delay exec of sequence 4, [8040] already working on '/block/sdc'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:37 msg_exec: child [8043] created
06:58:37 running_moveto_queue: move sequence 4 [8043] to running queue '/block/sdc'
06:58:37 work: received sequence 5, expected sequence 5
06:58:37 msg_dump_queue: sequence 5 in queue
06:58:37 msg_dump: sequence 5, 'remove', '/block/sdb', 'block'
06:58:37 msg_exec: delay exec of sequence 5, [8039] already working on '/block/sdb'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:37 msg_exec: child [8044] created
06:58:37 running_moveto_queue: move sequence 5 [8044] to running queue '/block/sdb'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 8, expected sequence 6
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 6, expected sequence 6
06:58:37 msg_dump_queue: sequence 6 in queue
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 msg_dump: sequence 6, 'remove', '/block/sda', 'block'
06:58:37 msg_exec: delay exec of sequence 6, [8038] already working on '/block/sda'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:37 msg_exec: child [8047] created
06:58:37 running_moveto_queue: move sequence 6 [8047] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8038
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8039
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8040
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8043
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8044
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8047
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:39 main: using ipc queue 0x2d548
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 9, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 work: received sequence 11, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 10, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 13, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 14, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
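To make the three-queue flow described in this mail concrete, here is a minimal,
self-contained sketch of the sequence-number gating on the incoming message list:
events are kept sorted, and nothing is handed to the exec path until the expected
sequence number has arrived. The struct layout, the list handling and the stub
printf are illustrative assumptions, not the udevd source; the devpath collision
handling is sketched separately after udev_run() further down.

/* sketch only: gate queued events on the expected sequence number */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct sketch_msg {
	int seqnum;
	char action[16];
	char devpath[64];
	struct sketch_msg *next;		/* simplified msg_list */
};

static struct sketch_msg *msg_list;
static int expected_seqnum;

/* insert sorted by sequence number; events tend to arrive in order */
static void sketch_msg_queue_insert(struct sketch_msg *msg)
{
	struct sketch_msg **pos = &msg_list;

	while (*pos && (*pos)->seqnum < msg->seqnum)
		pos = &(*pos)->next;
	msg->next = *pos;
	*pos = msg;
}

/* hand every event that is next in line over to the exec path */
static void sketch_msg_queue_manager(void)
{
	while (msg_list && msg_list->seqnum == expected_seqnum) {
		struct sketch_msg *msg = msg_list;

		msg_list = msg->next;
		expected_seqnum++;
		printf("exec seq %d: '%s' on '%s'\n",
		       msg->seqnum, msg->action, msg->devpath);
		free(msg);
	}
}

int main(void)
{
	static const int arrival[] = { 2, 0, 1 };	/* out-of-order arrival */
	unsigned int i;

	for (i = 0; i < 3; i++) {
		struct sketch_msg *msg = calloc(1, sizeof(*msg));

		msg->seqnum = arrival[i];
		strcpy(msg->action, "add");
		snprintf(msg->devpath, sizeof(msg->devpath),
			 "/block/sd%c", 'a' + arrival[i]);
		sketch_msg_queue_insert(msg);
		sketch_msg_queue_manager();	/* seq 2 waits for 0 and 1 */
	}
	return 0;
}

The real daemon additionally skips missing numbers after a timeout, which is
what the log above shows.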
/* store timestamp of queuing */
2004-04-01 11:03:46 +04:00
sysinfo(&info);
msg->queue_time = info.uptime;
list_add(&msg->list, &loop_msg->list);
dbg("queued message seq %d", msg->seqnum);
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags, all the
>    work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace and a few names to match
the whole thing. Please recheck it. The test script is switched to work on
subsystem 'test' so that udev ignores it.
2004-02-07 09:21:15 +03:00
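A minimal sketch of the signal-flag pattern this mail describes: the handlers
only raise volatile flags, setitimer() arms the timeout, and all real work
happens in the main loop. The two-second timer, the flag names and the bare
pause() loop are illustrative assumptions; the real daemon waits for incoming
datagrams instead of pausing.

/* sketch only: handlers raise flags, the main loop does the work */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t sigchld_flag;
static volatile sig_atomic_t sigalrm_flag;

static void sig_handler(int signum)
{
	/* async-signal-safe: just remember that something happened */
	if (signum == SIGCHLD)
		sigchld_flag = 1;
	else if (signum == SIGALRM)
		sigalrm_flag = 1;
}

int main(void)
{
	struct sigaction act;
	struct itimerval timer = { .it_value = { .tv_sec = 2 } };

	memset(&act, 0, sizeof(act));
	act.sa_handler = sig_handler;
	sigaction(SIGCHLD, &act, NULL);
	sigaction(SIGALRM, &act, NULL);
	setitimer(ITIMER_REAL, &timer, NULL);	/* one-shot event timeout */

	for (;;) {
		pause();	/* the real daemon blocks on its socket here */

		if (sigchld_flag) {
			sigchld_flag = 0;
			/* reap finished children, clean the running queue */
			while (waitpid(-1, NULL, WNOHANG) > 0)
				;
		}
		if (sigalrm_flag) {
			sigalrm_flag = 0;
			printf("event timeout reached, skip missing events\n");
			break;
		}
	}
	return 0;
}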
/* run msg queue manager */
2004-04-01 11:03:07 +04:00
run_msg_q = 1;
return;
}
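The uptime value stored above in msg->queue_time is what the timeout handling
compares against. A hedged sketch of that age check, assuming a 10-second limit;
the constant and the helper name are made up for illustration, only the
sysinfo()/uptime idea is taken from the code above.

#include <sys/sysinfo.h>

#define SKETCH_EVENT_TIMEOUT_SEC 10

/* sketch: has this queued event waited longer than the timeout? */
int sketch_msg_is_expired(long queue_time)
{
	struct sysinfo info;

	sysinfo(&info);
	return (info.uptime - queue_time) > SKETCH_EVENT_TIMEOUT_SEC;
}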
/* forks event and removes event from run queue when finished */
static void udev_run(struct hotplug_msg *msg)
{
2004-01-23 15:01:09 +03:00
pid_t pid;
2004-02-27 06:40:32 +03:00
char action[ACTION_SIZE];
char devpath[DEVPATH_SIZE];
2004-02-12 09:29:15 +03:00
char *env[] = { action, devpath, NULL };
2004-04-01 12:59:58 +04:00
strcpy(action, "ACTION=");
strfieldcat(action, msg->action);
strcpy(devpath, "DEVPATH=");
strfieldcat(devpath, msg->devpath);
2004-01-23 15:01:09 +03:00
pid = fork();
switch (pid) {
case 0:
[PATCH] udev - next round of udev event order daemon
Here is the next round of udevd/udevsend:
udevsend - If the IPC message we send is not caught by a receiver, we fork
the udevd daemon to process this and the following events
udevd - We reorder the events we receive and execute our current udev for
every event. If one or more events are missing, we wait
10 seconds and then go ahead in the queue.
If the queue is empty and we don't receive any event for the next
30 seconds, the daemon exits.
The next incoming event will fork the daemon again.
config - The paths to the executables are specified in udevd.h
Now they are pointing to the current directory only.
I don't like daemons hiding secrets (and mem leaks :)) inside,
so I want to try this model. It should be enough logic to get all possible
hotplug events executed in the right order.
If no event, then no daemon! So everybody should be happy :)
Here we see:
1. the daemon fork,
2. the udev work,
3. the 10 sec timeout and the skipped events,
4. the udev work,
...,
5. and the 30 sec timeout and exit.
EVENTS:
pim:/home/kay/src/udev.kay# test/udevd_test.sh
pim:/home/kay/src/udev.kay# SEQNUM=15 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=16 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=17 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=18 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=20 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=21 ./udevsend block
LOG:
Jan 23 15:35:35 pim udev[11795]: message is still in the ipc queue, starting daemon...
Jan 23 15:35:35 pim udev[11799]: configured rule in '/etc/udev/udev.rules' at line 19 applied, 'sda' becomes '%k-flash'
Jan 23 15:35:35 pim udev[11799]: creating device node '/udev/sda-flash'
Jan 23 15:35:35 pim udev[11800]: creating device node '/udev/sdb'
Jan 23 15:35:35 pim udev[11804]: creating device node '/udev/sdc'
Jan 23 15:35:35 pim udev[11805]: removing device node '/udev/sda-flash'
Jan 23 15:35:35 pim udev[11808]: removing device node '/udev/sdb'
Jan 23 15:35:35 pim udev[11809]: removing device node '/udev/sdc'
Jan 23 15:35:45 pim udev[11797]: timeout reached, skip events 7 - 7
Jan 23 15:35:45 pim udev[11811]: creating device node '/udev/sdb'
Jan 23 15:35:45 pim udev[11812]: creating device node '/udev/sdc'
Jan 23 15:36:01 pim udev[11797]: timeout reached, skip events 10 - 14
Jan 23 15:36:01 pim udev[11814]: creating device node '/udev/sdc'
Jan 23 15:36:04 pim udev[11816]: creating device node '/udev/sdc'
Jan 23 15:36:12 pim udev[11818]: creating device node '/udev/sdc'
Jan 23 15:36:16 pim udev[11820]: creating device node '/udev/sdc'
Jan 23 15:36:38 pim udev[11797]: timeout reached, skip events 19 - 19
Jan 23 15:36:38 pim udev[11823]: creating device node '/udev/sdc'
Jan 23 15:36:38 pim udev[11824]: creating device node '/udev/sdc'
Jan 23 15:37:08 pim udev[11797]: we have nothing to do, so daemon exits...
2004-01-24 08:25:17 +03:00
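For the udevsend side described in this mail, here is a hedged sketch of the
send-or-spawn fallback: queue the event, and if no daemon fetches it, fork one.
The ftok() key, the one-second grace period, the msg_qnum check and the
UDEVD_BIN path are illustrative assumptions, not the actual udevsend source.

/* sketch only: start udevd if the queued message is not picked up */
#include <stdlib.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/types.h>
#include <unistd.h>

#define UDEVD_BIN "./udevd"

struct sketch_ipc_msg {
	long mtype;
	char devpath[64];
};

void sketch_send_or_spawn(const char *devpath)
{
	struct sketch_ipc_msg msg = { .mtype = 1 };
	struct msqid_ds queue_stat;
	int msgid;

	strncpy(msg.devpath, devpath, sizeof(msg.devpath) - 1);
	msgid = msgget(ftok("/tmp", 'u'), IPC_CREAT | 0600);
	msgsnd(msgid, &msg, sizeof(msg.devpath), 0);

	sleep(1);	/* give a running daemon a moment to fetch it */
	msgctl(msgid, IPC_STAT, &queue_stat);
	if (queue_stat.msg_qnum > 0 && fork() == 0) {
		/* message is still in the ipc queue, starting daemon... */
		execl(UDEVD_BIN, "udevd", (char *) NULL);
		exit(1);
	}
}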
/* child */
2004-02-12 09:29:15 +03:00
execle(UDEV_BIN, "udev", msg->subsystem, NULL, env);
dbg("exec of child failed");
exit(1);
2004-01-23 15:01:09 +03:00
break;
case -1:
dbg("fork of child failed");
run_queue_delete(msg);
2004-04-01 11:03:07 +04:00
/* note: we never managed to run, so we had no impact on
 * running_with_devpath(), so don't bother setting run_exec_q
 */
break;
2004-01-23 15:01:09 +03:00
default:
/* get SIGCHLD in main loop */
dbg("==> exec seq %d [%d] working at '%s'", msg->seqnum, pid, msg->devpath);
msg->pid = pid;
2004-01-23 15:01:09 +03:00
}
}
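Tying udev_run() together with running_with_devpath() (declared just below),
here is a hedged sketch of the exec queue manager the mails describe: an event
is only started when no other udev is working on the same devpath, otherwise it
is parked on the delay list. The simplified types and list handling are
illustrative, not the udevd source.

/* sketch only: serialize events that target the same devpath */
#include <string.h>
#include <sys/types.h>

struct sketch_event {
	int seqnum;
	pid_t pid;				/* 0 while not yet running */
	char devpath[64];
	struct sketch_event *next;
};

static struct sketch_event *exec_list;		/* in order, ready to run */
static struct sketch_event *running_list;	/* forked, waiting for SIGCHLD */
static struct sketch_event *delay_list;		/* blocked on a busy devpath */

static struct sketch_event *sketch_running_with_devpath(const char *devpath)
{
	struct sketch_event *event;

	for (event = running_list; event; event = event->next)
		if (strcmp(event->devpath, devpath) == 0)
			return event;
	return NULL;
}

static void sketch_exec_queue_manager(void)
{
	while (exec_list) {
		struct sketch_event *event = exec_list;

		exec_list = event->next;
		if (sketch_running_with_devpath(event->devpath)) {
			event->next = delay_list;	/* busy: delay the event */
			delay_list = event;
		} else {
			event->next = running_list;	/* free: fork udev for it */
			running_list = event;
			/* event->pid = ... fork and exec of udev goes here */
		}
	}
}

When a child exits, the real daemon removes its entry from the running list and
moves any delayed event for that devpath back to the exec list.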
/* returns already running task with devpath */
static struct hotplug_msg *running_with_devpath(struct hotplug_msg *msg)
{
        struct hotplug_msg *loop_msg;

        list_for_each_entry(loop_msg, &running_list, list)
                if (strncmp(loop_msg->devpath, msg->devpath, sizeof(loop_msg->devpath)) == 0)
                        return loop_msg;

        return NULL;
}
[PATCH] convert udevsend/udevd to DGRAM and single-threaded
On Fri, Feb 06, 2004 at 01:08:24AM -0500, Chris Friesen wrote:
>
> Kay, you said "unless we can get rid of _all_ the threads or at least
> getting faster, I don't want to change it."
>
> Well how about we get rid of all the threads, *and* we get faster?
Yes, we are twice as fast now on my box :)
> This patch applies to current bk trees, and does the following:
>
> 1) Switch to DGRAM sockets rather than STREAM. This simplifies things
> as mentioned in the previous message.
>
> 2) Invalid sequence numbers are mapped to -1 rather than zero, since
> zero is a valid sequence number (I think). Also, this allows for real
> speed tests using scripts starting at a zero sequence number, since that
> is what the initial expected sequence number is.
>
> 3) Get rid of all threading. This is the biggie. Some highlights:
> a) timeout using setitimer() and SIGALRM
> b) async child death notification via SIGCHLD
> c) these two signal handlers do nothing but raise volatile flags, all the
>    work is done in the main loop
> d) locking no longer required
I cleaned up the rest of the comments, the whitespace and a few names to match
the whole thing. Please recheck it. Test script is switched to work on subsystem
'test' to let udev ignore it.
2004-02-07 09:21:15 +03:00
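
Point 3 above is the heart of the single-threaded design: the SIGALRM and SIGCHLD handlers only raise volatile flags, and all real work stays in the main loop. What follows is a minimal sketch of that pattern, not the udevd.c code itself; the flag names are invented for the sketch, and the printouts merely hint at where the daemon would call msg_queue_manager() and exec_queue_manager() (see the functions below). A real daemon would also guard against the race between checking the flags and going back to sleep, e.g. with sigsuspend().

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t sigalrm_seen;
static volatile sig_atomic_t sigchld_seen;

static void sig_handler(int signum)
{
        /* no work in the handler, just remember that the signal fired */
        if (signum == SIGALRM)
                sigalrm_seen = 1;
        else if (signum == SIGCHLD)
                sigchld_seen = 1;
}

int main(void)
{
        struct sigaction act = { .sa_handler = sig_handler };
        struct itimerval itv = { {0, 0}, {2, 0} };      /* one-shot timer, 2 seconds */

        sigemptyset(&act.sa_mask);
        sigaction(SIGALRM, &act, NULL);
        sigaction(SIGCHLD, &act, NULL);
        setitimer(ITIMER_REAL, &itv, NULL);

        for (;;) {
                pause();        /* woken by any signal */

                if (sigalrm_seen) {
                        sigalrm_seen = 0;
                        printf("timeout expired, a real udevd would run msg_queue_manager() now\n");
                        setitimer(ITIMER_REAL, &itv, NULL);     /* re-arm for still queued events */
                }
                if (sigchld_seen) {
                        sigchld_seen = 0;
                        while (waitpid(-1, NULL, WNOHANG) > 0)
                                ;       /* reap finished children */
                        printf("child exited, a real udevd would run exec_queue_manager() now\n");
                }
        }
        return 0;
}
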
/* exec queue management routine executes the events and delays events for the same devpath */
static void exec_queue_manager(void)
{
        struct hotplug_msg *loop_msg;
        struct hotplug_msg *tmp_msg;
        struct hotplug_msg *msg;

        list_for_each_entry_safe(loop_msg, tmp_msg, &exec_list, list) {
                msg = running_with_devpath(loop_msg);
                if (!msg) {
                        /* move event to run list */
                        list_move_tail(&loop_msg->list, &running_list);
                        udev_run(loop_msg);
                        dbg("moved seq %d to running list", loop_msg->seqnum);
                } else {
                        dbg("delay seq %d, cause seq %d already working on '%s'",
                            loop_msg->seqnum, msg->seqnum, msg->devpath);
                }
        }
}
static void msg_move_exec(struct hotplug_msg *msg)
{
        list_move_tail(&msg->list, &exec_list);
        run_exec_q = 1;
        expected_seqnum = msg->seqnum + 1;
        dbg("moved seq %d to exec, next expected is %d",
            msg->seqnum, expected_seqnum);
}
/* msg queue management routine handles the timeouts and dispatches the events */
static void msg_queue_manager(void)
{
        struct hotplug_msg *loop_msg;
        struct hotplug_msg *tmp_msg;
        struct sysinfo info;
        long msg_age = 0;

dbg ( " msg queue manager, next expected is %d " , expected_seqnum ) ;
2004-01-27 05:19:33 +03:00
recheck :
        list_for_each_entry_safe(loop_msg, tmp_msg, &msg_list, list) {
                /* move event with expected sequence to the exec list */
                if (loop_msg->seqnum == expected_seqnum) {
                        msg_move_exec(loop_msg);
                        continue;
                }

[PATCH] udevd - next round of fixes
Here is the next round. We have three queues now. All incoming messages
are queued in msg_list and if nothing is missing we move it to the
running_list and exec in the background.
If the exec comes back, it removes the message from the running_list and
frees the message.
Before we exec, we check the running_list to see if there is a udev running on
the same device path. If yes, we move the message to the delay_list. If
the former exec comes back, we move the message to the running_list and
exec it.
The very first event is delayed now to catch possible earlier sequences,
every following event is executed without delay if no sequence is missing.
The daemon doesn't exit by itself any longer, cause we don't want to
delay every first exec.
I've put a $(PWD) for now in the Makefile for testing this beast. Only
the local binaries are executed, not the /sbin/udev. We can change it
if we are ready for real testing.
And SIGKILL can't be caught, so I removed it from the handler :)
06:58:36 sig_handler: caught signal 15
06:58:36 main: using ipc queue 0x2d548
06:58:37 message is still in the ipc queue, starting daemon...
06:58:37 work: received sequence 3, expected sequence 0
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 set_timeout: set timeout in 1 seconds
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 1, expected sequence 1
06:58:37 msg_dump_queue: sequence 1 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 1, 'add', '/block/sda', 'block'
06:58:37 msg_exec: child [8038] created
06:58:37 running_moveto_queue: move sequence 1 [8038] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 2, expected sequence 2
06:58:37 msg_dump_queue: sequence 2 in queue
06:58:37 msg_dump_queue: sequence 3 in queue
06:58:37 msg_dump: sequence 2, 'add', '/block/sdb', 'block'
06:58:37 msg_exec: child [8039] created
06:58:37 running_moveto_queue: move sequence 2 [8039] to running queue '/block/sdb'
06:58:37 msg_dump: sequence 3, 'add', '/block/sdc', 'block'
06:58:37 msg_exec: child [8040] created
06:58:37 running_moveto_queue: move sequence 3 [8040] to running queue '/block/sdc'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 4, expected sequence 4
06:58:37 msg_dump_queue: sequence 4 in queue
06:58:37 msg_dump: sequence 4, 'remove', '/block/sdc', 'block'
06:58:37 msg_exec: delay exec of sequence 4, [8040] already working on '/block/sdc'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:37 msg_exec: child [8043] created
06:58:37 running_moveto_queue: move sequence 4 [8043] to running queue '/block/sdc'
06:58:37 work: received sequence 5, expected sequence 5
06:58:37 msg_dump_queue: sequence 5 in queue
06:58:37 msg_dump: sequence 5, 'remove', '/block/sdb', 'block'
06:58:37 msg_exec: delay exec of sequence 5, [8039] already working on '/block/sdb'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:37 msg_exec: child [8044] created
06:58:37 running_moveto_queue: move sequence 5 [8044] to running queue '/block/sdb'
06:58:37 main: using ipc queue 0x2d548
06:58:37 main: using ipc queue 0x2d548
06:58:37 work: received sequence 8, expected sequence 6
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 set_timeout: set timeout in 5 seconds
06:58:37 work: received sequence 6, expected sequence 6
06:58:37 msg_dump_queue: sequence 6 in queue
06:58:37 msg_dump_queue: sequence 8 in queue
06:58:37 msg_dump: sequence 6, 'remove', '/block/sda', 'block'
06:58:37 msg_exec: delay exec of sequence 6, [8038] already working on '/block/sda'
06:58:37 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:37 msg_exec: child [8047] created
06:58:37 running_moveto_queue: move sequence 6 [8047] to running queue '/block/sda'
06:58:37 set_timeout: set timeout in 5 seconds
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8038
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8039
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8040
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8043
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8044
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:38 sig_handler: caught signal 17
06:58:38 sig_handler: exec finished, pid 8047
06:58:38 set_timeout: set timeout in 4 seconds
06:58:38 msg_dump_queue: sequence 8 in queue
06:58:39 main: using ipc queue 0x2d548
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 9, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 work: received sequence 11, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 10, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 13, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 14, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:39 main: using ipc queue 0x2d548
06:58:39 work: received sequence 15, expected sequence 7
06:58:39 msg_dump_queue: sequence 8 in queue
06:58:39 msg_dump_queue: sequence 9 in queue
06:58:39 msg_dump_queue: sequence 10 in queue
06:58:39 msg_dump_queue: sequence 11 in queue
06:58:39 msg_dump_queue: sequence 13 in queue
06:58:39 msg_dump_queue: sequence 14 in queue
06:58:39 msg_dump_queue: sequence 15 in queue
06:58:39 set_timeout: set timeout in 3 seconds
06:58:41 main: using ipc queue 0x2d548
06:58:41 work: received sequence 12, expected sequence 7
06:58:41 msg_dump_queue: sequence 8 in queue
06:58:41 msg_dump_queue: sequence 9 in queue
06:58:41 msg_dump_queue: sequence 10 in queue
06:58:41 msg_dump_queue: sequence 11 in queue
06:58:41 msg_dump_queue: sequence 12 in queue
06:58:41 msg_dump_queue: sequence 13 in queue
06:58:41 msg_dump_queue: sequence 14 in queue
06:58:41 msg_dump_queue: sequence 15 in queue
06:58:41 set_timeout: set timeout in 1 seconds
06:58:42 sig_handler: caught signal 14
06:58:42 sig_handler: event timeout reached
06:58:42 event 8, age 5 seconds, skip event 7-7
06:58:42 msg_dump: sequence 8, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: child [8057] created
06:58:42 running_moveto_queue: move sequence 8 [8057] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 9, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: child [8058] created
06:58:42 running_moveto_queue: move sequence 9 [8058] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 10, 'remove', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 10, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8059] created
06:58:42 running_moveto_queue: move sequence 10 [8059] to running queue '/block/sdc'
06:58:42 msg_dump: sequence 11, 'remove', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 11, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8060] created
06:58:42 running_moveto_queue: move sequence 11 [8060] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 12, 'remove', '/block/sda', 'block'
06:58:42 msg_exec: child [8061] created
06:58:42 running_moveto_queue: move sequence 12 [8061] to running queue '/block/sda'
06:58:42 msg_dump: sequence 13, 'add', '/block/sda', 'block'
06:58:42 msg_exec: delay exec of sequence 13, [8061] already working on '/block/sda'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sda'
06:58:42 msg_exec: child [8062] created
06:58:42 running_moveto_queue: move sequence 13 [8062] to running queue '/block/sda'
06:58:42 msg_dump: sequence 14, 'add', '/block/sdb', 'block'
06:58:42 msg_exec: delay exec of sequence 14, [8057] already working on '/block/sdb'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdb'
06:58:42 msg_exec: child [8063] created
06:58:42 running_moveto_queue: move sequence 14 [8063] to running queue '/block/sdb'
06:58:42 msg_dump: sequence 15, 'add', '/block/sdc', 'block'
06:58:42 msg_exec: delay exec of sequence 15, [8058] already working on '/block/sdc'
06:58:42 delayed_moveto_queue: move event to delayed queue '/block/sdc'
06:58:42 msg_exec: child [8064] created
06:58:42 running_moveto_queue: move sequence 15 [8064] to running queue '/block/sdc'
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8057
06:58:43 sig_handler: exec finished, pid 8058
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8059
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8060
06:58:43 sig_handler: exec finished, pid 8061
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8062
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8063
06:58:43 sig_handler: caught signal 17
06:58:43 sig_handler: exec finished, pid 8064
2004-01-28 05:57:36 +03:00
                /* move event with expired timeout to the exec list */
                sysinfo(&info);
                msg_age = info.uptime - loop_msg->queue_time;
                dbg("seq %d is %li seconds old", loop_msg->seqnum, msg_age);
                if (msg_age > EVENT_TIMEOUT_SEC-1) {
                        msg_move_exec(loop_msg);
                        goto recheck;
                } else {
                        break;
                }
        }
        msg_dump_queue();

        /* set timeout for remaining queued events */
        if (list_empty(&msg_list) == 0) {
                struct itimerval itv = {{0, 0}, {EVENT_TIMEOUT_SEC - msg_age, 0}};

                dbg("next event expires in %li seconds", EVENT_TIMEOUT_SEC - msg_age);
setitimer(ITIMER_REAL, &itv, 0);
}
}
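/* Note added for illustration, not part of the original source: the
 * setitimer() call above arms a one-shot ITIMER_REAL timer for the
 * remaining lifetime of the oldest queued event. Assuming EVENT_TIMEOUT_SEC
 * is 10 (the 10 second timeout mentioned in the changelogs), an event that
 * has already waited 4 seconds re-arms the timer to 10 - 4 = 6 seconds;
 * when it fires, SIGALRM wakes the main loop, which runs the queue manager
 * again and either dispatches or skips the still-missing sequence numbers.
 */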
/* receive the msg, do some basic sanity checks, and queue it */
static void handle_msg(int sock)
{
struct hotplug_msg *msg;
int retval;
struct msghdr smsg;
struct cmsghdr *cmsg;
struct iovec iov;
struct ucred *cred;
char cred_msg[CMSG_SPACE(sizeof(struct ucred))];
msg = msg_create();
if (msg == NULL) {
dbg("unable to store message");
return;
}
iov.iov_base = msg;
iov.iov_len = sizeof(struct hotplug_msg);
memset(&smsg, 0x00, sizeof(struct msghdr));
smsg.msg_iov = &iov;
smsg.msg_iovlen = 1;
smsg.msg_control = cred_msg;
smsg.msg_controllen = sizeof(cred_msg);
retval = recvmsg(sock, &smsg, 0);
if (retval < 0) {
if (errno != EINTR)
dbg("unable to receive message");
return;
}
cmsg = CMSG_FIRSTHDR(&smsg);
cred = (struct ucred *) CMSG_DATA(cmsg);
if (cmsg == NULL || cmsg->cmsg_type != SCM_CREDENTIALS) {
dbg("no sender credentials received, message ignored");
goto skip;
}
if (cred->uid != 0) {
dbg("sender uid=%i, message ignored", cred->uid);
goto skip;
}
if (strncmp(msg->magic, UDEV_MAGIC, sizeof(UDEV_MAGIC)) != 0) {
dbg("message magic '%s' doesn't match, ignore it", msg->magic);
goto skip;
}
/* if no seqnum is given, we move straight to exec queue */
if (msg->seqnum == -1) {
list_add(&msg->list, &exec_list);
run_exec_q = 1;
} else {
msg_queue_insert(msg);
}
return;
skip:
free(msg);
return;
}
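/* Note added for illustration, not part of the original source: the
 * SCM_CREDENTIALS check in handle_msg() only yields sender credentials if
 * the receiving socket was created with credential passing enabled,
 * roughly as sketched below; the socket name and the bind() details are
 * assumptions and not shown here.
 *
 *	int sock = socket(AF_UNIX, SOCK_DGRAM, 0);
 *	const int feature_on = 1;
 *	setsockopt(sock, SOL_SOCKET, SO_PASSCRED, &feature_on, sizeof(feature_on));
 */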
static void sig_handler(int signum)
{
int rc;
switch (signum) {
case SIGINT:
case SIGTERM:
exit(20 + signum);
break;
case SIGALRM:
/* set flag, then write to pipe if needed */
run_msg_q = 1;
goto do_write;
break;
case SIGCHLD:
/* set flag, then write to pipe if needed */
children_waiting = 1;
goto do_write;
break;
default:
dbg("unhandled signal");
return;
}
do_write:
/* if pipe is empty, write to pipe to force select to return
* immediately when it gets called
*/
if (!sig_flag) {
rc = write(pipefds[1], &signum, sizeof(signum));
if (rc < 0)
dbg("unable to write to pipe");
else
sig_flag = 1;
}
}
static void udev_done(int pid)
{
/* find msg associated with pid and delete it */
struct hotplug_msg *msg;
list_for_each_entry(msg, &running_list, list) {
if (msg->pid == pid) {
dbg("<== exec seq %d came back", msg->seqnum);
run_queue_delete(msg);
/* we want to run the exec queue manager since there may
* be events waiting with the devpath of the one that
* just finished
*/
run_exec_q = 1;
return;
}
}
}
static void reap_kids()
{
/* reap all dead children */
while (1) {
int pid = waitpid(-1, 0, WNOHANG);
if ((pid == -1) || (pid == 0))
break;
udev_done(pid);
}
}
/* just read everything from the pipe and clear the flag,
* the useful flags were set in the signal handler
*/
static void user_sighandler()
{
int sig;
while (1) {
int rc = read(pipefds[0], &sig, sizeof(sig));
if (rc < 0)
break;
sig_flag = 0;
}
}
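/* Note added for illustration, not part of the original source: together,
 * sig_handler() and user_sighandler() form the classic self-pipe trick.
 * The signal handler only raises a flag and writes the signal number to
 * pipefds[1]; the main loop keeps pipefds[0] in its select() read set, so
 * a signal arriving between two select() calls still wakes the loop
 * instead of being lost. A minimal sketch of the consuming side (the real
 * main() below may differ in detail):
 *
 *	fd_set readfds;
 *	FD_ZERO(&readfds);
 *	FD_SET(ssock, &readfds);
 *	FD_SET(pipefds[0], &readfds);
 *	if (select(maxsockplus, &readfds, NULL, NULL, NULL) > 0 &&
 *	    FD_ISSET(pipefds[0], &readfds))
 *		user_sighandler();
 */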
[PATCH] udev - next round of udev event order daemon
Here is the next round of udevd/udevsend:
udevsend - If the IPC message we send is not caught by a receiver, we fork
the udevd daemon to process this and the following events
udevd - We reorder the events we receive and execute our current udev for
every event. If one or more events are missing, we wait
10 seconds and then go ahead in the queue.
If the queue is empty and we don't receive any event for the next
30 seconds, the daemon exits.
The next incoming event will fork the daemon again.
config - The paths to the executables are specified in udevd.h
Now they are pointing to the current directory only.
I don't like daemons hiding secrets (and mem leaks :)) inside,
so I want to try this model. It should be enough logic to get all possible
hotplug events executed in the right order.
If no event, then no daemon! So everybody should be happy :)
Here we see:
1. the daemon fork,
2. the udev work,
3. the 10 sec timeout and the skipped events,
4. the udev work,
...,
5. and the 30 sec timeout and exit.
EVENTS:
pim:/home/kay/src/udev.kay# test/udevd_test.sh
pim:/home/kay/src/udev.kay# SEQNUM=15 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=16 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=17 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=18 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=20 ./udevsend block
pim:/home/kay/src/udev.kay# SEQNUM=21 ./udevsend block
LOG:
Jan 23 15:35:35 pim udev[11795]: message is still in the ipc queue, starting daemon...
Jan 23 15:35:35 pim udev[11799]: configured rule in '/etc/udev/udev.rules' at line 19 applied, 'sda' becomes '%k-flash'
Jan 23 15:35:35 pim udev[11799]: creating device node '/udev/sda-flash'
Jan 23 15:35:35 pim udev[11800]: creating device node '/udev/sdb'
Jan 23 15:35:35 pim udev[11804]: creating device node '/udev/sdc'
Jan 23 15:35:35 pim udev[11805]: removing device node '/udev/sda-flash'
Jan 23 15:35:35 pim udev[11808]: removing device node '/udev/sdb'
Jan 23 15:35:35 pim udev[11809]: removing device node '/udev/sdc'
Jan 23 15:35:45 pim udev[11797]: timeout reached, skip events 7 - 7
Jan 23 15:35:45 pim udev[11811]: creating device node '/udev/sdb'
Jan 23 15:35:45 pim udev[11812]: creating device node '/udev/sdc'
Jan 23 15:36:01 pim udev[11797]: timeout reached, skip events 10 - 14
Jan 23 15:36:01 pim udev[11814]: creating device node '/udev/sdc'
Jan 23 15:36:04 pim udev[11816]: creating device node '/udev/sdc'
Jan 23 15:36:12 pim udev[11818]: creating device node '/udev/sdc'
Jan 23 15:36:16 pim udev[11820]: creating device node '/udev/sdc'
Jan 23 15:36:38 pim udev[11797]: timeout reached, skip events 19 - 19
Jan 23 15:36:38 pim udev[11823]: creating device node '/udev/sdc'
Jan 23 15:36:38 pim udev[11824]: creating device node '/udev/sdc'
Jan 23 15:37:08 pim udev[11797]: we have nothing to do, so daemon exits...
2004-01-24 08:25:17 +03:00
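A rough, self-contained sketch of the reordering and timeout idea described above (the names queue_event(), dispatch_pending() and check_timeout() are made up; this is an illustration, not the udevd implementation): events are queued as they arrive, dispatched strictly in sequence-number order, and a missing sequence number is skipped after a timeout so the rest of the queue can drain.

#include <stdio.h>
#include <time.h>

#define QUEUE_MAX	64
#define EVENT_TIMEOUT	10	/* seconds to wait for a missing seqnum */

static int queue[QUEUE_MAX];	/* pending, out-of-order sequence numbers */
static int queued;		/* number of entries in queue[] */
static int expected_seqnum;	/* next seqnum we are willing to dispatch */
static time_t last_progress;	/* when we last dispatched something */

static void queue_event(int seqnum)
{
	if (queued < QUEUE_MAX)
		queue[queued++] = seqnum;
}

/* dispatch every queued event that is already next in line */
static void dispatch_pending(void)
{
	int progress = 1;

	while (progress) {
		progress = 0;
		for (int i = 0; i < queued; i++) {
			if (queue[i] != expected_seqnum)
				continue;
			printf("exec event %d\n", queue[i]);
			queue[i] = queue[--queued];	/* swap-remove */
			expected_seqnum++;
			last_progress = time(NULL);
			progress = 1;
			break;
		}
	}
}

/* called periodically: if a hole blocks the queue for too long, skip it */
static void check_timeout(void)
{
	if (queued && time(NULL) - last_progress >= EVENT_TIMEOUT) {
		printf("timeout, skipping event %d\n", expected_seqnum);
		expected_seqnum++;
		dispatch_pending();
	}
}

int main(void)
{
	int incoming[] = { 1, 0, 3, 4 };	/* out of order, seqnum 2 missing */

	last_progress = time(NULL);
	for (int i = 0; i < 4; i++) {
		queue_event(incoming[i]);
		dispatch_pending();
	}
	last_progress -= EVENT_TIMEOUT;		/* pretend the timeout elapsed */
	check_timeout();			/* skips 2, then drains 3 and 4 */
	return 0;
}

udevd itself drives the same idea from its select() loop and the SIGALRM timeout rather than from explicit calls like these.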
int main(int argc, char *argv[])
{
	int ssock, maxsockplus;
	struct sockaddr_un saddr;
	socklen_t addrlen;
	int retval;
	const int on = 1;
	struct sigaction act;
	fd_set readfds;
	init_logging("udevd");
	dbg("version %s", UDEV_VERSION);

	if (getuid() != 0) {
		dbg("need to be root, exit");
		exit(1);
	}

	/* setup signal handler pipe */
	retval = pipe(pipefds);
	if (retval < 0) {
		dbg("error getting pipes: %s", strerror(errno));
		exit(1);
	}
	retval = fcntl(pipefds[0], F_SETFL, O_NONBLOCK);
	if (retval < 0) {
		dbg("error fcntl on read pipe: %s", strerror(errno));
		exit(1);
	}
	retval = fcntl(pipefds[1], F_SETFL, O_NONBLOCK);
	if (retval < 0) {
		dbg("error fcntl on write pipe: %s", strerror(errno));
		exit(1);
	}

	/* set signal handlers */
	act.sa_handler = sig_handler;
	sigemptyset(&act.sa_mask);
	act.sa_flags = SA_RESTART;
	sigaction(SIGINT, &act, NULL);
	sigaction(SIGTERM, &act, NULL);
	sigaction(SIGALRM, &act, NULL);
	sigaction(SIGCHLD, &act, NULL);
	memset(&saddr, 0x00, sizeof(saddr));
	saddr.sun_family = AF_LOCAL;
	/* use abstract namespace for socket path */
	strcpy(&saddr.sun_path[1], UDEVD_SOCK_PATH);
	addrlen = offsetof(struct sockaddr_un, sun_path) + strlen(saddr.sun_path+1) + 1;
	ssock = socket(AF_LOCAL, SOCK_DGRAM, 0);
	if (ssock == -1) {
		dbg("error getting socket, exit");
		exit(1);
	}
	/* the bind takes care of ensuring only one copy running */
	retval = bind(ssock, (struct sockaddr *) &saddr, addrlen);
	if (retval < 0) {
		dbg("bind failed, exit");
		goto exit;
	}

	/* enable receiving of the sender credentials */
	setsockopt(ssock, SOL_SOCKET, SO_PASSCRED, &on, sizeof(on));

	FD_ZERO(&readfds);
	FD_SET(ssock, &readfds);
	FD_SET(pipefds[0], &readfds);
	maxsockplus = ssock + 1;
	while (1) {
		fd_set workreadfds = readfds;

		retval = select(maxsockplus, &workreadfds, NULL, NULL, NULL);
		if (retval < 0) {
			if (errno != EINTR)
				dbg("error in select: %s", strerror(errno));
			continue;
		}

		if (FD_ISSET(ssock, &workreadfds))
			handle_msg(ssock);

		if (FD_ISSET(pipefds[0], &workreadfds))
			user_sighandler();

		if (children_waiting) {
			children_waiting = 0;
			reap_kids();
		}

		if (run_msg_q) {
			run_msg_q = 0;
			msg_queue_manager();
		}

		if (run_exec_q) {
			/* this is tricky: exec_queue_manager() loops over exec_list, and
			 * calls running_with_devpath(), which loops over running_list.
			 * This gives O(N*M), which can get *nasty*. Clean up running_list
			 * before calling exec_queue_manager().
			 */
			if (children_waiting) {
				children_waiting = 0;
				reap_kids();
			}

			run_exec_q = 0;
			exec_queue_manager();
		}
	}
exit:
	close(ssock);
	exit(1);
}