Having gained a couple of months' experience with a DCS-930L and a DCS-932L, both uploading to a server via FTP based on the cameras' notion of "motion detection", I see it's quite a waste of bandwidth and storage (not to mention the time to sift through the results).
It appears that the "motion detection" may really just be "a change in the average light level within the selected box(es)".
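To illustrate why an average-level comparison can miss real scene changes, here is a toy sketch (my own illustration, not the camera's actual firmware): two regions with very different content but identical mean luminance, which an average-level detector cannot tell apart.

```python
# Toy illustration of average-level "motion detection": compare the mean
# luminance of a region between frames. (Not the camera's firmware.)
def mean_level(block):
    """Mean pixel value of a 2-D list of luminance samples."""
    return sum(sum(row) for row in block) / (len(block) * len(block[0]))

# Two very different 8x8 regions with the same mean: a flat gray patch
# and a high-contrast checkerboard. An average-level detector reports
# no change at all between these two frames.
flat = [[128] * 8 for _ in range(8)]
checker = [[64 if (x + y) % 2 == 0 else 192 for y in range(8)]
           for x in range(8)]

print(mean_level(flat), mean_level(checker))  # both 128.0
```

Conversely, a uniform lighting change (a cloud passing, a light switched on) shifts the mean everywhere and triggers a false alarm, even though nothing moved.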
My question, for the signal-processing experts among you, is this: since the pictures are apparently converted to JPEG immediately, and since JPEG uses cosine transforms that store information according to perceived spatial frequency, couldn't better motion detection be implemented by observing the amplitude of the high-frequency components in the JPEG file? That would correspond closely, I would think, to edge detection.
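A minimal sketch of the idea, under my own assumptions: JPEG applies an 8x8 DCT per block, so summing the energy in the coefficients above some frequency cutoff gives an edge-sensitivity measure, and a frame-to-frame change in that energy suggests motion. The brute-force DCT below is just for illustration; in-camera one would read the coefficients straight out of the entropy-decoded JPEG stream rather than recompute them, and the `cutoff` and `rel_threshold` values are arbitrary guesses, not anything from the camera.

```python
import math

N = 8  # JPEG transforms the image in 8x8 pixel blocks

def dct2(block):
    """Brute-force 2-D DCT-II of an 8x8 block (the transform JPEG uses)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

def high_freq_energy(block, cutoff=4):
    """Sum of squared DCT coefficients with u + v >= cutoff (the 'edge' band).
    The cutoff of 4 is an arbitrary choice for this sketch."""
    coeffs = dct2(block)
    return sum(coeffs[u][v] ** 2
               for u in range(N) for v in range(N) if u + v >= cutoff)

def motion_detected(prev_block, cur_block, rel_threshold=0.5):
    """Flag motion when the high-frequency energy changes by more than
    rel_threshold relative to the previous frame (threshold is a guess)."""
    e_prev = high_freq_energy(prev_block)
    e_cur = high_freq_energy(cur_block)
    return abs(e_cur - e_prev) > rel_threshold * max(e_prev, 1.0)

# Example: a flat block has essentially zero high-frequency energy,
# while a block containing a sharp vertical edge has plenty.
flat = [[128] * N for _ in range(N)]
edge = [[0] * 4 + [255] * 4 for _ in range(N)]
```

Note this scheme would also be largely immune to uniform lighting changes, since those land mostly in the DC coefficient, which the high-frequency sum ignores.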
There are more sophisticated computer-based software packages available, but the idea is to avoid having a computer running 24/7. It seems that examining selected JPEG coefficients could be done in-camera as easily as the average-light comparison?