From: danniowen@aol.com (Mango Jones)
Newsgroups: comp.lang.ada
Subject: Re: Is ther any sense in *= and matrices?
Date: 9 Jun 2003 03:06:58 -0700
Organization: http://groups.google.com/

That reminds me of a time at www.spacemag.co.uk

18k11tm001@sneakemail.com (Russ) wrote in message news:...
> "John R. Strohm" wrote in message news:...
> > "Russ" <18k11tm001@sneakemail.com> wrote in message news:bebbba07.0306081051.5f3ac24b@posting.google.com...
> > > "John R. Strohm" wrote in message news:...
> > > > "Russ" <18k11tm001@sneakemail.com> wrote in message news:bebbba07.0306071138.6bf9784f@posting.google.com...

> > > > > "*=" is not so useful for multiplying non-square matrices. However, it is very useful for multiplying a matrix (or a vector) by a scalar. For example,

> > > > > A *= 2.0

> > > > > can be used to multiply matrix A by two in place. This is potentially more efficient than A := A * 2 for all the same reasons discussed with respect to "+=".

> > > > With all due respect, ladies and gentlemen, it has been known for a very long time that the difference in "efficiency" between A := A + B and A += B is lost in the noise floor compared to the improvements that can be gotten by improving the algorithms involved.

> > > Oh, really? I just did a test in C++ with 3x3 matrices. I added them together 10,000,000 times using "+", then "+=". The "+=" version took about 19 seconds, and the "+" version took about 55 seconds. That's just shy of a factor of 3, folks. If that's your "noise floor," I can't help but wonder what kind of "algorithms" you are dealing with!

> > 19 seconds divided by 10,000,000 is 1.9 microseconds per matrix addition. 55 seconds divided by 10,000,000 is 5.5 microseconds per matrix addition. I find it VERY difficult to believe that your total processing chain is going to be such that 3 microseconds per matrix addition is a DOMINANT part of the timeline.

> > In other words, if your TOTAL timeline is, say, 50 microseconds, 3 microseconds isn't going to buy you that much. I'm still wondering what you are doing that makes that 3 usec interesting.

> Ten years ago I was using my C++ vector/matrix code for an experimental, real-time GPS/INS precision landing and guidance system, including a 16-state Kalman filter (if I recall correctly). We had a dozen or so tasks, and the guidance update rate was 20 Hz. It was a fairly ambitious project at the time, and it worked out well. It might NOT have worked had we been generating temporary vectors and matrices at a high rate.
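For concreteness, here is a minimal sketch of the kind of 3x3 matrix class the benchmark above is arguing about. The name Mat3, its layout, and the operator bodies are assumptions made for this sketch only, not Russ's actual code; the point is simply that "+=" writes into an existing object while "+" has to build and return a temporary.

    #include <cstddef>

    struct Mat3 {
        double m[3][3];

        // In-place addition: writes directly into *this, no temporary object.
        Mat3& operator+=(const Mat3& rhs) {
            for (std::size_t i = 0; i < 3; ++i)
                for (std::size_t j = 0; j < 3; ++j)
                    m[i][j] += rhs.m[i][j];
            return *this;
        }

        // In-place scaling, the "A *= 2.0" case from the original post.
        Mat3& operator*=(double s) {
            for (std::size_t i = 0; i < 3; ++i)
                for (std::size_t j = 0; j < 3; ++j)
                    m[i][j] *= s;
            return *this;
        }
    };

    // Out-of-place addition: builds and returns a whole new Mat3.  The extra
    // temporary per call is what the "+" half of the benchmark is paying for.
    inline Mat3 operator+(Mat3 lhs, const Mat3& rhs) {
        lhs += rhs;
        return lhs;
    }

With a class along these lines, A += B updates A's nine elements in place, while A = A + B first materializes a temporary result and then copies it into A, which is the difference the 19-versus-55-second numbers above are measuring.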
> But that's all beside the point. Who are you to question whether a certain level of efficiency is enough? Just because *you* don't need high efficiency for your work, that doesn't mean that *nobody* needs it. And by the way, I'm not talking about ultra-efficiency here. I'm just talking about the basics. You could write the slickest algorithm ever written, but if you generate temporary vectors and matrices at a high rate, it's like running a 100-yard dash carrying a freakin' bowling ball.

> > > > And I'd be REALLY interested to know what you are doing with matrix multiplication such that the product of a matrix and a scalar is the slightest bit interesting. (Usually, when I'm multiplying matrices, I'm playing with direction cosine matrices, and I don't recall ever hearing about any use for multiplying a DCM by a scalar.)

> > > Sorry, but I'm having some trouble with your "logic". DCM matrices don't get multiplied by scalars, so you doubt that any vector or matrix ever gets multiplied by a scalar?

> > Since you want to get nasty personal about it...

> > I am fully aware that VECTOR * scalar happens, frequently. I was asking for an application that did MATRIX * scalar.

> Well, first of all, a vector is really just a matrix with one column. More importantly, as I wrote earlier (and which you apparently missed), all the efficiency concerns regarding "=" and "+=" are EXACTLY the same for both vectors *and* matrices. Not one iota of difference. So it really doesn't matter whether we are talking about vectors or matrices. The fact that I referred to matrices in my original post is irrelevant.

> > > Actually, it is more common to scale a vector than a matrix, but the same principles regarding the efficiency of "+" and "+=" still apply. Vectors often need to be scaled for conversion of units, for example. Another very common example would be determining a change in position by multiplying a velocity vector by a scalar time increment. Ditto for determining a change in a velocity vector by multiplying an acceleration vector by a time increment.

> > Determining a change in a position vector by multiplying a velocity vector by a scalar time increment, or determining a change in a velocity vector by multiplying an acceleration vector by a time increment, just like that, is Euler's Method for numerical integration. It is about the worst method known to man, BOTH in terms of processing time AND error buildup (and demonstrating this fact is a standard assignment in the undergraduate numerical methods survey course). If you are using Euler's Method for a high-performance sim, you are out of your mind. Runge-Kutta methods offer orders of magnitude better performance, in time AND error control.

> Oh, good lord. You still need to multiply velocity by time, regardless of which method of numerical integration you are using. In fact, you need to do it more with the higher-order methods.

> > And don't change the subject. We are talking about MATRIX * scalar, not vector * scalar. Quaternions are vectors, not matrices.

> No, *you* are talking only about multiplying matrices by scalars, but I can't understand why, since the principle in question applies equally to both vectors *and* matrices.
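To make Russ's last point about integration concrete, here is a hedged sketch; the Vec3 type, its operators, and the euler_step/rk4_step helpers are inventions for illustration, not code from either poster. Even the classical fourth-order Runge-Kutta step is assembled almost entirely out of vector-times-scalar products, so switching methods does not remove that operation.

    struct Vec3 { double x, y, z; };

    inline Vec3 operator*(const Vec3& v, double s) { return Vec3{v.x * s, v.y * s, v.z * s}; }
    inline Vec3 operator+(const Vec3& a, const Vec3& b) { return Vec3{a.x + b.x, a.y + b.y, a.z + b.z}; }

    // Euler's Method: one velocity-times-dt product per step.
    Vec3 euler_step(const Vec3& pos, const Vec3& vel, double dt) {
        return pos + vel * dt;
    }

    // Classical RK4 for pos' = f(t, pos): four stage evaluations, each of
    // which again scales a vector by a scalar, plus a scaled combination.
    template <typename F>
    Vec3 rk4_step(F f, double t, const Vec3& pos, double dt) {
        Vec3 k1 = f(t, pos);
        Vec3 k2 = f(t + dt / 2.0, pos + k1 * (dt / 2.0));
        Vec3 k3 = f(t + dt / 2.0, pos + k2 * (dt / 2.0));
        Vec3 k4 = f(t + dt, pos + k3 * dt);
        return pos + (k1 + k2 * 2.0 + k3 * 2.0 + k4) * (dt / 6.0);
    }

    // Hypothetical use: integrate a constant-velocity model one step.
    // Vec3 next = rk4_step([](double, const Vec3&) { return Vec3{1.0, 0.0, 0.0}; }, t, pos, dt);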
> > > Since you know about DCMs, you must also be familiar with quaternions, otherwise known as Euler parameters. Like DCMs, Euler parameter "vectors" represent the attitude of a rigid body. Well, an Euler parameter vector consists of 4 scalar components or "Euler parameters", and the norm (root sum square) of the 4 parameters is unity by definition. However, due to numerical roundoff error, the norm can drift away from one. Fortunately, you can renormalize by dividing the vector by its own norm. This can be written very concisely and efficiently in C++ as

> > > eulpar /= eulpar.norm()

> > Concisely, yes. Efficiently, not necessarily. You still have to compute the norm, which involves taking a square root, which, off the top of my head, probably costs quite a bit more than four divisions. If you are really into microefficiency, you would code this as renorm(eulpar), or eulpar.renorm(), and do it in-place, computing 1/sqrt(norm) directly and multiplying, rather than computing sqrt() and dividing.

> Taking a square root is ultra-fast on modern processors, but that is irrelevant because there is no way around it for normalizing a quaternion. And, no, I'm not into micro-efficiency at all. However, you may have a point: if multiplication is slightly more efficient than division (is it?), then it *would* make sense to do the division once and then do multiplication after that. But *that's* micro-efficiency.
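For what it's worth, the in-place version Strohm describes can be sketched as follows. The Quat type, its member names, and the renorm() signature are assumptions for illustration; this is not code from either poster.

    #include <cmath>

    struct Quat {
        double q0, q1, q2, q3;   // the four Euler parameters

        // Root sum square of the four components.
        double norm() const {
            return std::sqrt(q0 * q0 + q1 * q1 + q2 * q2 + q3 * q3);
        }

        // In-place renormalization: one sqrt, one division, and four
        // multiplies, instead of one sqrt followed by four divisions.
        void renorm() {
            const double inv = 1.0 / norm();
            q0 *= inv;
            q1 *= inv;
            q2 *= inv;
            q3 *= inv;
        }
    };

Whether the multiply-by-reciprocal form actually wins depends on the processor's relative divide and multiply costs, which is exactly the "is it?" question left open above.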