From: "John R. Strohm"
Newsgroups: comp.lang.ada
Subject: Re: Is there any sense in *= and matrices?
Date: Sun, 8 Jun 2003 17:23:54 -0500
Organization: Airnews.net! at Internet America

"Russ" <18k11tm001@sneakemail.com> wrote in message
news:bebbba07.0306081051.5f3ac24b@posting.google.com...
> "John R. Strohm" wrote in message news:...
>
> > "Russ" <18k11tm001@sneakemail.com> wrote in message
> > news:bebbba07.0306071138.6bf9784f@posting.google.com...
> > > "*=" is not so useful for multiplying non-square matrices. However, it
> > > is very useful for multiplying a matrix (or a vector) by a scalar. For
> > > example,
> > >
> > > A *= 2.0
> > >
> > > can be used to multiply matrix A by two in place. This is potentially
> > > more efficient than A := A * 2 for all the same reasons discussed with
> > > respect to "+=".
> >
> > With all due respect, ladies and gentlemen, it has been known for a very
> > long time that the difference in "efficiency" between A := A + B and
> > A += B is lost in the noise floor compared to the improvements that can
> > be gotten by improving the algorithms involved.
>
> Oh, really? I just did a test in C++ with 3x3 matrices. I added them
> together 10,000,000 times using "+", then "+=". The "+=" version took
> about 19 seconds, and the "+" version took about 55 seconds. That's
> just shy of a factor of 3, folks. If that's your "noise floor," I
> can't help but wonder what kind of "algorithms" you are dealing with!

19 seconds divided by 10,000,000 is 1.9 microseconds per matrix addition.
55 seconds divided by 10,000,000 is 5.5 microseconds per matrix addition.

I find it VERY difficult to believe that your total processing chain is
going to be such that 3 microseconds per matrix addition is a DOMINANT
part of the timeline. In other words, if your TOTAL timeline is, say,
50 microseconds, 3 microseconds isn't going to buy you that much. I'm
still wondering what you are doing that makes that 3 usec interesting.
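For reference, the kind of test being described boils down to something
like the sketch below. This is illustrative only: the Mat3 type, the loop
count, and the use of clock() are my assumptions, not the code Russ
actually ran, and the measured ratio will depend heavily on the compiler
and optimization level.

// Sketch of the "+" vs "+=" timing test described above.
// Mat3 is an illustrative 3x3 matrix type, not Russ's actual class.
#include <ctime>
#include <cstdio>

struct Mat3 {
    double m[3][3];
    Mat3& operator+=(const Mat3& rhs) {        // in-place add, no temporary
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                m[i][j] += rhs.m[i][j];
        return *this;
    }
};

Mat3 operator+(Mat3 lhs, const Mat3& rhs) {    // returns a new value
    lhs += rhs;
    return lhs;
}

int main() {
    const long N = 10000000L;
    Mat3 a = {}, b = {};

    std::clock_t t0 = std::clock();
    for (long i = 0; i < N; ++i) a = a + b;    // "+" version: copies a temporary each pass
    std::clock_t t1 = std::clock();
    for (long i = 0; i < N; ++i) a += b;       // "+=" version: updates a in place
    std::clock_t t2 = std::clock();

    std::printf("a = a + b: %.1f s\n", double(t1 - t0) / CLOCKS_PER_SEC);
    std::printf("a += b:    %.1f s\n", double(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}

The only structural difference between the two loops is that the "+"
version constructs and copies a temporary on every pass, while the "+="
version updates a in place; the absolute per-addition cost is what the
microsecond arithmetic above is about.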
> > And I'd be REALLY interested to know what you are doing with matrix
> > multiplication such that the product of a matrix and a scalar is the
> > slightest bit interesting. (Usually, when I'm multiplying matrices, I'm
> > playing with direction cosine matrices, and I don't recall ever hearing
> > about any use for multiplying a DCM by a scalar.)
>
> Sorry, but I'm having some trouble with your "logic". DCM matrices
> don't get multiplied by scalars, so you doubt that any vector or
> matrix ever gets multiplied by a scalar?

Since you want to get nasty personal about it... I am fully aware that
VECTOR * scalar happens, frequently. I was asking for an application
that did MATRIX * scalar.

> Actually, it is more common to scale a vector than a matrix, but the
> same principles regarding the efficiency of "+" and "+=" still apply.
> Vectors often need to be scaled for conversion of units, for example.
> Another very common example would be determining a change in position
> by multiplying a velocity vector by a scalar time increment. Ditto for
> determining a change in a velocity vector by multiplying an
> acceleration vector by a time increment.

Determining a change in a position vector by multiplying a velocity
vector by a scalar time increment, or determining a change in a velocity
vector by multiplying an acceleration vector by a time increment, just
like that, is Euler's Method for numerical integration. It is about the
worst method known to man, BOTH in terms of processing time AND error
buildup (and demonstrating this fact is a standard assignment in the
undergraduate numerical methods survey course). If you are using Euler's
Method for a high-performance sim, you are out of your mind. Runge-Kutta
methods offer orders of magnitude better performance, in time AND error
control.

And don't change the subject. We are talking about MATRIX * scalar, not
vector * scalar. Quaternions are vectors, not matrices.

> Since you know about DCMs, you must also be familiar with quaternions,
> otherwise known as Euler parameters. Like DCMs, Euler parameter
> "vectors" represent the attitude of a rigid body. Well, an Euler
> parameter vector consists of 4 scalar components or "Euler parameters",
> and the norm (root sum square) of the 4 parameters is unity by
> definition. However, due to numerical roundoff error, the norm can
> drift away from one. You can renormalize by dividing the vector by its
> own norm. This can be written very concisely and efficiently in C++ as
>
> eulpar /= eulpar.norm()

Concisely, yes. Efficiently, not necessarily. You still have to compute
the norm, which involves taking a square root, which, off the top of my
head, probably costs quite a bit more than four divisions. If you are
really into microefficiency, you would code this as renorm(eulpar), or
eulpar.renorm(), and do it in place, computing 1/sqrt() directly and
multiplying, rather than computing sqrt() and dividing. (A sketch of
what I mean is in the P.S. below.) And all that is irrelevant to the
original question, because quaternion renormalization is VECTOR *
scalar, not MATRIX * scalar.

> Incidentally, this procedure is much simpler than the corresponding
> procedure for orthogonalizing a DCM.

No argument on the microissue. I haven't looked at quaternion methods in
a few years, so I don't recall offhand what all is involved in building
a sim around quaternions as opposed to DCMs. My recollection is that you
still need the DCMs, and you still have to convert between quaternions
and DCMs. You use quaternions because (a) you can interpolate/integrate
them easily (good) and (b) they don't suffer from gimbal lock (much
better in some applications). You pay a price for that flexibility.

> I could come up with other examples, but this should suffice.
> Actually, I'm a bit disappointed in myself for wasting as much time
> as I did in replying to your meritless post.

So is my wife.
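P.S. For concreteness, here is roughly what I mean by doing the
renormalization in place. This is a sketch only; the Quat type and the
member names are made up for illustration, not taken from any particular
library.

#include <cmath>

// Illustrative Euler-parameter (quaternion) type; names invented for this sketch.
struct Quat {
    double q0, q1, q2, q3;

    // "eulpar /= eulpar.norm()" spelled out: one square root, then four divisions.
    void renorm_by_division() {
        double n = std::sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3);
        q0 /= n; q1 /= n; q2 /= n; q3 /= n;
    }

    // In-place version: form the reciprocal of the norm once,
    // then do four multiplications instead of four divisions.
    void renorm() {
        double s = 1.0 / std::sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3);
        q0 *= s; q1 *= s; q2 *= s; q3 *= s;
    }
};

Either way you still pay for the square root; the multiply-versus-divide
difference is exactly the sort of microefficiency this whole thread has
been arguing about.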