comp.lang.ada
From: danniowen@aol.com (Mango Jones)
Subject: Re: Is ther any sense in *= and matrices?
Date: 9 Jun 2003 03:06:58 -0700
Message-ID: <e1a2cbee.0306090206.65a1e5b9@posting.google.com> (raw)
In-Reply-To: bebbba07.0306082206.69603145@posting.google.com

That reminds me of a time at www.spacemag.co.uk

18k11tm001@sneakemail.com (Russ) wrote in message news:<bebbba07.0306082206.69603145@posting.google.com>...
> "John R. Strohm" <strohm@airmail.net> wrote in message news:<bc0dql$qnh@library1.airnews.net>...
> > "Russ" <18k11tm001@sneakemail.com> wrote in message
> > news:bebbba07.0306081051.5f3ac24b@posting.google.com...
> > > "John R. Strohm" <strohm@airmail.net> wrote in message
> > > news:<bbumef$fnq@library2.airnews.net>...
> > >
> > > > "Russ" <18k11tm001@sneakemail.com> wrote in message
> > > > news:bebbba07.0306071138.6bf9784f@posting.google.com...
>  
> > > > > "*=" is not so useful for multiplying non-square matrices. However, it
> > > > > is very useful for multiplying a matrix (or a vector) by a scalar. For
> > > > > example,
> > > > >
> > > > >     A *= 2.0
> > > > >
> > > > > can be used to multiply matrix A by two in place. This is potentially
> > > > > more efficient than A := A * 2 for all the same reasons discussed with
> > > > > respect to "+=".
> > > >
> > > > With all due respect, ladies and gentlemen, it has been known for a very
> > > > long time that the difference in "efficiency" between A := A + B and
> > > > A += B is lost in the noise floor compared to the improvements that can
> > > > be gotten by improving the algorithms involved.
> > >
> > > Oh, really? I just did a test in C++ with 3x3 matrices. I added them
> > > together 10,000,000 times using "+", then "+=". The "+=" version took
> > > about 19 seconds, and the "+" version took about 55 seconds. That's
> > > just shy of a factor of 3, folks. If that's your "noise floor," I
> > > can't help wonder what kind of "algorithms" you are dealing with!
> > 
> > 19 seconds divided by 10,000,000 is 1.9 microseconds per matrix addition.
> > 55 seconds divided by 10,000,000 is 5.5 microseconds per matrix addition.  I
> > find it VERY difficult to believe that your total processing chain is going
> > to be such that 3 microseconds per matrix addition is a DOMINANT part of the
> > timeline.
> > 
> > In other words, if your TOTAL timeline is, say, 50 microseconds, 3
> > microseconds isn't going to buy you that much.  I'm still wondering what you
> > are doing that makes that 3 usec interesting.
> 
> Ten years ago I was using my C++ vector/matrix code for an
> experimental, real-time GPS/INS precision landing and guidance system,
> including a 16-state Kalman filter (if I recall correctly). We had a
> dozen or so tasks, and the guidance update rate was 20 Hz. It was a
> fairly ambitious project at the time, and it worked out well. It might
> NOT have worked had we been generating temporary vectors and matrices
> at a high rate.
> 
> But that's all beside the point. Who are you to question whether a
> certain level of efficiency is enough? Just because *you* don't need
> high efficiency for your work, that doesn't mean that *nobody* needs
> it. And by the way, I'm not talking about ultra-efficiency here. I'm
> just talking about the basics. You could write the slickest algorithm
> ever written, but if you generate temporary vectors and matrices at a
> high rate, it's like running a 100-yard dash carrying a freakin'
> bowling ball.
> 
> > > > And I'd be REALLY interested to know what you are doing with matrix
> > > > multiplication such that the product of a matrix and a scalar is the
> > > > slightest bit interesting.  (Usually, when I'm multiplying matrices, I'm
> > > > playing with direction cosine matrices, and I don't recall ever hearing
> > > > about any use for multiplying a DCM by a scalar.)
> > >
> > > Sorry, but I'm having some trouble with your "logic". DCM matrices
> > > don't get multiplied by scalars, so you doubt that any vector or
> > > matrix ever gets multiplied by a scalar?
> > 
> > Since you want to get nasty personal about it...
> > 
> > I am fully aware that VECTOR * scalar happens, frequently.  I was asking for
> > an application that did MATRIX * scalar.
> 
> Well, first of all, a vector is really just a matrix with one column.
> More importantly, as I wrote earlier (and which you apparently
> missed), all the efficiency concerns regarding "+" and "+=" are
> EXACTLY the same for both vectors *and* matrices. Not one iota of
> difference. So it really doesn't matter whether we are talking about
> vectors or matrices. The fact that I referred to matrices in my
> original post is irrelevant.
> 
> > > Actually, it is more common to scale a vector than a matrix, but the
> > > same principles regarding the efficiency of "+" and "+=" still apply.
> > > Vectors often need to be scaled for conversion of units, for example.
> > > Another very common example would be determining a change in position
> > > by multiplying a velocity vector by a scalar time increment. Ditto for
> > > determining a change in a velocity vector by multiplying an
> > > acceleration vector by a time increment.
> > 
> > Determining a change in a position vector by multiplying a velocity vector
> > by a scalar time increment, or determining a change in a velocity vector by
> > multiplying an acceleration vector by a time increment, just like that, is
> > Euler's Method for numerical integration.  It is about the worst method
> > known to man, BOTH in terms of processing time AND error buildup (and
> > demonstrating this fact is a standard assignment in the undergraduate
> > numerical methods survey course).  If you are using Euler's Method for a
> > high-performance sim, you are out of your mind.  Runge-Kutta methods offer
> > orders of magnitude better performance, in time AND error control.
> 
> Oh, good lord. You still need to multiply velocity by time, regardless
> of which method of numerical integration you are using. In fact, you
> need to do it more with the higher-order methods.
> 
> > And don't change the subject.  We are talking about MATRIX * scalar, not
> > vector * scalar.  Quaternions are vectors, not matrices.
> 
> No, *you* are talking only about multiplying matrices by scalars, but
> I can't understand why, since the principle in question applies
> equally to both vectors *and* matrices.
> 
> > > Since you know about DCMs, you must also be familiar with quaternions,
> > > otherwise known as Euler parameters. Like DCMs, Euler parameter
> > > "vectors" represent the attitude of a rigid body. Well, an Euler
> > > parameter vector consists of 4 scalar components or "Euler
> > > parameters", and the norm (root sum square) of the 4 parameters is
> > > unity by definition. However, due to numerical roundoff error, the
> > > norm can drift away from one. However, you can renormalize by dividing
> > > the vector by its own norm. This can be written very concisely and
> > > efficiently in C++ as
> > >
> > >     eulpar /= eulpar.norm()
> > 
> > Concisely, yes.  Efficiently, not necessarily.  You still have to compute
> > the norm, which involves taking a square root, which, off the top of my
> > head, probably costs quite a bit more than four divisions.  If you are
> > really into microefficiency, you would code this as renorm(eulpar), or
> > eulpar.renorm(), and do it in-place, computing 1/sqrt(norm) directly and
> > multiplying, rather than computing sqrt() and dividing.
> 
> Taking a square root is ultra-fast on modern processors, but that is
> irrelevant because there is no way around it for normalizing a
> quaternion. And, no, I'm not into micro-efficiency at all. However,
> you may have a point: if multiplication is slightly more efficient
> than division (is it?), then it *would* make sense to do the division
> once and then do multiplication after that. But *that's*
> micro-efficiency.


