From: "Randy Brukardt" <randy@rrsoftware.com>
Subject: Re: BDD package in Ada.
Date: Thu, 9 Apr 2015 19:05:34 -0500
Message-ID: <mg744e$5lu$1@loke.gir.dk> (raw)
In-Reply-To: mg5fcg$maf$1@dont-email.me
"Georg Bauhaus" <bauhaus@futureapps.invalid> wrote in message
news:mg5fcg$maf$1@dont-email.me...
> On 08.04.15 23:30, Randy Brukardt wrote:
>> "Vincent DIEMUNSCH" <vincent.diemunsch@gmail.com> wrote in message
>> news:c7db12c1-10fa-40ee-a2df-ff8a8b2177bc@googlegroups.com...
>>> Le mercredi 8 avril 2015 08:38:19 UTC+2, Dmitry A. Kazakov a écrit :
>>
>>>> I would not call it "fundamental". It is simply a tree with factored
>>>> out subtrees. It need not be binary, BTW.
>>> A BINARY decision diagram needs to be binary, by definition. It is not
>>> "simply" a tree, but a highly optimised data structure, not only by
>>> factoring the tree, but through variable ordering, which is the key
>>> idea. (Bryant's paper is about ORDERED BDDs.) It is useful for the
>>> implementation of Sets and operations on sets.
>>
>> Sounds like premature optimization to me.
>
> The presentations say that in the worst case, some fundamental operations
> are linear in the size of the graph if the variables are ordered, but
> exponential if they are not. I am not sure, then, that "premature
> optimization" is an adequate characterization.
That's pretty much the definition of premature optimization. Everything
interesting is O(exponential) when you start out.
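The ordering effect quoted above can be made concrete with a toy OBDD
builder. This is a sketch, not a real BDD package: `obdd_size` and its
helpers are made up for illustration, and the example function
x1*x2 + x3*x4 + x5*x6 with its two orderings is the classic one from
Bryant's paper.

```python
def obdd_size(f, order):
    """Count internal nodes of the reduced OBDD of boolean function f
    (f maps a dict of 0/1 assignments to 0/1) under the given order."""
    unique = {}                       # unique table: (var, lo, hi) -> node
    def mk(var, lo, hi):
        if lo == hi:                  # redundant test: skip the node
            return lo
        # structural equality on the (var, lo, hi) triple merges
        # identical subfunctions, giving the reduced (shared) graph
        return unique.setdefault((var, lo, hi), (var, lo, hi))
    def build(i, env):
        if i == len(order):
            return f(env)             # terminal: 0 or 1
        lo = build(i + 1, {**env, order[i]: 0})
        hi = build(i + 1, {**env, order[i]: 1})
        return mk(order[i], lo, hi)
    build(0, {})
    return len(unique)

f = lambda e: (e['x1'] & e['x2']) | (e['x3'] & e['x4']) | (e['x5'] & e['x6'])
good = obdd_size(f, ['x1', 'x2', 'x3', 'x4', 'x5', 'x6'])  # pairs adjacent
bad  = obdd_size(f, ['x1', 'x3', 'x5', 'x2', 'x4', 'x6'])  # pairs split apart
print(good, bad)  # 6 vs. 14: 2n nodes vs. 2**(n+1)-2 for n=3 pairs
```

Same function, same reduction rules; only the variable order changes, and
the bad order has to remember all three "left" variables at once, which is
exactly the exponential blow-up the quoted presentations refer to.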
> If a compiler needs to sort out whether or not
> something-and-then-this-and-...-then-if-that...-and-finally-this,
> then wouldn't the above ordering of Boolean trees have a major impact
> on feasibility?
It depends on N. Most N's are small. The Janus/Ada optimizer is (worst-case)
O(N**4). It proved to be easier to manage N than to try to redo it to reduce
that to a smaller O. Part of the problem, of course, is that the constant
factor matters in practice. If the runtime of the original version is
10*(N**4), but the new version is 1000*(N**2), you will need pretty large Ns
before you'll gain anything. In the Janus/Ada case, N (the size of a code
tree) has to be over 2000 before the runtime becomes noticeable to a human.
The average N is around 200; only large aggregates or giant case statements
give those massive Ns that cause trouble.
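The break-even arithmetic for those two cost functions is easy to check
directly. A quick sketch, using only the illustrative constants 10 and 1000
from the paragraph above (the function names are made up):

```python
def t_old(n):
    # original version: O(N**4) with a small constant factor
    return 10 * n**4

def t_new(n):
    # hypothetical rewrite: O(N**2) but with a 100x larger constant
    return 1000 * n**2

# first N at which the rewrite is strictly faster
crossover = next(n for n in range(1, 100_000) if t_new(n) < t_old(n))
print(crossover)  # 11: the curves cross at N = sqrt(1000/10) = 10
```

In general the break-even point is N = sqrt(c_new/c_old), so it grows only
with the square root of the constant-factor ratio: how large an N you need
before the "better" complexity pays off is set by the constants, which is
exactly why you have to measure rather than guess.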
That's why almost all optimization is premature. I would never have guessed
right on the distribution of Ns or what is important in our compiler. I
would have wasted time on resolution and other areas whose time was
essentially irrelevant. (And I probably did at some point. ;-) The biggest
performance problem in our early Ada 95 compilers turned out to be a
premature optimization that didn't work (but had a lot of overhead). I only
figured that out when I got tired of waiting for a long compilation and
stuck a profiler on the appropriate part of the compiler. (I had thought it
was doing something useful.)
Randy.