From: Oxide Scrubber
Newsgroups: comp.lang.scheme,comp.lang.ada,comp.lang.functional,comp.lang.c++,comp.programming
Subject: Re: Alternatives to C: ObjectPascal, Eiffel, Ada or Modula-3?
Date: Wed, 29 Jul 2009 00:46:25 -0400

fft1976 wrote:
> On Jul 28, 5:40 pm, Oxide Scrubber wrote:
> > Clojure is kind of cool, but many corrections are in order

No, they are not.

>> fft1976 wrote:
>>> Java: 1.5x slower than C as a rule of thumb.
>> I think it can achieve parity.
>
> I disagree. I don't think the JIT can do much about the memory layout
> of the data structures.
> Compare a vector of complex numbers or of 3D vectors (NOT of pointers
> to them) in C/C++ and Java. Run some tests. I did and I looked at
> others'.
>
> For me, 1.5x is a good trade-off for safety though.

The JIT can't, but the coder can. If you want a vector of N
double-precision complex numbers in Java that is contiguous in memory,
for example, you could use a Java array of 2*N doubles and index into
it appropriately. This won't work with encapsulated-object bignums, but
at that point you're probably spending most of your time in individual
bignum operations, particularly multiplies, anyway, so traversal from
one bignum to the next becomes an insignificant factor in run time.

>> With Java, in most cases you can slap "implements Cloneable" on a
>> class and make the clone method public,
>
> Java's "clone" does a shallow copy only (check the docs).

True. If you want a deep copy you will have to implement it yourself.
Alternatively, you can write a static deepCopy utility method that
exploits "Serializable" to deep-copy anything serializable, simply by
serializing and then deserializing it. This can buffer in memory or in
a temp file, and it need not use much memory if you make an auxiliary
class implement both InputStream and OutputStream, with the
OutputStream methods writing to a growable buffer and the InputStream
methods consuming (blocking if necessary) from that same buffer;
closing the OutputStream aspect EOFs the InputStream aspect. Then wrap
the pair in ObjectOutputStream/ObjectInputStream. You can also use a
BlockingQueue of byte arrays of length 8192 (or whatever), serialize
into one end and deserialize from the other in concurrent threads, and
it will tend not to use much memory. Limit the BlockingQueue length to
one item and it will never use much more than 8 Kbytes: the
OutputStream aspect will block until the InputStream aspect consumes
from the queue.

>> or slap "implements
>> Serializable" on a class to make its instances marshallable.
>
> Does this handle cycles and memory sharing among data structures?

It does.
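Serialization tracks object identity, so cycles and shared references survive the round trip. A minimal sketch of the deepCopy utility described above (the class and method names are illustrative, not a standard API; this simple version buffers the whole stream in memory rather than using the streaming tricks above):

```java
import java.io.*;

// Deep copy via Java serialization: serialize the object graph to a byte
// buffer, then deserialize a fresh, structurally identical copy. Cycles
// and shared substructure are preserved by the serialization machinery.
public final class DeepCopy {
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T deepCopy(T obj) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(obj); // walks the whole graph once
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()))) {
                return (T) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Anything in the graph that isn't serializable will throw, of course, so this is a convenience for data-ish object graphs, not a universal clone.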
>> (This doesn't, however,
>> interoperate with generic Java objects or Java serialization, and I'm
>> not sure it works with data structures with circularities. It won't
>> work with data structures with infinite sequences in them, but if you
>> represent such sequences symbolically it can.)
>
> But you'll need Java's data structures and mutations on Java's arrays
> to compete with it in speed of numerical code

Not necessarily. It depends on what sort of numerical code. If the bulk
of the time is spent performing arithmetic on just a few values, those
inner loops can be tightened to use only local variables, with no
arrays or other structures at all.

If the arithmetic is bignum, the bulk of your time will be spent in the
individual bignum operations. Algorithmic smartness (Karatsuba
multiplication, memoizing or otherwise saving multiply-used
intermediate values, and exploiting concurrency) will win big, while
speeding up the loops that merely invoke the bignum operations won't.

If the arithmetic is on smallnums in arrays and the computations are
mostly mutations, then you may want to use Java arrays, and you may
want to use a Java method instead of a Clojure function to do the
operation (you can still call it from Clojure easily enough).

Note the point above about Java: if you want contiguous memory you'll
have to give up using Java objects for things like complexes. But you
can still write quasi-methods for quasi-objects, e.g. methods that take
an array and an index and treat the two array elements starting at that
index as the real and imaginary parts of a complex number, performing a
complex operation on them. With Clojure code, you can write macros that
expand to complex operations on adjacent pairs of values in a Java
primitive array. In practice, Clojure access to Java arrays, even
primitive arrays, seems to be slow; this may be addressed in a future
version. Alternatively, use Java for this part.
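Concretely, such a quasi-method might look like the following sketch, with complexes packed as [re0, im0, re1, im1, ...] in one flat double array (names are my own for illustration):

```java
// "Quasi-methods" for complex numbers stored contiguously in a flat
// double[]: slot i occupies elements 2*i (real) and 2*i+1 (imaginary).
// No Complex objects are allocated, so the data stays contiguous.
public final class FlatComplex {
    // c[k] = a[i] * b[j]; temps make it safe even if c aliases a or b.
    static void mul(double[] a, int i, double[] b, int j, double[] c, int k) {
        double re = a[2*i] * b[2*j]   - a[2*i+1] * b[2*j+1];
        double im = a[2*i] * b[2*j+1] + a[2*i+1] * b[2*j];
        c[2*k]   = re;
        c[2*k+1] = im;
    }

    // Pointwise product of two length-n complex vectors into c.
    static void mulAll(double[] a, double[] b, double[] c, int n) {
        for (int i = 0; i < n; i++) mul(a, i, b, i, c, i);
    }
}
```

The layout is exactly what a C array of structs would give you, which is the point of the exercise.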
Macros can also be used to turn code that operates on notional
vector-like data structures of lengths known at compile time into code
that actually operates on large sets of local variables. The "vectors"
should then end up as contiguous data on the *stack*. If the lengths
are not known until run time, the same macros can be used along with
"eval" to compile functions on the fly that operate on them
efficiently.

In fact, "eval" can be useful whenever other aspects of the computation
are not known until run time. Suppose, for example, that some value not
known at compile time will be added to the diagonal entries of lots of
1000x1000 matrices; if at run time that value turns out to be pure
real, "eval" can emit specialized code for that case which doesn't
waste time adding thousands of zeros to the imaginary parts of the
matrix entries.

Last but certainly not least, ask whether any such algorithms can be
rewritten to be based less on mutation and more on map-reduce type
operations.

>> Clojure has strong support for parallelism and threading.
>
> Clojure's support for multithreading is good only as long as your code
> is pure-functional. Let's see you add 1.0 to all diagonal elements of
> a 1000x1000 matrix.

Clojure has all of Java's support for multithreading too. On a
dual-core machine you could have two threads each mutating 500 of those
diagonal entries -- no locking required, since the changes are
independent of one another. Just as you could in Java. Or do it in Java
and call that from Clojure.

>> You need to work a bit to get the most speed out of Clojure too, but
>> you can then get C-like performance out of it in tight loops.
>
> In theory, imperative and Java array using Clojure can be made as fast
> as Java (which is slower than C)

See above: not if you lay out your data right, and in Clojure (unlike
pure Java) you can use macros or similar to emulate having object
operations on the data despite its really being in a flat array.
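The two-thread diagonal update is a one-liner per thread; here is a sketch in plain Java (the class name is mine). Each thread owns a disjoint half of the diagonal, so no synchronization beyond the final join is needed:

```java
// Add v to every diagonal entry of a square matrix using two threads,
// each mutating a disjoint half of the diagonal. Because the index
// ranges [0, n/2) and [n/2, n) never overlap, no locking is required;
// Thread.join() provides the happens-before edge that publishes the
// writes back to the calling thread.
public final class DiagonalAdd {
    static void addToDiagonal(double[][] m, double v) {
        int n = m.length;
        Thread lo = new Thread(() -> { for (int i = 0;   i < n/2; i++) m[i][i] += v; });
        Thread hi = new Thread(() -> { for (int i = n/2; i < n;   i++) m[i][i] += v; });
        lo.start();
        hi.start();
        try {
            lo.join();
            hi.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }
}
```

The same split generalizes to k threads or, in Clojure, to futures over the same index ranges.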
> Aside: do you remember to add -O3 when you are compiling C/C++? I use
> "-server" when running the JVM.

I am assuming both. Without -server, Clojure and Java will be
significantly slower than C/C++. With -server, I've seen numerical
calculations in both run at tuned-assembly speeds.