From: frankenstein
Newsgroups: comp.lang.scheme,comp.lang.ada,comp.lang.functional,comp.lang.c++,comp.programming
Subject: Re: Alternatives to C: ObjectPascal, Eiffel, Ada or Modula-3?
Date: Sun, 2 Aug 2009 02:48:47 -0700 (PDT)

As an addendum, I post below some bits of my matrix class. Sorry, there
are no comments, and it lacks all my pretty printing, slicing and
mapping-over-matrices stuff. However, you should get the idea. Scroll
down until you reach the example of the matrix-matrix multiplication.

If you want to benchmark it against C code, google for the C source of
a matrix-matrix multiplication. Copy the code enclosed here into a
file, e.g. foo.scm, compile it on the command line with

   bigloo -Obench foo.scm

and time a.out. If you want to increase the problem size, use e.g.
(do-main 1024).

Some basics about the class: read through it and you will encounter
three classes: f64mat (64-bit), f32mat (32-bit) and realmat.

You create an n-dimensional matrix as follows:
   (mk-f64mat i j ... n val: 0.0e0)
You access values:
   (f64m& mat i j ... n)
You store values:
   (f64m! mat i j ... n value)

The same goes for f32 and realmat. Updating and accessing are done by
macros and should be sufficiently fast.

Raueber Hotzenplotz

==
;; Module exporting the f64/f32/real matrix classes, constructors and
;; access macros.
(module matrix
   (export
      (class f64mat mat
         (rank::bint (default 1))
         (dims::pair-nil (default '(1)))
         (print-index?::bool (default #t)))
      (class f32mat mat
         (rank::bint (default 1))
         (dims::pair-nil (default '(1)))
         (print-index?::bool (default #t)))
      (class realmat mat
         (rank::bint (default 1))
         (dims::pair-nil (default '(1)))
         (print-index?::bool (default #t)))
      (inline make-matrix::obj op::pair-nil #!key (val 0.0e0)
              (type (lambda (x y) (make-vector x y))))
      (inline make-matrix-local::obj op::pair-nil #!key (val 0.0e0)
              (type (lambda (x y) (make-vector x y))))
      (inline mk-f64mat::f64mat #!rest op::pair-nil #!key (val 0.0e0))
      (inline mk-f32mat::f32mat #!rest op::pair-nil #!key (val 0.0))
      (inline mk-realmat::realmat #!rest op::pair-nil #!key (val 0.0))
      (macro aref-set-helper fun::obj x::obj i::bint . op::obj)
      (macro f64m& mat::f64mat . op::obj)
      (macro f32m& mat::f32mat . op::obj)
      (macro realm& mat::realmat . op::obj)
      (macro f64m!c mat::f64mat val::double . op::obj)
      (macro f64m! mat::f64mat . op::obj)
      (macro f32m!c mat::f32mat val::real . op::obj)
      (macro f32m! mat::f32mat val::real . op::obj)
      (macro realm!c mat::realmat val::real . op::obj)
      (macro realm! mat::realmat val::real . op::obj)))

;; Positional helpers over dimension lists.
(define-inline (.1st. x::pair-nil) (car x))
(define-inline (.2nd. x::pair-nil) (cadr x))
(define-inline (.3rd. x::pair-nil) (caddr x))

;; Recursively build a nested vector; the innermost level is created by
;; TYPE (by default make-vector) and filled with VAL.
(define-inline (make-matrix-local::obj op::pair-nil #!key (val 0.0e0)
                                       (type (lambda (x y) (make-vector x y))))
   (if (=fx 1 (length op))
       (type (car op) val)
       (let ([mx::vector (make-vector (car op))])
          (do [(oi 0 (+fx oi 1))]
              [(=fx oi (car op))]
              (vector-set! mx oi
                 (make-matrix-local (cdr op) val: val type: type)))
          mx)))

(define-inline (make-matrix::obj op::pair-nil #!key (val 0.0e0)
                                 (type (lambda (x y) (make-vector x y))))
   (if (=fx 1 (length op))
       (type (car op) val)
       (let ([mx::vector (make-vector (car op))])
          (do [(oi 0 (+fx oi 1))]
              [(=fx oi (car op))]
              (vector-set! mx oi
                 (make-matrix (cdr op) val: val type: type)))
          mx)))

(define-inline (mk-f64mat::f64mat #!rest op::pair-nil #!key (val 0.0e0))
   (let* ((indx::bint (-fx (length op) 1)))
      (if (
==
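The matrix-matrix multiplication example referred to above was cut off
here, so the following is only a rough sketch of what such a benchmark
driver could look like. The do-main name is taken from the compile
instructions above; the matrix names, fill values and the triple do
loop are assumptions for illustration, and the f64m&/f64m! call forms
follow the description given earlier, not the original code.

==
;; Naive O(n^3) matrix-matrix multiply over f64mat instances.
;; a, b and c are illustrative names, not part of the posted class.
(define (do-main n::bint)
   (let ((a (mk-f64mat n n val: 1.0e0))
         (b (mk-f64mat n n val: 2.0e0))
         (c (mk-f64mat n n val: 0.0e0)))
      (do ((i 0 (+fx i 1)))
          ((=fx i n))
          (do ((j 0 (+fx j 1)))
              ((=fx j n))
              (do ((k 0 (+fx k 1)))
                  ((=fx k n))
                  (f64m! c i j (+fl (f64m& c i j)
                                    (*fl (f64m& a i k) (f64m& b k j)))))))
      c))

(do-main 512)
==

Compile with "bigloo -Obench foo.scm" as described above and time
a.out; bump the argument to 1024 for a longer run.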
Rumpelstilzchen wrote:
> On Jul 29, 12:14 am, Jon Harrop wrote:
>
> > fft1976 wrote:
>
> > If you're using VS then I highly recommend F# for numerical work,
> > largely because it makes parallelism so easy.
>
> > > Gambit-C Scheme (one of the best of LISPs, IMO): about 2x slower than
> > > C (single core only), but you have to work to get within 2x (unlike
> > > OCaml), and if you want it fast, it can't be safe (switch controlled).
>
> > Bigloo?
>
> Bigloo is a good option for numerical work and sometimes beats my
> Fortran 90 code. I am not a Fortran 90 expert, but that Bigloo stacks
> up well against Fortran says a lot.
>
> Fact #1: We should forget about the language shootout page, because it
> is, and always has been, something of a Mickey Mouse benchmark (any
> idiot who fancied himself an excellent programmer could post crappy
> code, and once posted it stays in Google's history forever, and a lot
> of other idiots will use the results of the benchmark). RIP, language
> shootout page.
>
> Fact #2: The performance of Bigloo, especially for larger problems
> where your simulations will consume 12 hours and more of processor
> time and use 2 GB and more of main memory, does not come for free.
> You will have to write your code with this in mind. HOWEVER, turning
> Bigloo into a numerical workhorse for large-data simulations is
> straightforward:
>
> a) Use the native Bigloo operators *fx, *fl, etc. from the very
> beginning (they are very easy to use and the compiler helps a lot in
> spotting type mismatches; you won't have to reach for your gun, as you
> likely would with OCaml, shooting yourself to end the battle over
> types).
>
> b) Use f32 or f64 arrays (I created my own array class) whenever
> possible. In particular, use f32 for large-data simulations, since it
> makes a whole lot of difference when your data take half the space in
> main memory at 32 bits rather than 64 bits, even though internal
> calculations are always cast to Bigloo's underlying C double type.
>
> c) Use classes to make code clearer: classes are very fast in Bigloo.
>
> d) Whenever you have to read in binary data (note there are some
> issues with the f32 variants; see the Bigloo mailing list), use or
> check out the following undocumented functions and your code will fly:
> (ieee-string->double), (ieee-string->float), (double->ieee-string),
> (float->ieee-string), etc.
>
> e) Use the -Obench option when compiling; -Obench covers more or less
> all the Bigloo-to-C compiler options with speed in mind (no bounds
> checking, etc.).
>
> f) Add types to your code to make it readable for others, and for your
> own pleasure when reading and understanding your code during debugging
> exercises:
>
> ==
> (define (add-vector::vector vec1::vector vec2::vector name::bstring
>                             x::bint y::my-class)
>    (...))
> ==
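To illustrate points a), b) and f) above: a minimal sketch of a typed
summation over an SRFI-4 f64vector using only the native fixnum/flonum
operators. The sum-f64 name and the use of a plain f64vector (rather
than the matrix class posted above) are choices made just for this
example.

==
;; Sum an f64vector using explicit types and fixnum/flonum operators.
(define (sum-f64::double v::f64vector)
   (let ((n::bint (f64vector-length v)))
      (let loop ((i 0) (acc 0.0e0))
         (if (=fx i n)
             acc
             (loop (+fx i 1) (+fl acc (f64vector-ref v i)))))))

(define v (make-f64vector 1000 1.5e0))
(print (sum-f64 v))   ; prints 1500.0
==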
> I haven't released it yet, but I have fully fledged bindings, with a
> whole lot of bells and whistles, to:
>
> - the full CLAPACK (the linear algebra package converted from Fortran
>   to C by f2c, freely downloadable from the net)
> - the full DISLIN (high-level plotting routines)
> - a random number generator
> - a Mie scattering code
> - a matrix class (for creating n-dimensional f32 and f64 matrices,
>   mapping over n-dimensional matrices, pretty printing, slicing, etc.)
>   which does a fantastic job and is up to the task speed-wise.
>
> NOTE: Translating code from an imperative language, say Fortran, to
> Bigloo is easy. A lot of Fortran code consists of do loops, which you
> might use in Bigloo as well (a fuller sketch follows below the quoted
> text):
>
> ==
> (do ((i 0 (+fx i 1)))
>     ((=fx i dim))
>     (do ... etc.
> ==
>
> The only issue: Bigloo, like C, is 0-based, and in my case I always
> think in row order instead of Fortran's column order.
>
> If you don't use Bigloo with the recipes and suggestions posted above,
> Bigloo is dog slow. However, tailoring it into a bespoke workhorse is
> really very simple and comes more or less at no cost.
>
> Whenever anyone writes a binding for a well-respected external C
> library that a lot of people might be interested in, please make it
> public (yes, yes, I myself haven't done it yet for DISLIN and CLAPACK,
> etc.), in the hope that scientists will stop using Mickey Mouse
> languages like Matlab, or Python, which is a pain in the ass.
>
> Bigloo also makes classes available for Java. However, I have no idea
> whether this works well, or whether there are people out there using
> Bigloo on Java for numerical work.
>
> That said, a big question mark: I haven't seen any detailed
> description of how to employ threads in Bigloo to use dual-core or
> multi-processor machines, even in numerical code.
>
> If anyone would like to come forward, please do so and report on your
> experience using Bigloo on a multi-processor farm.
>
> Thanks, Rumpelstilzchen
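To expand on the do-loop note in the quoted post: a Fortran-style
nested do loop assigning a(i,j) = b(i,j) + c(i,j) for i, j = 1..n might
be transcribed roughly as below, shifting to 0-based indices. The
matrix names and the use of the f64mat macros from the class posted at
the top are assumptions made for the illustration.

==
;; 0-based transcription of a Fortran-style nested do loop; a, b and c
;; are assumed to be n x n f64mat instances created with mk-f64mat.
(do ((i 0 (+fx i 1)))
    ((=fx i n))
    (do ((j 0 (+fx j 1)))
        ((=fx j n))
        (f64m! a i j (+fl (f64m& b i j) (f64m& c i j)))))
==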